Table of Contents
Data structures and algorithms
About practice
Common crawler libraries

Sharing Experience on Getting Started with Python Crawlers

Dec 05, 2017, 09:53 AM

A web crawler is a program that automatically fetches web content and is an important component of a search engine: it downloads pages from the World Wide Web for the search engine to index. Crawlers are generally divided into general-purpose (traditional) crawlers and focused crawlers.

Learning to write crawlers is a step-by-step process. For a complete beginner it can be roughly divided into three stages. The first stage is getting started: mastering the necessary basic knowledge. The second stage is imitation: following other people's crawler code and understanding every line of it. The third stage is doing it yourself: by this point you have your own ideas for solving problems and can design a crawler system independently.

The technologies involved in crawling include, but are not limited to: proficiency in a programming language (here we take Python as an example), knowledge of HTML, the basics of the HTTP/HTTPS protocol, regular expressions, databases, the use of common packet-capture tools, and the use of crawler frameworks. Large-scale crawling also requires an understanding of distributed systems, message queues, commonly used data structures and algorithms, caching, and even applications of machine learning; large systems rely on many technologies for support. Crawling is only about obtaining the data; the real value lies in analyzing and mining it, so the work naturally extends into data analysis, data mining, and other fields that support business decisions. As a crawler engineer, there is a lot you can do.

So do I have to learn all of the above before I can start writing a crawler? Of course not; learning is a lifelong thing. As long as you can write Python code, you can start crawling right away. It is like learning to drive: once you can drive, you can hit the road, and writing code is far safer than driving.

To write a crawler in Python, you first need to know Python: understand the basic syntax and know how to use functions, classes, and the common methods of data structures such as list and dict. That is the basic introduction. Then you need to understand HTML, which is a tree-structured document; a 30-minute introductory HTML tutorial found online is enough. Next comes HTTP. The basic principle of a crawler is to download data from a remote server through network requests, and the technology behind those requests is the HTTP protocol. As a beginner you only need to grasp the fundamentals of HTTP; the full specification would fill a book by itself, and the deeper material can be read later, combining theory with practice.
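
To make the request–response cycle concrete, here is a minimal sketch using Python's built-in urllib to download a page and inspect the HTTP response; the URL is only an example, and real sites may require extra headers:

```python
from urllib.request import Request, urlopen

# Send a GET request with an explicit User-Agent header, since some sites
# reject requests that carry no browser-like identity at all.
req = Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (learning-crawler)"},
)

with urlopen(req, timeout=10) as resp:
    print(resp.status)                        # HTTP status code, e.g. 200
    print(resp.headers.get("Content-Type"))   # one of the response headers
    html = resp.read().decode("utf-8")        # raw HTML body as text

print(html[:200])  # first 200 characters of the downloaded page
```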

A network request library is an implementation of the HTTP protocol. For example, the well-known Requests library simulates a browser sending HTTP requests. Once you understand HTTP, you can study the network-related modules more specifically. Python ships with urllib, urllib2 (merged into urllib in Python 3), httplib, Cookie, and so on; you can also skip these and learn Requests directly, provided you are familiar with the basics of HTTP. A book I have to recommend here is "HTTP Illustrated". The data you crawl is mostly HTML text, with a smaller amount in XML or JSON format. To handle each correctly you need to know the right tools: JSON can be processed directly with Python's built-in json module, HTML can be parsed with libraries such as BeautifulSoup and lxml, and XML can be handled with third-party libraries such as untangle and xmltodict.
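
As a short sketch of that workflow, the following uses Requests to download a page and BeautifulSoup to pull out its links; the URL and the selector are placeholders to adapt to the real site you crawl:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder target page; replace with the site you actually want to crawl.
url = "https://example.com/articles"

resp = requests.get(url, timeout=10)
resp.raise_for_status()                  # fail loudly on 4xx/5xx responses
resp.encoding = resp.apparent_encoding   # guard against mis-detected encodings

soup = BeautifulSoup(resp.text, "lxml")  # or "html.parser" if lxml is not installed

# Extract the text and href of every link on the page.
for a in soup.select("a[href]"):
    print(a.get_text(strip=True), a["href"])

# If an endpoint returns JSON instead of HTML, resp.json() parses it directly.
```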

For an entry-level crawler it is not necessary to learn regular expressions up front; you can pick them up when you actually need them. For example, after you crawl data back you will need to clean it, and when you find that ordinary string methods cannot handle the job at all, you can try regular expressions, which often give twice the result for half the effort. Python's re module is used to work with regular expressions. A few tutorials worth recommending: the 30-minute introduction to regular expressions, the Python regular expression guide, and a complete guide to regular expressions.
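
As a small, hypothetical example of the kind of cleaning regular expressions make easy, suppose a scraped fragment mixes leftover markup with a price and a date:

```python
import re

raw = '<span class="price">¥ 1,299.00</span> posted on 2017-12-05'

# Strip any HTML tags that survived the parsing step.
text = re.sub(r"<[^>]+>", "", raw)

# Pull out a price like "1,299.00" and a date like "2017-12-05".
price_match = re.search(r"([\d,]+\.\d{2})", text)
date_match = re.search(r"(\d{4}-\d{2}-\d{2})", text)

price = float(price_match.group(1).replace(",", "")) if price_match else None
date = date_match.group(1) if date_match else None

print(price, date)  # 1299.0 2017-12-05
```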

After the data has been cleaned, it needs to be stored persistently. You can use file storage, such as CSV files, or a database: sqlite if you want something simple, MySQL for something more professional, or the distributed document database MongoDB. All of these are very friendly to Python and have ready-made library support for connecting to and operating the database.
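
As one possible sketch, here is how cleaned records could be saved with the standard-library sqlite3 module; the table name and columns are purely illustrative:

```python
import sqlite3

conn = sqlite3.connect("crawler.db")

# Illustrative schema: one table of scraped articles keyed by URL.
conn.execute(
    """CREATE TABLE IF NOT EXISTS articles (
           url   TEXT PRIMARY KEY,
           title TEXT,
           price REAL
       )"""
)

rows = [
    ("https://example.com/a/1", "First article", 12.5),
    ("https://example.com/a/2", "Second article", None),
]

# INSERT OR IGNORE silently skips URLs that are already stored.
conn.executemany("INSERT OR IGNORE INTO articles VALUES (?, ?, ?)", rows)
conn.commit()

for row in conn.execute("SELECT url, title, price FROM articles"):
    print(row)

conn.close()
```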

The basic process from data capture to cleaning to storage is now complete, which can be considered a basic introduction. What comes next tests your deeper skills. Many websites have anti-crawler strategies: they try every means to stop you from obtaining data through automated means, for example all kinds of strange CAPTCHAs to limit your request operations, request-rate limits, IP bans, and even encrypted data. In short, they raise the cost of obtaining the data. At this point you need to master more knowledge: a deeper understanding of the HTTP protocol, common encryption and decryption algorithms, cookies, HTTP proxies, and the various HTTP headers. Crawling and anti-crawling are locked in a constant arms race, and there is no established, unified solution for dealing with anti-crawler measures; it depends on your experience and the knowledge you have accumulated, which is not something a 21-day introductory tutorial can give you.
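
To give a flavor of the routine countermeasures, here is a sketch that uses a Requests session with a browser-like User-Agent, cookie persistence, an optional proxy, and a polite delay between requests; the URLs and the proxy address are placeholders:

```python
import time

import requests

session = requests.Session()  # keeps cookies across requests automatically

# Pretend to be an ordinary browser; many sites block the default library User-Agent.
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "https://example.com/",
})

# Hypothetical proxy; in practice you would rotate through a pool of proxies.
proxies = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}

urls = ["https://example.com/page/%d" % i for i in range(1, 4)]

for url in urls:
    resp = session.get(url, proxies=proxies, timeout=10)
    print(url, resp.status_code, len(resp.text))
    time.sleep(2)  # throttle requests to stay under rate limits
```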

Data structures and algorithms

To carry out a large-scale crawl, we usually start from one URL and then add the URLs parsed out of each page to the set of URLs waiting to be crawled. A queue or a priority queue is needed to decide which sites to crawl first and which later, and each time a page is fetched you must decide whether to follow the next link depth-first or breadth-first. Every network request also involves a DNS resolution step (converting the host name into an IP address); to avoid resolving the same host repeatedly, the resolved IPs should be cached. With so many URLs, how do you determine which have already been crawled and which have not? The simple approach is to store the crawled URLs in a dictionary (or set), but with a huge number of URLs that structure takes up a great deal of memory, and at that point you should consider a Bloom filter. Finally, crawling URLs one by one in a single thread is pitifully inefficient; to improve crawler throughput, use multiple threads, multiple processes, coroutines, or a distributed setup.
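
Here is a minimal sketch of this bookkeeping, using a breadth-first queue and an in-memory set of seen URLs; a real large-scale crawler would replace the set with a Bloom filter and add politeness and error handling (the seed URL is a placeholder):

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

start_url = "https://example.com/"   # placeholder seed URL
queue = deque([start_url])           # URLs waiting to be crawled (FIFO = breadth-first)
seen = {start_url}                   # URLs already queued or crawled
MAX_PAGES = 20                       # keep the demo small

pages_crawled = 0
while queue and pages_crawled < MAX_PAGES:
    url = queue.popleft()
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue                     # skip pages that fail to download

    pages_crawled += 1
    print("crawled:", url)

    # Parse out new links, stay on the same site, and queue unseen ones.
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.select("a[href]"):
        link = urljoin(url, a["href"])
        if link.startswith(start_url) and link not in seen:
            seen.add(link)
            queue.append(link)
```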

About practice

There are plenty of crawler tutorials on the Internet, and the principles are basically the same; they just switch to a different website to crawl. You can follow online tutorials to learn how to simulate logging in to a website or simulate check-ins, or crawl movies, books, and so on from Douban. Through constant practice, going from hitting a problem to solving it, you gain something that reading a book alone cannot give you.

Common crawler libraries

  • urllib, urllib2 (merged into urllib in Python 3): Python's built-in network request libraries
  • urllib3: thread-safe HTTP network request library
  • requests: the most widely used network request library, compatible with Python 2 and 3
  • grequests: asynchronous requests
  • BeautifulSoup: HTML and XML parsing library
  • lxml: another way to process HTML and XML
  • tornado: asynchronous network framework
  • gevent: asynchronous network framework
  • Scrapy: the most popular crawler framework
  • pyspider: crawler framework
  • xmltodict: converts XML into a dictionary
  • pyquery: operate on HTML the way jQuery does
  • jieba: Chinese word segmentation
  • SQLAlchemy: ORM framework
  • celery: message queue
  • rq: simple message queue
  • python-goose: extracts article text from HTML
Book recommendations:

  • "HTTP Illustrated"
  • "HTTP: The Definitive Guide"
  • "Computer Networking: A Top-Down Approach"
  • "Writing a Web Crawler in Python"
  • "Python Network Data Collection"
  • "Mastering Regular Expressions"
  • "Python: From Getting Started to Practice"
  • "Write Your Own Web Crawler"
  • "Crypto101"
  • "Illustrated Cryptography"
The above is a share of entry-level experience with Python crawler technology; I hope it helps everyone.

