


In-depth understanding of Python distributed crawler principles
First of all, let's look at how a person obtains web page content through normal human behavior:
(1) Open a browser, enter the URL, and load the source web page.
(2) Select the content we want, including the title, author, abstract, body text, and other information.
(3) Save it to the hard disk.
The above three steps, mapped to the technical level, are: network request, extraction of structured data, and data storage.
Let's write a simple Python program to implement the simple crawling process above.
#!/usr/bin/python
# -*- coding: utf-8 -*-
'''
Created on 2014-03-16
@author: Kris
'''
import urllib2, re, cookielib

def httpCrawler(url):
    '''
    @summary: Web page crawling
    '''
    content = httpRequest(url)
    title = parseHtml(content)
    saveData(title)

def httpRequest(url):
    '''
    @summary: Network request
    '''
    try:
        ret = None
        SockFile = None
        request = urllib2.Request(url)
        request.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322)')
        request.add_header('Pragma', 'no-cache')
        opener = urllib2.build_opener()
        SockFile = opener.open(request)
        ret = SockFile.read()
    finally:
        if SockFile:
            SockFile.close()
    return ret

def parseHtml(html):
    '''
    @summary: Extract structured data
    '''
    content = None
    pattern = '<title>([^<]*?)</title>'
    temp = re.findall(pattern, html)
    if temp:
        content = temp[0]
    return content

def saveData(data):
    '''
    @summary: Data storage
    '''
    f = open('test', 'wb')
    f.write(data)
    f.close()

if __name__ == '__main__':
    url = 'http://www.baidu.com'
    httpCrawler(url)
It looks very simple, and yes, it is a basic, entry-level crawler program. Implementing a collection process is essentially nothing more than the basic steps above. However, to implement a powerful collection system, you will run into the following problems:
(1) Access that requires cookie information. Most social software, for example, requires users to log in before any valuable content becomes visible. This is actually quite simple to handle: we can use the cookielib module provided by Python so that every visit carries the cookie information issued by the source website. As long as we successfully simulate a login, the crawler is in a logged-in state and can collect all the information visible to a logged-in user. The following is a modification of the httpRequest() method to use cookies:
ckjar = cookielib.MozillaCookieJar()
cookies = urllib2.HTTPCookieProcessor(ckjar)    # define the cookies object

def httpRequest(url):
    '''
    @summary: Network request
    '''
    try:
        ret = None
        SockFile = None
        request = urllib2.Request(url)
        request.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322)')
        request.add_header('Pragma', 'no-cache')
        opener = urllib2.build_opener(cookies)    # pass the cookies object to the opener
        SockFile = opener.open(request)
        ret = SockFile.read()
    finally:
        if SockFile:
            SockFile.close()
    return ret
(2) Encoding issues. The two most common encodings on websites today are utf-8 and gbk. When the encoding of the source website and the encoding we store in our database are inconsistent (for example, 163.com uses gbk while we need to store utf-8 encoded data), we can use the encode() and decode() methods provided by Python to convert, for example:
content = content.decode('gbk', 'ignore')    # convert gbk-encoded bytes to unicode
content = content.encode('utf-8', 'ignore')  # convert unicode to utf-8-encoded bytes
Unicode sits in the middle: we must first convert to the intermediate unicode encoding before converting between gbk and utf-8.
(3) Incomplete tags in the web page. For example, some page source code has a start tag but no matching end tag, and incomplete HTML tags will affect our ability to extract structured data. We can use Python's BeautifulSoup module to first clean up the source code and then analyze and extract the content, as in the sketch below.
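As an illustration (the original article does not show this code), here is a minimal sketch of repairing broken HTML with BeautifulSoup; the broken_html string is a made-up example.

# Minimal sketch, assuming the bs4 package is installed (pip install beautifulsoup4);
# in the BeautifulSoup 3 era of this article the import was
# "from BeautifulSoup import BeautifulSoup" instead.
from bs4 import BeautifulSoup

broken_html = '<html><body><h1>Some title<p>A paragraph that is never closed'
soup = BeautifulSoup(broken_html, 'html.parser')  # the parser fills in missing end tags
print soup.prettify()                             # cleaned-up, properly nested HTML
print soup.h1.get_text()                          # structured extraction now works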
(4) Some websites render the page content with JS. When we look directly at the source code, we find a pile of messy JS code instead of the content. We can use toolkits that embed a browser engine, such as mozilla or webkit, to execute the JS and ajax and then read the rendered page, although this will be somewhat slower.
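As one possible illustration (not the article's own tooling), the following sketch drives Firefox through Selenium, lets the browser execute the JS, and then reads the rendered page source; it assumes the selenium package and a Firefox driver are installed.

# Minimal sketch, assuming selenium and a Firefox driver are available.
from selenium import webdriver

driver = webdriver.Firefox()              # a real browser engine executes JS and ajax
try:
    driver.get('http://www.baidu.com')
    rendered_html = driver.page_source    # page source after JS has run
    print parseHtml(rendered_html)        # reuse the title extraction defined earlier
finally:
    driver.quit()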
(5) The content exists as an image or in flash form. When the content of the image consists of text or numbers, it is fairly easy to handle: we only need OCR technology to recognize it automatically. However, if it is a flash link, we simply store the entire URL.
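As an illustrative sketch (the article only says "OCR technology" without naming a library), one option is pytesseract on top of the Tesseract engine; 'picture.png' is a hypothetical downloaded image.

# Minimal sketch, assuming Tesseract plus the pytesseract and Pillow packages are installed.
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open('picture.png'))
print text   # recognized text/numbers from the image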
(6) A website has multiple page structures. If we only have one set of extraction rules, it will definitely not work, so we need to configure multiple sets of rules to assist the crawl.
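As a hypothetical sketch of what "multiple sets of rules" might look like (the page types and patterns below are made up, not from the article), each page structure gets its own regular expressions and the crawler picks the first set that matches:

# Hypothetical illustration: one regex rule set per page structure.
import re

RULE_SETS = {
    'article_v1': {'title': '<h1 class="title">([^<]*)</h1>',
                   'body':  '<div id="content">(.*?)</div>'},
    'article_v2': {'title': '<title>([^<]*)</title>',
                   'body':  '<div class="post-body">(.*?)</div>'},
}

def parseWithRules(html):
    # Try every rule set and keep the first one whose title pattern matches.
    for name, rules in RULE_SETS.items():
        title = re.findall(rules['title'], html, re.S)
        if title:
            body = re.findall(rules['body'], html, re.S)
            return {'rule': name, 'title': title[0],
                    'body': body[0] if body else None}
    return None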
(7) The source website monitors crawlers. After all, crawling other people's content is not something websites welcome, so most websites impose restrictions that prohibit crawler access.
A good collection system should be able to collect any target data as long as it is visible to the user: what-you-see-is-what-you-get, unblocked collection, whether or not the data requires a login. Most valuable information generally requires logging in to see, as on social networking sites; to cope with logins, the crawler system must be able to simulate user login in order to obtain data normally. However, social websites hope to form a closed loop and are unwilling to expose data outside the site, so such sites are not as open as news sites and similar content. Most of these social websites adopt restrictions to prevent robot crawlers from harvesting data, and usually an account does not crawl for long before it is detected and access is blocked. Does that mean we cannot crawl data from these websites? Definitely not. As long as a social website does not close off web page access, we can also access the data that a normal person can access. In the final analysis, it comes down to simulating a person's normal behavior, which is professionally called "anti-monitoring".
The source website generally has the following restrictions:
1. The number of visits from a single IP within a certain period of time. A normal user visiting a website, unless clicking around at random, will not visit it too quickly within a given period of time and will not keep it up for very long. This problem is easy to solve: we can use a large number of irregular proxy IPs to form a proxy pool, randomly select a proxy from the pool, and simulate visits through it. There are two types of proxy IPs: transparent proxies and anonymous proxies. A combined sketch covering this point and the next follows after point 2.
2. The number of visits from a single account within a certain period of time. If a "person" is hitting a data interface 24 hours a day at high speed, it is probably a robot. We can use a large number of accounts with normal behavior, where normal behavior means operating the way ordinary people do on social networking sites, keeping the number of URLs visited per unit of time as low as possible. There can be a pause between visits, and this interval can be a random value: after each visit to a URL, the crawler sleeps for a random period of time before visiting the next URL.
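As a hedged illustration of both points (the proxy addresses and timing values below are placeholders, not from the article), a proxy can be chosen at random from a pool via urllib2's ProxyHandler, with a random sleep inserted between requests:

# Hypothetical sketch: random proxy from a pool plus a random pause between visits.
import random, time
import urllib2

PROXY_POOL = ['http://1.2.3.4:8080', 'http://5.6.7.8:3128']   # placeholder proxies

def httpRequestViaProxy(url):
    proxy = random.choice(PROXY_POOL)                     # pick a proxy at random
    opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
    request = urllib2.Request(url)
    request.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2)')
    return opener.open(request).read()

urls = ['http://www.baidu.com', 'http://www.163.com']
for url in urls:
    print httpRequestViaProxy(url)[:100]
    time.sleep(random.uniform(5, 30))    # random interval, like a human pausing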
If you can control the access policy of accounts and IPs, there is basically no problem. Of course, the target website will also adjust its operations and maintenance strategy; in this back-and-forth, the crawler must be able to sense when the other party's anti-monitoring is affecting us and notify the administrator to handle it in time. Ideally, the system would intelligently carry out this anti-monitoring confrontation through machine learning and achieve uninterrupted crawling.
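As one hedged example of how such sensing might be wired in (the status codes, keyword check, and alert mechanism are assumptions, not from the article), the crawler can watch for block indicators in responses and flag them for the administrator:

# Hypothetical sketch: detect signs of being blocked and alert the administrator.
import logging
import urllib2

def fetchWithBlockDetection(url):
    try:
        html = urllib2.urlopen(url).read()
    except urllib2.HTTPError, e:
        if e.code in (403, 429):                 # common "blocked / slow down" responses
            logging.warning('Possible block on %s (HTTP %d)', url, e.code)
        raise
    if 'captcha' in html.lower():                # crude heuristic for a challenge page
        logging.warning('Possible captcha challenge on %s', url)
    return html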
The following is a distributed crawler architecture diagram that I have been designing recently, as shown in Figure 1:
This is just a humble preliminary idea that is still being implemented; the communication between the server and the client is being built, mainly using Python's Socket module to realize the server-client communication. If you are interested, you can contact me individually to discuss and work out a better, more complete solution.
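As a minimal sketch of the server/client communication alluded to above (the host, port, and message format are assumptions of mine), Python's socket module can be used like this:

# Minimal sketch of server/client communication with Python's socket module.
# Host, port, and the message format are placeholder assumptions.
import socket

def run_server(host='127.0.0.1', port=9999):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(5)
    conn, addr = server.accept()          # wait for a crawler node to connect
    task = conn.recv(1024)                # e.g. a URL the node should crawl
    print 'received task:', task
    conn.sendall('OK')                    # acknowledge the task
    conn.close()
    server.close()

def run_client(host='127.0.0.1', port=9999):
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((host, port))
    client.sendall('http://www.baidu.com')    # send a crawl task to the server
    print 'server replied:', client.recv(1024)
    client.close()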
