How to deal with anti-crawling in Python
A web crawler is a program that automatically extracts web pages. It downloads pages from the World Wide Web for search engines and is an important component of them. But when web crawlers are abused, the Internet fills with duplicated content and originality cannot be protected. As a result, many websites have begun to fight back against crawlers, trying every means to protect their content.
1: User-Agent and Referer detection
User-Agent is a field in the HTTP protocol that describes the client issuing the request. It lets the server identify the client's operating system and version, CPU type, browser and version, rendering engine, language, plug-ins, and so on. Through this field the server can tell who is visiting the site and block clients that are not normal browsers.
Solution:
Disguise the User-Agent. Because every browser has a different User-Agent and any user may use any browser, UA detection can be defeated by attaching a real browser's User-Agent to each request.
Referer is part of the request header. When the browser sends a request to a web server, it usually carries a Referer telling the server which page the request came from. For example, some image sites check the Referer when you request a picture; if it does not match, the normal image is not returned.
Solution:
When requesting a resource whose Referer is checked, carry a matching Referer value in the request headers.
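A minimal sketch of both disguises using only the standard library; the User-Agent string and URLs below are illustrative examples, not values from the original article:

```python
from urllib.request import Request, urlopen

# A User-Agent string copied from a real browser (illustrative example).
BROWSER_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)

def build_request(url: str, referer: str) -> Request:
    """Build a request that looks like a normal browser visit."""
    return Request(url, headers={
        "User-Agent": BROWSER_UA,  # defeats UA detection
        "Referer": referer,        # defeats Referer detection
    })

# Usage (performs a real network call, so commented out):
# resp = urlopen(build_request("https://example.com/img/1.png",
#                              "https://example.com/"))
```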
2: JS obfuscation and rendering
So-called JavaScript obfuscation basically involves:
1. Removing functions that are never actually called.
2. Merging scattered variable declarations.
3. Simplifying logic functions.
4. Shortening variable names.
The details depend on the strengths and weaknesses of the compression tool; common tools include UglifyJS and JScrambler.
JS rendering means the HTML page is modified by JavaScript. For example, some pages do not contain the data in the HTML they return; the data is inserted into the page only after JavaScript runs. Since an ordinary crawler does not execute JavaScript, such pages must be handled in other ways.
Solution:
1. Read the site's JS source, find the key code, and re-implement it in Python.
2. Read the site's JS source, find the key code, and execute it directly with libraries such as PyV8 or execjs.
3. Simulate a full browser environment with the selenium library.
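As a sketch of solution 1, suppose the site's JS computes a request signature by sorting the query parameters, joining them, appending a salt, and MD5-hashing the result. This routine and its salt are hypothetical; the real logic must be read out of the target site's own JS:

```python
import hashlib

def sign(params: dict, salt: str = "hypothetical-salt") -> str:
    """Python re-implementation of a (hypothetical) JS signing routine:
    sort the params, join them as key=value pairs, append the salt, MD5."""
    joined = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return hashlib.md5((joined + salt).encode("utf-8")).hexdigest()

# The token would then be attached to the request as an extra parameter.
token = sign({"page": 1, "kw": "python"})
```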
3: IP request-frequency limits
Web systems talk to the web container over the HTTP protocol, and each request creates at least one TCP connection between client and server. The server can therefore see exactly how many requests a single IP address makes per unit of time; when that number exceeds a certain threshold, the requests can be judged abnormal.
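To make the detection concrete, here is a minimal sketch of how a server might flag an IP that exceeds a request threshold within a sliding time window; the window and threshold values are illustrative:

```python
import time
from collections import defaultdict, deque

WINDOW = 60      # seconds (illustrative)
THRESHOLD = 100  # max requests per window (illustrative)

_hits = defaultdict(deque)  # ip -> timestamps of recent requests

def is_abnormal(ip, now=None):
    """Record one request from `ip`; return True once it exceeds the
    per-window threshold, i.e. the point where a site would block it."""
    now = time.time() if now is None else now
    q = _hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:  # drop hits outside the window
        q.popleft()
    return len(q) > THRESHOLD
```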
Solution:
1. Build your own IP proxy pool and attach a different proxy address to each request in rotation.
2. Use ADSL dynamic dialing: every time you redial you are assigned a new IP, so your IP is never fixed.
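A minimal sketch of solution 1: rotate through a pool of proxy addresses (the addresses below are hypothetical), returning each one in the dict shape that libraries such as requests expect for their `proxies` argument:

```python
import itertools

# Hypothetical proxy addresses; in practice these come from a
# proxy vendor or a pool you maintain and health-check yourself.
PROXY_POOL = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

_rotation = itertools.cycle(PROXY_POOL)

def next_proxy():
    """Return the next proxy in round-robin order, one per request."""
    addr = next(_rotation)
    return {"http": addr, "https": addr}

# With requests (not executed here):
# requests.get(url, proxies=next_proxy(), timeout=10)
```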
4: Verification code
A verification code, or CAPTCHA ("Completely Automated Public Turing test to tell Computers and Humans Apart"), is a public, fully automated program that distinguishes whether the user is a computer or a human.
It can prevent malicious password cracking, ticket scalping, and forum flooding, and it effectively stops a hacker from using a program to make continuous brute-force login attempts against a specific registered account.
The challenge can be generated and graded by a computer, but only a human can answer it; since computers cannot answer CAPTCHA questions, a user who answers correctly can be considered human.
Solution:
1. Identify the verification codes manually.
2. Use pytesseract to recognize simple verification codes.
3. Use a third-party captcha-solving (coding) platform.
4. Train a machine-learning model to recognize them.
The above is the detailed content of how to deal with anti-crawling in Python. For more information, please follow other related articles on the PHP Chinese website!
