A web crawler is a program or script that automatically crawls information from the World Wide Web according to certain rules. Crawlers are widely used by Internet search engines and similar sites, where they automatically collect the content of every page they can access. Scrapy is a very powerful crawler framework written in Python. Let's take a look at what Scrapy is.
1. Required knowledge
The required knowledge is: the Linux system, the Python language, the Scrapy framework, XPath (XML Path Language), and some auxiliary tools (browser developer tools and the XPath Helper plug-in).
Our crawler is developed with the Scrapy framework in Python and runs on Linux, so you need to be familiar with the Python language, the Scrapy framework, and the basics of the Linux operating system.
We need to use XPath to extract what we want from the target HTML page, including the Chinese text paragraphs, the "next page" links, and so on.
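For example, here is a minimal sketch of how such XPath expressions are used with Scrapy's Selector; the HTML snippet, element ID, and class name below are made up for illustration:

from scrapy.selector import Selector

# a made-up page fragment standing in for the real target site
html = '''
<div id="content">
  <p>第一段文本</p>
  <p>第二段文本</p>
  <a class="next" href="/page/2">下一页</a>
</div>
'''

sel = Selector(text=html)
paragraphs = sel.xpath('//div[@id="content"]/p/text()').getall()  # the text paragraphs
next_page = sel.xpath('//a[@class="next"]/@href').get()           # the "next page" link
print(paragraphs, next_page)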
The browser's developer tools are the main auxiliary tool for writing crawlers. You can use them to analyze the pattern of page links, locate the elements you want to extract in the HTML page, and then work out XPath expressions for them to use in the crawler code. You can also view the Referer, Cookie, and other fields in the page's request headers. If the crawled target is a dynamic website, the tools can also help you analyze the JavaScript requests behind it.
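As a sketch, the header values found this way can be attached to a Scrapy request; the URL, Referer, and cookie below are placeholders, not real values:

import scrapy

# hypothetical URL and header values copied from the browser's Network panel
req = scrapy.Request(
    url="https://example.com/page/1",
    headers={"Referer": "https://example.com/"},
    cookies={"sessionid": "xxxx"},
)

Inside a spider, such a request would normally be yielded from a callback, with callback pointing at the method that should parse the response.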
The XPath Helper plug-in is a Chrome plug-in that can also be installed on browsers based on the Chrome core. XPath Helper can be used to debug XPath expressions.
2. Environment setup
To install Scrapy, you can use the pip command: pip install Scrapy
Scrapy has many related dependencies, so you may encounter the following problems:
ImportError: No module named w3lib.http
Solution: pip install w3lib
ImportError: No module named twisted
Solution: pip install twisted
ImportError: No module named lxml.html
Solution: pip install lxml
error: libxml/xmlversion.h: No such file or directory
Solution: apt-get install libxml2-dev libxslt-dev
apt-get install python-lxml
ImportError: No module named cssselect
Solution: pip install cssselect
ImportError: No module named OpenSSL
Solution: pip install pyOpenSSL
Suggestion:
Take the easy way and install with Anaconda.
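Once Scrapy is installed, a quick sanity check from Python confirms the environment works (a minimal check that assumes nothing beyond the package itself):

import scrapy
print(scrapy.__version__)  # prints the installed Scrapy version

Running scrapy version on the command line reports the same information.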
3. Scrapy framework
1. Introduction to Scrapy
Scrapy is a well-known crawler framework written in Python. Scrapy makes web scraping easy and can also be customized to fit your own needs.
The overall structure of Scrapy is described component by component below: the engine sits at the center and exchanges data with the scheduler, downloader, spiders, and item pipeline through the middlewares.
2. Scrapy components
Scrapy mainly includes the following components:
Engine (Scrapy)
It processes the data flow of the entire system and triggers events as actions occur (it is the core of the framework).
Scheduler
It accepts requests from the engine, pushes them into a queue, and returns them when the engine asks again. Think of it as a priority queue of URLs (the links of the pages to be crawled): it decides which URL to crawl next and removes duplicate URLs.
Downloader
It downloads web content and returns it to the spiders. (The Scrapy downloader is built on Twisted, an efficient asynchronous networking framework.)
Spiders
Spiders extract the information they need, the so-called entities (Items), from specific web pages. A spider can also extract links from a page and let Scrapy continue crawling the next one. A minimal spider sketch follows this component list.
Item Pipeline (Pipeline)
The pipeline processes the entities that spiders extract from web pages. Its main jobs are to persist entities, verify their validity, and strip out unwanted information. When a page has been parsed by a spider, its items are sent to the pipeline and processed through several specific steps in sequence; a pipeline sketch follows the running-process list below.
Downloader Middlewares
A framework that sits between the Scrapy engine and the downloader; it mainly handles the requests and responses passed between them.
Spider Middlewares
A framework between the Scrapy engine and the spiders; its main job is to process the spiders' response input and request output.
Scheduler Middlewares
Middleware between the Scrapy engine and the scheduler; it handles the requests and responses sent between the engine and the scheduler.
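To make the spider component concrete, here is a minimal, hedged sketch; the site URL, XPath expressions, and item field are all hypothetical:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com/page/1"]  # hypothetical start page

    def parse(self, response):
        # extract the entities (Items) we need from this page
        for text in response.xpath('//div[@id="content"]/p/text()').getall():
            yield {"text": text}
        # extract the "next page" link and let Scrapy continue crawling
        next_page = response.xpath('//a[@class="next"]/@href').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

Saved as example_spider.py, it can be run standalone with scrapy runspider example_spider.py.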
Scrapy running process:
1. The engine takes out a link (URL) from the scheduler for the next crawl
2. The engine encapsulates the URL into a request (Request) and passes it to the downloader
3. The downloader downloads the resource and encapsulates it into a response package (Response)
4. The crawler parses the Response
5. If an entity (Item) is parsed out, it is handed to the item pipeline for further processing
6. If a link (URL) is parsed out, it is handed to the scheduler to wait to be crawled
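As promised in the component list, here is a hedged sketch of an item pipeline; the validation rule and the output file name are assumptions for illustration:

import json

from scrapy.exceptions import DropItem

class TextPipeline:
    """Persist entities, validate them, and strip unwanted information."""

    def open_spider(self, spider):
        self.file = open("items.jl", "w", encoding="utf-8")  # hypothetical output file

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        if not item.get("text"):  # validate: drop entities with no text
            raise DropItem("missing text")
        item["text"] = item["text"].strip()  # remove unwanted whitespace
        self.file.write(json.dumps(item, ensure_ascii=False) + "\n")  # persist
        return item

To activate it, the pipeline class is registered in the ITEM_PIPELINES setting in the project's settings.py.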