What is a crawler and the basic process of a crawler

爱喝马黛茶的安东尼
Release: 2019-06-05 10:24:37

With the rapid development of the Internet, more and more data floods our era. Obtaining and processing data has become an essential part of our lives, and crawlers have emerged to meet that need.

Many languages can be used to write crawlers, but crawlers based on Python are more concise and convenient, and crawling has become an essential part of the Python language.

This article explains what a crawler is and the basic process of a crawler. The next issue will dig further into that process: Request and Response.


What is a crawler?

A crawler is a web crawler, called a Web Spider in English: literally, a spider crawling on the Internet. If the Internet is regarded as a big web, then a crawler is a spider crawling around on that web; when it encounters the food it wants, it grabs it.

We enter a URL in the browser, hit Enter, and see the page information of the website: the browser has requested the website's server and obtained network resources. A crawler, then, is equivalent to simulating the browser: it sends a request and obtains the HTML code. The HTML code usually contains tags and text, and we extract the information we want from it.
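
As a concrete illustration, here is a minimal sketch of that "simulated browser" request, using the popular third-party requests library (assumed to be installed with pip install requests; the target URL is just an example):

import requests

# Fetch a page's HTML the way a browser would request it.
response = requests.get("https://www.baidu.com")
response.encoding = "utf-8"     # decode the response body as UTF-8 text
print(response.text[:200])      # print the first 200 characters of the HTML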

Usually a crawler starts from a certain page of a website, crawls the content of that page, finds other link addresses in the page, and then crawls from those addresses to the next pages, continuing in this way to grab information in batches. Seen this way, a web crawler is a program that continuously fetches web pages and captures information.
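
To make that "keep crawling" loop concrete, here is a toy sketch under a few assumptions: it uses the third-party requests library together with Python's built-in html.parser, and the start URL and page limit are placeholders, not part of the original article:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests


class LinkParser(HTMLParser):
    """Collects the href attribute of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10):
    # Breadth-first crawl: visit a page, collect its links, queue the new ones.
    seen, queue, crawled = {start_url}, deque([start_url]), 0
    while queue and crawled < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue                       # skip pages that fail to load
        crawled += 1
        print("crawled:", url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)


crawl("https://www.baidu.com")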


The basic process of a crawler:

1. Initiate a request:

Send a Request to the target site through an HTTP library. The request can carry additional headers and other information; then we wait for the server to respond. This process is like opening a browser, entering the URL www.baidu.com in the address bar, and pressing Enter: the browser acts as the client and sends a request to the server.
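
A minimal sketch of this step, again assuming the third-party requests library; the User-Agent string is just an illustrative browser-style value:

import requests

# Extra headers make the request look more like one sent by a real browser.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.get("https://www.baidu.com", headers=headers)
print(response.status_code)    # 200 means the server responded normally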

2. Get the response content:

If the server can respond normally, we get a Response. The content of the Response is what we want to obtain; its type may be HTML, a JSON string, binary data (pictures, videos, etc.), and so on. In this step the server receives the client's request and sends back, for example, the web page's HTML file, which the browser then parses.
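
For instance, the Response object of the requests library (assumed here, as above) exposes the body in each of these forms; httpbin.org/get is a public echo service used only as a JSON example:

import requests

response = requests.get("https://www.baidu.com")
html_text = response.text       # decoded text, e.g. an HTML page
raw_bytes = response.content    # raw binary data, e.g. a picture or video

# For an endpoint that returns JSON, the body can be decoded directly:
data = requests.get("https://httpbin.org/get").json()
print(data["url"])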

3. Parse the content:

The obtained content may be HTML, which can be parsed with regular expressions or web page parsing libraries. It may be JSON, which can be converted directly to a JSON object for parsing. It may be binary data, which can be saved or processed further. This step is equivalent to the browser taking the file it received from the server, interpreting it, and displaying it locally.
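
As a small example of these options, the sketch below extracts a page title with a standard-library regular expression and decodes a JSON string; a parsing library such as BeautifulSoup (a separate install, not shown) would be the more robust choice for real HTML:

import json
import re

import requests

html = requests.get("https://www.baidu.com").text

# Option 1: a regular expression pulls the <title> text out of the HTML.
match = re.search(r"<title>(.*?)</title>", html, re.S)
if match:
    print(match.group(1))

# Option 2: a JSON string converts directly to a Python object.
obj = json.loads('{"name": "spider", "pages": 3}')
print(obj["name"], obj["pages"])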

4. Save data:

The data can be saved as plain text, written to a database, or stored as files in specific formats such as jpg or mp4. This is equivalent to downloading pictures or videos from a webpage while browsing.
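
A short sketch of the saving step; the file names are placeholders, and the image URL is a stand-in for any binary resource:

import requests

page = requests.get("https://www.baidu.com")
with open("page.html", "w", encoding="utf-8") as f:
    f.write(page.text)             # save the HTML as a text file

image = requests.get("https://example.com/picture.jpg")
with open("picture.jpg", "wb") as f:
    f.write(image.content)         # save binary data as an image file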
