
How crawlers work

1. How the crawler works

A web crawler, also called a web spider, is a very vivid name. If the Internet is compared to a spider web, then the crawler is a spider moving around on that web. A web spider finds web pages through their link addresses. Starting from one page of a website (usually the home page), it reads the page's content, finds the other link addresses it contains, and then uses those links to fetch the next pages, repeating this cycle until every page of the site has been crawled. If the entire Internet is regarded as one website, a web spider can use the same principle to crawl every page on the Internet. In short, a web crawler is a program that crawls (downloads) web pages, and fetching pages is its basic operation. So how do you fetch exactly the pages you want? That question starts with the URL.
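To make this loop concrete, here is a minimal sketch of a breadth-first crawler written with Python's standard library. It is an illustration only: the starting address, the page limit, and the same-site check are assumptions, and a real crawler would also need politeness rules (robots.txt, delays) and more careful error handling.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href values of all <a> tags found on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Breadth-first crawl: fetch a page, collect its links, follow them."""
    seen = {start_url}
    queue = deque([start_url])
    site = urlparse(start_url).netloc             # stay on the starting website

    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue                              # skip pages that fail to load
        print("crawled:", url)

        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)         # resolve relative links to full URLs
            if urlparse(absolute).netloc == site and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

if __name__ == "__main__":
    crawl("https://www.example.com/")             # hypothetical starting page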

 

The process of crawling a web page is essentially the same as what happens when a reader browses the web with a browser such as IE. For example, you type the address www.baidu.com into the browser's address bar. Opening that page means the browser, acting as the browsing "client", sends a request to the server, "grabs" the server-side file to the local machine, and then interprets and displays it. HTML is a markup language: it marks up content with tags so that the content can be parsed and distinguished. The browser's job is to parse the HTML code it receives and turn that raw code into the page we actually see.
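The "send a request, grab the server-side file" step can be reproduced in a few lines with Python's standard library. A minimal sketch, assuming the target page is reachable, does not block script clients, and is UTF-8 encoded:

from urllib.request import urlopen

# Act as the "client": request the page and pull the raw HTML down to the local machine.
response = urlopen("http://www.baidu.com")
raw_bytes = response.read()

# The server returns plain text marked up with HTML tags; it is the browser
# that turns this markup into the rendered page we actually see.
html_text = raw_bytes.decode("utf-8", errors="replace")
print(html_text[:200])   # show the first 200 characters of the markup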

Simply put, a URL is the address string you type into the browser. Before looking at URLs in detail, you first need to understand the concept of a URI.

 What is a URI?

Every resource available on the Web, such as an HTML document, an image, a video clip, or a program, is located by a Uniform Resource Identifier (URI).

A URI usually consists of three parts:

  • the naming mechanism used to access the resource;
  • the host name of the machine that stores the resource;
  • the name of the resource itself, given by a path.

For example, a URI of this form describes a resource that can be accessed through the HTTP protocol, is located on a particular host, and is reached via the path "/html/html40".
2. Understanding and examples of URL

A URL is a subset of URI. URL is the abbreviation of Uniform Resource Locator, translated as "uniform resource locator". In layman's terms, a URL is a string that describes an information resource on the Internet, and it is mainly used by WWW client and server programs. A URL describes all kinds of information resources, including files, server addresses and directories, in a single uniform format. The general format of a URL is (the parts in square brackets [] are optional):

  protocol://hostname[:port]/path/[;parameters][?query]#fragment

The format of the URL consists of three parts:

  • The first part is the protocol (or service method).
  • The second part is the host name or IP address of the machine where the resource is stored (sometimes also including the port number).
  • The third part is the specific address of the resource on that host, such as the directory and file name.

The first and second parts are separated by the "://" symbol, and the second and third parts are separated by the "/" symbol. The first and second parts are indispensable; the third part can sometimes be omitted. As the sketch below shows, these parts can also be picked out of a URL programmatically.
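Python's standard library can split a URL into exactly these pieces. A minimal sketch using urllib.parse; the URL below is made up purely to exercise every part of the general format:

from urllib.parse import urlparse

url = "http://www.example.com:8080/channel/welcome.htm;type=a?name=spider&page=2#top"

parts = urlparse(url)
print(parts.scheme)    # 'http'                 -> the protocol (first part)
print(parts.hostname)  # 'www.example.com'      -> the host name (second part)
print(parts.port)      # 8080                   -> the optional port number
print(parts.path)      # '/channel/welcome.htm' -> the resource's address on the host (third part)
print(parts.params)    # 'type=a'               -> the optional ;parameters
print(parts.query)     # 'name=spider&page=2'   -> the optional ?query
print(parts.fragment)  # 'top'                  -> the optional #fragment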

 

3. Simple comparison of URL and URI

A URI is a lower-level abstraction than a URL: it is simply a standard for a string of text that identifies a resource. In other words, URI is the parent class and URL is a subclass of URI; every URL is also a URI. URI stands for Uniform Resource Identifier, while URL stands for Uniform Resource Locator. The difference between the two is that a URI only identifies a resource (for example, the path that defines the resource on the server), whereas a URL additionally describes how to access that resource (for example, via http://).
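A small illustration of this parent/child relationship, using two made-up identifiers: both strings below are URIs, but only the first also says how to reach the resource, so only the first is a URL.

from urllib.parse import urlparse

locator    = "http://www.example.com/docs/page.html"  # a URL: identifies the resource AND says how to access it (http)
identifier = "urn:isbn:0451450523"                    # a URI only: names a book, but gives no access method

for uri in (locator, identifier):
    parts = urlparse(uri)
    print(uri, "-> scheme:", parts.scheme, "| network location:", parts.netloc or "(none)")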

 

Let’s take a look at two small examples of URLs.

1. URL examples for the HTTP protocol:

These URLs use the Hypertext Transfer Protocol (HTTP) to provide resources for hypertext information services.

In the first example, the hypertext file (file type .html) is welcome.htm, in the directory /channel, on a computer belonging to the People's Daily of China.

In the second example, the computer's domain name is www.rol.cn.Net, and the hypertext file (file type .html) is talk1.htm, in the directory /talk. This is the address of the Ruide chat room; from here you can enter the chat room's first room.

2. File URL

When a URL is used to represent a file, the scheme is given as file, followed by the host's IP address, the access path to the file (i.e. its directory), the file name, and other such information.

Sometimes directory and file names can be omitted, but the "/" symbol cannot be omitted.
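As a quick sketch, such a file URL can be opened with the same standard-library call used for HTTP URLs. The path below is hypothetical and would need to point at a real file on your machine:

from urllib.request import urlopen

# A file URL: the scheme "file", an (often empty) host part, then the path to the file.
# With no host the URL begins with three slashes: file:///...
file_url = "file:///tmp/example.html"   # hypothetical local file

with urlopen(file_url) as response:
    print(response.read().decode("utf-8", errors="replace"))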

The main object a crawler processes is the URL: based on the URL address it obtains the required file content, and then processes it further.

Therefore, accurately understanding URLs is crucial to understanding web crawlers.
