Many developers are no longer satisfied with searching for content by hand and would rather write crawler software to fetch what they need automatically. So how do you write a crawler in Python? This article explains the basic approach.
Steps for writing a Python crawler
First, determine what content on the target page you want to crawl, as shown in the figure below, for example the temperature value.
Then open the browser's developer tools (F12) and find the distinguishing features of the content you want, such as its style classes or id attribute.
Next, open the cmd command-line interface, start Python, and import the requests library and lxml's html module, as shown in the figure below. Note that lxml is a third-party package and must be downloaded and installed yourself (for example with pip).
The next step is to fetch the page content with the requests library and then parse it into an HTML element tree with the html module under lxml, as shown in the figure below.
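As a minimal sketch of this step (the URL parameter and the sample HTML string are hypothetical stand-ins, since the original figures are not reproduced here), fetching and parsing might look like this:

```python
import requests
from lxml import html  # third-party: pip install requests lxml

def fetch_tree(url):
    """Download a page and parse it into an lxml element tree."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return html.fromstring(resp.content)

# Parsing also works on HTML you already have as a string,
# which is handy for testing without network access:
tree = html.fromstring('<div class="temp">23</div>')
```

`html.fromstring` returns an element tree that supports XPath queries, which the next step uses to pick out specific elements.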
Next, use XPath syntax to search for the specific element; the class or id name found earlier in the developer tools is generally what you match on, as shown in the figure below.
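For example, assuming the inspected page contains a `<div class="temp">` and a `<div id="city">` (a made-up structure, since the figure is not shown), selecting by class and by id looks like this:

```python
from lxml import html

# A stand-in for the parsed page; in a real crawler this tree
# would come from the requests/lxml fetch in the previous step.
tree = html.fromstring(
    '<html><body>'
    '<div id="city">Beijing</div>'
    '<div class="temp">23</div>'
    '</body></html>'
)

# Select the text of an element by its class attribute:
temp = tree.xpath('//div[@class="temp"]/text()')
# Select the text of an element by its id attribute:
city = tree.xpath('//div[@id="city"]/text()')
```

`xpath()` always returns a list of matches, so even a single result arrives as a one-element list.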
Finally, run the program to get the required content, as shown in the figure below.
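Putting the steps together, a complete minimal crawler might look like the sketch below. The `get_text` helper and the `class="temp"` selector are placeholders for illustration; substitute the real page URL and the attributes you found in the developer tools. The extraction logic is also demonstrated on an inline HTML sample so it can run without network access:

```python
import requests
from lxml import html

def get_text(url, xpath_expr):
    """Fetch a page and return the text nodes matched by xpath_expr."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    tree = html.fromstring(resp.content)
    return tree.xpath(xpath_expr)

# The same extraction logic, demonstrated on an inline HTML sample:
sample = '<html><body><div class="temp">23</div></body></html>'
result = html.fromstring(sample).xpath('//div[@class="temp"]/text()')
```

In a real run you would call `get_text` with the actual weather-page URL and the XPath expression built from the class or id you identified earlier.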
To sum up, a Python crawler mainly uses requests to obtain page content and then searches that content for specific elements. This is only the simplest workflow, but even complex crawlers follow these same basic steps.
Related recommendations: "Python Tutorial"
The above is the detailed content of How to write python crawler. For more information, please follow other related articles on the PHP Chinese website!