How to use the Python requests library for web crawling
1. What is a web crawler
Simply put, a web crawler is a program that downloads, parses, and organizes data from the Internet in an automated way.
Just as we copy and paste interesting content into our notes while browsing the web so we can read it again later, a web crawler completes this work for us automatically.
And when we run into websites that block copying and pasting, a web crawler can really show its power.
Why we need web crawlers
When we need to do data analysis, the data we want is often stored in web pages, and downloading it by hand takes far too long. That is when we need a web crawler to fetch this data for us automatically (filtering out the parts of the page we don't need, of course).
Applications of web crawlers
Accessing and collecting network data has a very wide range of applications, many of them in data science. Consider the following examples:
Taobao sellers need to dig useful positive and negative information out of massive numbers of reviews to help them further win customers over and analyze their shopping psychology. Some scholars have crawled social media such as Twitter and Weibo to build datasets and train predictive models for identifying depression and suicidal ideation, so that more people in need can get help. Of course, we also need to consider the privacy issues involved, but it's cool, isn't it?
AI engineers have crawled images that volunteers "liked" on Instagram to train deep learning models that predict whether a given image will appeal to them; phone manufacturers build these models into their photo apps and push recommendations to you. Data scientists at e-commerce platforms crawl the products users browse, then analyze and predict, so as to recommend the products users most want to learn about and buy.
Yes, web crawlers are used everywhere, from batch-downloading high-definition wallpapers to supplying the data behind artificial intelligence, deep learning, and business strategy.
This era is the era of data, and data is the "new oil"
2. Network transmission protocol HTTP
Yes, when it comes to web crawlers, one thing that cannot be avoided is HTTP. We don't need to understand every detail of the protocol definition the way a network engineer would, but as an introduction we should still have a basic picture of it.
The International Organization for Standardization (ISO) maintains the Open Systems Interconnection (OSI) reference model, which divides the structure of computer communication into seven layers:
Physical layer: including Ethernet protocol, USB protocol, Bluetooth protocol, etc.
Data link layer: including Ethernet protocol
Network layer: including IP protocol
Transport layer: including TCP, UDP protocol
Session layer: Contains protocols for opening/closing and managing sessions
Presentation layer: Contains protocols for formatting and translating data
Application layer: Contains HTTP and DNS network service protocols
Now let's take a look at what an HTTP request and response look like (this will matter later, when we define request headers). A typical request message consists of the following parts:
Request line
Multiple request headers
Empty line
Optional message body
A concrete request message:
GET https://www.baidu.com/?tn=80035161_1_dg HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-Hans-CN,zh-Hans;q=0.8,en-GB;q=0.5,en;q=0.3
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18362
Accept-Encoding: gzip, deflate, br
Host: www.baidu.com
Connection: Keep-Alive
This is a request to access Baidu. We don't need to understand every detail in it, because Python's requests package will handle the crawling for us.
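Sometimes we do want to define request headers ourselves, for example to send a browser-like User-Agent instead of the default one. A minimal sketch of how that looks with requests, inspecting the prepared request offline before anything is sent (the User-Agent string below is just an example, not a requirement):

```python
import requests

# A hypothetical browser-style header; any User-Agent string works the same way.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

session = requests.Session()
# prepare_request() builds the request that *would* be sent, without
# touching the network, so we can inspect the outgoing headers.
prepared = session.prepare_request(
    requests.Request("GET", "https://www.baidu.com/", headers=headers)
)

print(prepared.method, prepared.url)
print(prepared.headers["User-Agent"])
```

When you actually want to send the request, `requests.get(url, headers=headers)` accepts the same dictionary.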
Of course, we can also look at the response the web page returns for our request:
HTTP/1.1 200 OK          (status code 200 means our request succeeded)
Bdpagetype: 2
Cache-Control: private
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html;charset=utf-8
Date: Sun, 09 Aug 2020 02:57:00 GMT
Expires: Sun, 09 Aug 2020 02:56:59 GMT
X-Ua-Compatible: IE=Edge,chrome=1
Transfer-Encoding: chunked
3. The requests library (students who don't like theory can jump straight here)
Python also ships with other libraries for handling HTTP, urllib and urllib3, but the requests library is easier to learn: the code is simpler and easier to read. Later, once we have successfully crawled a page and want to extract the parts we care about, we will introduce another very useful library, Beautiful Soup. More on that later.
1. Installing the requests library
We can download the .whl file for requests and install it directly, or simply install it with pip (and if you use PyCharm, you can install it from within the IDE's environment settings).
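Whichever route you take, you can quickly confirm that the installation worked. A small sketch that checks whether requests is importable in the current environment:

```python
import importlib.util

# find_spec() looks the package up without importing it, so this never crashes
# even when the package is absent.
if importlib.util.find_spec("requests") is None:
    print("requests is missing - install it with: pip install requests")
else:
    import requests
    print("requests is installed, version", requests.__version__)
```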
2. Hands-on practice
Now let's formally crawl a web page.
The code is as follows:
import requests
target = 'https://www.baidu.com/'
get_url = requests.get(url=target)
print(get_url.status_code)
print(get_url.text)
Output:
200    (status code 200 means the request succeeded)
<!DOCTYPE html>    (much of the output is omitted here; the real page contains far more)
<!--STATUS OK--><html>
<head><meta http-equiv=content-type content=text/html; charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=Edge>
<meta content=always name=referrer>
<link rel=stylesheet type=text/css src=//www.baidu.com/img/gs.gif>
</p> </div> </div> </div> </body> </html>
These five lines of code do a lot: we have already crawled all of the page's HTML content.
Line 1: load the requests library. Line 2: give the address of the website we want to crawl. Line 3: send the request; the general pattern for requests.get is as follows:
response_object = requests.get(url=address_of_the_site_you_want_to_crawl)
Line 4: print the status code of the request. Line 5: print the body of the response.
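One pitfall worth knowing before printing `.text` on Chinese pages: when a server's Content-Type header carries no charset, requests falls back to guessing the encoding (often ISO-8859-1), and the body prints as mojibake. The fix is to set the response's encoding yourself, or use `apparent_encoding`. A minimal offline sketch; it fills a Response object by hand via the private `_content` attribute purely for illustration:

```python
import requests

# Build a fake response so the example needs no network access.
resp = requests.models.Response()
resp.status_code = 200
resp._content = "百度一下".encode("utf-8")  # pretend this is the raw body we downloaded

resp.encoding = "ISO-8859-1"  # the guess requests may fall back to
print(resp.text)              # garbled output

resp.encoding = "utf-8"       # tell requests the real charset
print(resp.text)              # 百度一下
```

With a real response you would write `get_url.encoding = get_url.apparent_encoding` (or set it explicitly) before reading `get_url.text`.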
Of course, we can also print more content:
import requests
target = 'https://www.baidu.com/'
get_url = requests.get(url=target)
# print(get_url.status_code)
# print(get_url.text)
print(get_url.reason)            # the reason phrase of the status
print(get_url.headers)           # the server's headers from the HTTP response (similar to what we showed above)
print(get_url.request)           # the request object itself
print(get_url.request.headers)   # the headers sent with our request
OK
{'Cache-Control': 'private, no-cache, no-store, proxy-revalidate, no-transform', 'Connection': 'keep-alive', 'Content-Encoding': 'gzip', 'Content-Type': 'text/html', 'Date': 'Sun, 09 Aug 2020 04:14:22 GMT', 'Last-Modified': 'Mon, 23 Jan 2017 13:23:55 GMT', 'Pragma': 'no-cache', 'Server': 'bfe/1.0.8.18', 'Set-Cookie': 'BDORZ=27315; max-age=86400; domain=.baidu.com; path=/', 'Transfer-Encoding': 'chunked'}
<PreparedRequest [GET]>
{'User-Agent': 'python-requests/2.22.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
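To tie the pieces together, here is a sketch of a small fetch helper with a timeout and a status check, the pattern you would use on a real site. So that it runs without touching the real network, it starts a throwaway local HTTP server; the handler, port choice, and page content are invented for the example:

```python
import http.server
import threading

import requests


class DemoHandler(http.server.BaseHTTPRequestHandler):
    """Serves one tiny HTML page, standing in for a real website."""

    def do_GET(self):
        body = b"<html><body>hello crawler</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output clean
        pass


def fetch(url):
    """Download a page, failing loudly on timeouts or bad status codes."""
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()  # raises for 4xx/5xx instead of silently continuing
    return resp.text


server = http.server.HTTPServer(("127.0.0.1", 0), DemoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
print(fetch(url))
server.shutdown()
```

The timeout and `raise_for_status()` call are good habits for real crawls: without them, a slow or failing site can hang your script or feed it an error page as if it were data.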