How Python uses Requests to request web pages
Requests covers everything urllib2 offers, with a much simpler interface.
Requests supports persistent HTTP connections and connection pooling, session persistence via cookies, file uploads, automatic detection of the response content's encoding, internationalized URLs, and automatic encoding of POST data.
Installation method
Use pip to install
$ pip install requests
GET request
Basic GET request (headers and params parameters)
1. The most basic GET request can be sent directly with the get method
import requests

response = requests.get("http://www.baidu.com/")

# Equivalent form:
# response = requests.request("get", "http://www.baidu.com/")
2. Add headers and query parameters
If you want to add headers, pass the headers parameter to set header information in the request.
If you want to pass parameters in the url, you can use the params parameter.
import requests

kw = {'wd': '长城'}

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"}

# params accepts a dict or a query string; a dict is URL-encoded
# automatically, so urlencode() is not needed
response = requests.get("http://www.baidu.com/s?", params=kw, headers=headers)

# Response body as Unicode text
print(response.text)

# Response body as a raw byte stream
print(response.content)

# Final URL of the request
print(response.url)

# Character encoding of the response
print(response.encoding)

# Status code
print(response.status_code)
Running results
......
......
'http://www.baidu.com/s?wd=%E9%95%BF%E5%9F%8E'
'utf-8'
200
When you use response.text, Requests decodes the response body automatically based on the encoding declared in the HTTP response; most Unicode character sets are decoded seamlessly.
When you use response.content, you get the raw binary byte stream of the server's response, which is suitable for saving binary files such as images.
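The relationship between the two can be sketched without any network access by constructing a Response object by hand (an illustration only; in real code requests.get() returns this object for you):

```python
import requests

# Offline sketch (assumption: building a Response manually just to
# illustrate the text/content relationship; normally requests.get()
# returns a fully populated Response)
resp = requests.models.Response()
resp._content = "长城".encode("utf-8")  # raw bytes, as received from the server
resp.encoding = "utf-8"                 # encoding declared by the response

print(resp.content)  # the raw byte stream: b'\xe9\x95\xbf\xe5\x9f\x8e'
print(resp.text)     # the bytes decoded with resp.encoding: 长城
```

So response.text is simply response.content decoded with response.encoding.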
POST method
1. Basic POST request
response = requests.post("http://www.baidu.com/", data=data)
2. POST with a request body
formdata = {
    "type": "AUTO",
    "doctype": "json",
    "key": "www",
    "ue": "UTF-8",
}
url = "http://auto-installment/v1/loan-credit-check"
response = requests.post(url, data=formdata, headers=headers)

# Show the raw response text
print(response.text)

# If the response is JSON, it can be parsed directly
print(response.json())
Note:
If Chinese characters in the printed result appear as escape sequences, use json.dumps(response.json(), ensure_ascii=False) to solve the problem.
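A minimal offline illustration of that fix (the dict here stands in for a parsed response.json()):

```python
import json

# The dict stands in for response.json(); by default json.dumps
# escapes non-ASCII characters as \uXXXX sequences
payload = {"msg": "长城"}
print(json.dumps(payload))                      # {"msg": "\u957f\u57ce"}
print(json.dumps(payload, ensure_ascii=False))  # {"msg": "长城"}
```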
Session
A Session is generally used to persist certain parameters across requests, for example logging in first so that other pages can then be accessed.
import requests

# 1. Create a session object, which stores cookie values
session = requests.session()

# 2. Username and password required for login
data = {"username": "mxxxx", "password": "1233444"}

# 3. Send the request with the username and password; the cookies set
#    after login are saved in the session
session.post("https://www.jianshu.com/sign_in", data=data)

# 4. The session now carries the logged-in cookies, so pages that
#    require login can be accessed directly
response = session.get("https://www.jianshu.com/writer#/")
Notes on pitfalls
1. A request to an interface fails even though the interface itself works. This is usually because the interface accepts two kinds of request parameters, simple types (generally no more than three) and complex object types, and the request's Content-Type must match.
Solution: declare the appropriate type in the headers:
Simple type: headers={"Content-Type": "application/x-www-form-urlencoded"}
Complex object type: headers={"Content-Type": "application/json"}
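The two styles can be compared without any network access by preparing (not sending) the requests; the URL below is a placeholder. Note that requests sets these Content-Type headers automatically when you use data= versus json=:

```python
import requests

# Offline sketch: prepare two POSTs (placeholder URL, never sent) to
# compare the Content-Type header requests sets for each body style
form = requests.Request("POST", "http://example.com/api",
                        data={"type": "AUTO"}).prepare()
js = requests.Request("POST", "http://example.com/api",
                      json={"type": "AUTO"}).prepare()

print(form.headers["Content-Type"])  # application/x-www-form-urlencoded
print(js.headers["Content-Type"])    # application/json
```

So passing the dict via json= is often the simplest fix for the complex-object case.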
2. Some HTTPS requests fail SSL certificate verification
Solution: response = requests.get("https://www.baidu.com/", verify=False)
Extension
1. Add a retry mechanism for failed requests (retry up to 3 times on failure)
request_retry = requests.adapters.HTTPAdapter(max_retries=3)
session.mount('https://', request_retry)
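Since the snippet above assumes a session already exists, here is a self-contained sketch of the same idea:

```python
import requests
from requests.adapters import HTTPAdapter

# Retry failed connection attempts up to 3 times for any https:// URL
# mounted on this session
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=3))
```

Every request the session sends to an https:// URL now goes through the retrying adapter.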
2. Use grequests to implement asynchronous requests
import grequests

urls = [
    'http://www.url1.com',
    'http://www.url2.com',
    'http://www.url3.com',
    'http://www.url4.com',
    'http://www.url5.com',
]
resp = (grequests.get(u) for u in urls)
grequests.map(resp)
3. Custom cookies
We use Session instances to keep cookies across requests, but in some special cases custom cookies are needed
# Custom cookies
cookie = {'guid': '5BF0FAB4-A7CF-463E-8C17-C1576fc7a9a8', 'uuid': '3ff5f4091f35a467'}
session.post('http://', cookies=cookie)
4. Count the time spent on an API request
session.get(url).elapsed.total_seconds()
5. Set a request timeout
session.get(url, timeout=15)
6. File upload
Requests uses the files parameter to submit file data
# 'rb' opens the file in binary read-only mode
file = {'file': open('test.bmp', 'rb')}
r = requests.post('http://', files=file)
print(r.text)
The above is the detailed content of How Python uses Requests to request web pages. For more information, please follow other related articles on the PHP Chinese website!