
How Python uses Requests to request web pages

Requests offers all the functionality of urllib2 with a much simpler API.

Requests supports HTTP connection persistence and connection pooling, supports using cookies to maintain sessions, supports file upload, supports automatically determining the encoding of response content, and supports internationalized URLs and automatic encoding of POST data.

Installation method

Use pip to install

$ pip install requests
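To confirm the install worked, you can import the library and print its version (the exact version shown will depend on your environment):

import requests

# A successful import plus a version number means the install worked
print(requests.__version__)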

GET request

Basic GET request (headers and params parameters)

1. The most basic GET request can be sent directly with the get method

import requests

response = requests.get("http://www.baidu.com/")

# Equivalent: use the generic request method
# response = requests.request("get", "http://www.baidu.com/")

2. Add headers and query parameters

If you want to add headers, pass the headers parameter to include header information in the request.

If you want to pass parameters in the URL, you can use the params parameter.

import requests

kw = {'wd': '长城'}

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"}

# params accepts a dict or a string of query parameters; a dict is
# URL-encoded automatically, so urlencode() is not needed
response = requests.get("http://www.baidu.com/s?", params=kw, headers=headers)

# View the response body; response.text returns Unicode data
print(response.text)

# View the response body; response.content returns raw bytes
print(response.content)

# View the full URL
print(response.url)

# View the response character encoding
print(response.encoding)

# View the status code
print(response.status_code)

Running results

......

......

'http://www.baidu.com/s?wd=%E9%95%BF%E5%9F%8E'

'utf-8'

200

With response.text, Requests automatically decodes the response body based on the text encoding of the HTTP response; most Unicode character sets are decoded seamlessly.

When response.content is used, the original binary byte stream of the server's response data is returned, which can be used to save binary files such as images.
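For example, a minimal sketch that saves an image with response.content (the image URL here is a placeholder assumption):

import requests

# Hypothetical image URL used only for illustration
response = requests.get("http://www.example.com/logo.png")

# response.content is raw bytes, so write the file in binary mode
with open("logo.png", "wb") as f:
    f.write(response.content)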

POST method

1. Basic POST request

# data is a dict of the form fields to submit
response = requests.post("http://www.baidu.com/", data=data)

2. POST with parameters in the body

formdata = {
    "type": "AUTO",
    "doctype": "json",
    "key": "www",
    "ue": "UTF-8",
}
url = "http://auto-installment/v1/loan-credit-check"
# headers as defined in the GET example above
response = requests.post(url, data=formdata, headers=headers)

print(response.text)    # Show the raw response body
print(response.json())  # If the response is JSON, parse it directly

Note:

If Chinese characters in the printed result appear garbled, use json.dumps(response.json(), ensure_ascii=False) to fix it.
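A minimal sketch of why ensure_ascii=False matters (standalone, with a made-up dict standing in for the parsed response):

import json

data = {"message": "长城"}

# Without ensure_ascii=False, non-ASCII characters are escaped
print(json.dumps(data))                      # {"message": "\u957f\u57ce"}
# With it, the Chinese characters stay readable
print(json.dumps(data, ensure_ascii=False))  # {"message": "长城"}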

Session

A Session is generally used to persist certain parameters across requests, for example logging in first so that pages requiring authentication can be accessed.

import requests

# 1. Create a session object, which stores cookie values
session = requests.session()

# 2. The username and password needed to log in
data = {"username": "mxxxx", "password": "1233444"}

# 3. Send the login request; the cookies returned after login are saved in the session
session.post("https://www.jianshu.com/sign_in", data=data)

# 4. The session now carries the post-login cookies, so it can access pages that require login
response = session.get("https://www.jianshu.com/writer#/")
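To confirm that the login cookies were captured, you can inspect the session's cookie jar (a quick check, not part of the original flow):

# Print the cookies the session is now carrying
print(session.cookies.get_dict())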

Notes on pitfalls

1. A request to an API fails even though the API itself works. This usually happens because the API accepts two kinds of request parameters: simple types (generally fewer than three) and complex object types, and the request does not declare which kind it is sending.

Solution: declare the parameter type in the request headers.

Simple type: headers={"Content-Type": "application/x-www-form-urlencoded"}

Complex object type: headers={"Content-Type": "application/json"}
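A sketch of both styles (the URL is a placeholder; note that requests can also set the JSON header for you via the json= parameter):

import requests

url = "http://example.com/api"  # placeholder endpoint

# Simple type: urlencoded form fields
requests.post(url, data={"a": 1, "b": 2},
              headers={"Content-Type": "application/x-www-form-urlencoded"})

# Complex object type: JSON body
# json= serializes the dict and sets Content-Type: application/json automatically
requests.post(url, json={"user": {"name": "mxxxx", "roles": ["admin"]}})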

2. Some HTTPS requests fail SSL certificate verification

Solution: response = requests.get("https://www.baidu.com/", verify=False)
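Note that verify=False makes urllib3 emit an InsecureRequestWarning on every request. If you accept the risk (for example against a test server), the warning can be silenced; a sketch:

import requests
import urllib3

# Suppress the InsecureRequestWarning triggered by verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

response = requests.get("https://www.baidu.com/", verify=False)
print(response.status_code)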

Extension

1. Add a retry mechanism after a request fails (retry up to 3 times)

# Retry failed connections up to 3 times via an HTTPAdapter
request_retry = requests.adapters.HTTPAdapter(max_retries=3)
session.mount('https://', request_retry)
session.mount('http://', request_retry)

2. Use grequests to implement asynchronous requests

import grequests

urls = [
    'http://www.url1.com',
    'http://www.url2.com',
    'http://www.url3.com',
    'http://www.url4.com',
    'http://www.url5.com',
]

# Build the requests lazily, then send them all concurrently
reqs = (grequests.get(u) for u in urls)
responses = grequests.map(reqs)

3. Custom cookies

A Session instance keeps cookies across requests, but in some special cases custom cookies are needed.

# Custom cookies
cookie = {'guid': '5BF0FAB4-A7CF-463E-8C17-C1576fc7a9a8', 'uuid': '3ff5f4091f35a467'}

session.post('http://', cookies=cookie)

4. Count the time spent on an API request

session.get(url).elapsed.total_seconds()
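Note that elapsed measures the time from sending the request until the response headers are parsed, not until the full body is downloaded. A standalone version of the snippet above:

import requests

session = requests.Session()
# total_seconds() converts the timedelta in .elapsed to a float
print(session.get("http://www.baidu.com/").elapsed.total_seconds())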

5. Set a request timeout

session.get(url, timeout=15)
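If the server does not answer within the limit, requests raises requests.exceptions.Timeout; a sketch of handling it:

import requests

session = requests.Session()
try:
    # timeout is in seconds; Timeout is raised if it is exceeded
    response = session.get("http://www.baidu.com/", timeout=15)
except requests.exceptions.Timeout:
    print("The request timed out")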

6. File upload

Requests uses the files parameter to submit file data.

import requests

# 'rb' opens test.bmp in binary read-only mode
file = {'file': open('test.bmp', 'rb')}
r = requests.post('http://', files=file)
print(r.text)
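A variant that closes the file handle automatically (the upload URL is a placeholder, as in the original):

import requests

# The with block guarantees the file is closed once the upload finishes
with open('test.bmp', 'rb') as f:
    r = requests.post('http://', files={'file': f})
print(r.text)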
