
Basic use of requests library

Jun 11, 2018 10:55 PM
requests

1. The difference between response.content and response.text

response.content is the encoded byte string (Python 2's "str" type), while response.text is a unicode string. Which of the two to use depends on the situation. Note: unicode -> str is the encoding step (encode()), and str -> unicode is the decoding step (decode()). An example follows:

# -*- coding: utf-8 -*-
import requests

response = requests.get("https://baidu.com/")
print response.url
print type(response.content)  # <type 'str'> (bytes in Python 2)
with open("C:\\Users\\Administrator\\Desktop\\content.html", "w") as f:
    f.write(response.content)
    print "content saved"
print type(response.text)     # <type 'unicode'>
with open("C:\\Users\\Administrator\\Desktop\\text.html", "w") as f:
    # the encoding requests guessed from the response headers
    print response.encoding
    # re-encode the unicode text with the declared encoding before writing
    f.write(response.text.encode("ISO-8859-1"))
    print "text saved"
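To make the encode/decode relationship above concrete, here is a minimal sketch (the sample string is arbitrary) that round-trips between unicode and str in Python 2:

# -*- coding: utf-8 -*-
u = u"中国"             # a unicode object
s = u.encode("utf-8")   # unicode -> str (bytes): encoding
print type(s)           # <type 'str'>
u2 = s.decode("utf-8")  # str -> unicode: decoding
print type(u2)          # <type 'unicode'>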

2. Sending a GET request: just call requests.get. Some useful response attributes: response.text, response.content, response.url, response.encoding, response.status_code.

# -*- coding: utf-8 -*-
import requests

params = {
    "wd": "中国"
}
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.62 Safari/537.36"
}
# params is URL-encoded into the query string automatically
response = requests.get("https://baidu.com/s", params=params, headers=headers)
print response.url
with open("C:\\Users\\Administrator\\Desktop\\get.html", "w") as f:
    f.write(response.content)
    print "saved"
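The other response attributes listed above can be inspected the same way; a minimal sketch (using httpbin.org, a public echo service, as the target):

# -*- coding: utf-8 -*-
import requests

response = requests.get("http://httpbin.org/get", params={"q": "test"})
print response.status_code  # HTTP status code, e.g. 200
print response.url          # final URL with the encoded query string
print response.encoding     # encoding guessed from the response headers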

3. Sending a POST request: pass the data argument. Note that a GET request takes params instead. An example follows:

# -*- coding: utf-8 -*-
import requests

data = {
    "first": "true",
    "pn": "1",
    "wd": "python"
}
headers = {
    "Referer": "https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.62 Safari/537.36"
}
# data is sent as a form-encoded (x-www-form-urlencoded) request body
response = requests.post("https://www.lagou.com/jobs/positionAjax.json?needAddtionalResult=false", data=data, headers=headers)
print response.encoding
print type(response.content)
with open("C:\\Users\\Administrator\\Desktop\\post.html", "w") as f:
    f.write(response.content)
    print "saved"
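Besides form data, requests can also send a JSON body through the json parameter, which serializes the dict and sets the Content-Type header for you. A minimal sketch (the payload is arbitrary, and httpbin.org simply echoes the request back):

# -*- coding: utf-8 -*-
import requests

payload = {"first": "true", "pn": "1"}
# json= sends application/json instead of x-www-form-urlencoded
response = requests.post("http://httpbin.org/post", json=payload)
print response.status_code
print response.content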

4. Using a proxy: just pass the proxies parameter to the get method (note the keyword is proxies, not proxy). Example code:

# -*- coding: utf-8 -*-
import requests

# a real proxy entry normally includes a scheme and port, e.g. "http://ip:port"
proxy = {
    "http": "124.42.7.103"
}
response = requests.get("http://httpbin.org/ip", proxies=proxy)
print response.content
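A proxies dict can hold one entry per URL scheme, and it is common to pass a timeout alongside it. A sketch, assuming a hypothetical proxy at 10.0.0.1:8080:

# -*- coding: utf-8 -*-
import requests

proxies = {
    "http": "http://10.0.0.1:8080",   # hypothetical proxy; replace with a working one
    "https": "http://10.0.0.1:8080",
}
response = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=10)
print response.content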

5. Handling cookie information with requests: just use a requests.Session(). Example code:

# -*- coding: utf-8 -*-
import requests

url = "http://www.renren.com/PLogin.do"
# url = "http://www.renren.com/SysHome.do"
data = {"email": "your account", "password": "your password"}
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.62 Safari/537.36"
}
session = requests.Session()
# log in once; the session stores the cookies from the response
session.post(url, data=data, headers=headers)
# subsequent requests on the same session send those cookies automatically
response = session.get("http://www.renren.com/543484094/profile")
with open("C:\\Users\\Administrator\\Desktop\\Liwei.html", "w") as fp:
    fp.write(response.content)
    print "saved"
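The Session object's stored cookies can also be inspected directly through session.cookies. A minimal sketch against httpbin.org's cookie endpoints:

# -*- coding: utf-8 -*-
import requests

session = requests.Session()
session.get("http://httpbin.org/cookies/set/name/value")  # the server sets a cookie
print dict(session.cookies)                               # {'name': 'value'}
response = session.get("http://httpbin.org/cookies")      # the cookie is sent back automatically
print response.content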

6. Handling untrusted SSL certificates. Compared with the code above, this adds a verify=False parameter to work around an untrusted SSL certificate (this only has an effect on https URLs).

The example code is as follows:

response = session.get("http://www.renren.com/543484094/profile", verify=False)
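Note that with verify=False, urllib3 emits an InsecureRequestWarning on every request. If you accept the risk, the warning can be silenced; a sketch using the urllib3 bundled with requests (expired.badssl.com is a public test site with a deliberately expired certificate):

# -*- coding: utf-8 -*-
import requests

requests.packages.urllib3.disable_warnings()  # suppress InsecureRequestWarning
response = requests.get("https://expired.badssl.com/", verify=False)
print response.status_code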

The above covers the basic use of the requests library.

This article has explained the basic use of the requests library; for more related content, please follow the PHP Chinese website.

