How to use Python's Requests package to implement simulated login

不言
Release: 2018-05-02 14:22:20

This article explains in detail how to use Python's Requests package to simulate a login. It should serve as a useful reference for anyone interested.

Some time ago I enjoyed using Python to scrape pages for fun, but those scripts mostly just sent GET requests and filtered the responses with regular expressions.

Today I tried simulating a login to my personal website and found it fairly straightforward. Reading this article requires some understanding of the HTTP protocol and HTTP sessions.

Note: Since the target of the simulated login is my personal website, the code below uses my own site's URLs and account credentials (the password is masked).

Website Analysis

Analyzing the target website is the essential first step for any crawler. Here we use Google Chrome's developer tools for the analysis.

Submitting the login form with the Network panel recording captures a request like the one described below (screenshot omitted).

The upper part of the capture is the request header, and the lower part is the parameters passed with the request. As the capture shows, the page submits three form parameters: _csrf, username, and password.

The _csrf token guards against cross-site request forgery (CSRF). The principle is very simple: for every request, the server generates a random string and places it in a hidden input (or a meta tag) on the page. When the client makes the next request, it passes the string back so the server can verify that the request originates from a page it served to the same user.
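As a sketch of how such a token is recovered on the client side (the `csrf-token` meta tag name matches the site used later in this article; the token value here is made up, and other frameworks embed the token differently):

```python
import re

# A simplified login page as the server might return it; the token value is invented.
html = '<html><head><meta name="csrf-token" content="abc123XYZ"></head><body>...</body></html>'

# Pull the token out of the meta tag so it can be posted back with the login form.
match = re.search(r'<meta name="csrf-token" content="(.+?)">', html)
csrf_token = match.group(1)
print(csrf_token)  # abc123XYZ
```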

So our code logic is clear: first request the login page, then parse the page to extract the CSRF string, and finally send that string together with the username and password to the server to log in.

The first code

#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-

import requests
import re

# Header information
headers = {
    'Host': "localhost",
    'Accept-Language': "zh-CN,zh;q=0.8",
    'Accept-Encoding': "gzip, deflate",
    'Content-Type': "application/x-www-form-urlencoded",
    'Connection': "keep-alive",
    'Referer': "http://localhost/login",
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36"
}

# Login method
def login(url, csrf):
    data = {
        "_csrf": csrf,
        "username": "xiedj",
        "password": "***"
    }

    response = requests.post(url, data=data, headers=headers)
    return response.content


# First request: fetch the csrf value
def get_login_web(url):
    page = requests.get('http://localhost/login')
    reg = r'<meta name="csrf-token" content="(.+)">'
    csrf = re.findall(reg, page.content)[0]
    login_page = login(url, csrf)
    print login_page


if __name__ == "__main__":
    url = "http://localhost/login/checklogin"
    get_login_web(url)

The code looks fine, yet it fails when executed. After some checking, the cause of the error turned out to be that CSRF verification failed!

After repeatedly confirming that the token extracted from the page matched the token sent with the login request, I realized the problem.
If you don't yet see the cause of the error, pause and consider this question: "How does the server know that the first request, which fetched the CSRF token, and the second POST login request come from the same user?"

At this point it should be clear: to log in successfully, you need to make the server believe that both requests come from the same user. That requires an HTTP session (if you're unfamiliar with sessions, you can look them up; a brief introduction follows).

HTTP is a stateless protocol, and sessions were introduced to make it stateful. Put simply, the session records this state: when a user first requests a web service, the server generates a session to hold the user's information, and at the same time returns the session ID to the user in a cookie. On subsequent requests the browser sends this cookie back, so the server can tell whether multiple requests come from the same user.

So our code needs to capture this session ID on the first request and send it along with the second. The great thing about Requests is that a simple requests.Session() object handles all of this for you.
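A quick way to see what a Session carries between requests, without touching the network (the cookie name `PHPSESSID` and the `/profile` URL here are just illustrative; normally Requests stores the cookie automatically from the server's Set-Cookie header):

```python
import requests

s = requests.Session()
# Pretend the server's first response set a session cookie on the session's jar.
s.cookies.set("PHPSESSID", "abc123")

# Prepare (but do not send) a follow-up request and inspect its headers:
# the session cookie is merged in automatically.
req = requests.Request("GET", "http://localhost/profile")
prepared = s.prepare_request(req)
print(prepared.headers["Cookie"])  # PHPSESSID=abc123
```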

The second code

#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-

import requests
import re

# Header information
headers = {
    'Host': "localhost",
    'Accept-Language': "zh-CN,zh;q=0.8",
    'Accept-Encoding': "gzip, deflate",
    'Content-Type': "application/x-www-form-urlencoded",
    'Connection': "keep-alive",
    'Referer': "http://localhost/login",
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36"
}

# Login method
def login(url, csrf, r_session):
    data = {
        "_csrf": csrf,
        "username": "xiedj",
        "password": "***"
    }

    response = r_session.post(url, data=data, headers=headers)
    return response.content


# First request: fetch the csrf value
def get_login_web(url):
    r_session = requests.Session()
    page = r_session.get('http://localhost/login')
    reg = r'<meta name="csrf-token" content="(.+)">'
    csrf = re.findall(reg, page.content)[0]

    login_page = login(url, csrf, r_session)
    print login_page


if __name__ == "__main__":
    url = "http://localhost/login/checklogin"
    get_login_web(url)

The page after a successful login (screenshot omitted).

As the code shows, once requests.Session() creates the session object, the second request automatically carries the session cookie from the first.
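The listings above target Python 2.7. Under Python 3, `print` is a function and `response.content` is `bytes`, so the regex should run against `response.text` instead. A rough Python 3 adaptation of the same flow (the localhost URLs and credentials are the article's placeholders):

```python
import re
import requests

def extract_csrf(html):
    """Pull the csrf-token value out of the login page's <meta> tag."""
    return re.search(r'<meta name="csrf-token" content="(.+?)">', html).group(1)

def login(session, login_url, check_url, username, password):
    page = session.get(login_url)    # first request: server sets the session cookie
    csrf = extract_csrf(page.text)   # .text is str in Python 3; .content would be bytes
    data = {"_csrf": csrf, "username": username, "password": password}
    return session.post(check_url, data=data).text

# Usage against the article's local site would look like:
# s = requests.Session()
# print(login(s, "http://localhost/login",
#             "http://localhost/login/checklogin", "xiedj", "***"))
```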
