How to use Python to implement job analysis reports


1. The goal of this article

Get the Ajax request and parse the required fields in JSON

Save the data to Excel

Save the data to MySQL for easy analysis

2. Analysis results

[Chart: average salary levels of Python positions in the five cities]

1. Introduction of the libraries
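
The import list for this subsection did not survive extraction. Below is a minimal sketch of the libraries the rest of the article relies on: requests, random, and time appear in the original code; openpyxl supplies the Workbook used for the Excel export; pymysql is an assumption standing in for whatever MySQL client backs get_conn and insert.

import random
import time

import requests
from openpyxl import Workbook   # Excel export
import pymysql                  # assumed: the MySQL client behind get_conn/insert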

2. Page structure

Take Python as the query keyword, leave the other conditions at their defaults, and click Query to list all Python positions. Then open the console, switch to the Network tab, and you will see the following request:

[Screenshot: the Network tab showing the Ajax request and its JSON response]

Judging from the response, this request is exactly what we need, so later we can simply request this address directly. As the screenshot shows, each entry in the result field holds the information for one position.

Now we know where to request the data and where the results live. But the result list on the first page only holds 15 records, so how do we get the data on the other pages?

3. Request parameters

We click on the parameters tab and see the following:

Three form fields are submitted. Obviously, kd is the keyword we searched for and pn is the current page number; first merely flags whether this is the first request, so we can leave it at its default. All that is left is to construct requests that download all 30 pages of data.

4. Constructing requests and parsing data

Constructing the request is simple; we use the requests library as usual. First, we build the form data:

data = {'first': 'true', 'pn': page, 'kd': lang_name}

then use requests to POST to the URL and parse the JSON response. Because Lagou restricts crawlers aggressively, we need to send every header field the browser sends and lengthen the interval between requests; I set it to a random 10-20 seconds, after which the data could be fetched normally.

import requests

def get_json(url, page, lang_name):
    # Replicate the browser's header fields; Lagou rejects bare requests.
    # Content-Length is omitted because requests computes it automatically.
    headers = {
        'Host': 'www.lagou.com',
        'Connection': 'keep-alive',
        'Origin': 'https://www.lagou.com',
        'X-Anit-Forge-Code': '0',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) Gecko/20100101 Firefox/61.0',
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'X-Requested-With': 'XMLHttpRequest',
        'X-Anit-Forge-Token': 'None',
        'Referer': 'https://www.lagou.com/jobs/list_python?city=%E5%85%A8%E5%9B%BD&cl=false&fromSearch=true&labelWords=&suginput=',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7'
    }
    # 'first' is 'true' only on the first page, matching what the browser sends.
    data = {'first': 'true' if page == 1 else 'false', 'pn': page, 'kd': lang_name}
    resp_json = requests.post(url, data=data, headers=headers).json()
    list_con = resp_json['content']['positionResult']['result']
    info_list = []
    for i in list_con:
        # Keep the seven fields we need, falling back to '无' (none) when absent.
        info = []
        info.append(i.get('companyShortName', '无'))
        info.append(i.get('companyFullName', '无'))
        info.append(i.get('industryField', '无'))
        info.append(i.get('companySize', '无'))
        info.append(i.get('salary', '无'))
        info.append(i.get('city', '无'))
        info.append(i.get('education', '无'))
        info_list.append(info)
    return info_list
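As a quick sanity check, get_json can be run on its own for a single page. This one-off call is only an illustration, not part of the original article; the URL follows the same pattern used in main() below:

url = 'https://www.lagou.com/jobs/positionAjax.json?city=北京&needAddtionalResult=false'
# Fetch page 1 of Python positions in Beijing and inspect the first record.
rows = get_json(url, 1, 'python')
print(rows[0])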

5. Get all data

Now that we know how to parse the data, all that remains is to request every page in turn. We construct a main function that walks through all 30 pages for each of the five cities, writing each row to Excel and to MySQL.
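
main() below calls get_conn and insert, the MySQL helpers whose definitions are not included in this excerpt. Here is a minimal sketch, assuming pymysql and a local database with a table matching the seven collected fields; the connection parameters and the table and column names are all illustrative:

import pymysql

def get_conn():
    # Assumed connection parameters; adjust to your own environment.
    return pymysql.connect(host='localhost', user='root', password='root',
                           db='jobs', charset='utf8mb4')

def insert(conn, info):
    # info is the 7-tuple produced by get_json for one position.
    with conn.cursor() as cursor:
        sql = ('INSERT INTO python (short_name, full_name, industry_field, '
               'company_size, salary, city, education) '
               'VALUES (%s, %s, %s, %s, %s, %s, %s)')
        cursor.execute(sql, info)
    conn.commit()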

import random
import time

from openpyxl import Workbook

def main():
    lang_name = 'python'
    wb = Workbook()      # workbook for the Excel export
    ws1 = wb.active
    ws1.title = lang_name
    conn = get_conn()    # MySQL connection (see the helper sketch above)
    for city in ['北京', '上海', '广州', '深圳', '杭州']:
        url = 'https://www.lagou.com/jobs/positionAjax.json?city={}&needAddtionalResult=false'.format(city)
        page = 1
        while page < 31:
            info = get_json(url, page, lang_name)
            page += 1
            # Sleep a random 10-20 s between requests to stay under the anti-crawler limits.
            time.sleep(random.randint(10, 20))
            for row in info:
                insert(conn, tuple(row))   # save to MySQL
                ws1.append(row)            # save to Excel
    conn.close()
    wb.save('{}职位信息.xlsx'.format(lang_name))

if __name__ == '__main__':
    main()
Copy after login

The above is the detailed content of how to use Python to implement job analysis reports.
