Record a simple Python crawler instance

零到壹度
Release: 2018-03-31 13:55:13

This article walks through a simple Python crawler example for anyone who needs a quick reference.

The main process is divided into three steps:

crawling, organizing, and storage

1. Several packages are used:

requests, for sending requests to the website and fetching the page source

BeautifulSoup4, for parsing the fetched page source and extracting the useful information

pandas, for storing the information

When calling to_excel('docname.xlsx') you may also need another package, openpyxl

import requests
from bs4 import BeautifulSoup
import re
import json
import pandas
import sqlite3
# import openpyxl  # needed by pandas when saving with to_excel()

2. As an example, let's crawl the responsible editor of each news article on the Sina news site.

The functions (def) to write can be worked out backwards from the end goal:

i) Once we have the URL of a single news article, how do we get its responsible editor?

def getComments(url):
    # send a request to the article URL and store the response in res
    res = requests.get(url)
    # set the text encoding; 'utf-8' here, but it depends on the page's charset
    res.encoding = 'utf-8'
    # parse the response text with BeautifulSoup so it can be queried
    # 'html.parser' is the HTML parser BeautifulSoup should use
    soup = BeautifulSoup(res.text, 'html.parser')
    # pick out the element that holds the editor's name with BS4's select();
    # it returns a list, so take the element with [0],
    # then .text extracts the text content
    return soup.select('.show_author')[0].text
# In soup.select('.link')[0]: prefix the selector with # for an id,
#                             with . for a class,
#                             and use no prefix for tags such as a and h1
# Sometimes you have to select layer by layer, taking [0] at each step
# If several elements match, iterate over them with a for loop
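As a quick sanity check, the function can be called on a single article URL. This is only a sketch: the URL below is a made-up placeholder, and it assumes (as above) that the editor's name sits in an element with the class show_author.

# usage sketch; the URL is a placeholder, not a real article address
article_url = 'http://news.sina.com.cn/c/nd/doc-example.shtml'
print(getComments(article_url))  # expected to print something like 责任编辑:XXX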

ii) How do we get the URL of each news article from the main (list) page?

The list of articles is delivered as JSON (you can spot the request in the browser's developer tools), so fetch it with comments = requests.get(url) and then:

jd = json.loads(comments.text.strip('var data='))

jd['result']['count']['total']  ==> this is a dictionary nested inside a dictionary; you can check the structure in the preview of the request in the browser's element/network inspector

==> so the response text can be converted into a dictionary

To restore it to a dictionary, anything extra at the left and right ends of the text has to be removed with strip()

If you need to remove text only on the left or only on the right, use lstrip() and rstrip() respectively

==> then iterate over the entries: for ent in ~:

ent['url']
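Putting the above together, a list-parsing function might look roughly like the sketch below. The key names 'result', 'data' and 'url' are assumptions based on the dictionary-in-a-dictionary structure just described; the real names have to be read off the JSON shown in the browser's inspector.

# minimal sketch of a list-parsing function; the key names are assumptions
def parseListlink(newsurl):
    links = []
    comments = requests.get(newsurl)
    comments.encoding = 'utf-8'
    # strip the 'var data=' wrapper so the rest parses as JSON
    jd = json.loads(comments.text.strip('var data='))
    for ent in jd['result']['data']:   # assumed location of the entry list
        links.append(ent['url'])
    return links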

***) If the elements you need from soup.select() share the same class, you can use contents[0] to tell the pieces apart
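For example, contents lists the direct children of a tag, so contents[0] picks out only the first child. The '.time-source' class below is just an illustration, not necessarily a class on the real page:

# hypothetical: a <span class="time-source"> holding a date string followed
# by a nested <a> tag; contents[0] keeps only the leading text node
timesource = soup.select('.time-source')[0].contents[0].strip()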

***) Conversion between datetime and str

from datetime import datetime
str ==> time        dt = datetime.strptime(timesource, '%Y%m%d')
time ==> str        dt.strftime('%Y-%m-%d')
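A small self-contained example of both conversions (the date string is just an illustration):

from datetime import datetime
timesource = '20180331'
dt = datetime.strptime(timesource, '%Y%m%d')   # str ==> datetime
print(dt.strftime('%Y-%m-%d'))                 # datetime ==> str, prints 2018-03-31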

***) Joining the elements of a list

'-'.join(list)  # join the elements of list with '-' in between
''.join([p.text.strip() for p in soup.select('#artibody p')[:-1]])

***) For content spread over multiple pages, find the part of the URL that encodes the page number and replace it with {},

then substitute the page number with format():

news_total = []
for i in range(1, 3):
    newsurl = url.format(i)        # fill the page number into the {} placeholder
    newsary = parseListlink(newsurl)
    news_total.extend(newsary)
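Here url would be a template string in which the page-number parameter has been replaced by {}. The address below is a made-up example, not the real list endpoint, which has to be copied from the browser's network panel:

# hypothetical template URL; take the real one from the network panel
url = 'http://api.example.com/news/list?channel=news&page={}'
print(url.format(2))  # http://api.example.com/news/list?channel=news&page=2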

3. Store the data with pandas, using the DataFrame() constructor

df = pandas.DataFrame(list)
print(df.head(20))        # show the first 20 rows
df.to_excel('news.xlsx')  # save in Excel format, under the name news.xlsx

The list is built up like this:

i = 0        # running id counter
list = []    # collected entries (note: this name shadows the built-in list)
for u in geturl(url):
    excel1 = []    # clear the array at the start of each loop
    result = {}    # clear the dictionary at the start of each loop
    try:
        # assign each field of this entry in the new dictionary
        result['zeren'] = getComments(u)
        result['id'] = i
        i = i + 1
    except:
        continue
    # each entry forms an array
    excel1.append(result)
    # append the array to the overall list
    list.extend(excel1)

4. Storing the data in a database

df = pandas.DataFrame(list)
print(df.head(20))            # show the first 20 rows
# df.to_excel('news.xlsx')    # save in Excel format, under the name news.xlsx
with sqlite3.connect('news.sqlite') as db:
    # write df into the news table of the news.sqlite file
    df.to_sql('news', con=db)
    # read/query the news table and assign the result to df2
    df2 = pandas.read_sql_query('SELECT * FROM news', con=db)


The above is the detailed content of this simple Python crawler example.
