Recording a Simple Python Crawler Example
This article walks through a simple Python crawler example. Readers who need one can take a look.
The main workflow is divided into three steps:
crawling, parsing, and storage.
1. Several packages are used:
requests, for sending requests to the site and fetching the page source
BeautifulSoup4, for parsing the fetched HTML and extracting the useful information
pandas, for storing the information
When calling to_excel('docname.xlsx'), you may also need the openpyxl package.
import requests
from bs4 import BeautifulSoup
import re
import json
import pandas
import sqlite3
# import openpyxl  # needed by pandas' to_excel()
2. As an example, we crawl the editor-in-charge (责任编辑) of each news item on the Sina news site.
The functions to define can be worked out backwards:
i) Given the URL of a single news item, how do we extract the editor-in-charge?
def getComments(url):
    # Send a request to the address in url; the fetched page is stored in res
    res = requests.get(url)
    # Set the encoding; whether 'utf-8' is right depends on the page's charset
    res.encoding = 'utf-8'
    # Load the text into soup for processing
    # ('html.parser' tells BeautifulSoup which parser to build the tree with)
    soup = BeautifulSoup(res.text, 'html.parser')
    # select() picks the needed content and returns a list, so take the
    # element with [0]; since it is text, read it directly with .text
    return soup.select('.show_author')[0].text

# In soup.select('.link')[0]: use a leading # for an id, a leading . for a
# class; plain tags such as a and h1 need no prefix.
# Nested content may need select() layer by layer, taking [0] each time.
# When several elements match, iterate over them with for, as in the
# sketch below.
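A minimal sketch of those selector rules; the URL, the class .link, and the id #artibody below are illustrative placeholders, not taken from the actual Sina markup:

import requests
from bs4 import BeautifulSoup

res = requests.get('https://example.com/news.html')  # placeholder URL
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

first_link = soup.select('.link')[0]    # class selector: leading dot
body = soup.select('#artibody')[0]      # id selector: leading hash
headline = soup.select('h1')[0].text    # plain tag name: no prefix
for p in soup.select('#artibody p'):    # several matches: loop over them
    print(p.text.strip())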
ii) How do we get the URL of each news item from the main page?
The news list is delivered as JSON by one of the page's script requests (visible in the browser's network inspector), so fetch it with comments = requests.get(url) and then:
jd = json.loads(comments.text.strip('var data='))
jd['result']['count']['total'] ==> a dictionary nested inside a dictionary; its structure can be seen in the preview pane of the browser's inspect-element tool.
==> The response text can only be converted into a dictionary once anything extra at either end has been removed with strip().
If the two ends need trimming separately, use lstrip() and rstrip(), i.e. left and right.
==> Then loop to collect each item's URL:
for ent in ~:
    ent['url']
(~ stands for the list inside the parsed JSON; the exact key path depends on the feed's structure)
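A hedged sketch of this step, under two assumptions that must be checked in the network inspector: the feed is wrapped as var data={...}, and the article list sits at result -> data:

import json
import requests

def parseListLinks(url):
    res = requests.get(url)
    # strip() treats its argument as a *set* of characters, not a literal
    # prefix; it happens to remove the 'var data=' wrapper here
    jd = json.loads(res.text.strip('var data='))
    # assumed key path: result -> data holds the list of news items
    return [ent['url'] for ent in jd['result']['data']]

The same function name is reused in the pagination loop further below.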
***) If the pieces you need sit together inside a single element of one class, the node's contents[0] can be used to pick out an individual child, as in the sketch below.
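A hedged illustration; the class name time-source and the markup are assumptions, not taken from the original page:

from bs4 import BeautifulSoup

# Assumed markup: one element whose class holds a date string followed
# by a nested <span> with the source name
html = '<span class="time-source">2017-03-05<span>Sina News</span></span>'
soup = BeautifulSoup(html, 'html.parser')
node = soup.select('.time-source')[0]
print(node.contents[0])       # first child: just the date text
print(node.contents[1].text)  # second child: the source name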
***) Converting between datetime and str
from datetime import datetime
# str ==> datetime
dt = datetime.strptime(timesource, '%Y%m%d')
# datetime ==> str
dt.strftime('%Y-%m-%d')
***) Joining the elements of a list
'-'.join(lst)  # connect the elements of lst with '-' between them
''.join([p.text.strip() for p in soup.select('#artibody p')[:-1]])
***) For multi-page URLs, find the part of the address that carries the page number and replace it with {},
then fill the number in through format():
news_total = []
for i in range(1, 3):
    newsurl = url.format(i)            # insert the page number into {}
    newsary = parseListLinks(newsurl)
    news_total.extend(newsary)         # accumulate the links of every page
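For example, the template might look like this (a made-up placeholder; the real paged feed address has to be read from the network inspector):

url = 'https://api.example.com/news/list?page={}'  # placeholder template
print(url.format(2))  # -> https://api.example.com/news/list?page=2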
3. Use pandas to store the data, via the DataFrame() constructor:
df = pandas.DataFrame(news_list)
print(df.head(20))        # show the first 20 rows
df.to_excel('news.xlsx')  # save in Excel format under the name news.xlsx
The news_list above is built like this:
i = 0
news_list = []
for u in geturl(url):
    excel1 = []   # reset the per-item array at the start of each pass
    result = {}   # reset the per-item dict at the start of each pass
    try:
        # fill the dict for this news item
        result['zeren'] = getComments(u)   # 'zeren' = editor-in-charge
        result['id'] = i
        i = i + 1
    except:
        continue
    excel1.append(result)      # wrap the dict in an array
    news_list.extend(excel1)   # append it to the overall list
4. Storing into a database:
df = pandas.DataFrame(news_list)
print(df.head(20))           # show the first 20 rows
# df.to_excel('news.xlsx')   # alternative: save in Excel format
with sqlite3.connect('news.sqlite') as db:
    # write df into the 'news' table inside the news.sqlite file
    df.to_sql('news', con=db)
    # read back / query the 'news' table and assign the data to df2
    df2 = pandas.read_sql_query('SELECT * FROM news', con=db)
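Putting the pieces together, a minimal end-to-end sketch (the paged feed URL is a placeholder, and parseListLinks and getComments are the functions sketched earlier):

import pandas
import sqlite3

url = 'https://api.example.com/news/list?page={}'  # placeholder
news_list = []
for i in range(1, 3):
    for u in parseListLinks(url.format(i)):
        try:
            # one dict per news item: a running id plus the editor-in-charge
            news_list.append({'id': len(news_list), 'zeren': getComments(u)})
        except Exception:
            continue

df = pandas.DataFrame(news_list)
with sqlite3.connect('news.sqlite') as db:
    df.to_sql('news', con=db)   # persist everything into news.sqlite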