It’s my first time writing a blog, so I’m a little nervous; if you don’t like it, please go easy in the comments.
If there are any shortcomings, I hope readers will point them out and I will correct them.
Before learning to write crawlers, you should know (a personal suggestion; the fearless can ignore it):
- **A little knowledge of how web pages are built; at the very least, understand what tags are...**
- **The basics of the language you will use. For example, a crawler in Java requires knowing Java, and a crawler in Python requires knowing Python...**
- **Some networking knowledge, such as TCP/IP and cookies, so you understand how a web page actually opens.**
- **The laws of your country. Know what you may and may not crawl, and don’t crawl recklessly.**
As the title states, all code in this article uses Python 3.6.x.
First, you need to install two modules (pip3 install xxxx and it will be OK):
- the requests module
- the BeautifulSoup module (pip package beautifulsoup4, imported as bs4; or the lxml module)
These two libraries are very powerful: requests is used to send requests and fetch web pages, while BeautifulSoup and lxml are used to parse the content and extract what you want. BeautifulSoup navigates the parsed tree with Python method calls (it can also take regular expressions as filters), while lxml favors XPath expressions. Because I am more accustomed to BeautifulSoup, this article mainly uses it and does not go into detail about lxml. (It is recommended to read the documentation before using either.)
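To make the difference concrete, here is a minimal side-by-side sketch, not from the original article: the same extraction done once with BeautifulSoup and once with lxml's XPath. The HTML snippet and the class name `intro` are made up purely for illustration.

```python
from bs4 import BeautifulSoup
from lxml import etree

html = '<html><body><p class="intro">Hello, crawler!</p></body></html>'

# BeautifulSoup: navigate the parsed tree with Python method calls
soup = BeautifulSoup(html, "html.parser")
print(soup.find("p", class_="intro").get_text())    # Hello, crawler!

# lxml: select nodes with an XPath expression
tree = etree.HTML(html)
print(tree.xpath('//p[@class="intro"]/text()')[0])  # Hello, crawler!
```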
The main structure of the crawler:
Manager: Manage the addresses you want to crawl.
Downloader: Download web page information.
Filter: Filter out the content you need from the downloaded web page information.
Storage: Save the downloaded things where you want to save them. (Depending on the actual situation, it is optional.)
Basically every web crawler I have come across, from Scrapy down to plain urllib, follows this structure. Once you know it, there is nothing to memorize: the advantage is that you at least know what you are writing, and when a bug appears you know where to debug. A rough sketch of the structure in code follows.
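Here is a minimal, hedged skeleton of those four parts. All the function names are my own invention; this is a sketch of the idea, not a real framework.

```python
from urllib import request
from bs4 import BeautifulSoup

def manage_urls():
    """Manager: yield the addresses you want to crawl."""
    yield 'https://baike.baidu.com/item/Python'

def download(url):
    """Downloader: fetch the raw page."""
    return request.urlopen(url)

def filter_content(html):
    """Filter: pull out only the parts you need."""
    soup = BeautifulSoup(html, "html.parser")
    return [tag.get_text() for tag in soup.find_all("p", class_='lemma-summary')]

def store(texts):
    """Storage: optional; here we just print."""
    for text in texts:
        print(text)

for url in manage_urls():
    store(filter_content(download(url)))
```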
That was a lot of preamble... the main text follows:
This article uses crawling https://baike.baidu.com/item/Python (the Baidu Baike entry for Python) as its example:
(Because taking screenshots is too troublesome... this will be the only picture in this article.)
To crawl the Python entry, you first need to know the URL you want to crawl:
```python
url = 'https://baike.baidu.com/item/Python'
```
Because you only need to crawl this one page, the manager is done.
```python
from urllib import request

html = request.urlopen(url)
```
Call urlopen() and the downloader is done. (Note: urlopen() comes from urllib.request, not from the requests library installed earlier; either can do this job.)
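As a hedged aside, not part of the article's original code: the same download step with the requests library could look like this. The User-Agent string is just an example value; some sites respond better when a browser-like one is supplied.

```python
import requests

# Fetch the page; the header is an example, not a requirement of the API
resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
resp.encoding = 'utf-8'  # assume the page is UTF-8 encoded
html = resp.text         # html can then be fed to BeautifulSoup as before
```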
```python
soup = BeautifulSoup(html, "html.parser")
baike = soup.find_all("p", class_='lemma-summary')
```
Construct a BeautifulSoup object and call its find_all() method, and the parser (the filter) is done.
One thing to note here: the return value of find_all() is a list, so the output needs to be printed in a loop.
Since this example does not need to be saved, it can be printed directly, so:
```python
for content in baike:
    print(content.get_text())
```
get_text() is used to extract the text inside a tag.
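A tiny self-contained demo of both points (the HTML snippet and class name `para` are made up for illustration):

```python
from bs4 import BeautifulSoup

snippet = '<div><p class="para">first</p><p class="para">second</p></div>'
soup = BeautifulSoup(snippet, "html.parser")

paras = soup.find_all("p", class_="para")
print(type(paras))       # a list-like bs4 ResultSet
for p in paras:
    print(p.get_text())  # prints: first, then second
```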
Putting all of the above code together:
```python
from urllib import request
from bs4 import BeautifulSoup

if __name__ == '__main__':
    # Manager: the single URL we want to crawl
    url = 'https://baike.baidu.com/item/Python'
    # Downloader: fetch the page
    html = request.urlopen(url)
    # Filter: parse the page and pull out the summary paragraphs
    soup = BeautifulSoup(html, "html.parser")
    baike = soup.find_all("p", class_='lemma-summary')
    # Output: print the text of each match
    for content in baike:
        print(content.get_text())
```
Run it, and the summary of the Baidu Baike entry is printed.
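If you do want the optional storage step from the structure above, a minimal sketch, reusing the baike variable from the full program (the filename is my own choice):

```python
# Storage: write each extracted paragraph to a text file
with open('python_baike.txt', 'w', encoding='utf-8') as f:
    for content in baike:
        f.write(content.get_text() + '\n')
```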
Similar methods can crawl novels, pictures, headlines, and so on; crawling is by no means limited to encyclopedia entries.
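For a binary file such as a picture, the only real difference is that you save the raw bytes instead of parsing text. A hedged sketch; the URL below is a placeholder, so substitute a real image address you are allowed to download:

```python
from urllib import request

img_url = 'https://example.com/some_image.png'  # placeholder URL
with request.urlopen(img_url) as resp, open('some_image.png', 'wb') as f:
    f.write(resp.read())  # write the raw bytes to disk
```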
If you can write this program yourself after closing this article, congratulations, you are getting started. Remember: never memorize the code.
Some steps are glossed over... the whole walkthrough is a bit rough... sorry ( ̄ー ̄)...