Python crawling article example tutorial

巴扎黑
Release: 2017-08-07 17:37:45
Original

This article introduces how to use Python to crawl articles from the prose site sanwen.net. The walkthrough is detailed and should be a useful reference for anyone learning web scraping. Let's take a look at the details below.

The finished result is shown in a screenshot in the original post (not reproduced here).
Configure the environment:

 python 2.7

 bs4

 requests

Install both libraries with pip:

sudo pip install bs4

sudo pip install requests

A brief explanation of bs4: since we are crawling web pages, I will introduce find and find_all.

The difference between find and find_all is what they return: find returns the first matching tag together with its contents, while find_all returns a list of every match.
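Before the file-based test below, here is a minimal inline sketch of that difference (my own example, using the stdlib html.parser so no extra dependency is needed):

```python
from bs4 import BeautifulSoup

html = '<p id="one"><a>first</a></p><p id="two"><a>second</a></p>'
s = BeautifulSoup(html, 'html.parser')

# find returns the first matching Tag (or None when nothing matches)
print(s.find('p')['id'])     # one
# find_all always returns a list of every match
print(len(s.find_all('p')))  # 2
print(s.find('span'))        # None
```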

For example, we can write a test.html to test the difference between find and find_all. Its content is:


<html>
<head>
</head>
<body>
<p id="one"><a></a></p>
<p id="two"><a href="#" rel="external nofollow">abc</a></p>
<p id="three"><a href="#" rel="external nofollow">three a</a><a href="#" rel="external nofollow">three a</a><a href="#" rel="external nofollow">three a</a></p>
<p id="four"><a href="#" rel="external nofollow">four<p>four p</p><p>four p</p><p>four p</p> a</a></p>
</body>
</html>

Then the code of test.py is:


from bs4 import BeautifulSoup

if __name__ == '__main__':
    s = BeautifulSoup(open('test.html'), 'lxml')
    print s.prettify()
    print "------------------------------"
    print s.find('p')
    print s.find_all('p')
    print "------------------------------"
    print s.find('p', id='one')
    print s.find_all('p', id='one')
    print "------------------------------"
    print s.find('p', id='two')
    print s.find_all('p', id='two')
    print "------------------------------"
    print s.find('p', id='three')
    print s.find_all('p', id='three')
    print "------------------------------"
    print s.find('p', id='four')
    print s.find_all('p', id='four')
    print "------------------------------"

After running it, we can see the result: when fetching a single specified tag there is little difference between the two, but when fetching a group of tags the difference shows up.

So when using them, pay attention to which one you actually need, otherwise you will get an error.

The next step is to fetch page content with requests. I don't quite understand why others add headers and other options; I simply request each page directly with the get method, fetch the several category sub-pages of the prose site, and then crawl all the pages in a batch.
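As an aside on those headers: the ones other tutorials add usually spoof a browser User-Agent, because some sites block the default "python-requests" one. A hedged sketch (my own example, building the request without sending it to show what actually goes out):

```python
import requests

# Spoof a browser User-Agent; prepare() builds the request locally
# so we can inspect the headers without touching the network.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
prepared = requests.Request('GET', 'https://www.sanwen.net/sanwen/',
                            headers=headers).prepare()
print(prepared.headers['User-Agent'])  # Mozilla/5.0 (Windows NT 10.0; Win64; x64)
```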


def get_html():
    # requests must be imported at the top of the script
    url = "https://www.sanwen.net/"
    two_html = ['sanwen', 'shige', 'zawen', 'suibi', 'rizhi', 'novel']
    for doc in two_html:
        i = 1
        if doc == 'sanwen':
            print "running sanwen -----------------------------"
        if doc == 'shige':
            print "running shige ------------------------------"
        if doc == 'zawen':
            print 'running zawen -------------------------------'
        if doc == 'suibi':
            print 'running suibi -------------------------------'
        if doc == 'rizhi':
            print 'running rizhi -------------------------------'
        if doc == 'novel':
            print 'running xiaoxiaoshuo -------------------------'
        while i < 10:
            par = {'p': i}
            res = requests.get(url + doc + '/', params=par)
            if res.status_code == 200:
                soup(res.text)
            i += 1  # the original had i += i, which only visited pages 1, 2, 4, 8

In this part of the code I did not handle responses whose res.status_code is not 200. The resulting problem is that errors are never surfaced and some of the crawled content is silently lost. I then analyzed the prose site's pages and found the URLs look like www.sanwen.net/rizhi/?p=1, where the maximum value of p is 10. I don't understand why; last time I crawled the whole site it was 100 pages, so I'll leave that to analyze later. Then the content of each page is fetched through the get method.


After getting the content of each page, we parse out the author and title. The code looks like this:


def soup(html_text):
    # BeautifulSoup and re must be imported at the top of the script
    s = BeautifulSoup(html_text, 'lxml')
    link = s.find('p', class_='categorylist').find_all('li')
    for i in link:
        if i != s.find('li', class_='page'):
            title = i.find_all('a')[1]
            author = i.find_all('a')[2].text
            url = title.attrs['href']
            sign = re.compile(r'(//)|/')
            match = sign.search(title.text)
            file_name = title.text
            if match:
                file_name = sign.sub('a', str(title.text))

Something odd came up when getting the title. Guys, why do you put slashes in the title when writing prose? Not just one, sometimes even two. That problem directly caused errors in the file names when I wrote the files later, so I wrote a regular expression to substitute them out for you.
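To see what that substitution does, here is the same pattern applied to two made-up titles (my own examples): '//' or '/' would otherwise be read as a directory separator in the file name.

```python
import re

# Same pattern as in soup(): replace '//' (preferred) or '/' with 'a'
sign = re.compile(r'(//)|/')
print(sign.sub('a', 'spring//rain'))   # springarain
print(sign.sub('a', 'wind/and/rain'))  # windaandarain
```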


The last step is to get the prose content itself. By analyzing each listing page we get each article's address, and then fetch its content directly. I had originally wanted to get them one by one by constructing the web page addresses, which would have saved some trouble.


def get_content(url):
    res = requests.get('https://www.sanwen.net' + url)
    if res.status_code == 200:
        soup = BeautifulSoup(res.text, 'lxml')
        contents = soup.find('p', class_='content').find_all('p')
        content = ''
        for i in contents:
            content += i.text + '\n'
        return content

The last thing is to write the file and save it:


            f = open(file_name + '.txt', 'w')
            print 'running w txt' + file_name + '.txt'
            f.write(title.text + '\n')
            f.write(author + '\n')
            content = get_content(url)
            f.write(content)
            f.close()
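Since the prose is Chinese, the plain open(..., 'w') above can raise UnicodeEncodeError on some systems. A hedged sketch of an encoding-safe variant (save_article is my own hypothetical helper, not the author's code; io.open behaves the same on Python 2.7 and 3):

```python
import io

def save_article(file_name, title, author, content):
    # Explicit UTF-8 avoids UnicodeEncodeError when writing Chinese text
    with io.open(file_name + '.txt', 'w', encoding='utf-8') as f:
        f.write(title + u'\n')
        f.write(author + u'\n')
        f.write(content)

save_article('demo', u'Spring Rain', u'Anonymous', u'...article text...')
```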

These three functions fetch the prose from the prose site, but there is a problem: I don't know why some prose gets lost. I can only get about 400 articles, which is far fewer than the site actually has, even though they really are fetched page by page. I hope to solve this problem; please help me if you can. Maybe some pages are simply inaccessible. Of course, I think it also has something to do with the broken network in my dormitory.


I almost forgot the rendering: the result is shown in a screenshot in the original post (not reproduced here).

There may be timeouts; I can only say that you must choose a good Internet connection when going to college!

The above is the detailed content of Python crawling article example tutorial. For more information, please follow other related articles on the PHP Chinese website!

source:php.cn