How does Python crawl articles from prose.com?

零下一度
Release: 2017-07-03 09:39:31
Original


Set up Python 2.7 with the bs4 and requests libraries, installing both via pip:

sudo pip install bs4
sudo pip install requests

A brief note on using bs4: since we are crawling web pages, I will only introduce find and find_all.

The difference between find and find_all is in what they return: find returns the first matching tag together with its contents, while find_all returns a list of all matches.

For example, we can write a test.html to test the difference between find and find_all. Its content is:

<html>
<head>
</head>
<body>
<div id="one"><a></a></div>
<div id="two"><a href="#">abc</a></div>
<div id="three"><a href="#">three a</a><a href="#">three a</a><a href="#">three a</a></div>
<div id="four"><a href="#">four<p>four p</p><p>four p</p><p>four p</p> a</a></div>
</body>
</html>

Then the code of test.py is:

from bs4 import BeautifulSoup  # lxml only needs to be installed, not imported

if __name__ == '__main__':
    s = BeautifulSoup(open('test.html'), 'lxml')
    print s.prettify()
    print "------------------------------"
    print s.find('div')
    print s.find_all('div')
    print "------------------------------"
    print s.find('div', id='one')
    print s.find_all('div', id='one')
    print "------------------------------"
    print s.find('div', id='two')
    print s.find_all('div', id='two')
    print "------------------------------"
    print s.find('div', id='three')
    print s.find_all('div', id='three')
    print "------------------------------"
    print s.find('div', id='four')
    print s.find_all('div', id='four')
    print "------------------------------"


After running it, we can see the result: when fetching a single specified tag there is little difference between the two, but when fetching a group of tags the difference shows.
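To make the distinction concrete, here is a minimal self-contained sketch. It uses the built-in html.parser so lxml is not required, and the inline HTML fragment is invented for illustration:

```python
from bs4 import BeautifulSoup

# a tiny inline fragment standing in for test.html
html = '<div id="one"><a></a></div><div id="two"><a href="#">abc</a></div>'
s = BeautifulSoup(html, 'html.parser')

first = s.find('div')      # a single Tag: the first matching <div>
every = s.find_all('div')  # a list-like ResultSet of all matching <div>s

print(first['id'])               # one
print(len(every))                # 2
print([d['id'] for d in every])  # ['one', 'two']
```

Indexing into the result of find (`first['id']`) works because it is one tag; doing the same on find_all's return value would fail, which is exactly the error the article warns about below.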



So we must pay attention to which one we need when using them, otherwise an error will occur.

The next step is to obtain the page content through requests. I don't quite understand why others write headers and the other extras; I simply accessed the page directly, fetched the second-level pages of several categories on prose.com with the get method, and then looped over the group to crawl all the pages.

def get_html():
    url = ''  # base URL left blank in the original post (the site is www.sanwen.net)
    two_html = ['sanwen', 'shige', 'zawen', 'suibi', 'rizhi', 'novel']
    for doc in two_html:
        i = 1
        if doc == 'sanwen':
            print "running sanwen -----------------------------"
        if doc == 'shige':
            print "running shige ------------------------------"
        if doc == 'zawen':
            print 'running zawen -------------------------------'
        if doc == 'suibi':
            print 'running suibi -------------------------------'
        if doc == 'rizhi':
            print 'running rizhi -------------------------------'
        if doc == 'novel':
            print 'running xiaoxiaoshuo -------------------------'
        while i < 10:
            par = {'p': i}
            res = requests.get(url + doc + '/', params=par)
            if res.status_code == 200:
                soup(res.text)
            i += 1
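The loop above can be exercised without any network access. This hypothetical snippet only shows how the category and page-number combinations expand into request URLs; the base URL and the ?p= query form are assumptions, not taken from the post:

```python
# offline sketch: enumerate the (category, page) request URLs the crawler
# would visit; base URL and query form are assumptions for illustration
categories = ['sanwen', 'shige', 'zawen', 'suibi', 'rizhi', 'novel']
base = 'https://www.sanwen.net/'

urls = []
for doc in categories:
    for p in range(1, 10):  # pages 1..9, mirroring while(i < 10)
        urls.append('%s%s/?p=%d' % (base, doc, p))

print(len(urls))  # 54 URLs: 6 categories x 9 pages
print(urls[0])    # https://www.sanwen.net/sanwen/?p=1
```

Building the URL list up front like this also makes it easy to log which pages failed, instead of silently skipping them as the loop above does.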


In this part of the code I did not handle the case where res.status_code is not 200. The resulting problem is that errors are not shown and the crawled content is silently lost. I then analyzed the pages of sanwen.net and found the pattern www.sanwen.net/rizhi/&p=1. The maximum value of p is 10, which I don't understand; the last time I crawled the site it had 100 pages. I'll analyze that later. Then I get the content of each page through the get method.

After getting each page's content, the next step is to parse out the author and title. The code looks like this:

# requires `import re` and `from bs4 import BeautifulSoup` at the top of the script
def soup(html_text):
    s = BeautifulSoup(html_text, 'lxml')
    link = s.find('div', class_='categorylist').find_all('li')
    for i in link:
        if i != s.find('li', class_='page'):
            title = i.find_all('a')[1]
            author = i.find_all('a')[2].text
            url = title.attrs['href']
            sign = re.compile(r'(//)|/')
            match = sign.search(title.text)
            file_name = title.text
            if match:
                file_name = sign.sub('a', str(title.text))


There is one cheat when getting the title: may I ask the bosses, when you write prose, why do you put slashes in the title? Not just one, but sometimes two. This directly caused errors in the file names when I wrote the files later, so I wrote a regular expression and replaced the slashes for you.

The last step is to get the prose content itself. By analyzing each listing page we can get each article's address, and then fetch the content directly. (I originally wanted to get the articles one by one by incrementing the page address, to save trouble.)
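The slash-replacement idea can be isolated into a small, testable helper. This is a sketch of the post's regex, substituting the letter 'a' for slashes exactly as the code above does; the function name is my own, not part of the original script:

```python
import re

def safe_filename(title):
    # replace '/' or '//' in a title so it becomes a legal file name;
    # the post substitutes the letter 'a' for each run of slashes
    sign = re.compile(r'(//)|/')
    if sign.search(title):
        return sign.sub('a', title)
    return title

print(safe_filename('spring/rain'))   # springarain
print(safe_filename('a//b'))          # aab
print(safe_filename('plain title'))   # plain title
```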

def get_content(url):
    res = requests.get('' + url)  # base URL left blank in the original post
    if res.status_code == 200:
        soup = BeautifulSoup(res.text, 'lxml')
        contents = soup.find('div', class_='content').find_all('p')
        content = ''
        for i in contents:
            content += i.text + '\n'
        return content
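The extraction step can be exercised offline with an inline HTML fragment in place of a live sanwen.net page; the fragment below is invented for illustration, only its structure matters:

```python
from bs4 import BeautifulSoup

# stand-in for a fetched article page
html = '<div class="content"><p>first line</p><p>second line</p></div>'
soup = BeautifulSoup(html, 'html.parser')

# same logic as get_content: join the text of every <p> in the content div
content = ''
for p in soup.find('div', class_='content').find_all('p'):
    content += p.text + '\n'

print(content)
# first line
# second line
```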


The last thing is to write the file and save it, and we're done.

f = open(file_name + '.txt', 'w')
print 'running w txt ' + file_name + '.txt'
f.write(title.text + '\n')
f.write(author + '\n')
content = get_content(url)
f.write(content)
f.close()
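A slightly safer variant of the write-out step uses a context manager, so the file is closed even if an error occurs mid-write, and an explicit UTF-8 encoding for the Chinese text. The function name and its arguments are stand-ins for the script's variables, not part of the original code:

```python
import io

def save_article(file_name, title, author, content):
    # write title, author, then body; io.open accepts an encoding
    # parameter on both Python 2 and Python 3
    with io.open(file_name + '.txt', 'w', encoding='utf-8') as f:
        f.write(title + '\n')
        f.write(author + '\n')
        f.write(content)

save_article('demo', u'a title', u'an author', u'body text\n')
```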

With these three functions fetching essays from prose.com, there is one remaining problem: I don't know why some essays are lost. I can only get about 400 articles, far fewer than the site actually holds, even though they do come in page by page. I hope someone can help me with this issue. Perhaps some pages were temporarily inaccessible; of course, I also suspect the flaky network in my dormitory has something to do with it.


I almost forgot the screenshot of the results.

Although the code is messy, I never stop

