
Using Python to crawl articles from a prose website



Configure Python 2.7. The script depends on two packages:

    bs4
    requests

Install them with pip:

sudo pip install bs4
sudo pip install requests

A brief note on bs4: since this is about scraping web pages, the two methods to introduce are find and find_all.

The difference between find and find_all is in what they return: find returns the first matching tag together with the content inside it.

find_all returns a list of all matching tags.

For example, we write a test.html to test the difference between find and find_all. The content is:

<html>
<head>
</head>
<body>
<div id="one"><a></a></div>
<div id="two"><a href="#">abc</a></div>
<div id="three"><a href="#">three a</a><a href="#">three a</a><a href="#">three a</a></div>
<div id="four"><a href="#">four<p>four p</p><p>four p</p><p>four p</p> a</a></div>
</body>
</html>


Then the code of test.py is:

from bs4 import BeautifulSoup   # the 'lxml' parser below also requires the lxml package

if __name__=='__main__':
  s = BeautifulSoup(open('test.html'),'lxml')
  print s.prettify()
  print "------------------------------"
  print s.find('div')
  print s.find_all('div')
  print "------------------------------"
  print s.find('div',id='one')
  print s.find_all('div',id='one')
  print "------------------------------"
  print s.find('div',id="two")
  print s.find_all('div',id="two")
  print "------------------------------"
  print s.find('div',id="three")
  print s.find_all('div',id="three")
  print "------------------------------"
  print s.find('div',id="four")
  print s.find_all('div',id="four")
  print "------------------------------"


After running it we can see the result: when fetching a single specified tag there is not much difference between the two, but when fetching a group of tags the difference shows up.
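Since the screenshot of the output is gone, here is roughly what the contrast looks like for the id="one" case, reconstructed by hand as an interactive session (not captured output):

>>> s.find('div', id='one')        # a single Tag object
<div id="one"><a></a></div>
>>> s.find_all('div', id='one')    # a list containing that one Tag
[<div id="one"><a></a></div>]
>>> print s.find('span')           # no match: find returns None
None
>>> s.find_all('span')             # no match: find_all returns an empty list
[]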



So pay attention to which of the two you actually want when using them, otherwise you will get errors.
The next step is to fetch the page contents with requests. I don't quite understand why other people bother writing headers and such things; I just access the pages directly, fetch the site's several category listing pages with GET, and then crawl them all in a batch.
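For reference, the headers other people add are usually just a User-Agent, since some sites refuse obviously scripted requests. A minimal sketch (the URL and UA string here are illustrative assumptions, not from the original post):

import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # assumed generic browser UA
res = requests.get('https://www.sanwen.net/sanwen/', headers=headers)  # assumed URL
print res.status_code

With that aside, the fetching code: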

import re
import requests
from bs4 import BeautifulSoup

def get_html():
  url = ""  # base URL of the prose site, left blank here; the text below gives www.sanwen.net
  two_html = ['sanwen', 'shige', 'zawen', 'suibi', 'rizhi', 'novel']
  for doc in two_html:
    print "running " + doc + " -----------------------------"
    i = 1
    while i < 10:
      par = {'p': i}
      res = requests.get(url + doc + '/', params=par)
      if res.status_code == 200:
        soup(res.text)  # parse the listing page (defined below)
      i += 1


In this part of the code I did not handle responses where res.status_code is not 200. The consequence is that errors are never displayed and the content of those pages is silently lost. Analyzing the Sanwen.net pages, I found the URL pattern www.sanwen.net/rizhi/&p=1, and the maximum value of p is 10. I don't understand why; the last time I crawled it there were 100 pages, so I'll analyze that later. Each listing page is then fetched with GET.
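For reference, here is a minimal sketch of how a non-200 response could be surfaced and retried instead of silently skipped; fetch_page and the retry count are my own illustrative additions, not part of the original script:

import time
import requests

def fetch_page(url, params, retries=3):
  # hypothetical helper: retry a few times, report failures instead of hiding them
  for attempt in range(retries):
    res = requests.get(url, params=params)
    if res.status_code == 200:
      return res.text
    print 'got %d for %s, retry %d/%d' % (res.status_code, res.url, attempt + 1, retries)
    time.sleep(1)
  return None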
After each listing page is fetched, the title and author are parsed out of it. The code looks like this:

def soup(html_text):
  s = BeautifulSoup(html_text, 'lxml')
  # each article on a listing page sits in an <li> under div.categorylist
  link = s.find('div', class_='categorylist').find_all('li')
  for i in link:
    if i != s.find('li', class_='page'):  # skip the pagination <li>
      title = i.find_all('a')[1]
      author = i.find_all('a')[2].text
      url = title.attrs['href']
      sign = re.compile(r'(//)|/')  # some titles contain / or //, see below
      match = sign.search(title.text)
      file_name = title.text
      if match:
        file_name = sign.sub('a', str(title.text))


There is one cheating thing when getting the title. Guys, why do you put slashes in your prose titles? Not just one, some even put two. This directly broke the file names later when writing the files, so I wrote a regular expression to substitute them away.
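As a quick sanity check of that regex (the title string here is made up):

import re

sign = re.compile(r'(//)|/')
print sign.sub('a', u'who/am//i')   # -> whoaamai: each / or // becomes an 'a'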
The last step is to get the prose content itself. Analyzing each listing page gives us each article's address, and then we fetch the content directly. (I had originally wanted to grab articles one by one by incrementing the page address, to save trouble.)

def get_content(url):
  res = requests.get('' + url)  # '' is the site prefix, blank in the original post
  if res.status_code == 200:
    soup = BeautifulSoup(res.text, 'lxml')
    contents = soup.find('div', class_='content').find_all('p')
    content = ''
    for i in contents:
      content += i.text + '\n'
    return content


The last thing is to write the content to a file and save it:

f = open(file_name + '.txt', 'w')
print 'running w txt ' + file_name + '.txt'
f.write(title.text + '\n')
f.write(author + '\n')
content = get_content(url)
f.write(content)
f.close()
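One pitfall worth flagging: under Python 2, title.text and the returned content are unicode objects, and writing non-ASCII text to a file opened with plain open() can raise UnicodeEncodeError. A hedged alternative, assuming UTF-8 output is acceptable, is the standard codecs module:

import codecs

# same writing logic, but with an explicit encoding
f = codecs.open(file_name + '.txt', 'w', encoding='utf-8')
f.write(title.text + '\n')
f.write(author + '\n')
f.write(get_content(url))
f.close()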

These three functions fetch the prose from Sanwen.net, but there is a problem: I don't know why some prose gets lost. I can only get about 400 articles, far fewer than the site actually has, yet they really were fetched page by page. I hope someone can help me figure this out. Maybe some pages were simply unreachable; of course, I also think it has something to do with the flaky network in my dormitory.
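The post never shows the entry point that ties the three functions together; presumably it is just something like this (my assumption, since soup() already does the per-article work):

if __name__ == '__main__':
  get_html()   # drives everything: soup() parses listings, get_content() fetches bodies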


I almost forgot the screenshots of the results (not reproduced here).

Although the code is messy, I never stop
