
[Python] Web Crawler (8): Source code and analysis of the Qiushibaike (Encyclopedia of Embarrassments) web crawler (v0.3, simplified update)


Q&A:

1. Why did the crawler show that Qiushibaike was unavailable for a while?

Answer: Some time ago, Qiushibaike added a check on the request Header, which made it impossible to crawl. The Header has to be simulated in the code. The code has now been modified and works normally again.
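
For reference, the fix amounts to sending a browser-like User-Agent with every request. Below is a minimal sketch with Python 2's urllib2, using the same header string as the full listing further down (the URL is simply the first hot page):

import urllib2

# Pretend to be an ordinary browser so the site accepts the request
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}

req = urllib2.Request("http://m.qiushibaike.com/hot/page/1", headers=headers)
page = urllib2.urlopen(req).read().decode("utf-8")
print page[:200]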


2. Why create a separate new thread?

Answer: The basic flow is: the crawler starts a new thread in the background that crawls two pages of Qiushibaike in advance, and whenever fewer than two pages remain in the buffer it crawls another one. When the user presses Enter, the program only takes the latest content out of this local stock instead of going to the network, so browsing feels smoother. You could also do the loading in the main thread, but that would mean long waits while pages are being crawled.
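
A minimal sketch of that buffering idea, stripped of the crawling itself (it uses the same Python 2 thread module as the listing below; fake_get_page is a hypothetical stand-in for the real page fetch):

import thread
import time

pages = []        # buffer shared by the loader thread and the main thread
enable = True

def fake_get_page(page_no):
    # Stand-in for the real GetPage(): pretend fetching takes a moment
    time.sleep(0.5)
    return "content of page %d" % page_no

def load_pages():
    page_no = 1
    while enable:
        if len(pages) < 2:          # keep at most two pages buffered
            pages.append(fake_get_page(page_no))
            page_no += 1
        else:
            time.sleep(1)

thread.start_new_thread(load_pages, ())

for _ in range(3):                  # demo: read three pages from the buffer
    while not pages:
        time.sleep(0.1)
    print pages.pop(0)
enable = False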

Project content:

A Qiushibaike (Encyclopedia of Embarrassments) web crawler written in Python.

Usage:

Create a new Bug.py file, copy the code into it, and double-click to run it.

Program functions:

Browse Qiushibaike in the command prompt.

Principle explanation:

First, open the hot page of Qiushibaike: http://www.qiushibaike.com/hot/page/1

As you can see, the number after page/ in the link is the page number. Keep this in mind; it will be needed when writing the code.
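
So building the URL for any page is just string concatenation. A tiny sketch (page numbers are assumed to start at 1, as on the site):

# Append the page number after "page/" to get any page of the hot section
def page_url(page_no):
    return "http://www.qiushibaike.com/hot/page/" + str(page_no)

print page_url(1)   # http://www.qiushibaike.com/hot/page/1
print page_url(5)   # http://www.qiushibaike.com/hot/page/5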

Then, right-click to view the page source.


You will notice that each joke is wrapped in a div whose class is "content" and whose title attribute is the posting time; we only need a regular expression to pick it out.

Once the principle is clear, all that remains is the regular expression. You can refer to this blog post:

http://blog.csdn.net/wxg694175346/article/details/8929576
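
Concretely, the whole extraction is a single re.findall call with the re.S flag, so that "." also matches newlines inside the div. A minimal sketch, assuming unicodePage already holds the decoded page source (this is the same pattern used in the listing below):

import re

# Each match captures (title attribute = posting time, div body = the joke text)
pattern = '<div.*?class="content".*?title="(.*?)">(.*?)</div>'
items = re.findall(pattern, unicodePage, re.S)

for post_time, content in items:
    print post_time, content.replace("\n", "")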


Running effect:

(Screenshot of the crawler running in the command prompt.)

Source code:

# -*- coding: utf-8 -*-

import urllib2
import urllib
import re
import thread
import time


#----------- Load and process Qiushibaike -----------
class Spider_Model:

    def __init__(self):
        self.page = 1
        self.pages = []
        self.enable = False

    # Pull every joke out of one page, collect them in a list and return it
    def GetPage(self, page):
        myUrl = "http://m.qiushibaike.com/hot/page/" + page
        user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
        headers = {'User-Agent': user_agent}
        req = urllib2.Request(myUrl, headers=headers)
        myResponse = urllib2.urlopen(req)
        myPage = myResponse.read()
        # encode turns a unicode string into a str in some other encoding
        # decode turns a str in some other encoding into a unicode string
        unicodePage = myPage.decode("utf-8")

        # Find every div tag with class="content"
        # re.S means "." also matches newlines
        myItems = re.findall('<div.*?class="content".*?title="(.*?)">(.*?)</div>', unicodePage, re.S)
        items = []
        for item in myItems:
            # item[0] is the div's title attribute, i.e. the posting time
            # item[1] is the div's body, i.e. the joke itself
            items.append([item[0].replace("\n", ""), item[1].replace("\n", "")])
        return items

    # Load new jokes in the background
    def LoadPage(self):
        # Keep running until the user enters quit
        while self.enable:
            # If fewer than two pages are buffered
            if len(self.pages) < 2:
                try:
                    # Fetch the jokes on the next page
                    myPage = self.GetPage(str(self.page))
                    self.page += 1
                    self.pages.append(myPage)
                except:
                    print 'Could not connect to Qiushibaike!'
            else:
                time.sleep(1)

    def ShowPage(self, nowPage, page):
        for items in nowPage:
            print u'Page %d' % page, items[0], items[1]
            myInput = raw_input()
            if myInput == "quit":
                self.enable = False
                break

    def Start(self):
        self.enable = True
        page = self.page

        print u'Loading, please wait......'

        # Start a background thread that loads and buffers the jokes
        thread.start_new_thread(self.LoadPage, ())

        #----------- Display the buffered pages -----------
        while self.enable:
            # If the buffer holds at least one page
            if self.pages:
                nowPage = self.pages[0]
                del self.pages[0]
                self.ShowPage(nowPage, page)
                page += 1


#----------- Program entry point -----------
print u"""
---------------------------------------
   Program:  Qiushibaike crawler
   Version:  0.3
   Author:   why
   Date:     2014-06-03
   Language: Python 2.7
   Usage:    enter quit to stop reading
   Function: press Enter to browse today's hot Qiushibaike posts one by one
---------------------------------------
"""


print u"Press Enter to browse today's Qiushibaike content:"
raw_input(' ')
myModel = Spider_Model()
myModel.Start()

The above is the content of [Python] Web Crawler (8): Source code and analysis of the Qiushibaike (Encyclopedia of Embarrassments) web crawler (v0.3, simplified update). For more related content, please follow the PHP Chinese website (www.php.cn)!

