
[Python] Web Crawler (9): Baidu Tieba Web Crawler (v0.4) source code and analysis

Jan 21, 2017, 02:33 PM

The Baidu Tieba crawler is built on essentially the same principle as the Qiushibaike (糗事百科) crawler: inspect the page source, pick out the key data with regular expressions, and save it to a local txt file.

Source code download:

http://download.csdn.net/detail/wxg694175346/6925583

Project:

A web crawler for Baidu Tieba, written in Python.

Usage:

Create a new file named BugBaidu.py, copy the code below into it, and double-click to run it.

What it does:

Saves the content posted by the thread starter (the "original poster", 楼主) into a local txt file.

How it works:

First, open any Tieba thread, click "only view the original poster" (只看楼主), and go to the second page. The URL changes slightly, becoming:

http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1

As you can see, see_lz=1 means "only view the original poster" and pn=1 is the page number. Keep this in mind; it is the basis for the code that follows.

This is the URL pattern we will exploit.
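To make the pattern concrete, here is a tiny sketch (using the example thread ID from the URL above) of how the per-page URLs are built:

# Sketch: building the per-page URLs described above (Python 2.7,
# thread ID taken from the example URL in the text).
base = 'http://tieba.baidu.com/p/2296712428'
for pn in range(1, 4):
    print base + '?see_lz=1&pn=' + str(pn)
# prints ...?see_lz=1&pn=1 through ...?see_lz=1&pn=3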

The next step is to inspect the page source.

First, dig out the thread title; we will need it when naming the output file.

You can see that Baidu serves the page in gbk encoding and wraps the title in an h1 tag:

<h1 class="core_title_txt" title="【原创】时尚首席(关于时尚,名利,事业,爱情,励志)">【原创】时尚首席(关于时尚,名利,事业,爱情,励志)</h1>

Likewise, each post body is marked by a div with a distinctive class attribute; all that remains is to match it with a regular expression.
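As a minimal sketch of both matches (the patterns assume the 2013-era markup shown above, which has long since changed, so treat them as illustrative rather than guaranteed to work today):

# -*- coding: utf-8 -*-
# Sketch: decode the page from gbk, then pull out the title and the
# original poster's post bodies with the two patterns described above.
import re
import urllib2

page = urllib2.urlopen('http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1').read().decode('gbk')

# Title: <h1 class="core_title_txt" title="...">...</h1>
title = re.search(r'<h1 class="core_title_txt.*?>(.*?)</h1>', page, re.S)

# Post bodies: each floor sits in a <div id="post_content_..." class="...">...</div>
posts = re.findall(r'id="post_content.*?>(.*?)</div>', page, re.S)

if title:
    print title.group(1)
print '%d posts found on this page' % len(posts)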

Screenshot of a run: (image omitted)

The generated txt file: (image omitted)

# -*- coding: utf-8 -*-
#---------------------------------------
#   Program:  Baidu Tieba crawler
#   Version:  0.5
#   Author:   why
#   Date:     2013-05-16
#   Language: Python 2.7
#   Usage:    enter a thread URL; it automatically switches to
#             "only view the original poster" and saves to a local file
#   Function: saves the original poster's content into a local txt file
#---------------------------------------
   
import re
import urllib2
  
#----------- Strip the various tags found on the page -----------
class HTML_Tool:
    # Non-greedily match \t, \n, spaces, hyperlinks and image tags (all dropped)
    BgnCharToNoneRex = re.compile("(\t|\n| |<a.*?>|<img.*?>)")

    # Non-greedily match any remaining <...> tag
    EndCharToNoneRex = re.compile("<.*?>")

    # Non-greedily match any <p> tag
    BgnPartRex = re.compile("<p.*?>")
    CharToNewLineRex = re.compile("(<br/>|</p>|<tr>|<div>|</div>)")
    CharToNextTabRex = re.compile("<td>")

    # Convert some HTML character entities back to their literal characters
    replaceTab = [("&lt;","<"),("&gt;",">"),("&amp;","&"),("&quot;","\""),("&nbsp;"," ")]

    def Replace_Char(self,x):
        x = self.BgnCharToNoneRex.sub("",x)
        x = self.BgnPartRex.sub("\n    ",x)
        x = self.CharToNewLineRex.sub("\n",x)
        x = self.CharToNextTabRex.sub("\t",x)
        x = self.EndCharToNoneRex.sub("",x)

        for t in self.replaceTab:
            x = x.replace(t[0],t[1])
        return x
      
class Baidu_Spider:
    # Declare the relevant attributes
    def __init__(self,url):
        self.myUrl = url + '?see_lz=1'
        self.datas = []
        self.myTool = HTML_Tool()
        print u'The Baidu Tieba crawler has started, chugging along...'

    # Load the first page and store it after decoding
    def baidu_tieba(self):
        # Read the raw page and decode it from gbk
        myPage = urllib2.urlopen(self.myUrl).read().decode("gbk")
        # Work out how many pages of original-poster content there are
        endPage = self.page_counter(myPage)
        # Get the thread title
        title = self.find_title(myPage)
        print u'Thread title: ' + title
        # Fetch the final data
        self.save_data(self.myUrl,title,endPage)

    # Work out how many pages there are in total
    def page_counter(self,myPage):
        # Match '共有<span class="red">12</span>页' to get the page count
        myMatch = re.search(r'class="red">(\d+?)</span>', myPage, re.S)
        if myMatch:
            endPage = int(myMatch.group(1))
            print u'Crawler report: the original poster has %d pages of content' % endPage
        else:
            endPage = 0
            print u'Crawler report: could not work out how many pages there are!'
        return endPage

    # Find the thread title
    def find_title(self,myPage):
        # Match <h1 class="core_title_txt" title="xxxx">xxxx</h1> to dig out the title
        myMatch = re.search(r'<h1 class="core_title_txt.*?>(.*?)</h1>', myPage, re.S)
        title = u'Untitled'
        if myMatch:
            title = myMatch.group(1)
        else:
            print u'Crawler report: could not load the thread title!'
        # A file name may not contain any of these characters: \ / : * ? " < > |
        title = title.replace('\\','').replace('/','').replace(':','').replace('*','').replace('?','').replace('"','').replace('>','').replace('<','').replace('|','')
        return title
  
  
    # Save the content posted by the original poster
    def save_data(self,url,title,endPage):
        # Load the page data into the array
        self.get_data(url,endPage)
        # Open the local file
        f = open(title+'.txt','w+')
        f.writelines(self.datas)
        f.close()
        print u'Crawler report: the content has been saved locally as a txt file'
        print u'Press any key to exit...'
        raw_input()

    # Fetch each page's source and store it in the array
    def get_data(self,url,endPage):
        url = url + '&pn='
        for i in range(1,endPage+1):
            print u'Crawler report: worker %d is loading...' % i
            myPage = urllib2.urlopen(url + str(i)).read()
            # Clean up the HTML in myPage and store the result in datas
            self.deal_data(myPage.decode('gbk'))

    # Dig the content out of the page source
    def deal_data(self,myPage):
        myItems = re.findall('id="post_content.*?>(.*?)</div>',myPage,re.S)
        for item in myItems:
            # Note the round trip: decoded from gbk for processing, encoded
            # back to gbk so the txt file keeps the page's original encoding
            data = self.myTool.Replace_Char(item.replace("\n","").encode('gbk'))
            self.datas.append(data+'\n')
  
  
#-------- Program entry point ------------------
print u"""#---------------------------------------
#   Program:  Baidu Tieba crawler
#   Version:  0.5
#   Author:   why
#   Date:     2013-05-16
#   Language: Python 2.7
#   Usage:    enter a thread URL; it automatically switches to
#             "only view the original poster" and saves to a local file
#   Function: saves the original poster's content into a local txt file
#---------------------------------------
"""

# Example: a fiction thread on Tieba
# bdurl = 'http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1'

print u'Enter the digits at the end of the thread URL:'
bdurl = 'http://tieba.baidu.com/p/' + str(raw_input(u'http://tieba.baidu.com/p/'))

# Run it
mySpider = Baidu_Spider(bdurl)
mySpider.baidu_tieba()
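The listing above targets Python 2.7 and the gbk-encoded pages Tieba served in 2013. For readers on Python 3, here is a rough sketch of the same fetch-decode-extract loop (urllib2 became urllib.request, print is a function, and strings are Unicode by default); the regex patterns still assume the 2013-era markup, so treat this as an illustration rather than something that will run against today's Tieba:

# -*- coding: utf-8 -*-
# A minimal Python 3 sketch of the same idea. The markup assumptions
# (gbk encoding, post_content divs) match the 2013 pages described in
# the article and will not necessarily hold for modern Tieba.
import re
import urllib.request

def fetch_lz_pages(thread_id, end_page):
    posts = []
    for pn in range(1, end_page + 1):
        url = 'http://tieba.baidu.com/p/%s?see_lz=1&pn=%d' % (thread_id, pn)
        with urllib.request.urlopen(url) as resp:
            page = resp.read().decode('gbk', errors='replace')
        posts.extend(re.findall(r'id="post_content.*?>(.*?)</div>', page, re.S))
    return posts

if __name__ == '__main__':
    for post in fetch_lz_pages('2296712428', 1):  # example thread ID from the article
        print(re.sub(r'<.*?>', '', post))  # crude tag stripping, like HTML_Tool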

That concludes [Python] Web Crawler (9): Baidu Tieba Web Crawler (v0.4) source code and analysis. For more related content, see the PHP Chinese website (www.php.cn)!


