Python Crawler: Fetching Every Article from Specified Blogs
Since the previous article, Z Story: Using Django with GAE Python to scrape full pages from multiple sites in the background, the overall progress is as follows:
1. Added cron: it tells the program to wake a task every 30 minutes, which then crawls the specified blogs for the latest updates.
2. Used Google's Datastore to store what the crawler brings back on each run... only new content gets stored...
As I said last time, this brought a big performance gain: previously the crawler was only woken after each request, so it took about 17 seconds to get output from back end to front end; now it takes under 2 seconds.
3. Improved the crawler itself.
1. cron.yaml: scheduling when each task wakes up
After digging through the docs and asking questions, I finally figured out how Google's cron works: in effect, Google simply makes a virtual visit to a URL we specify, at the interval we specify...
So under Django there is no need to write a standalone pure-Python program at all. Definitely do not write:
if __name__=="__main__":
You only need to configure a URL and put the handler in views.py:
from django.http import HttpResponse  # needed for the response at the end

def updatePostsDB(request):
    #deleteAll()
    SiteInfos=[]
    SiteInfo={}
    SiteInfo['PostSite']="L2ZStory"
    SiteInfo['feedurl']="feed://l2zstory.wordpress.com/feed/"
    SiteInfo['blog_type']="wordpress"
    SiteInfos.append(SiteInfo)
    SiteInfo={}
    SiteInfo['PostSite']="YukiLife"
    SiteInfo['feedurl']="feed://blog.sina.com.cn/rss/1583902832.xml"
    SiteInfo['blog_type']="sina"
    SiteInfos.append(SiteInfo)
    SiteInfo={}
    SiteInfo['PostSite']="ZLife"
    SiteInfo['feedurl']="feed://ireallife.wordpress.com/feed/"
    SiteInfo['blog_type']="wordpress"
    SiteInfos.append(SiteInfo)
    SiteInfo={}
    SiteInfo['PostSite']="ZLife_Sina"
    SiteInfo['feedurl']="feed://blog.sina.com.cn/rss/1650910587.xml"
    SiteInfo['blog_type']="sina"
    SiteInfos.append(SiteInfo)
    try:
        for site in SiteInfos:
            feedurl=site['feedurl']
            blog_type=site['blog_type']
            PostSite=site['PostSite']
            PostInfos=getPostInfosFromWeb(feedurl,blog_type)
            recordToDB(PostSite,PostInfos)
        Msg="Cron Job Done..."
    except Exception,e:
        Msg=str(e)
    return HttpResponse(Msg)
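The helpers getPostInfosFromWeb and recordToDB aren't shown in the post. Purely as a sketch of what the feed-fetching side might look like, here is a hypothetical getPostInfosFromWeb built on the feedparser library (an assumption; the original may parse feeds differently):

import feedparser

def getPostInfosFromWeb(feedurl, blog_type):
    # 'feed://' is only a pseudo-scheme; fetch over plain http instead
    feed = feedparser.parse(feedurl.replace('feed://', 'http://'))
    PostInfos = []
    for entry in feed.entries:
        # the real helper presumably branches on blog_type (wordpress vs. sina);
        # that branching is omitted in this sketch
        PostInfos.append({
            'link': entry.link,
            'title': entry.title,
            'description': entry.get('description', ''),
        })
    return PostInfos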
cron.yaml must sit at the same level as app.yaml:
cron:
- description: retrieve newest posts
  url: /task_updatePosts/
  schedule: every 30 minutes
In urls.py, you only need to map that URL to updatePostsDB, e.g. as sketched below.
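For reference, a minimal urls.py in the old Django style that django-helper targeted (the module path 'views.updatePostsDB' is an assumption about the project layout):

from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^task_updatePosts/$', 'views.updatePostsDB'),  # the URL cron.yaml points at
)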
Debugging this cron job can only be described as brutal... On Stack Overflow there are many, many people asking why their cron won't run. At first I too was sweating and clueless. In the end I got it working largely by luck; the broad steps are vague but down-to-earth:
First, make absolutely sure your program has no syntax errors... Then try visiting the URL by hand: if cron is healthy, the task should get executed then just as it would on schedule. If all else fails, read the logs...
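A quick way to poke the task by hand while debugging locally (assuming dev_appserver's default port 8080; the path matches the cron.yaml above):

import urllib
# manually trigger the same URL cron would hit;
# the response body is the Msg string returned by the view
print urllib.urlopen('http://localhost:8080/task_updatePosts/').read()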
2. Configuring and Using Datastore with Django
My needs here are simple: no joins... so I went straight for the most bare-bones option, django-helper...
This models.py is the key part:
from appengine_django.models import BaseModel
from google.appengine.ext import db
class PostsDB(BaseModel):
    link=db.LinkProperty()
    title=db.StringProperty()
    author=db.StringProperty()
    date=db.DateTimeProperty()
    description=db.TextProperty()
    postSite=db.StringProperty()
The first two lines are the crux of the crux... I naively left out the second one at first, and it cost me more than two hours to figure out what was going on... hardly worth it...
When reading and writing, never ever forget... PostsDB.put()
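A minimal write-then-read sketch against the PostsDB model above (the field values are made up for illustration):

from google.appengine.ext import db

post = PostsDB(
    link=db.Link('http://example.com/some-post'),  # made-up values
    title='Some Post',
    postSite='L2ZStory',
)
post.put()  # nothing reaches the Datastore until put() is called

# reading back with the old ext.db query API
recent = PostsDB.all().filter('postSite =', 'L2ZStory').fetch(10)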
At first, to save effort, every time cron woke up I simply deleted all the data and then wrote in everything freshly crawled...
The result: after one day there were 40,000 read/write operations... and the free daily quota is only 50,000...
So I changed it to check for updates before inserting: write if there is something new, skip if not (a sketch of that logic follows). That finally got the database part sorted...
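The post doesn't show recordToDB; here is a minimal sketch of that check-before-insert logic, assuming PostInfos is a list of dicts shaped like the ones the feed helper returns:

def recordToDB(PostSite, PostInfos):
    for info in PostInfos:
        # get() returns None when no entity with this link exists yet
        if PostsDB.all().filter('link =', info['link']).get() is None:
            PostsDB(link=db.Link(info['link']),
                    title=info['title'],
                    description=info.get('description', ''),
                    postSite=PostSite).put()  # write only when the post is new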
3. Crawler improvements:
At first, the crawler only fetched the articles listed in the feed itself... so even if a blog had 24*30 articles, I could get at most 10 of them...
This time, the improved version can crawl all of them. I tested it on the blogs of 孤独川陵 (Gu Du Chuan Ling), 韩寒 (Han Han), Yuki, and Z, with great success. 孤独川陵's blog alone has 720+ articles... every one of them was fetched, none missed...
import urllib
#from BeautifulSoup import BeautifulSoup
from pyquery import PyQuery as pq

def getArticleList(url):
    lstArticles=[]
    url_prefix=url[:-6]  # strip the trailing "1.html" so page numbers can be appended
    response=urllib.urlopen(url)
    html=response.read()
    d=pq(html)
    try:
        # read the total page count from the pager span, e.g. "(12)" -> 12
        pageCnt=d("ul.SG_pages").find('span')
        pageCnt=int(d(pageCnt).text()[1:-1])
    except:
        pageCnt=1
    for i in range(1,pageCnt+1):
        url=url_prefix+str(i)+".html"
        #print url
        response=urllib.urlopen(url)
        html=response.read()
        d=pq(html)
        title_spans=d(".atc_title").find('a')
        date_spans=d('.atc_tm')
        for j in range(0,len(title_spans)):
            titleObj=title_spans[j]
            dateObj=date_spans[j]
            article={}
            article['link']=d(titleObj).attr('href')
            article['title']=d(titleObj).text()
            article['date']=d(dateObj).text()
            article['desc']=getPageContent(article['link'])
            lstArticles.append(article)
    return lstArticles

def getPageContent(url):
    #get Page Content
    response=urllib.urlopen(url)
    html=response.read()
    d=pq(html)
    pageContent=d("div.articalContent").text()
    #print pageContent
    return pageContent

def main():
    # the last assignment wins; reorder or comment out to pick a blog
    url='http://blog.sina.com.cn/s/articlelist_1191258123_0_1.html' #Han Han
    url="http://blog.sina.com.cn/s/articlelist_1225833283_0_1.html" #Gu Du Chuan Ling
    url="http://blog.sina.com.cn/s/articlelist_1650910587_0_1.html" #Feng
    url="http://blog.sina.com.cn/s/articlelist_1583902832_0_1.html" #Yuki
    lstArticles=getArticleList(url)
    for article in lstArticles:
        f=open("blogs/"+article['date']+"_"+article['title']+".txt",'w')
        f.write(article['desc'].encode('utf-8'))  # note: encode Chinese text as UTF-8 before writing
        f.close()
        #print article['desc']

if __name__=='__main__':
    main()
A recommendation for PyQuery...
Sadly, BeautifulSoup deeply disappointed me... While writing the previous article there was a small bug whose cause I could never find. After getting home, I sank a lot more time into figuring out why BeautifulSoup kept failing to grab the content I wanted... After roughly reading the source of its selector code, I suspect it parses inaccurately on the many nonstandard HTML pages that still carry <script> tags...
I gave up on that library and tried lxml... XPath-based and very usable, but I kept having to look XPath up in the docs... So I found another library, PyQuery, a tool that lets you use jQuery selectors... extremely, extremely, extremely easy to use... For concrete usage just look at the code above, or the small demo below. This library has a future...
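A tiny demo of the jQuery-style selectors PyQuery offers (the HTML here is made up just to illustrate the selectors the crawler above relies on):

from pyquery import PyQuery as pq

html = '<div class="atc_title"><a href="http://example.com/1">First Post</a></div>'
d = pq(html)
link = d('.atc_title').find('a')  # class selector, then traversal, exactly like jQuery
print link.attr('href')           # http://example.com/1
print link.text()                 # First Post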
A lingering worry
Since PyQuery sits on lxml... and lxml's underpinnings are C... my guess is it won't run on GAE... For now this crawler can only scrape things on my own machine... and then push the results to the server...
Summary
In one sentence: I love Python to death.
In two sentences: I love Python to death, and I love Django to death.
In three sentences: I love Python to death, I love Django to death, and I love jQuery to death...
In four sentences: I love Python to death, I love Django to death, I love jQuery to death, and I love PyQuery to death...
