I wrote a simple spider with the Scrapy framework to crawl rental listings from Anjuke. At first it scraped the pages correctly, but later, probably because I sent too many requests, it started getting redirected to a captcha page. I tried adding headers and disabling the redirect middleware, but I still can't get through. I don't know how to solve this.
The code is as follows:
# -*- coding: utf-8 -*-
from HouseRenting.user_agents import agents
from HouseRenting.items import HouserentingItem
import scrapy
import random


class AnjukeSpider(scrapy.Spider):
    name = 'anjuke'
    allowed_domains = ['bj.zu.anjuke.com']
    start_urls = ['http://bj.zu.anjuke.com/fangyuan/x2']
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, sdch',
        'Accept-Language': 'en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4',
        'Connection': 'keep-alive',
        'Host': 'bj.zu.anjuke.com',
        'User-Agent': random.choice(agents),
    }

    def parse(self, response):
        print(response.body)
        for item in response.xpath('//p[contains(@class, "zu-itemmod")]'):
            # title = item.xpath('p/h3/a/text()').extract_first()
            link = item.xpath('p/h3/a/@href').extract_first()
            yield scrapy.Request(url=link, headers=self.headers, callback=self.parse_detail)

    def parse_detail(self, response):
        def extract_with_xpath(query):
            return response.xpath(query).extract_first().strip()

        print(response.url)
        yield {'rental': extract_with_xpath('//span[@class="f26"]/text()')}
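One thing worth pointing out about the code above: the headers dict is only attached to the detail-page requests yielded in parse(); the requests built from start_urls still go out with Scrapy's default headers. A minimal sketch of also sending them on the very first request by overriding start_requests() (using the same names as the spider above):

    def start_requests(self):
        # Attach the custom headers to the initial request as well,
        # picking a fresh User-Agent instead of reusing the one chosen
        # once at class-definition time.
        for url in self.start_urls:
            headers = dict(self.headers)
            headers['User-Agent'] = random.choice(agents)
            yield scrapy.Request(url, headers=headers, callback=self.parse)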
Here is the debug output:
$ scrapy crawl anjuke
2016-12-04 23:40:01 [scrapy] INFO: Scrapy 1.1.0 started (bot: HouseRenting)
2016-12-04 23:40:01 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'HouseRenting.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['HouseRenting.spiders'], 'BOT_NAME': 'HouseRenting', 'AUTOTHROTTLE_ENABLED': True, 'DOWNLOAD_DELAY': 3}
2016-12-04 23:40:01 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.throttle.AutoThrottle']
2016-12-04 23:40:01 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-12-04 23:40:01 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-12-04 23:40:01 [scrapy] INFO: Enabled item pipelines:
[]
2016-12-04 23:40:01 [scrapy] INFO: Spider opened
2016-12-04 23:40:01 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-12-04 23:40:01 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6025
2016-12-04 23:40:02 [scrapy] DEBUG: Crawled (200) <GET http://bj.zu.anjuke.com/robots.txt> (referer: None)
2016-12-04 23:40:08 [scrapy] DEBUG: Redirecting (302) to <GET http://www.anjuke.com/captcha-verify/?callback=shield&history=aHR0cDovL2JqLnp1LmFuanVrZS5jb20vZmFuZ3l1YW4veDI%3D> from <GET http://bj.zu.anjuke.com/fangyuan/x2>
2016-12-04 23:40:09 [scrapy] DEBUG: Crawled (200) <GET http://www.anjuke.com/robots.txt> (referer: None)
2016-12-04 23:40:12 [scrapy] DEBUG: Crawled (200) <GET http://www.anjuke.com/captcha-verify/?callback=shield&history=aHR0cDovL2JqLnp1LmFuanVrZS5jb20vZmFuZ3l1YW4veDI%3D> (referer: None)
2016-12-04 23:40:12 [scrapy] INFO: Closing spider (finished)
2016-12-04 23:40:12 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1367,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 15276,
'downloader/response_count': 4,
'downloader/response_status_count/200': 3,
'downloader/response_status_count/302': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 12, 4, 15, 40, 12, 583814),
'log_count/DEBUG': 5,
'log_count/INFO': 7,
'response_received_count': 3,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2016, 12, 4, 15, 40, 1, 924582)}
2016-12-04 23:40:12 [scrapy] INFO: Spider closed (finished)
I can open the site normally in a browser without being redirected to the captcha page, so I don't know how to make my requests mimic a real browser.
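If the goal is to make each request look more like a different real browser, one common approach (a sketch only, not guaranteed to get past Anjuke's shield) is a downloader middleware that rotates the User-Agent on every request, reusing the agents list the spider already imports; the module path and the 543 priority below are illustrative:

# HouseRenting/middlewares.py (illustrative location)
import random
from HouseRenting.user_agents import agents

class RandomUserAgentMiddleware(object):
    """Pick a fresh User-Agent for every outgoing request."""
    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(agents)

# settings.py
# DOWNLOADER_MIDDLEWARES = {
#     'HouseRenting.middlewares.RandomUserAgentMiddleware': 543,
# }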
I've also scraped Anjuke's data.
My suggestions:
First, throttle the crawl rate; I used roughly 5 seconds between requests, though at that pace crawling the whole country takes a long time.
Second, buy high-anonymity proxy IPs and send the requests through those proxies (a rough sketch of both is below).
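For reference, a rough sketch of what those two suggestions could look like in a Scrapy project; the proxy address is a placeholder you would replace with a high-anonymity proxy from whatever provider you buy from, and the middleware priority is illustrative:

# settings.py: roughly one request every 5 seconds
DOWNLOAD_DELAY = 5

# HouseRenting/middlewares.py: route requests through a proxy (placeholder address)
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # Replace with a real high-anonymity proxy bought from a provider.
        request.meta['proxy'] = 'http://123.123.123.123:8080'

# settings.py: enable the middleware
# DOWNLOADER_MIDDLEWARES = {
#     'HouseRenting.middlewares.ProxyMiddleware': 750,
# }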