Simulating a website login with Python keeps hitting a 404 — any pointers?
Code:
import scrapy
from scrapy.http import Request, FormRequest
from scrapy.selector import Selector

class StackSpiderSpider(scrapy.Spider):
    name = "stack_spider"
    start_urls = ['https://stackoverflow.com/']
    headers = {
        "Host": "cdn.sstatic.net",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.5",
        "Connection": "keep-alive",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0"
    }

    # Override the spider's default request logic with a custom request;
    # once it succeeds, the callback is invoked
    def start_requests(self):
        return [Request("https://stackoverflow.com/users/login",
                        meta={
                            # 'dont_redirect': True,
                            # 'handle_httpstatus_list': [302],
                            'cookiejar': 1},
                        callback=self.post_login)]  # meta added

    # FormRequest
    def post_login(self, response):
        # Extract the hidden fkey/ssrc fields from the returned page,
        # which are needed to submit the form successfully
        fkey = Selector(response).xpath('//input[@name="fkey"]/@value').extract()[0]
        ssrc = Selector(response).xpath('//input[@name="ssrc"]/@value').extract()[0]
        print(fkey)
        print(ssrc)
        # FormRequest.from_response is a helper Scrapy provides for POSTing forms;
        # after a successful login, the after_login callback is invoked
        return [FormRequest.from_response(response,
                                          meta={
                                              # 'dont_redirect': True,
                                              # 'handle_httpstatus_list': [302],
                                              'cookiejar': response.meta['cookiejar']},  # note how the cookiejar is carried over
                                          headers=self.headers,
                                          formdata={
                                              "fkey": fkey,
                                              "ssrc": ssrc,
                                              "email": "1045608243@qq.com",
                                              "password": "12345",
                                              "oauth_version": "",
                                              "oauth_server": "",
                                              "openid_username": "",
                                              "openid_identifier": ""
                                          },
                                          callback=self.after_login,
                                          dont_filter=True
                                          )]

    def after_login(self, response):
        filename = "1.html"
        with open(filename, 'wb') as fp:
            fp.write(response.body)
        # print(response.body)
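A hint at why the POST comes out as a GET to /search: unless told otherwise, FormRequest.from_response submits the first form on the page, and on the login page the site-wide search form appears before the login form. The stdlib-only sketch below uses a simplified, hypothetical snippet of markup (the form ids and fields are assumptions, not the real page) to illustrate the "first form wins" pitfall; with Scrapy itself, passing formid=... or formxpath=... to FormRequest.from_response is the usual way to pin the intended form.

```python
from html.parser import HTMLParser

# Hypothetical, simplified stand-in for the login page: the search
# form precedes the login form, so any "take the first form" default
# submits to /search with method GET instead of POSTing the login.
PAGE = """
<form id="search" action="/search" method="get">
  <input name="q" value="">
</form>
<form id="login-form" action="/users/login" method="post">
  <input name="fkey" value="1145f3f2e28e56c298bc28a1a735254b">
  <input name="ssrc" value="">
</form>
"""

class FormCollector(HTMLParser):
    """Collects (id, action, method) for every <form> tag seen."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            a = dict(attrs)
            self.forms.append((a.get("id"), a.get("action"), a.get("method")))

collector = FormCollector()
collector.feed(PAGE)

first_form = collector.forms[0]  # what a default form pick targets
login_form = next(f for f in collector.forms if f[0] == "login-form")
print(first_form)   # ('search', '/search', 'get')
print(login_form)   # ('login-form', '/users/login', 'post')
```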
Debug output:
2017-04-18 11:19:23 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: text5)
2017-04-18 11:19:23 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'text5.spiders', 'SPIDER_MODULES': ['text5.spiders'], 'BOT_NAME': 'text5'}
2017-04-18 11:19:23 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-04-18 11:19:24 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-18 11:19:24 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-18 11:19:24 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-18 11:19:24 [scrapy.core.engine] INFO: Spider opened
2017-04-18 11:19:24 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-18 11:19:24 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-18 11:19:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/users/login> (referer: None)
1145f3f2e28e56c298bc28a1a735254b
2017-04-18 11:19:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://stackoverflow.com/search?q=&ssrc=&openid_username=&oauth_server=&oauth_version=&fkey=1145f3f2e28e56c298bc28a1a735254b&password=wanglihong1993&email=1067863906%40qq.com&openid_identifier=> (referer: https://stackoverflow.com/use...
2017-04-18 11:19:25 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://stackoverflow.com/sea...auth_version=&fkey=1145f3f2e28e56c298bc28a1a735254b&password=wanglihong1993&email=1067863906%40qq.com&openid_identifier=>: HTTP status code is not handled or not allowed
2017-04-18 11:19:25 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-18 11:19:25 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 881,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 12631,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 18, 3, 19, 25, 143000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 8,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2017, 4, 18, 3, 19, 24, 146000)}
2017-04-18 11:19:25 [scrapy.core.engine] INFO: Spider closed (finished)
Buddy, your password just leaked — it's right there in the 404 request URL.