Sometimes the crawler reports the following timeout error:
Traceback (most recent call last):
  File "/opt/pyspider/pyspider/run.py", line 351, in <lambda>
    app.config['fetch'] = lambda x: umsgpack.unpackb(fetcher_rpc.fetch(x).data)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1321, in single_request
    response.msg,
ProtocolError: <ProtocolError for fetcher/: 504 Gateway Time-out>
Is there a good way to avoid this?
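For reference, pyspider's self.crawl accepts connect_timeout and timeout options, so the fetch-side limits can be adjusted from the project script. The following is only a minimal sketch with illustrative values, not a confirmed fix for the 504 above:

from pyspider.libs.base_handler import *

class Handler(BaseHandler):
    # crawl_config is merged into every self.crawl() call in this project.
    # 'connect_timeout' limits the connection phase, 'timeout' the whole
    # fetch; both numbers below are only illustrative.
    crawl_config = {
        'connect_timeout': 50,
        'timeout': 300,
    }

    def on_start(self):
        self.crawl('https://movie.douban.com/', callback=self.index_page)

    def index_page(self, response):
        # parse the page as usual
        return {'title': response.doc('title').text()}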
This error only appears while debugging.
@足兆叉蟲
This is indeed the front-end error shown while debugging, and meanwhile the fetcher reports the following error in the back end:
[E 161014 23:45:09 tornado_fetcher:202] [599] douban:f25b5797bvie .douban.com/revi... ValueError('No JSON object could be decoded',) 50.00s
After I finished debugging and started the actual crawl, a large number of these errors appeared after a while, and the web page shows the spider's status as "PAUSED". What is the problem, and how can I solve it?
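As far as I understand, pyspider's scheduler pauses a project after a long run of consecutive failed fetches, which would match the PAUSED status described above. A minimal sketch of letting the callback absorb failed fetches instead, using the documented retries option and the catch_status_code_error decorator (the URL and retry count are placeholders):

from pyspider.libs.base_handler import *

class Handler(BaseHandler):
    def on_start(self):
        # 'retries' controls how many times a failed fetch is retried
        # before the task is marked as failed (placeholder URL).
        self.crawl('https://movie.douban.com/review/0000000/',
                   callback=self.parse_review, retries=5)

    @catch_status_code_error
    def parse_review(self, response):
        # With @catch_status_code_error this callback also runs for failed
        # fetches (such as the 599 above), so the error can be inspected
        # here rather than piling up as consecutive failures.
        if response.error or response.status_code != 200:
            return
        return {'title': response.doc('title').text()}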