I wrote a small crawler in Python using only the urllib2, urllib, and re modules, but it keeps failing with the error below. Can anyone help?
Traceback (most recent call last):
File "C:/Users/user/Desktop/python ����/mm/mm.py", line 62, in <module>
urllib.urlretrieve(mat[0], fname)
File "D:\Python27\lib\urllib.py", line 94, in urlretrieve
return _urlopener.retrieve(url, filename, reporthook, data)
File "D:\Python27\lib\urllib.py", line 240, in retrieve
fp = self.open(url, data)
File "D:\Python27\lib\urllib.py", line 208, in open
return getattr(self, name)(url)
File "D:\Python27\lib\urllib.py", line 345, in open_http
h.endheaders(data)
File "D:\Python27\lib\httplib.py", line 991, in endheaders
self._send_output(message_body)
File "D:\Python27\lib\httplib.py", line 844, in _send_output
self.send(msg)
File "D:\Python27\lib\httplib.py", line 806, in send
self.connect()
File "D:\Python27\lib\httplib.py", line 787, in connect
self.timeout, self.source_address)
File "D:\Python27\lib\socket.py", line 571, in create_connection
raise err
IOError: [Errno socket error] [Errno 10060]
This is quite normal: hitting a site too frequently can be treated as a DoS attack, and sites with rate limiting will typically stop responding for a while (errno 10060 is a connection timeout on Windows). You can catch this exception, sleep for a while, and retry; you can also scale the delay with the number of retries, i.e. exponential backoff.
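The catch-and-retry idea above can be sketched as a small helper. This is a minimal example, not code from the original crawler: the name `retry_with_backoff` and its parameters are made up for illustration, and the test uses a fake flaky function instead of a real network call.

```python
import time

def retry_with_backoff(func, max_retries=5, base_delay=1.0):
    """Call func(); on IOError (e.g. errno 10060 timeouts), sleep
    base_delay * 2**attempt seconds and retry, up to max_retries tries."""
    for attempt in range(max_retries):
        try:
            return func()
        except IOError:
            if attempt == max_retries - 1:
                raise  # out of retries: propagate the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# In the crawler, the failing download at line 62 could be wrapped as:
#   retry_with_backoff(lambda: urllib.urlretrieve(mat[0], fname))
```

Doubling the delay on each failure gives the server time to lift its rate limit; capping the number of retries keeps the crawler from hanging forever on a dead URL.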