I wrote a small crawler in Python using only the urllib2, urllib, and re modules. Can any of the experts here help me figure out this error?
Traceback (most recent call last):
File "C:/Users/user/Desktop/python ����/mm/mm.py", line 62, in <module>
urllib.urlretrieve(mat[0], fname)
File "D:\Python27\lib\urllib.py", line 94, in urlretrieve
return _urlopener.retrieve(url, filename, reporthook, data)
File "D:\Python27\lib\urllib.py", line 240, in retrieve
fp = self.open(url, data)
File "D:\Python27\lib\urllib.py", line 208, in open
return getattr(self, name)(url)
File "D:\Python27\lib\urllib.py", line 345, in open_http
h.endheaders(data)
File "D:\Python27\lib\httplib.py", line 991, in endheaders
self._send_output(message_body)
File "D:\Python27\lib\httplib.py", line 844, in _send_output
self.send(msg)
File "D:\Python27\lib\httplib.py", line 806, in send
self.connect()
File "D:\Python27\lib\httplib.py", line 787, in connect
self.timeout, self.source_address)
File "D:\Python27\lib\socket.py", line 571, in create_connection
raise err
IOError: [Errno socket error] [Errno 10060]
This problem is normal. Frequent requests to a website can be treated as a DoS attack, and sites with rate limiting will often stop responding for a period of time. You can catch this exception, sleep for a while, and then try again; or you can retry with an exponential backoff, where the delay grows with the number of attempts.
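A minimal sketch of the retry-with-backoff idea (the helper name `retrieve_with_retry` and its parameters are my own, not from any library; the `sleep` parameter is injectable only to make the helper easy to test):

```python
import time


def retrieve_with_retry(fetch, max_tries=5, base_delay=1.0, sleep=time.sleep):
    """Call fetch(); on IOError, wait base_delay * 2**attempt seconds and retry.

    Re-raises the last IOError once max_tries attempts are exhausted.
    """
    for attempt in range(max_tries):
        try:
            return fetch()
        except IOError:
            if attempt == max_tries - 1:
                raise  # give up after the last attempt
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In the crawler from the traceback, the failing call could then be wrapped as `retrieve_with_retry(lambda: urllib.urlretrieve(mat[0], fname))`, so a transient `[Errno 10060]` timeout triggers a pause and a retry instead of crashing the script.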