I'm using requests to download files, and when the network is flaky the download sometimes ends up incomplete. How can I make sure the file was downloaded completely?
Constraint 1: the response has no Content-Length header, so I cannot know the file's actual size in advance.
The download code is as follows:
r = requests.get(url, stream=True)
with open('test', 'wb') as fd:
    for chunk in r.iter_content(1024 * 100):
        fd.write(chunk)
Network failures are hard to reproduce, so my test setup was: start a local server with python -m SimpleHTTPServer, use this code to download a large file, then kill the server by hand. The code exits successfully without raising any exception, and the downloaded file is incomplete.
I want this abnormal connection close to raise an exception so I can handle it. How should I do that?
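One way to make truncation recoverable is to isolate the write loop in a helper that deletes the partial file and re-raises whenever the chunk iterator fails mid-stream. A minimal sketch, with the failing connection simulated by a generator that raises (the requests usage shown in the comment, including the URL and timeout values, is illustrative):

```python
import os

def save_stream(chunks, path):
    """Write an iterable of byte chunks to `path`.
    If the stream raises (e.g. the connection is cut), remove the
    partial file and re-raise so the caller can retry."""
    try:
        with open(path, 'wb') as fd:
            for chunk in chunks:
                fd.write(chunk)
    except Exception:
        if os.path.exists(path):
            os.remove(path)
        raise

# With requests this would be used roughly as (illustrative):
#   r = requests.get(url, stream=True, timeout=(5, 30))
#   r.raise_for_status()
#   save_stream(r.iter_content(1024 * 100), 'test')

def broken_stream():
    # Simulates a server closing the connection mid-transfer.
    yield b'partial '
    raise ConnectionError('connection closed mid-transfer')

try:
    save_stream(broken_stream(), 'test.part')
except ConnectionError:
    print('download failed, partial file removed:',
          not os.path.exists('test.part'))  # True
```

Note that whether an abrupt close surfaces as an exception from iter_content depends on the transfer encoding, so the cleanup-and-retry pattern above is a safety net, not a complete integrity check.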
Download it in sections, then reassemble them.
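A sketch of the reassembly idea: fetch fixed-size sections until an empty one comes back, then join them in order. In a real client each section would be a requests.get with a Range header (e.g. headers={'Range': 'bytes=0-2047'}), which requires the server to support range requests; here the sections are simulated with slices of an in-memory blob so the example is self-contained:

```python
data = b'0123456789' * 1000  # stand-in for the remote file
section_size = 2048

def fetch_section(start, end):
    # Stand-in for a ranged HTTP request returning bytes [start, end).
    return data[start:end]

parts = []
offset = 0
while True:
    part = fetch_section(offset, offset + section_size)
    if not part:          # empty section means we reached the end
        break
    parts.append(part)
    offset += len(part)   # advance by what was actually received

reassembled = b''.join(parts)
print(reassembled == data)  # True
```

Sectioned downloads also give a natural retry unit: if one section fails, only that section needs to be fetched again.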
If you just want to verify the integrity of the downloaded file, you can check its MD5 after the loop finishes. This requires the server to hand the source file's MD5 to the client: for example, keep a text file in the served directory that records the MD5 of every file there, and pass the expected digest to requests via a GET parameter, such as http://localhost/a.mp4?md5=xx...
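A minimal sketch of the verification step: accumulate the MD5 incrementally over the streamed chunks, so no second read pass is needed, then compare it with the digest the server advertised. The expected digest here is computed locally only to keep the example self-contained; in practice it would come from the md5 GET parameter described above:

```python
import hashlib

def md5_of_chunks(chunks):
    """Accumulate an MD5 digest over streamed byte chunks."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

payload = b'hello world'
expected = hashlib.md5(payload).hexdigest()  # would come from the server
got = md5_of_chunks([payload[:5], payload[5:]])
print(got == expected)  # True
```

In the download loop this means calling h.update(chunk) next to fd.write(chunk), then comparing h.hexdigest() against the expected value once the loop ends.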