1. I wrote a crawler to scrape a page and successfully got all the image URLs.
2. After downloading and saving the images locally, I tried to open them, but they just show a loading spinner forever and never open.
Already solved, I found the method:
'''
with open(os.path.join(filename, image_name), 'wb') as fs:
    fs.write(r.content)
'''
(The stray `fs.close` is unnecessary: the `with` block closes the file automatically, and without parentheses it would not even call the method.)
It is possible that an anti-crawler mechanism was triggered or the URL is wrong, so the downloaded file is actually text rather than an image. Open it with a text editor and check the content.
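To check this without opening each file by hand, you can inspect the first bytes of the saved file: real JPEG/PNG files start with known magic bytes, while a blocked request usually saves an HTML error page. This is a hedged sketch; the function name `sniff_file` and the path you pass it are placeholders, not from the original post.

```python
def sniff_file(path):
    """Guess whether a downloaded file is an image or an HTML error page,
    based on its leading magic bytes."""
    with open(path, 'rb') as f:
        head = f.read(16)
    if head.startswith(b'\xff\xd8\xff'):
        return 'jpeg'
    if head.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'png'
    if head.lstrip().lower().startswith((b'<!doctype', b'<html')):
        return 'html'  # likely an anti-crawler error page, not an image
    return 'unknown'
```

If this reports `'html'` for your "images", the server is returning an error page instead of image bytes.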
From the requests documentation:
In general, however, you should use a pattern like this to save what is being streamed to a file:
with open(filename, 'wb') as fd:
    for chunk in r.iter_content(chunk_size):
        fd.write(chunk)
Test:
with open('./pic2/' + str(self.picnum) + '.jpeg', 'wb') as fd:
    for chunk in r.iter_content():
        fd.write(chunk)
print('Image %s downloaded successfully.' % self.picnum)
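Putting the pieces of this thread together, here is a hedged, self-contained sketch of a download helper: it sends a browser-like `User-Agent` (many anti-crawler checks key on this, an assumption about the target site), streams the response to disk in chunks as the requests docs recommend, and refuses to save the file if the server returned HTML instead of image bytes. The function name, URL, and save path are illustrative placeholders, not from the original post.

```python
import os
import requests

def download_image(url, save_dir, image_name):
    # Assumption: the site blocks default requests UA; a browser-like one may help.
    headers = {'User-Agent': 'Mozilla/5.0'}
    r = requests.get(url, headers=headers, stream=True, timeout=10)
    r.raise_for_status()
    # Guard: if the server sent HTML, we were likely blocked; don't save it
    # as a .jpeg that will never open.
    if 'text/html' in r.headers.get('Content-Type', ''):
        raise ValueError('Got HTML instead of an image - likely blocked')
    os.makedirs(save_dir, exist_ok=True)
    path = os.path.join(save_dir, image_name)
    with open(path, 'wb') as fd:
        for chunk in r.iter_content(chunk_size=8192):
            fd.write(chunk)
    return path
```

Checking the `Content-Type` before writing catches the exact failure described above, where a text error page was saved under an image filename.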