python - Crawler that simulates logging into a website becomes unable to access it for a while after too many logins
PHPz 2017-04-17 17:34:11

As the title says.
I log in using a requests session.
At the end of every run I call close() to shut the session down.
Because I'm testing things, I often re-run the program immediately after it finishes.
When logging in via POST I send nothing except the form data, no other information.

Is this some anti-crawler mechanism on the site's part? Why can a browser visit repeatedly without any problem? Should I be sending request headers?
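For reference, a minimal sketch of the flow described above. The login URL and form fields are placeholders, not the real site's; note that this pattern logs in fresh on every run, which is exactly the rapid repeated-login behavior a site may throttle or block.

```python
import time

import requests

LOGIN_URL = "https://example.com/login"  # hypothetical login endpoint
PAYLOAD = {"username": "me", "password": "secret"}  # placeholder form data

def login_once():
    # Create a session, POST only the form data (no extra headers),
    # then close the session when done, as described in the question.
    session = requests.Session()
    try:
        resp = session.post(LOGIN_URL, data=PAYLOAD)
        return resp.status_code
    finally:
        session.close()

if __name__ == "__main__":
    login_once()
    time.sleep(5)  # pausing between runs makes repeated logins less bot-like
```

Each run throws the session (and its cookies) away, so every run is a brand-new login from the server's point of view.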


All replies (3)
洪涛

Just save the cookies and reuse them instead of logging in on every run.
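One way to save the cookies, as suggested, is to back the session's cookie jar with a file so later runs reuse the existing login instead of POSTing credentials again. A sketch assuming the requests library; the cookie file name is arbitrary:

```python
import os
from http.cookiejar import LWPCookieJar

import requests

COOKIE_FILE = "cookies.txt"  # arbitrary path for the persisted cookies

def make_session(cookie_file=COOKIE_FILE):
    # Attach a file-backed cookie jar to the session; if a saved jar
    # already exists, load it so the previous login is reused.
    session = requests.Session()
    jar = LWPCookieJar(cookie_file)
    if os.path.exists(cookie_file):
        jar.load(ignore_discard=True)
    session.cookies = jar
    return session

def save_cookies(session):
    # Call this after a successful login so the next run can skip it.
    session.cookies.save(ignore_discard=True)
```

With this in place, a run would call `make_session()`, check whether the loaded cookies are still valid (for example by requesting a logged-in-only page), and only fall back to the login POST when they are not.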

洪涛

You can add a User-Agent header to the request to simulate browser access.
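Adding a browser-like User-Agent is a one-line change. A sketch, again with a hypothetical login URL and form fields; the User-Agent string below is just an example desktop-Chrome value:

```python
import requests

LOGIN_URL = "https://example.com/login"  # hypothetical login endpoint

# Example desktop-browser User-Agent so the request resembles a normal browser
# rather than requests' default "python-requests/x.y.z" signature.
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0.0.0 Safari/537.36",
}

def login(session, username, password):
    # POST the form data together with browser-like headers.
    return session.post(
        LOGIN_URL,
        data={"user": username, "pass": password},  # placeholder field names
        headers=HEADERS,
    )
```

Setting `session.headers.update(HEADERS)` once on the session is equivalent and applies the header to every request the session makes.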

左手右手慢动作

I recommend developing crawlers on the Archer Cloud Crawler Platform, which supports automatic collection in the cloud.
A few lines of JavaScript can implement a complex crawler, and the platform provides many built-in features (anti-anti-crawler measures, JS rendering, data publishing, chart analysis, hotlink protection, and so on), so the problems commonly encountered while developing crawlers are handled for you.
