How to implement distributed crawler using client IP

WBOY
Release: 2016-08-08 09:06:43

A server-side crawler runs into all kinds of problems. Is there a way to use each visitor's IP to fetch the target website when they open my web page, and then upload the data to me? Can this work as a distributed crawler, where Ajax fetches the crawled data in the visitor's browser and then sends it to my own server?

Are there any similar examples or open-source projects?

Reply content:


That amounts to stealing users' privacy and bandwidth; it won't fly~

The basic idea is to create a hidden iframe and request the target website inside it. Once the request succeeds, use Ajax to send the content back to your own server. Because many websites deploy anti-crawling measures, server-side crawlers often get blocked, and in those cases a client-side crawler can be very useful.

However, the user experience is not great...
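A minimal sketch of the hidden-iframe idea described above. All names here (`crawlViaIframe`, `buildPayload`, the URLs) are hypothetical placeholders, and note one big caveat the replies gloss over: the browser's same-origin policy blocks reading a cross-origin iframe, so this only works for same-origin pages or targets that cooperate (e.g. via CORS).

```javascript
// Build the JSON payload to ship back to our own server.
// (Hypothetical helper; field names are illustrative.)
function buildPayload(targetUrl, html) {
  return JSON.stringify({ url: targetUrl, html: html, ts: Date.now() });
}

// Browser-only part: load the target page in an invisible iframe,
// then upload whatever content we are allowed to read.
function crawlViaIframe(targetUrl, uploadUrl) {
  const iframe = document.createElement('iframe');
  iframe.style.display = 'none';      // keep it invisible to the visitor
  iframe.src = targetUrl;
  iframe.onload = function () {
    let html = '';
    try {
      // Throws a SecurityError for cross-origin frames.
      html = iframe.contentDocument.documentElement.outerHTML;
    } catch (e) {
      iframe.remove();
      return; // cross-origin: nothing readable, give up silently
    }
    // Send the captured page back to our own server via Ajax.
    fetch(uploadUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: buildPayload(targetUrl, html),
    }).finally(() => iframe.remove());
  };
  document.body.appendChild(iframe);
}
```

In practice this is why the reply above says server-side crawlers are sometimes swapped for client-side ones: the request really does come from the visitor's IP, but the same-origin restriction sharply limits which targets can actually be read.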

source:php.cn