
How Scrapy uses proxy IP, user agent, and cookies to avoid anti-crawler strategies


As web crawlers have proliferated, more and more websites and servers have adopted anti-crawler strategies to prevent their data from being scraped maliciously. These strategies include IP blocking, user agent detection, and cookie verification. Without corresponding countermeasures, our crawlers are easily labeled as malicious and banned. To avoid this, we need to apply proxy IPs, user agents, and cookies in our Scrapy crawler programs. This article introduces in detail how to apply these three techniques.

1. Proxy IP

A proxy IP hides our real IP address, preventing the server from identifying our crawler by it. It also lets us crawl from multiple IPs, avoiding the blocks that result when a single IP sends requests too frequently.

In Scrapy, we can use downloader middlewares to set the proxy IP. First, we need to add the relevant configuration in settings.py, for example:

DOWNLOADER_MIDDLEWARES = {
    # Disable the built-in user agent middleware (a custom one is
    # introduced in the next section)
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    # Keep the retry middleware so failed requests can be retried through
    # another proxy; scrapy_proxies recommends priority 90
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

In the above configuration, we use the scrapy_proxies library to implement proxy rotation. The number after each middleware (e.g. 100) is its priority: the smaller the value, the earlier the middleware runs. With this setup, Scrapy randomly selects an IP address from the proxy pool when making each request.
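
Note that scrapy_proxies also needs to know where the proxy pool lives. The settings.py additions below are a minimal sketch based on the scrapy_proxies documentation; the proxy list path is a placeholder to replace with your own file:

# settings.py — additional scrapy_proxies configuration (sketch)
# Allow more retries than the default, since proxies often fail
RETRY_TIMES = 10
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

# Text file with one proxy per line, e.g. http://host:port (placeholder path)
PROXY_LIST = '/path/to/proxy/list.txt'

# 0 = pick a random proxy from the list for every request
PROXY_MODE = 0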

Of course, we can also customize the proxy IP source. For example, we can call the API provided by a free proxy IP website to obtain proxies. The code example is as follows:

import json

import requests


class GetProxy(object):
    """Fetch a proxy IP from a remote API."""

    def __init__(self, proxy_url):
        self.proxy_url = proxy_url

    def get_proxy_ip(self):
        response = requests.get(self.proxy_url)
        if response.status_code == 200:
            json_data = json.loads(response.text)
            proxy = json_data.get('proxy')
            return proxy
        return None


class RandomProxyMiddleware(object):
    """Downloader middleware that fetches a fresh proxy for each request."""

    def __init__(self):
        self.proxy_url = 'http://api.xdaili.cn/xdaili-api//greatRecharge/getGreatIp?spiderId=e2f1f0cc6c5e4ef19f884ea6095deda9&orderno=YZ20211298122hJ9cz&returnType=2&count=1'
        self.get_proxy = GetProxy(self.proxy_url)

    def process_request(self, request, spider):
        proxy = self.get_proxy.get_proxy_ip()
        if proxy:
            # HttpProxyMiddleware picks the proxy up from request.meta
            request.meta['proxy'] = 'http://' + proxy

In the above code, we define a RandomProxyMiddleware class that uses the requests library to fetch a proxy IP from the API. By writing the proxy into request.meta['proxy'], we tell Scrapy's HttpProxyMiddleware which proxy to route the request through.
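
For the middleware to take effect, it must also be registered in settings.py. A minimal sketch, assuming the class lives in a hypothetical myproject.middlewares module (replace the path with your own):

DOWNLOADER_MIDDLEWARES = {
    # Placeholder module path — point this at wherever the class is defined
    'myproject.middlewares.RandomProxyMiddleware': 100,
    # HttpProxyMiddleware must run later (higher number) so it can apply
    # the proxy stored in request.meta['proxy']
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}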

2. User Agent

The user agent is part of the request headers and identifies the device, operating system, and browser that initiated the request. Many servers inspect the User-Agent header to judge whether a request comes from a crawler and apply anti-crawler measures accordingly.

Similarly, in Scrapy, we can use middlewares to implement user agent settings. For example:

import random


class RandomUserAgent(object):
    def __init__(self):
        # Pool of User-Agent strings; extend it with as many as you need
        self.user_agents = [
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3',
            'Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0',
        ]

    def process_request(self, request, spider):
        # Pick a random UA and set it unless one is already present
        user_agent = random.choice(self.user_agents)
        request.headers.setdefault('User-Agent', user_agent)

In the above code, we define a RandomUserAgent class that randomly selects a User-Agent from the pool and sets it in the request headers. This way, even if our crawler sends a large number of requests, it is less likely to be flagged by the server as a single automated client.
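
As before, the middleware only takes effect once registered in settings.py. A minimal sketch, again assuming a hypothetical myproject.middlewares module:

DOWNLOADER_MIDDLEWARES = {
    # Disable Scrapy's built-in user agent middleware in favor of ours
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    # Placeholder module path — adjust to your project layout
    'myproject.middlewares.RandomUserAgent': 400,
}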

3. Cookies

Cookies are data the server returns through the Set-Cookie field in a response header. When the browser sends another request to the same server, it includes the stored cookies in the request headers, which is how login sessions and similar state are maintained.
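
For a single request, Scrapy also lets us attach cookies directly when the request is created, without any middleware. A brief illustration (the URL, cookie name, and value are made-up examples):

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'

    def start_requests(self):
        # Cookies passed here are merged into the request by Scrapy's
        # built-in CookiesMiddleware (name and value are made up)
        yield scrapy.Request(
            'https://example.com/profile',
            cookies={'sessionid': 'your-session-id'},
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info('Fetched %s', response.url)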

To set cookies across all requests, as with proxies and user agents, we can also use a middleware. For example:

import random


class RandomCookies(object):
    def __init__(self):
        # random.choice() needs a sequence, so keep the candidate
        # cookie sets in a list of dicts rather than a single dict
        self.cookies = [
            {'example_cookie': 'example_value'},
        ]

    def process_request(self, request, spider):
        cookie = random.choice(self.cookies)
        request.cookies = cookie

In the above code, we define a RandomCookies class that randomly selects one cookie set from the pool and assigns it to request.cookies. Scrapy's built-in CookiesMiddleware then sends those cookies with the request, which lets us carry login state while crawling.
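
One detail to watch: Scrapy's built-in CookiesMiddleware sits at priority 700 in the default downloader middleware stack, and it is the component that reads request.cookies. Our middleware therefore needs a priority below 700 so it runs first. A minimal registration sketch, with a placeholder module path:

DOWNLOADER_MIDDLEWARES = {
    # Must be < 700 so request.cookies is set before the built-in
    # CookiesMiddleware processes the request (module path is a placeholder)
    'myproject.middlewares.RandomCookies': 600,
}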

Summary

When using Scrapy to crawl data, knowing how to evade anti-crawler strategies is critical. This article has described in detail how to configure proxy IPs, user agents, and cookies through Scrapy's downloader middlewares to make a crawler program less conspicuous and more resilient.

