
The power of Scrapy: How to recognize and process verification codes?

王林
Release: 2023-06-22 15:09:12

Scrapy is a powerful Python framework that helps us crawl data on websites easily. However, we run into problems when the website we want to crawl has a verification code. The purpose of CAPTCHAs is to prevent automated crawlers from attacking a website, so they tend to be highly complex and difficult to crack. In this post, we’ll cover how to use the Scrapy framework to identify and process CAPTCHAs to allow our crawlers to bypass these defenses.

What is a verification code?

A CAPTCHA is a test used to prove that the user is a real human being and not a machine. It usually presents an obfuscated text string or a distorted image that the user must manually type in or select from. CAPTCHAs are designed to stop automated bots and scripts, protecting websites from malicious attacks and abuse.

There are usually three types of CAPTCHAs:

  1. Text CAPTCHA: The user must type the displayed string of text to prove they are a human user and not a bot.
  2. Number verification code: The user is required to enter the displayed number in the input box.
  3. Image verification code: The user is required to enter the characters or numbers shown in an image. This is usually the hardest type to crack, because the characters or numbers in the image can be distorted, displaced, or obscured by other visual noise.
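For the very simplest "math"-style text verification codes, recognition does not even need OCR: the prompt itself contains the answer. The helper below is a toy sketch (the function name and prompt format are illustrative assumptions, not from any library) that parses an arithmetic prompt such as "3 + 5 = ?":

```python
import re

def solve_math_captcha(text):
    """Solve trivially simple arithmetic text captchas such as '3 + 5 = ?'.

    Only a toy for the simplest text/number captchas; image captchas
    need OCR or a third-party solving service.
    """
    match = re.search(r'(\d+)\s*([+\-*])\s*(\d+)', text)
    if not match:
        return None
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    results = {'+': a + b, '-': a - b, '*': a * b}
    return str(results[op])
```

Anything beyond this toy case, and in particular any image-based code, is where the third-party solving services discussed below come in.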

Why do you need to process verification codes?

Crawlers are often automated at large scale, so they are easily identified as robots and blocked from obtaining data. CAPTCHAs were introduced to prevent exactly this. Once the crawl reaches a verification-code page, the Scrapy crawler stops to wait for user input and cannot continue crawling, reducing both the efficiency and the completeness of the crawl.

Therefore, we need a way to handle the verification code so that our crawler can automatically pass and continue its task. Usually we use third-party tools or APIs to complete the recognition of verification codes. These tools and APIs use machine learning and image processing algorithms to recognize images and characters, and return the results to our program.
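As a sketch of what calling such a solving API might look like (the endpoint URL, payload shape, and response fields below are assumptions for illustration, not a real service; substitute the documented interface of whatever solver you license), a minimal client could be:

```python
import json
from urllib import request as urllib_request

class CaptchaClient:
    """Minimal sketch of a client for a hypothetical captcha-solving API."""

    def __init__(self, api_url='https://solver.example.com/solve', fetch=None):
        self.api_url = api_url
        # The HTTP call is injectable so the client can be exercised offline
        self._fetch = fetch or self._http_post

    def _http_post(self, payload):
        # POST a JSON payload to the (hypothetical) solver endpoint
        req = urllib_request.Request(
            self.api_url,
            data=json.dumps(payload).encode('utf-8'),
            headers={'Content-Type': 'application/json'},
        )
        with urllib_request.urlopen(req) as resp:
            return json.loads(resp.read().decode('utf-8'))

    def solve(self, page_url):
        # Ask the service to solve the captcha shown at page_url
        result = self._fetch({'action': 'solve', 'url': page_url})
        return result.get('captcha')

    def check(self, page_url, code):
        # Ask the service whether the submitted code was accepted
        result = self._fetch({'action': 'check', 'url': page_url, 'code': code})
        return bool(result.get('ok'))
```

The middleware shown later in this article assumes a client with exactly this solve/check shape.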

How to handle verification codes in Scrapy?

Open Scrapy's settings.py file and register our custom middleware in the DOWNLOADER_MIDDLEWARES setting:

DOWNLOADER_MIDDLEWARES = {
    # Scrapy's built-in downloader middlewares (retry, redirect, cookies,
    # user agent, compression, stats, etc.) are already enabled by default
    # at their own priorities, so only the custom CAPTCHA middleware needs
    # to be registered here.
    'tutorial.middlewares.CaptchaMiddleware': 999,
}

In this example, we use CaptchaMiddleware to handle the verification code. CaptchaMiddleware is a custom middleware class that inspects each download request, calls a solving API to identify the verification code when one is needed, attaches the solved code to the request, and lets the request continue.

Code example:

class CaptchaMiddleware(object):

    def __init__(self):
        # CaptchaClient is the client for the third-party solving service
        self.client = CaptchaClient()
        self.max_attempts = 5

    def process_request(self, request, spider):
        # Default to dont_filter so retried requests are not deduplicated
        if not request.meta.get('dont_filter', False):
            request.meta['dont_filter'] = True

        if 'captcha' in request.meta:
            # The request already carries a solved verification code
            captcha = request.meta.pop('captcha')
        else:
            # No code yet, so ask the solving service for one
            captcha = self.get_captcha(request.url, logger=spider.logger)

        if captcha:
            # Attach the code to the request headers; returning None lets
            # the request continue through the middleware chain
            request.headers['Captcha-Code'] = captcha
            request.headers['Captcha-Type'] = 'math'
            spider.logger.debug(f'has captcha: {captcha}')

        return None

    def process_response(self, request, response, spider):
        # Only responses to requests that carried a code need checking
        need_retry = 'Captcha-Code' in request.headers
        if not need_retry:
            return response

        # Stop retrying once the retry budget is exhausted
        retry_times = request.meta.get('retry_times', 0)
        if retry_times >= self.max_attempts:
            return response

        # If the check failed, solve a fresh code and retry the request
        result = self.client.check(request.url, request.headers['Captcha-Code'])
        if not result:
            spider.logger.warning(f'Captcha check fail: {request.url}')
            meta = dict(request.meta,
                        captcha=self.get_captcha(request.url, logger=spider.logger),
                        retry_times=retry_times + 1)
            return request.replace(meta=meta, dont_filter=True)

        # The code was accepted, so hand the response on
        spider.logger.debug(f'Captcha check success: {request.url}')
        return response

    def get_captcha(self, url, logger=None):
        captcha = self.client.solve(url)
        if captcha:
            if logger:
                logger.debug(f'got captcha, first 4 chars: {captcha[0:4]}')
            return captcha

        return None

In this middleware, the CaptchaClient object serves as the captcha-solving client; because the middleware only depends on its solve/check interface, we can plug in multiple captcha-solving services behind it.
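One way to use multiple solving services, as a sketch: wrap several clients behind the same solve/check interface and fall through to the next one when a service returns nothing. The class and method names here are illustrative assumptions, not part of Scrapy or any solver SDK.

```python
class FallbackCaptchaClient:
    """Try several solver clients in order until one returns a result.

    Each wrapped client is assumed to expose the same
    solve(url) / check(url, code) interface used by the middleware.
    """

    def __init__(self, clients):
        self.clients = list(clients)

    def solve(self, url):
        # Return the first non-empty answer any service produces
        for client in self.clients:
            captcha = client.solve(url)
            if captcha:
                return captcha
        return None

    def check(self, url, code):
        # Accept the code if any service confirms it
        return any(client.check(url, code) for client in self.clients)
```

The middleware's self.client could then be a FallbackCaptchaClient without any other change to the middleware code.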

Notes

When implementing this middleware, please pay attention to the following points:

  1. Recognizing and processing verification codes requires third-party tools or APIs. We need to make sure we hold a legal license and use them according to the provider's terms.
  2. Adding such middleware makes the request flow more complex, so developers need to test and debug carefully to ensure the program works properly.

Conclusion

By using the Scrapy framework together with a middleware for verification-code recognition and processing, we can effectively bypass a site's verification-code defenses and crawl the target website. This approach usually saves more time and effort than manually entering verification codes, and is more efficient and accurate. However, before using any third-party tool or API, make sure you read and comply with its license agreement and terms of use.
