
Analysis of link extractor and deduplication tools in Scrapy


Scrapy is an excellent Python crawler framework. It supports concurrency, asynchronous I/O, and distributed crawling, helping developers fetch data from the web faster and more reliably. In Scrapy, link extractors and deduplication tools are important components that help crawlers automate data capture and processing. This article analyzes the link extractor and deduplication tools in Scrapy, explores how they are implemented, and shows how they fit into the Scrapy crawling process.

1. The function and implementation of the link extractor

A link extractor (Link Extractor) is a Scrapy component that automatically extracts URL links. In a complete crawl, we often need to pull URLs out of a page and then visit and process those pages in turn; the link extractor implements this step. It extracts links from web pages according to configurable rules so they can be scheduled into Scrapy's request queue for subsequent processing.

In Scrapy, link extractors match links through regular expressions or XPath expressions. The standard LinkExtractor (an alias for LxmlLinkExtractor) supports both styles: regular-expression filtering via the allow and deny parameters, and XPath-based restriction via restrict_xpaths. Both usages are shown below.

  1. Regular expression-based LinkExtractor

With regular-expression matching, LinkExtractor extracts every link whose URL matches the given pattern. For example, to extract all links starting with http://example.com/ from a page, we can use the following code:

from scrapy.linkextractors import LinkExtractor

link_extractor = LinkExtractor(allow=r'^http://example\.com/')
links = link_extractor.extract_links(response)

The allow parameter specifies a regular expression that matches all links starting with http://example.com/. The extract_links() method extracts every matching link and returns them as a list of Link objects.

The Link object is the data structure Scrapy uses to represent an extracted link; it carries the link's URL, anchor text, URL fragment, and nofollow flag. Through these objects we can easily obtain the links we need and process and follow them further in the Scrapy crawler.
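As a hedged sketch of how these objects are typically consumed inside a spider callback (parse_item is a hypothetical callback name, not part of the original example), the extracted links can be turned into follow-up requests:

import scrapy
from scrapy.linkextractors import LinkExtractor


class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com/']

    def parse(self, response):
        # Extract every link whose URL starts with http://example.com/
        link_extractor = LinkExtractor(allow=r'^http://example\.com/')
        for link in link_extractor.extract_links(response):
            # link.url and link.text come from the Link object
            self.logger.info('Found %s (%s)', link.url, link.text)
            yield scrapy.Request(link.url, callback=self.parse_item)

    def parse_item(self, response):
        # Hypothetical follow-up callback: parse the linked page here
        pass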

  2. LxmlLinkExtractor based on XPath expressions

With XPath-based matching, LxmlLinkExtractor extracts links only from the parts of the page selected by an XPath expression. For example, to extract all <a> links whose class attribute equals "storylink", we can use the following code:

from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor

link_extractor = LxmlLinkExtractor(restrict_xpaths='//a[@class="storylink"]')
links = link_extractor.extract_links(response)

The restrict_xpaths parameter specifies an XPath expression, so only <a> tags whose class attribute equals "storylink" are considered. LxmlLinkExtractor is used in the same way as LinkExtractor and likewise returns the extracted links as a list of Link objects. Note that LxmlLinkExtractor parses HTML with the lxml library, which is a core Scrapy dependency, so no extra project configuration is needed to use it.
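Since the default LinkExtractor accepts both kinds of rules, the two filtering styles can also be combined. The following is a minimal sketch with illustrative patterns (the allow/deny regexes and the XPath are assumptions, not from the original example):

from scrapy.linkextractors import LinkExtractor

# Inside a spider callback, where `response` is available:
# only consider <a class="storylink"> elements, and among those keep
# URLs matching `allow` while dropping URLs matching `deny`.
link_extractor = LinkExtractor(
    allow=r'^http://example\.com/item/',
    deny=r'\?sort=',
    restrict_xpaths='//a[@class="storylink"]',
)
links = link_extractor.extract_links(response)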

2. The role and implementation of deduplication tools

When crawling the web, link deduplication is very important, because in most cases different pages link to the same URL repeatedly. Without deduplication, the same pages are crawled over and over, wasting bandwidth and time. Scrapy therefore provides a duplicate filter (Duplicate Filter) that marks URLs that have already been requested so they are not visited again.

The principle of a deduplication tool is to record every visited URL in a data structure and check whether each new URL has been seen before: if it has, the request is discarded; otherwise it is added to the crawler's request queue. Scrapy's built-in RFPDupeFilter keeps request fingerprints in an in-memory set and can persist them to disk between runs; extensions such as scrapy-redis add other back ends, including Redis-based deduplication. Different deduplicators suit different scenarios; the Redis deduplicator is used as the example below, and the simplified sketch that follows illustrates the underlying idea.
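The core idea fits in a few lines of Python. This is a simplified illustration of the principle only, not Scrapy's actual request fingerprinting (which also takes the HTTP method and body into account):

import hashlib


class SimpleDupeFilter:
    """Simplified illustration of the dedup principle: remember a
    fingerprint of every URL seen so far and reject repeats."""

    def __init__(self):
        self.fingerprints = set()

    def request_seen(self, url):
        # Hash the URL so the set stores fixed-size fingerprints
        fp = hashlib.sha1(url.encode('utf-8')).hexdigest()
        if fp in self.fingerprints:
            return True   # already visited: drop the request
        self.fingerprints.add(fp)
        return False      # new URL: let the scheduler enqueue it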

  1. Redis-based deduplicator

Redis is a high-performance in-memory NoSQL database that supports distribution, persistence, and rich data structures, which makes it well suited for backing a shared deduplication tool. The Redis deduplicator used with Scrapy (provided by the scrapy-redis extension) marks URL links that have already been visited so they are not requested again.

By default, Scrapy uses an in-memory, set-based dupefilter (RFPDupeFilter). To use the Redis deduplicator instead, add the following to the project configuration file:

# settings.py
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_HOST = "localhost"
REDIS_PORT = 6379

The DUPEFILTER_CLASS parameter specifies the deduplication class to use. Here we use scrapy_redis.dupefilter.RFPDupeFilter, which is implemented on top of Redis's set data structure.

The SCHEDULER parameter specifies the scheduler class. Here we use scrapy_redis.scheduler.Scheduler, which keeps the request queue in Redis (by default a priority queue backed by a Redis sorted set).

The SCHEDULER_PERSIST parameter specifies whether the scheduler needs to be persisted in Redis, that is, whether it needs to save the state of the last crawl to avoid re-crawling URLs that have already been crawled.

The REDIS_HOST and REDIS_PORT parameters specify the IP address and port number of the Redis database respectively. If the Redis database is not local, you need to set the corresponding IP address.
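scrapy-redis also accepts a single REDIS_URL setting in place of separate REDIS_HOST and REDIS_PORT values, which is convenient when the connection needs a password or a non-default database; the spider example below uses this form. A minimal illustration (the credentials are placeholders):

# settings.py (alternative to REDIS_HOST / REDIS_PORT)
REDIS_URL = 'redis://:password@localhost:6379/0'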

Once the Redis deduplicator is in use, the spider needs a redis_key parameter that names the Redis key under which its URL links are stored. For example:

# spider.py
from scrapy_redis.spiders import RedisSpider


class MySpider(RedisSpider):
    name = 'myspider'
    # No start_urls here: a RedisSpider reads its start URLs from the
    # Redis key named by redis_key (set in __init__ below)

    custom_settings = {
        'REDIS_HOST': 'localhost',
        'REDIS_PORT': 6379,
        'DUPEFILTER_CLASS': 'scrapy_redis.dupefilter.RFPDupeFilter',
        'SCHEDULER': 'scrapy_redis.scheduler.Scheduler',
        'SCHEDULER_PERSIST': True,
        'SCHEDULER_QUEUE_CLASS': 'scrapy_redis.queue.SpiderPriorityQueue',
        'REDIS_URL': 'redis://user:pass@localhost:6379',
        'ITEM_PIPELINES': {
            'scrapy_redis.pipelines.RedisPipeline': 400,
        },
        'DOWNLOADER_MIDDLEWARES': {
            'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
            'scrapy_useragents.downloadermiddlewares.useragents.UserAgentsMiddleware': 500,
        },
        'FEED_URI': 'result.json',
        'FEED_FORMAT': 'json',
        'LOG_LEVEL': 'INFO',
        'SPIDER_MIDDLEWARES': {
            'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 300,
        }
    }

    def __init__(self, *args, **kwargs):
        domain = kwargs.pop('domain', '')
        self.allowed_domains = list(filter(None, domain.split(',')))  # filter() returns an iterator in Python 3
        self.redis_key = '%s:start_urls' % self.name
        super(MySpider, self).__init__(*args, **kwargs)

    def parse(self, response):
        pass

The above is a simple spider example: the redis_key parameter specifies that the URL links are stored in Redis under the key myspider:start_urls. In the parse() method, you write your own page-parsing code to extract the information you need.
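As a hedged sketch of the remaining pieces (the redis-cli command and the CSS selectors are illustrative assumptions, not part of the original example): the start URLs are pushed into Redis under the redis_key, and parse() extracts whatever fields the target pages expose.

# Seed the crawl by pushing start URLs into Redis, e.g. from redis-cli:
#   lpush myspider:start_urls http://example.com/

# parse() inside MySpider might then look like this (selectors are hypothetical):
def parse(self, response):
    for link in response.css('a.storylink'):
        yield {
            'title': link.css('::text').get(),
            'url': link.css('::attr(href)').get(),
        }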

3. Summary

Link extractors and deduplication tools are essential components of the Scrapy crawler framework. They greatly simplify the work of writing crawlers and improve crawling efficiency. When using Scrapy, we can choose different link extractors and deduplication tools according to our needs to build more efficient and flexible crawlers.

