Table of Contents
1. What is a distributed crawler?
2. How to implement a distributed crawler?
3. Optimize the Scrapy crawler
4. Summary

Distributed crawlers in Scrapy and methods to improve data crawling efficiency

Jun 22, 2023 pm 09:25 PM

Scrapy is an efficient Python web crawler framework that lets you write crawler programs quickly and flexibly. However, when processing large amounts of data or crawling complex websites, a single-machine crawler can run into performance and scalability limits. In those cases a distributed crawler is needed to improve data crawling efficiency. This article introduces distributed crawling in Scrapy and methods to improve data crawling efficiency.

1. What is a distributed crawler?

In the traditional single-machine architecture, all crawling runs on one machine. Faced with large volumes of data or high-pressure crawling tasks, that machine's CPU, memory, and bandwidth quickly become the bottleneck. A distributed crawler spreads the crawling tasks across multiple machines; by distributing computation and storage, it reduces the load on any single machine and improves both the efficiency and the stability of the crawl.

Distributed crawlers in Scrapy are usually implemented with the open source extension scrapy-redis. It replaces Scrapy's scheduler and duplicate filter with Redis-backed versions, so multiple Scrapy processes on different machines share a single request queue and deduplication set in a central Redis server and can crawl in parallel.

2. How to implement a distributed crawler?

1. Install scrapy-redis

Run the following commands to install scrapy-redis (and pymongo, if you also plan to store results in MongoDB):

pip install scrapy_redis
pip install pymongo

2. Modify the Scrapy configuration file

Add the following settings to the project's settings.py file:

# Use the redis scheduler
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Use the redis deduplication strategy
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Do not clear the redis records on exit, so crawls can be paused and resumed
SCHEDULER_PERSIST = True

# Redis connection parameters
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
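
The install step above also pulled in pymongo, which is only needed if you store items in MongoDB. A minimal sketch of such an item pipeline follows; the class name, host, database, and collection names are placeholder assumptions, not part of the original setup:

import pymongo

class MongoPipeline:
    # connection details are assumptions; adjust to your deployment
    def open_spider(self, spider):
        self.client = pymongo.MongoClient('localhost', 27017)
        self.db = self.client['scrapy_data']

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # insert each scraped item into the 'items' collection
        self.db['items'].insert_one(dict(item))
        return item

Enable it by adding 'myproject.pipelines.MongoPipeline': 300 to ITEM_PIPELINES in settings.py (the module path is likewise a placeholder).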

3. Write the crawler code

In the spider itself, replace the usual start_urls mechanism with the scrapy-redis one: inherit from RedisSpider and set redis_key, so start URLs are popped from Redis:

# encoding: utf-8
import scrapy
from scrapy_redis.spiders import RedisSpider
from ..items import DouyuItem

class DouyuSpider(RedisSpider):
    # spider name
    name = 'douyu'
    # redis key; start URLs are popped from this list for crawling
    redis_key = 'douyu:start_urls'

    def parse(self, response):
        # normal Scrapy parsing code goes here
        pass

4. Start the redis service

Execute the following command in a terminal to start the redis service:

redis-server
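
Optionally, confirm the server is reachable before launching any crawler nodes; redis-cli ping should reply PONG:

redis-cli ping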

5. Start the distributed crawl

Enter the following command in the terminal on each crawler node:

scrapy crawl douyu -s JOBDIR=job1

Here job1 is a custom directory name; the JOBDIR setting tells Scrapy where to persist that node's local crawl state, so an interrupted run can be resumed.
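
Note that a RedisSpider starts idle and waits for URLs to appear under its redis_key, so the crawl is actually kicked off by pushing a start URL into Redis (the URL here is a placeholder):

redis-cli lpush douyu:start_urls https://www.douyu.com/

Every node connected to the same Redis server then pulls requests from the shared queue, so you can add machines simply by repeating this step 5 on them.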

3. Optimize the Scrapy crawler

Scrapy provides many ways to optimize crawler efficiency; used together with a distributed setup, they improve data crawling efficiency even further.

1. Using CrawlerRunner

CrawlerRunner is a helper class built on Twisted's reactor. Compared with simply running one Python process per spider, it lets you run multiple crawlers simultaneously in the same process, without resorting to multiple processes or multiple machines, which makes task management easier.

The way to use CrawlerRunner is as follows:

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from my_spider.spiders.my_spider import MySpider

runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider)
    # stop the reactor once the crawl finishes
    reactor.stop()

crawl()
reactor.run()
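
To actually run several spiders in one process, chain the crawl calls inside a single callback. A minimal sketch reusing the runner above, assuming two hypothetical spiders SpiderA and SpiderB are defined in the project (this replaces the single crawl() function):

@defer.inlineCallbacks
def crawl_all():
    # the spiders run one after another in the same process
    yield runner.crawl(SpiderA)
    yield runner.crawl(SpiderB)
    reactor.stop()

crawl_all()
reactor.run()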

2. Tune the download middleware and per-domain concurrency

If a download middleware has to process a large amount of data or do complex work, you can ease the load on it by lowering per-domain concurrency and adding a small delay; the middleware's position in the processing chain is set by its priority number in DOWNLOADER_MIDDLEWARES (lower numbers run earlier on outgoing requests):

CONCURRENT_REQUESTS_PER_DOMAIN = 2
DOWNLOAD_DELAY = 0.5
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.MyCustomDownloaderMiddleware': 543,
}
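
For reference, a minimal downloader middleware sketch matching the entry above; the header name and value are illustrative assumptions, not from the original article:

class MyCustomDownloaderMiddleware:
    def process_request(self, request, spider):
        # tag every outgoing request with a custom header
        request.headers['X-Crawler-Node'] = 'node-1'
        # returning None lets Scrapy continue normal processing
        return None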

3. Adjust the CONCURRENT_REQUESTS and DOWNLOAD_DELAY parameters

CONCURRENT_REQUESTS is the maximum number of requests Scrapy processes concurrently across all domains (CONCURRENT_REQUESTS_PER_DOMAIN sets the per-domain limit); raise it within reason based on machine resources and task requirements.

DOWNLOAD_DELAY is the delay between consecutive requests to the same site. Lowering it speeds up the crawl but increases the risk of being throttled or banned, so balance speed against politeness for each target site.
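
For illustration, a configuration sketch with plausible values; the numbers are assumptions to tune for your own machines and targets. Scrapy's built-in AutoThrottle extension can also adapt the delay dynamically:

# global and per-domain concurrency caps (illustrative values)
CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 8
# base delay between requests to the same site
DOWNLOAD_DELAY = 0.25
# let Scrapy adapt the delay to observed server responsiveness
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_TARGET_CONCURRENCY = 4.0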

4. Summary

Scrapy’s distributed crawler support can help us process large amounts of data quickly and improve crawling efficiency. On top of that, efficiency can be tuned further by keeping download middleware lightweight, adjusting the concurrency settings, and choosing a sensible request delay. Distributed crawling is one of Scrapy's important capabilities, and learning it lets us handle all kinds of crawling tasks with ease.
