How to use Scrapy to build an efficient crawler program

With the advent of the information age, the amount of data on the Internet keeps growing, and so does the demand for collecting large amounts of it. Crawlers have become one of the best solutions to this need. As an excellent Python crawler framework, Scrapy is efficient, stable and easy to use, and is widely applied in many fields. This article introduces how to use Scrapy to build an efficient crawler program and gives code examples.

  1. The basic structure of the crawler program

A Scrapy crawler project mainly consists of the following components:

  • Crawler program (spider): Defines how to crawl pages, how to parse data from them, which links to follow, and so on.
  • Item pipeline: Responsible for post-processing the data extracted by the spider, such as storing it in a database or exporting it to a file.
  • Downloader middleware: Handles requests as they are sent and responses as they are received. It can perform operations such as setting the User-Agent or switching proxy IPs (a minimal sketch is shown after this list).
  • Scheduler: Responsible for managing all requests to be fetched and scheduling them according to certain strategies.
  • Downloader: Responsible for downloading the requested page content and returning it to the crawler program.
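
As an illustration of the downloader middleware mentioned above, here is a minimal sketch of a middleware that rotates the User-Agent header (and optionally sets a proxy). The class name, the user-agent strings and the proxy address are illustrative assumptions, not part of the generated project:

# myspider/middlewares.py (illustrative sketch)
import random

# Hypothetical User-Agent strings; replace with your own list.
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
]

class RandomUserAgentMiddleware:
    """Attach a random User-Agent (and optionally a proxy) to every request."""

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(USER_AGENTS)
        # To route requests through a proxy, uncomment the next line (address is hypothetical):
        # request.meta['proxy'] = 'http://127.0.0.1:8080'
        return None  # returning None lets the request continue through the middleware chain

To enable such a middleware, register it in settings.py under DOWNLOADER_MIDDLEWARES with a priority number, in the same way ITEM_PIPELINES is configured later in this article.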
  2. Writing a crawler program

In Scrapy, we need to create a new crawler project to write our crawler program. Execute the following command in the command line:

scrapy startproject myspider

This will create a project folder named "myspider" containing some default files and folders. For reference, the generated layout typically looks like the sketch below (the exact files may differ slightly between Scrapy versions).
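
myspider/
    scrapy.cfg            # deployment configuration
    myspider/
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spider modules go here
            __init__.py

We can then enter the project folder and generate a new spider: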

cd myspider
scrapy genspider example example.com

This will create a spider named "example" for crawling data from the "example.com" domain. The specific crawling logic goes in the generated "example.py" file inside the "spiders" folder.

Here is a simple example that crawls news headlines and links from a website:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/news']

    def parse(self, response):
        # Extract the title and link from each news item on the page
        for news in response.xpath('//div[@class="news-item"]'):
            yield {
                'title': news.xpath('.//h2/text()').get(),
                'link': news.xpath('.//a/@href').get(),
            }
        # Follow the "next page" link, if present, and parse it the same way
        next_page = response.xpath('//a[@class="next-page"]/@href').get()
        if next_page:
            yield response.follow(next_page, self.parse)

In the above code, we define a spider class named "ExampleSpider" with three attributes: name is the name of the spider, allowed_domains lists the domains the spider is allowed to crawl, and start_urls gives the starting URLs. We then override the parse method, which parses the page content, extracts the news titles and links, and yields the results.
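
The spider above yields plain Python dicts. If you prefer structured items, you can instead define an Item class in the project's items.py; the class name NewsItem below is an illustrative assumption, and its fields simply mirror the dict keys:

# myspider/items.py (illustrative sketch)
import scrapy

class NewsItem(scrapy.Item):
    # Fields mirror the keys yielded by ExampleSpider
    title = scrapy.Field()
    link = scrapy.Field()

The spider would then yield NewsItem(title=..., link=...) instead of a dict; the pipeline shown in the next section works with either, since dict(item) accepts both.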

  3. Configuring the item pipeline

In Scrapy, the crawled data can be processed through item pipelines: it can be stored in a database, written to a file, or handled in any other way.

Open the "settings.py" file in the project, find the ITEM_PIPELINES setting, uncomment it, and register our pipeline class:

ITEM_PIPELINES = {
    'myspider.pipelines.MyPipeline': 300,
}

This enables the custom pipeline class "myspider.pipelines.MyPipeline" and assigns it a priority (the lower the number, the earlier the pipeline runs).
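
Several pipelines can be registered at the same time; each item passes through them in order of their priority numbers. The second pipeline below is a hypothetical example added only to illustrate the ordering:

ITEM_PIPELINES = {
    'myspider.pipelines.MyPipeline': 300,        # runs first (lower number)
    'myspider.pipelines.DatabasePipeline': 800,  # hypothetical second pipeline, runs later
}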

Next, we need a pipeline class to process the data. Open the "pipelines.py" file in the project (startproject generates one by default) and add the following code:

import json

class MyPipeline:

    def open_spider(self, spider):
        # Open the output file once, when the spider starts
        self.file = open('news.json', 'w')

    def close_spider(self, spider):
        # Close the file when the spider finishes
        self.file.close()

    def process_item(self, item, spider):
        # Serialize each item as one JSON object per line
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item

In this example, we define a pipeline class named "MyPipeline" with three methods: open_spider, close_spider and process_item. In open_spider we open a file to store the data; in close_spider we close the file; and in process_item we convert each item into JSON and write it to the file.
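
As a side note, for simple output formats such as JSON, Scrapy's built-in feed exports can achieve the same result without a custom pipeline, by passing an output file when running the spider (see the next section); the file name here is just an example:

scrapy crawl example -o news.json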

  4. Running the crawler program

After writing the crawler program and the item pipeline, we can run the crawler from the command line with the following command:

scrapy crawl example

This will launch the crawler named "example" and start crawling data. The crawled items will be processed by the pipeline class we defined.
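
Since the goal is an efficient crawler, it is also worth mentioning a few settings in settings.py that directly affect crawl speed and politeness. The values below are illustrative starting points, not recommendations from this article, and should be adjusted for the target site:

# settings.py -- illustrative tuning values
CONCURRENT_REQUESTS = 16       # how many requests Scrapy issues in parallel
DOWNLOAD_DELAY = 0.5           # seconds to wait between requests to the same site
AUTOTHROTTLE_ENABLED = True    # let Scrapy adapt the crawl rate automatically
HTTPCACHE_ENABLED = True       # cache responses locally, useful while developing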

The above is the basic process and sample code for using Scrapy to build an efficient crawler program. Of course, Scrapy offers many other features and options that can be adjusted and extended to fit specific needs. I hope this article helps readers understand Scrapy better and build efficient crawler programs.
