Use the Scrapy framework to crawl the Flickr image library

WBOY
Release: 2023-06-22 11:02:07

In today's information age, crawling massive amounts of data has become an important skill. As big data technology develops rapidly, data crawling techniques keep being updated and improved. Among them, the Scrapy framework is one of the most commonly used and popular, offering unique advantages and flexibility in data crawling and processing.

This article introduces how to use the Scrapy framework to crawl the Flickr image library. Flickr is a photo-sharing website hosting hundreds of millions of images, making it a very large data resource. With the Scrapy framework we can easily fetch this data for research and analysis, or use it to build application models, and so make better use of the power of big data.

1. Introduction to Scrapy framework

Scrapy is an open-source web crawler framework written in Python. Designed around "efficiency" and "maintainability", it provides a complete crawling framework that is well suited to fetching and processing large-scale data. The core of the Scrapy framework consists of the following main functional modules:

  • Engine: processes the data flow of the whole system and coordinates the interaction and data transfer between the other components.
  • Scheduler: queues the requests issued by the engine and delivers them to the downloader.
  • Downloader: downloads web page content and hands the responses back to the engine.
  • Spider: parses the pages fetched by the downloader, extracting the desired data and organizing it into structured items.
  • Pipeline: post-processes the extracted data, for example saving it to a database or file.

2. Obtain the Flickr API Key

Before crawling any data, we need to apply for a Flickr API key to gain permission to access the Flickr database. On the Flickr developer site (https://www.flickr.com/services/api/misc.api_keys.html), we can obtain an API key by registering. The application steps are as follows:

① First, go to https://www.flickr.com/services/apps/create/apply/ to apply for an API key.

② Log in on that page; if you do not have an account yet, register one first.

③ After logging in, fill in and submit the Flickr application form. The form mainly asks for two pieces of information:

  • the name of the application
  • a description of its "non-commercial" purpose

④ Once the form is submitted, the system generates an API KEY and a SECRET. Save both pieces of information for later use.
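With the key saved, a quick sanity check is to build the REST query string that the crawler below will send. A minimal sketch (YOUR_API_KEY is a placeholder; the parameter names come from the flickr.photos.search method):

```python
from urllib.parse import urlencode

# Build the flickr.photos.search request URL; YOUR_API_KEY is a placeholder
# to be replaced with the key obtained above.
params = {
    'method': 'flickr.photos.search',
    'api_key': 'YOUR_API_KEY',
    'tags': 'cat,dog',
    'page': 1,
    'per_page': 50,
    'format': 'json',
    'nojsoncallback': 1,
}
url = 'https://api.flickr.com/services/rest/?' + urlencode(params)
print(url)
```

Opening the printed URL in a browser (with a real key substituted) should return a JSON document rather than an error, confirming the key works.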

3. Implementation of scraping Flickr image library using Scrapy framework

Next, we will introduce how to use Scrapy framework to crawl Flickr image library data.

1. Write a Scrapy crawler

First, we need to create a new Scrapy project and add a spider file to it. In the spider we configure the Flickr API request parameters and where the data should go:

import json
import time

import scrapy
from flickr.items import FlickrItem


class FlickrSpider(scrapy.Spider):
    name = 'flickr'
    api_key = 'YOUR_API_KEY'  # fill in your own API key here
    tags = 'cat,dog'          # comma-separated search keywords; change as needed
    format = 'json'
    nojsoncallback = '1'      # return raw JSON instead of a JSONP callback
    page = '1'
    per_page = '50'

    start_urls = [
        'https://api.flickr.com/services/rest/?method=flickr.photos.search'
        '&api_key={}'
        '&tags={}'
        '&page={}'
        '&per_page={}'
        '&format={}'
        '&nojsoncallback={}'.format(api_key, tags, page, per_page, format, nojsoncallback)
    ]

    def parse(self, response):
        results = json.loads(response.text)
        for photo in results['photos']['photo']:
            item = FlickrItem()
            item['image_title'] = photo['title']
            item['image_url'] = 'https://farm{}.staticflickr.com/{}/{}_{}.jpg'.format(
                photo['farm'], photo['server'], photo['id'], photo['secret'])
            yield item

        # Keep requesting the next page until the last page has been fetched
        if int(self.page) < results['photos']['pages']:
            self.page = str(int(self.page) + 1)
            next_page_url = (
                'https://api.flickr.com/services/rest/?method=flickr.photos.search'
                '&api_key={}'
                '&tags={}'
                '&page={}'
                '&per_page={}'
                '&format={}'
                '&nojsoncallback={}'.format(self.api_key, self.tags, self.page,
                                            self.per_page, self.format, self.nojsoncallback)
            )
            time.sleep(1)  # wait one second between requests
            yield scrapy.Request(url=next_page_url, callback=self.parse)

In the spider we set the search keywords "cat" and "dog", configure the paging parameters, and request the results in JSON format. The parse method extracts the title and URL of each image, yields them as FlickrItem objects, and, while more result pages remain, issues a request for the next page.
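The image URL built in parse follows Flickr's static-photo URL pattern, combining the farm, server, id, and secret fields of each photo record. A standalone sketch of that mapping (the field values below are invented for illustration):

```python
# photo mirrors one entry of results['photos']['photo'] from the API response;
# these particular values are made up for illustration only.
photo = {'farm': 66, 'server': '65535', 'id': '12345', 'secret': 'abcdef'}

image_url = 'https://farm{}.staticflickr.com/{}/{}_{}.jpg'.format(
    photo['farm'], photo['server'], photo['id'], photo['secret'])
print(image_url)  # the direct URL of the JPEG on Flickr's static servers
```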

Next, we define where and in what form the data is stored by configuring settings.py:

ITEM_PIPELINES = {
   'flickr.pipelines.FlickrPipeline': 300,
}

IMAGES_STORE = 'images'
Copy after login
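Note that only FlickrPipeline is registered above. For Scrapy to actually download the image files, the ImagesPipeline subclass written in the next step must be registered as well. One possible settings.py fragment (the priority values are an assumption; lower numbers run first):

```python
# settings.py -- register both pipelines so the images are downloaded
ITEM_PIPELINES = {
    'flickr.pipelines.FlickrImagesPipeline': 1,   # downloads the image files
    'flickr.pipelines.FlickrPipeline': 300,       # any further item processing
}

IMAGES_STORE = 'images'  # downloaded files are saved under this directory
```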

2. Write Item Pipeline

Next, we need to write an Item Pipeline to process and store the collected image data:

import scrapy
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline


class FlickrPipeline(object):
    def process_item(self, item, spider):
        return item


class FlickrImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # item['image_url'] holds a single URL string, so request it directly
        yield scrapy.Request(item['image_url'])

    def item_completed(self, results, item, info):
        # keep only the downloads that succeeded
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item
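The results argument that ImagesPipeline passes to item_completed is a list of (success, info) pairs: on success, info is a dict containing the stored file's path. The filtering performed above can be illustrated in plain Python (the sample entries are invented):

```python
# Sketch of how item_completed filters ImagesPipeline download results.
# Each entry is (ok, info): when ok is True, info is a dict with 'path';
# when ok is False, info describes the failure. Sample data is invented.
results = [
    (True, {'url': 'https://farm66.staticflickr.com/65535/1_a.jpg',
            'path': 'full/1_a.jpg'}),
    (False, Exception('download failed')),
]

image_paths = [info['path'] for ok, info in results if ok]
print(image_paths)  # only the paths of successful downloads remain
```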

3. Run the program

Once the code above is complete, we can run Scrapy to start crawling by entering the following command on the command line:

scrapy crawl flickr

After the program starts, the crawler fetches the pictures tagged "cat" and "dog" from Flickr and saves them to the specified storage location.

4. Summary

This article has shown in detail how to use the Scrapy framework to crawl the Flickr image library. In practice, you can change the keywords, the number of pages, or the image storage path to suit your own needs. Scrapy is a mature, feature-rich crawler framework; its continually updated features and flexible extensibility provide strong support for data crawling work.

source:php.cn