How to use Scrapy to crawl Zhihu data?

王林
Release: 2023-06-22 14:51:15
Scrapy is a Python web crawling framework that makes it easy to collect data from the Internet. Zhihu is a popular social question-and-answer platform, and Scrapy can quickly capture its questions, answers, user information, and other data. This article introduces how to use Scrapy to crawl Zhihu data.

  1. Installing Scrapy

First, you need to install Scrapy. You can install it directly with pip:

pip install scrapy
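To verify that the installation succeeded, you can print the installed Scrapy version:

scrapy version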
  2. Create Scrapy project

In the terminal, enter the directory where you want to create the Scrapy project and use the following command to create it:

scrapy startproject zhihu

This command will create a Scrapy project named "zhihu" in the current directory.
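By default, the command generates a project skeleton similar to the following (details may vary slightly between Scrapy versions):

zhihu/
    scrapy.cfg            # deploy configuration
    zhihu/                # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where Spider files live
            __init__.py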

  3. Create Spider

Use the following command to create a Spider file named "zhihu_spider.py" in the project directory:

scrapy genspider zhihu_spider zhihu.com

This command will create a "zhihu_spider.py" file in the "spiders" subdirectory of the project directory. The file contains a Spider that uses zhihu.com as its starting URL.

  4. Write Spider code

Open the "zhihu_spider.py" file and add the following code:

import scrapy

class ZhihuSpider(scrapy.Spider):
    name = 'zhihu'
    allowed_domains = ['zhihu.com']
    start_urls = ['https://www.zhihu.com/']

    def parse(self, response):
        pass

This code defines a Spider class named "ZhihuSpider". A Spider class needs to define the following attributes:

  • name: the Spider's name
  • allowed_domains: the domains the Spider is allowed to crawl
  • start_urls: the Spider's starting URLs

In this example, the Spider's starting URL is set to zhihu.com. The Spider must also contain a method named "parse" that processes the data returned in the response. Here, the "parse" method is not implemented yet, so an empty "pass" statement serves as a placeholder.

  5. Parse page data

After creating the Spider, you need to add code to parse the page data. Replace the "parse" method with the following code:

def parse(self, response):
    # select every div whose data-type attribute is "question"
    questions = response.css('div[data-type="question"]')
    for question in questions:
        yield {
            'question': question.css('h2 a::text').get(),
            'link': question.css('h2 a::attr(href)').get(),
            'answers': question.css('div.zm-item-answer::text').getall(),
        }

This code selects every div element in the page whose "data-type" attribute equals "question". It then loops over the div elements and extracts each question's title, link, and list of answers.

In the above code, "yield" is a Python keyword used to build a generator. A generator is an iterator that produces elements one at a time: after each element is returned, execution pauses at that point and resumes when the next element is requested. In Scrapy, the "yield" keyword returns the data parsed from the page to the Scrapy engine.
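As a minimal illustration of how "yield" pauses and resumes a function, independent of Scrapy itself:

def count_up_to(n):
    i = 1
    while i <= n:
        yield i  # hand back i, then pause here until the next value is requested
        i += 1

for value in count_up_to(3):
    print(value)  # prints 1, then 2, then 3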

  6. Run the crawler

After you finish writing the code, use the following command to run the crawler in the terminal:

scrapy crawl zhihu

This command starts the Scrapy framework and begins crawling Zhihu data. Scrapy automatically visits the starting URL specified in the Spider and parses the returned page data through the "parse" method. The parsed data is printed to the terminal. If you need to save the data, you can store it in CSV, JSON, or other file formats.
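For example, Scrapy's built-in feed export can write the scraped items to a file directly from the command line (the output file name here is just an example):

scrapy crawl zhihu -o questions.json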

  7. Crawling user data

The above code can only crawl data such as questions and answers; it cannot obtain user information. To crawl user data, you need to use Zhihu's API. In the Spider, you can use the following code to request the JSON data returned by the API:

# 'user' should hold the URL token (username) of the member to look up
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}
url = f'https://www.zhihu.com/api/v4/members/{user}?include=following_count,follower_count,badge[?(type=best_answerer)].topics&limit=20'
yield scrapy.Request(url, headers=headers, callback=self.parse_user)

This code requests the specified user's information from the API. An f-string is used to insert the username of the target user into the URL.
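For context, such a request is usually issued from a Spider callback or from the "start_requests" method once a username is known. The following is a minimal sketch; the URL tokens below are placeholders, not real accounts:

def start_requests(self):
    headers = {'User-Agent': 'Mozilla/5.0'}
    # hypothetical URL tokens; in a real crawl these would be extracted
    # from crawled pages or supplied as input
    for user in ['user-a', 'user-b']:
        url = f'https://www.zhihu.com/api/v4/members/{user}?include=following_count,follower_count'
        yield scrapy.Request(url, headers=headers, callback=self.parse_user)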

In the callback function, use the following code to extract the required fields from the JSON data:

import json  # add this import at the top of the file

def parse_user(self, response):
    data = json.loads(response.body)['data']
    following_count = data['following_count']
    follower_count = data['follower_count']
    # 'badge' can be empty for users who are not best answerers
    best_answerer = data['badge'][0]['topics'] if data.get('badge') else []
    yield {
        'user_id': data['id'],
        'name': data['name'],
        'headline': data['headline'],
        'following_count': following_count,
        'follower_count': follower_count,
        'best_answerer': best_answerer,
    }

This code extracts the user ID, nickname, headline, following count, follower count, and best-answerer topics from the JSON data.

  8. Summary

This article introduced how to use Scrapy to crawl Zhihu data. First, create a Scrapy project and a Spider. Then, use CSS selectors to parse the data in the page and yield the crawled items from a generator. Finally, save the results to CSV, JSON, or other files, or output them directly to the terminal. If you need user data, you can call Zhihu's API and extract the relevant fields from the returned JSON.

