
Python crawler technology introduction example code analysis

王林
Release: 2023-04-22 13:04:07

Basic concepts of crawler technology

  1. Crawler: a program that automatically obtains network data.

  2. Web page structure: HTML, CSS, JavaScript, etc.

  3. HTTP request: The way the client requests data from the server.

  4. HTTP response: Data returned by the server to the client.

Requests and Responses

Use Python's requests library to send HTTP requests.

import requests
 
url = "https://www.example.com"
response = requests.get(url)

Get response content

html_content = response.text

HTML parsing and data extraction

Use the BeautifulSoup library to parse HTML content.

from bs4 import BeautifulSoup
 
soup = BeautifulSoup(html_content, "html.parser")

Use CSS selectors or other methods to extract data.

title = soup.title.string
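Beyond soup.title, BeautifulSoup's select() method accepts arbitrary CSS selectors. A minimal self-contained illustration (the HTML string here is made up purely for demonstration):

```python
from bs4 import BeautifulSoup

# A small made-up HTML fragment for demonstration
html = '<ul><li class="item">one</li><li class="item">two</li><li>skip</li></ul>'
soup = BeautifulSoup(html, "html.parser")

# CSS selector: every <li> element carrying the class "item"
items = [li.get_text() for li in soup.select("li.item")]
print(items)  # ['one', 'two']
```

select() returns a list of matching tags, so it pairs naturally with list comprehensions when extracting many values at once.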

Hands-on practice: crawling article information from the Jianshu homepage

Send a request to obtain the HTML content of the Jianshu website's homepage.

import requests
from bs4 import BeautifulSoup
 
url = "https://www.jianshu.com"
response = requests.get(url)
html_content = response.text
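The fetched HTML can then be parsed into a list of article dictionaries. The sketch below runs that extraction against a simplified stand-in for Jianshu's markup (the inline HTML and the class names "content" and "name" mirror the complete script further down, but are an assumption about the live page's structure):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for Jianshu's homepage markup (structure assumed)
html_content = """
<div class="content">
  <h3><a href="/p/abc123">First post</a></h3>
  <span class="name">alice</span>
</div>
<div class="content">
  <h3><a href="/p/def456">Second post</a></h3>
  <span class="name">bob</span>
</div>
"""

url = "https://www.jianshu.com"
soup = BeautifulSoup(html_content, "html.parser")

article_info_list = []
for article in soup.find_all("div", class_="content"):
    article_info_list.append({
        "title": article.h3.text.strip(),
        "author": article.find("span", class_="name").text.strip(),
        "link": url + article.h3.a["href"],  # hrefs are site-relative
    })

print(article_info_list)
```

If the real page's class names differ, only the find_all() and find() arguments need to change; the loop structure stays the same.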

Storing data

Store the extracted article data (article_info_list, the list of article dictionaries built in the complete script below) in JSON format.

import json
 
with open("jianshu_articles.json", "w", encoding="utf-8") as f:
    json.dump(article_info_list, f, ensure_ascii=False, indent=4)
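As a quick sanity check, the file can be read back with json.load() to confirm the data round-trips. The sample list below is placeholder data standing in for real scraped results:

```python
import json

# Placeholder data standing in for the scraped article list
article_info_list = [
    {"title": "Sample post", "author": "someone", "link": "https://www.jianshu.com/p/xyz"},
]

with open("jianshu_articles.json", "w", encoding="utf-8") as f:
    json.dump(article_info_list, f, ensure_ascii=False, indent=4)

# Read the file back and verify it matches what was written
with open("jianshu_articles.json", encoding="utf-8") as f:
    loaded = json.load(f)

print(loaded == article_info_list)  # True
```

ensure_ascii=False keeps Chinese titles readable in the output file instead of escaping them to \uXXXX sequences.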

Testing and Optimization

1. When you encounter an anti-crawler strategy, you can set a User-Agent header so the request looks like it comes from a regular browser.

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"}
response = requests.get(url, headers=headers)

2. Use the time.sleep() function to throttle the request frequency and avoid overloading the server.

import time
 
time.sleep(10)
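A fixed delay is easy for servers to spot. A small helper that sleeps for a random interval makes the request pattern less regular; the helper name and its parameter defaults are an illustrative sketch, not part of the original tutorial:

```python
import random
import time

def polite_sleep(min_s=1.0, max_s=3.0):
    """Sleep for a random interval between min_s and max_s seconds."""
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

# Between page fetches:
# polite_sleep()         # waits 1-3 seconds
# polite_sleep(5, 10)    # waits 5-10 seconds for stricter sites
```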

3. Error handling and exception catching.

try:
    response = requests.get(url, headers=headers, timeout=5)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(f"Error: {e}")
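Transient network errors often succeed on a second attempt. The helper below is a sketch that combines the error handling above with time.sleep() to retry a request with exponential backoff; the function name and the session parameter are illustrative assumptions (session mainly lets the helper be exercised without network access):

```python
import time
import requests

def fetch_with_retry(url, retries=3, backoff=2, timeout=5, session=None):
    """Fetch url, retrying on network errors with exponential backoff.

    session lets callers pass a requests.Session (or a stand-in for
    testing); by default the module-level requests.get is used.
    """
    get = (session or requests).get
    for attempt in range(1, retries + 1):
        try:
            response = get(url, timeout=timeout)
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException as e:
            print(f"Attempt {attempt} failed: {e}")
            if attempt < retries:
                time.sleep(backoff ** attempt)  # 2s, 4s, 8s, ...
    return None
```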

Complete code for the Jianshu crawler:

import requests
from bs4 import BeautifulSoup
import json

def fetch_jianshu_articles():
    url = "https://www.jianshu.com"
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"}

    try:
        response = requests.get(url, headers=headers, timeout=5)
        response.raise_for_status()  # raise an exception for 4xx/5xx responses
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return None

    # Parse the homepage and collect one dictionary per article card
    html_content = response.text
    soup = BeautifulSoup(html_content, "html.parser")
    articles = soup.find_all("div", class_="content")
    article_info_list = []

    for article in articles:
        title = article.h3.text.strip()
        author = article.find("span", class_="name").text.strip()
        link = url + article.h3.a["href"]  # hrefs on the page are site-relative

        article_info = {"title": title, "author": author, "link": link}
        article_info_list.append(article_info)

    return article_info_list
 
def save_to_json(article_info_list, filename):
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(article_info_list, f, ensure_ascii=False, indent=4)
 
if __name__ == "__main__":
    article_info_list = fetch_jianshu_articles()
    if article_info_list:
        save_to_json(article_info_list, "jianshu_articles.json")
        print("Jianshu articles saved to 'jianshu_articles.json'.")
    else:
        print("Failed to fetch Jianshu articles.")

Supplementary

To better understand this hands-on project, it helps to review a few basic concepts and principles of Python network programming and crawler technology. Here are some fundamental web-crawling concepts:

  1. HTTP Protocol: Hypertext Transfer Protocol (HTTP) is an application-layer protocol used to transmit hypermedia documents such as HTML. It is used to transfer data between web servers and clients such as browsers or crawlers.

  2. HTML, CSS, and JavaScript: HTML is a markup language used to describe the structure and content of web pages. CSS is a stylesheet language used to describe the presentation of HTML documents. JavaScript is a scripting language for web programming, mainly used to add dynamic behavior to web pages and interact with users.

  3. DOM: The Document Object Model (DOM) is a cross-platform programming interface for processing HTML and XML documents. DOM treats a document as a tree structure, where each node represents a part (such as an element, attribute, or text).

  4. URL: A Uniform Resource Locator (URL) is a string of characters used to specify the location of an Internet resource.

  5. Request Headers: In HTTP requests, request headers contain information about the client's environment, browser, etc. Common request header fields include: User-Agent, Accept, Referer, etc.

  6. Response Headers: In an HTTP response, the response headers carry information about the server and the returned content. Common response header fields include Content-Type, Content-Length, and Server.

  7. Anti-crawler strategies: Some websites adopt strategies to prevent crawlers from harvesting data, such as blocking IP addresses, limiting access speed, or loading data dynamically with JavaScript. In practice, we need to take corresponding countermeasures, such as using proxy IPs, throttling the crawler's request rate, or using a browser-automation library such as Selenium.
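For the proxy-IP countermeasure mentioned in point 7, requests supports per-session proxies via a proxies dictionary. The addresses below are placeholders; you would substitute proxy servers you actually control. The commented-out line shows where the real fetch would go:

```python
import requests

# Placeholder proxy addresses -- substitute real proxy servers you control
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

session = requests.Session()
session.headers.update(headers)   # every request carries the browser-like UA
session.proxies.update(proxies)   # every request is routed through the proxy

# response = session.get("https://www.jianshu.com", timeout=5)
```

Using a Session also reuses the underlying TCP connection across requests, which is both faster and gentler on the target server.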


source:yisu.com