
Building an Async E-Commerce Web Scraper with Pydantic, Crawl & Gemini

Mary-Kate Olsen
Published: 2025-01-12


In short: This guide demonstrates building an e-commerce scraper using crawl4ai's AI-powered extraction and Pydantic data models. The scraper asynchronously retrieves both product listings (names, prices) and detailed product information (specifications, reviews).

Access the complete code on Google Colab


Tired of the complexities of traditional web scraping for e-commerce data analysis? This tutorial simplifies the process using modern Python tools. We'll leverage crawl4ai for intelligent data extraction and Pydantic for robust data modeling and validation.

Why Choose Crawl4AI and Pydantic?

  • crawl4ai: Streamlines web crawling and scraping using AI-driven extraction methods.
  • Pydantic: Provides data validation and schema management, ensuring structured and accurate scraped data.

Why Target Tokopedia?

Tokopedia, a major Indonesian e-commerce platform, serves as our example. (Note: The author is Indonesian and a user of the platform, but not affiliated.) The principles apply to other e-commerce sites. This scraping approach is beneficial for developers interested in e-commerce analytics, market research, or automated data collection.

What Sets This Approach Apart?

Instead of relying on complex CSS selectors or XPath, we utilize crawl4ai's LLM-based extraction. This offers:

  • Enhanced resilience to website structure changes.
  • Cleaner, more structured data output.
  • Reduced maintenance overhead.

Setting Up Your Development Environment

Begin by installing necessary packages:

<code class="language-bash">%pip install -U crawl4ai
%pip install nest_asyncio
%pip install pydantic</code>
Copy after login
Copy after login

For asynchronous code execution in notebooks, we'll also use nest_asyncio:

<code class="language-python">import crawl4ai
import asyncio
import nest_asyncio
nest_asyncio.apply()</code>
Copy after login

Defining Data Models with Pydantic

We use Pydantic to define the expected data structure. Here are the models:

<code class="language-python">from pydantic import BaseModel, Field
from typing import List, Optional

class TokopediaListingItem(BaseModel):
    product_name: str = Field(..., description="Product name from listing.")
    product_url: str = Field(..., description="URL to product detail page.")
    price: str = Field(None, description="Price displayed in listing.")
    store_name: str = Field(None, description="Store name from listing.")
    rating: str = Field(None, description="Rating (1-5 scale) from listing.")
    image_url: str = Field(None, description="Primary image URL from listing.")

class TokopediaProductDetail(BaseModel):
    product_name: str = Field(..., description="Product name from detail page.")
    all_images: List[str] = Field(default_factory=list, description="List of all product image URLs.")
    specs: str = Field(None, description="Technical specifications or short info.")
    description: str = Field(None, description="Long product description.")
    variants: List[str] = Field(default_factory=list, description="List of variants or color options.")
    satisfaction_percentage: Optional[str] = Field(None, description="Customer satisfaction percentage.")
    total_ratings: Optional[str] = Field(None, description="Total number of ratings.")
    total_reviews: Optional[str] = Field(None, description="Total number of reviews.")
    stock: Optional[str] = Field(None, description="Stock availability.")</code>
Copy after login

These models serve as templates, ensuring data validation and providing clear documentation.
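For example, validating a raw dictionary against the listing model fills absent optional fields with None and rejects rows that lack required ones (the sample values below are invented for illustration):

```python
raw = {
    "product_name": "Wireless Mouse X1",
    "product_url": "https://www.tokopedia.com/some-store/wireless-mouse-x1",
    "price": "Rp150.000",
    "rating": "4.8",
}

# A missing product_name or product_url would raise a ValidationError;
# the optional fields simply default to None.
item = TokopediaListingItem.model_validate(raw)
print(item.store_name)  # None
```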

The Scraping Process

The scraper operates in two phases:

1. Crawling Product Listings

First, we retrieve search results pages:

<code class="language-python">async def crawl_tokopedia_listings(query: str = "mouse-wireless", max_pages: int = 1):
    # ... (Code remains the same) ...</code>
Copy after login
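The full function body is in the Colab notebook. As a rough sketch of this phase, the version below assumes crawl4ai's documented LLMExtractionStrategy driven by Gemini; the model name, instruction text, and search-URL pattern are illustrative assumptions, not the author's exact code:

```python
import json
import os

from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.extraction_strategy import LLMExtractionStrategy

async def crawl_tokopedia_listings(query: str = "mouse-wireless", max_pages: int = 1):
    # The Pydantic JSON schema tells the LLM exactly which fields to return
    strategy = LLMExtractionStrategy(
        provider="gemini/gemini-1.5-flash",  # assumed model name
        api_token=os.environ["GEMINI_API_KEY"],
        schema=TokopediaListingItem.model_json_schema(),
        extraction_type="schema",
        instruction="Extract every product card: name, URL, price, store, rating, image URL.",
    )
    items = []
    async with AsyncWebCrawler() as crawler:
        for page in range(1, max_pages + 1):
            url = f"https://www.tokopedia.com/search?q={query}&page={page}"  # assumed URL pattern
            result = await crawler.arun(
                url=url,
                extraction_strategy=strategy,
                cache_mode=CacheMode.ENABLED,
            )
            if result.success and result.extracted_content:
                items += [
                    TokopediaListingItem.model_validate(row)
                    for row in json.loads(result.extracted_content)
                ]
    return items
```

Because the extraction strategy carries the Pydantic schema, the LLM's output validates directly against TokopediaListingItem.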

2. Fetching Product Details

Next, for each product URL, we fetch detailed information:

<code class="language-python">async def crawl_tokopedia_detail(product_url: str):
    # ... (Code remains the same) ...</code>
Copy after login
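Under the same assumptions (and reusing the imports from the previous sketch), the detail phase mirrors the listing phase but targets a single product URL and the TokopediaProductDetail schema:

```python
async def crawl_tokopedia_detail(product_url: str):
    strategy = LLMExtractionStrategy(
        provider="gemini/gemini-1.5-flash",  # assumed model name
        api_token=os.environ["GEMINI_API_KEY"],
        schema=TokopediaProductDetail.model_json_schema(),
        extraction_type="schema",
        instruction=(
            "Extract the product name, all image URLs, specs, description, "
            "variants, satisfaction percentage, rating/review counts, and stock."
        ),
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url=product_url,
            extraction_strategy=strategy,
            cache_mode=CacheMode.ENABLED,
        )
    if not (result.success and result.extracted_content):
        return None
    data = json.loads(result.extracted_content)
    if isinstance(data, list):  # some providers wrap a single object in a list
        data = data[0]
    return TokopediaProductDetail.model_validate(data)
```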

Combining the Stages

Finally, we integrate both phases:

<code class="language-python">async def run_full_scrape(query="mouse-wireless", max_pages=2, limit=15):
    # ... (Code remains the same) ...</code>
Copy after login
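The glue can be as simple as this sketch: fetch the listings, then visit each product URL with a short pause between requests (the one-second delay and the result shape are my choices, not the author's):

```python
async def run_full_scrape(query="mouse-wireless", max_pages=2, limit=15):
    listings = await crawl_tokopedia_listings(query=query, max_pages=max_pages)
    results = []
    for listing in listings[:limit]:
        detail = await crawl_tokopedia_detail(listing.product_url)
        results.append({"listing": listing, "detail": detail})
        await asyncio.sleep(1)  # polite delay between detail requests
    return results
```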

Running the Scraper

Here's how to execute the scraper:

<code class="language-bash">%pip install -U crawl4ai
%pip install nest_asyncio
%pip install pydantic</code>
Copy after login
Copy after login
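The original run cell is in the Colab notebook; with nest_asyncio applied, a minimal notebook driver could look like this:

```python
results = asyncio.run(run_full_scrape(query="mouse-wireless", max_pages=2, limit=15))

for row in results:
    listing = row["listing"]
    print(listing.product_name, "-", listing.price)
```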

Pro Tips

  1. Rate Limiting: Respect Tokopedia's servers; introduce delays between requests for large-scale scraping (see the sketch after this list).
  2. Caching: Enable crawl4ai's caching during development (cache_mode=CacheMode.ENABLED).
  3. Error Handling: Implement comprehensive error handling and retry mechanisms for production use.
  4. API Keys: Store Gemini API keys securely in environment variables, not directly in the code.
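A minimal sketch of tips 1, 3, and 4 together: a hypothetical retry helper with a fixed delay, reading the Gemini key from an environment variable (the retry count and delay values are illustrative):

```python
import asyncio
import os

GEMINI_API_KEY = os.environ["GEMINI_API_KEY"]  # fail fast if the key is missing

async def fetch_with_retry(product_url: str, retries: int = 3, delay: float = 2.0):
    """Hypothetical wrapper: retry a detail crawl with a fixed delay between attempts."""
    for attempt in range(1, retries + 1):
        try:
            return await crawl_tokopedia_detail(product_url)
        except Exception as exc:
            if attempt == retries:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay}s")
            await asyncio.sleep(delay)
```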

Next Steps

This scraper can be extended to:

  • Store data in a database (see the SQLite sketch after this list).
  • Monitor price changes over time.
  • Analyze product trends and patterns.
  • Compare prices across multiple stores.
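As a starting point for the first extension, the Pydantic models serialize straight into database rows; this hypothetical helper persists listings to SQLite using only the standard library:

```python
import sqlite3

def save_listings(listings, db_path="tokopedia.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS listings (
        product_name TEXT, product_url TEXT, price TEXT,
        store_name TEXT, rating TEXT, image_url TEXT)""")
    con.executemany(
        "INSERT INTO listings VALUES (:product_name, :product_url, :price,"
        " :store_name, :rating, :image_url)",
        [item.model_dump() for item in listings],  # Pydantic model -> dict
    )
    con.commit()
    con.close()
```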

Conclusion

crawl4ai's LLM-based extraction significantly improves web scraping maintainability compared to traditional methods. The integration with Pydantic ensures data accuracy and structure.

Always adhere to a website's robots.txt and terms of service before scraping.


Important Links:

  • Crawl4AI: https://github.com/unclecode/crawl4ai
  • Pydantic: https://docs.pydantic.dev/


Note: The complete code is available in the Colab notebook. Feel free to experiment and adapt it to your specific needs.
