In short: This guide demonstrates building an e-commerce scraper using crawl4ai's AI-powered extraction and Pydantic data models. The scraper asynchronously retrieves both product listings (names, prices) and detailed product information (specifications, reviews).
Tired of the complexities of traditional web scraping for e-commerce data analysis? This tutorial simplifies the process using modern Python tools. We'll leverage crawl4ai for intelligent data extraction and Pydantic for robust data modeling and validation.
Tokopedia, a major Indonesian e-commerce platform, serves as our example. (Note: The author is Indonesian and a user of the platform, but not affiliated.) The principles apply to other e-commerce sites. This scraping approach is beneficial for developers interested in e-commerce analytics, market research, or automated data collection.
Instead of relying on complex CSS selectors or XPath, we utilize crawl4ai's LLM-based extraction. This offers resilience to markup changes (the model reads the rendered page content rather than brittle selectors), schema-driven extraction described by Pydantic models, and far less maintenance work as the site's layout evolves. A minimal configuration is sketched below.
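To make that concrete, here is a minimal sketch of configuring an LLM extraction strategy in crawl4ai. The tiny Pydantic model, the Gemini provider string, and the environment-variable name are illustrative assumptions, and parameter names differ somewhat between crawl4ai releases, so check the documentation for your version:

<code class="language-python">import os

from pydantic import BaseModel
from crawl4ai.extraction_strategy import LLMExtractionStrategy


class Product(BaseModel):  # tiny illustrative schema
    name: str
    price: str


# The LLM receives the rendered page plus this schema and instruction;
# there are no CSS/XPath selectors to break when the markup changes.
strategy = LLMExtractionStrategy(
    provider="gemini/gemini-1.5-flash",     # assumed Gemini provider string
    api_token=os.getenv("GEMINI_API_KEY"),  # assumed env var name
    schema=Product.model_json_schema(),
    extraction_type="schema",
    instruction="Extract the product name and price from the page.",
)</code>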
Begin by installing necessary packages:
<code class="language-bash">%pip install -U crawl4ai %pip install nest_asyncio %pip install pydantic</code>
For asynchronous code execution in notebooks, we'll also use nest_asyncio:
<code class="language-python">import crawl4ai import asyncio import nest_asyncio nest_asyncio.apply()</code>
We use Pydantic to define the expected data structure. Here are the models:
<code class="language-python">from pydantic import BaseModel, Field from typing import List, Optional class TokopediaListingItem(BaseModel): product_name: str = Field(..., description="Product name from listing.") product_url: str = Field(..., description="URL to product detail page.") price: str = Field(None, description="Price displayed in listing.") store_name: str = Field(None, description="Store name from listing.") rating: str = Field(None, description="Rating (1-5 scale) from listing.") image_url: str = Field(None, description="Primary image URL from listing.") class TokopediaProductDetail(BaseModel): product_name: str = Field(..., description="Product name from detail page.") all_images: List[str] = Field(default_factory=list, description="List of all product image URLs.") specs: str = Field(None, description="Technical specifications or short info.") description: str = Field(None, description="Long product description.") variants: List[str] = Field(default_factory=list, description="List of variants or color options.") satisfaction_percentage: Optional[str] = Field(None, description="Customer satisfaction percentage.") total_ratings: Optional[str] = Field(None, description="Total number of ratings.") total_reviews: Optional[str] = Field(None, description="Total number of reviews.") stock: Optional[str] = Field(None, description="Stock availability.")</code>
These models serve as templates, ensuring data validation and providing clear documentation.
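As a quick illustration of what that validation buys us (the sample values here are made up), a complete record parses cleanly while one missing a required field raises a ValidationError instead of silently producing bad data:

<code class="language-python">from pydantic import ValidationError

# A well-formed record parses cleanly (hypothetical sample values):
item = TokopediaListingItem(
    product_name="Wireless Mouse X1",
    product_url="https://www.tokopedia.com/example-store/wireless-mouse-x1",
    price="Rp150.000",
)
print(item.model_dump())

# A record missing a required field fails loudly:
try:
    TokopediaListingItem(price="Rp150.000")
except ValidationError as exc:
    print(exc)</code>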
The scraper operates in two phases: first it crawls the search listings, then it visits each product's detail page.
First, we retrieve search results pages:
<code class="language-python">async def crawl_tokopedia_listings(query: str = "mouse-wireless", max_pages: int = 1): # ... (Code remains the same) ...</code>
Next, for each product URL, we fetch detailed information:
<code class="language-python">async def crawl_tokopedia_detail(product_url: str): # ... (Code remains the same) ...</code>
Finally, we integrate both phases:
<code class="language-python">async def run_full_scrape(query="mouse-wireless", max_pages=2, limit=15): # ... (Code remains the same) ...</code>
Finally, here's how to execute the scraper.
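Assuming the run_full_scrape sketch above, running it from a notebook (with nest_asyncio already applied) and saving the results could look like this; the output filename is a placeholder:

<code class="language-python">import asyncio
import json

# Run the full pipeline: listings first, then details for the first 15 products.
results = asyncio.run(run_full_scrape(query="mouse-wireless", max_pages=2, limit=15))

# Persist the structured output for later analysis (placeholder filename).
with open("tokopedia_products.json", "w", encoding="utf-8") as f:
    json.dump(results, f, indent=2, ensure_ascii=False)

print(f"Scraped {len(results)} products.")</code>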
<code class="language-bash">%pip install -U crawl4ai %pip install nest_asyncio %pip install pydantic</code>
cache_mode=CacheMode.ENABLED
This scraper can be extended to handle additional search queries and categories, deeper pagination, or persisting results to CSV or a database.
crawl4ai's LLM-based extraction significantly improves web-scraping maintainability compared to traditional selector-based methods, and the Pydantic integration enforces a consistent, validated data structure.
Always adhere to a website's robots.txt and terms of service before scraping.
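Python's standard library can automate the robots.txt part of that check; here is a small example (the user-agent string is a placeholder, so substitute the one your crawler actually sends):

<code class="language-python">from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.tokopedia.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# Check whether our (placeholder) user agent may fetch a given URL.
print(rp.can_fetch("MyScraperBot/1.0", "https://www.tokopedia.com/search?q=mouse-wireless"))</code>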