Are your scrapers stuck when trying to load data from dynamic web pages? Are you frustrated with infinite scrolls or those pesky "Load more" buttons?
You're not alone. Many websites today implement these designs to improve user experience—but they can be challenging for web scrapers.
This tutorial will guide you through a beginner-friendly walkthrough for scraping a demo page with a Load More button. The target page displays a grid of products, with a "Load more" button at the bottom that reveals additional items.
By the end, you'll learn how to:
- Fetch a page's initial HTML with Requests
- Simulate "Load more" clicks with Selenium to reveal every product
- Parse product details with BeautifulSoup
- Export the data to a CSV file
- Scrape additional details from individual product pages
Let's dive in!
Before diving in, ensure you have the following prerequisites in place:
Libraries required: Requests, BeautifulSoup, and Selenium (plus a working Python 3 installation).
You can install these libraries using the following command in your terminal:
pip install requests beautifulsoup4 selenium
Before using Selenium, you must install a web driver matching your browser. For this tutorial, we'll use Google Chrome and ChromeDriver. However, you can follow similar steps for other browsers like Firefox or Edge.
Install the Web Driver
Open Google Chrome and navigate to Help > About Google Chrome from the three-dot menu to find the Chrome version.
Download ChromeDriver:
Visit the ChromeDriver download page.
Download the driver version that matches your Chrome version.
Add ChromeDriver to your system PATH:
Extract the downloaded file and place it in a directory like /usr/local/bin (Mac/Linux) or C:\Windows\System32 (Windows).
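To confirm the driver is reachable from your PATH, you can run the following in your terminal (the exact version string will differ on your machine):
chromedriver --version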
Verify Installation
Create a Python file named scraper.py in your project directory and test that everything is set up correctly by running the following code snippet:
from selenium import webdriver

# Ensure ChromeDriver is installed and in PATH
driver = webdriver.Chrome()
driver.get("https://www.scrapingcourse.com/button-click")
print(driver.title)
driver.quit()
You can execute this file by running the following command in your terminal:
python scraper.py
If the above code runs without errors, it will spin up a browser window and open the demo page URL.
Selenium will then extract the HTML and print the page title. You will see an output like this:
Load More Button Challenge to Learn Web Scraping - ScrapingCourse.com
This verifies that Selenium is ready to use. With all requirements installed and ready to use, you can start accessing the demo page's content.
The first step is to fetch the page's initial content, which gives you a baseline snapshot of the page's HTML. This will help you verify connectivity and ensure a valid starting point for the scraping process.
You will retrieve the HTML content of the page URL by sending a GET request using the Requests library in Python. Here's the code:
import requests

# URL of the demo page with products
url = "https://www.scrapingcourse.com/button-click"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    html_content = response.text
    print(html_content)  # Optional: Preview the HTML
else:
    print(f"Failed to retrieve content: {response.status_code}")
The above code will output the raw HTML containing the data for the first 12 products.
This quick preview of the HTML ensures that the request was successful and that you're working with valid data.
To access the remaining products, you'll need to programmatically click the "Load more" button on the page until no more products are available. Since this interaction involves JavaScript, you will use Selenium to simulate the button click.
Before writing code, inspect the page to locate the "Load more" button (it has the ID load-more-btn) and the product containers (div elements with the class product-item).
The following code clicks the button repeatedly until no more products load, giving you the complete dataset:
from selenium import webdriver
from selenium.webdriver.common.by import By
import time

# Set up the WebDriver (make sure you have the appropriate driver installed, e.g., ChromeDriver)
driver = webdriver.Chrome()

# Open the page
driver.get("https://www.scrapingcourse.com/button-click")

# Loop to click the "Load more" button until there are no more products
while True:
    try:
        # Find the "Load more" button by its ID and click it
        load_more_button = driver.find_element(By.ID, "load-more-btn")
        load_more_button.click()

        # Wait for the content to load (adjust time as necessary)
        time.sleep(2)
    except Exception:
        # If no "Load more" button is found (end of products), break out of the loop
        print("No more products to load.")
        break

# Get the updated page content after all products are loaded
html_content = driver.page_source

# Close the browser window
driver.quit()
This code opens the browser, navigates to the page, and interacts with the "Load more" button. The updated HTML, now containing more product data, is then extracted.
If you don't want Selenium to open a browser window every time you run this code, you can use its headless mode. A headless browser has all the functionality of a real browser but no graphical user interface (GUI).
You can enable the headless mode for Chrome in Selenium by defining a ChromeOptions object and passing it to the WebDriver Chrome constructor like this:
from selenium import webdriver
from selenium.webdriver.common.by import By
import time

# Instantiate a Chrome options object
options = webdriver.ChromeOptions()

# Set the options to use Chrome in headless mode
options.add_argument("--headless=new")

# Initialize an instance of the Chrome driver (browser) in headless mode
driver = webdriver.Chrome(options=options)
...
When you run the above code, Selenium will launch a headless Chrome instance, so you’ll no longer see a Chrome window. This is ideal for production environments where you don’t want to waste resources on the GUI when running the scraping script on a server.
Now that the complete HTML content has been retrieved, it's time to extract specific details about each product.
In this step, you'll use BeautifulSoup to parse the HTML and identify product elements. Then, you'll extract key details for each product, such as the name, price, and links.
from bs4 import BeautifulSoup

# Parse the page content with BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')

# Extract product details
products = []

# Find all product items in the grid
product_items = soup.find_all('div', class_='product-item')

for product in product_items:
    # Extract the product name
    name = product.find('span', class_='product-name').get_text(strip=True)

    # Extract the product price
    price = product.find('span', class_='product-price').get_text(strip=True)

    # Extract the product link
    link = product.find('a')['href']

    # Extract the image URL
    image_url = product.find('img')['src']

    # Create a dictionary with the product details
    products.append({
        'name': name,
        'price': price,
        'link': link,
        'image_url': image_url
    })

# Print the extracted product details
for product in products[:2]:
    print(f"Name: {product['name']}")
    print(f"Price: {product['price']}")
    print(f"Link: {product['link']}")
    print(f"Image URL: {product['image_url']}")
    print('-' * 30)
In the output, you should see a structured list of product details, including the name, image URL, price, and product page link, like this:
Name: Chaz Kangeroo Hoodie
Price:
Link: https://scrapingcourse.com/ecommerce/product/chaz-kangeroo-hoodie
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh01-gray_main.jpg
------------------------------
Name: Teton Pullover Hoodie
Price:
Link: https://scrapingcourse.com/ecommerce/product/teton-pullover-hoodie
Image URL: https://scrapingcourse.com/ecommerce/wp-content/uploads/2024/03/mh02-black_main.jpg
------------------------------
…
The above code will organize the raw HTML data into a structured format, making it easier to work with and preparing the output data for further processing.
You can now organize the extracted data into a CSV file, which makes it easier to analyze or share. Python's built-in csv module helps with this, as shown in the sketch below.
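Here's a minimal sketch, assuming the products list built in the previous step; the column names match the dictionary keys used earlier:
import csv

# Write the extracted products to a CSV file
with open('products.csv', 'w', newline='', encoding='utf-8') as file:
    # Column names match the keys of each product dictionary
    writer = csv.DictWriter(file, fieldnames=['name', 'price', 'link', 'image_url'])
    writer.writeheader()
    writer.writerows(products)

print(f"Saved {len(products)} products to products.csv")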
The above code will create a new CSV file with all the required product details.
Here's the complete code for an overview:
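This combines the snippets above into one script; the CSV step follows the sketch shown earlier, and the sleep time may need adjusting for your connection:
from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import csv
import time

# Set up the WebDriver in headless mode
options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)

# Open the page
driver.get("https://www.scrapingcourse.com/button-click")

# Click the "Load more" button until there are no more products
while True:
    try:
        load_more_button = driver.find_element(By.ID, "load-more-btn")
        load_more_button.click()
        time.sleep(2)  # Wait for the new products to load
    except Exception:
        break

# Get the updated page content and close the browser
html_content = driver.page_source
driver.quit()

# Parse the page content with BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')
products = []
for product in soup.find_all('div', class_='product-item'):
    products.append({
        'name': product.find('span', class_='product-name').get_text(strip=True),
        'price': product.find('span', class_='product-price').get_text(strip=True),
        'link': product.find('a')['href'],
        'image_url': product.find('img')['src'],
    })

# Save the products to a CSV file
with open('products.csv', 'w', newline='', encoding='utf-8') as file:
    writer = csv.DictWriter(file, fieldnames=['name', 'price', 'link', 'image_url'])
    writer.writeheader()
    writer.writerows(products)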
The above code will create a products.csv file in your project directory, with one row per product and columns for the name, price, link, and image URL.
Now, let's say you want to identify the top 5 highest-priced products and extract additional data (such as the product description and SKU code) from their individual pages. You can do that along the lines of the sketch below.
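In this sketch, the product-description and product-sku class names are assumptions about the product-page markup; inspect a product page and adjust them if they differ. Prices are parsed by stripping non-numeric characters before sorting:
import re
import requests
from bs4 import BeautifulSoup

def parse_price(price_text):
    # Strip currency symbols and commas; assume a plain decimal number remains
    cleaned = re.sub(r'[^\d.]', '', price_text)
    return float(cleaned) if cleaned else 0.0

# Sort the products by price in descending order and keep the top 5
top_products = sorted(products, key=lambda p: parse_price(p['price']), reverse=True)[:5]

for product in top_products:
    # Fetch the individual product page
    response = requests.get(product['link'])
    soup = BeautifulSoup(response.text, 'html.parser')

    # NOTE: these class names are assumptions -- verify them against the actual page
    description_tag = soup.find('div', class_='product-description')
    sku_tag = soup.find('span', class_='product-sku')

    product['description'] = description_tag.get_text(strip=True) if description_tag else ''
    product['sku'] = sku_tag.get_text(strip=True) if sku_tag else ''

    print(f"Name: {product['name']}")
    print(f"Price: {product['price']}")
    print(f"Description: {product['description']}")
    print(f"SKU: {product['sku']}")
    print('-' * 30)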
This code sorts the products by price in descending order. Then, for the top 5 highest-priced products, the script opens their product pages and extracts the product description and SKU using BeautifulSoup.
The output of the above code will list the name, price, description, and SKU for each of the top five highest-priced products.
To persist these extra details, you can rewrite products.csv with the enriched dictionaries, adding description and sku to the field names passed to csv.DictWriter.
Scraping pages with infinite scrolling or "Load more" buttons can seem challenging, but using tools like Requests, Selenium, and BeautifulSoup simplifies the process.
This tutorial showed how to retrieve and process product data from a demo page, saving it in a structured format for quick and easy access.