


How to Scrape Login-Protected Websites with Selenium (Step by Step Guide)
My Steps to Scrape a Password-Protected Website:
- Capture the HTML form elements: username ID, password ID, and login button class
- Use a tool like requests or Selenium to automate the login: fill in the username, wait, fill in the password, wait, click login
- Store the session cookies for authentication
- Continue scraping the authenticated pages
Disclaimer: I’ve built an API for this specific use case at https://www.scrapewebapp.com/. So if you just want to get it done fast, use it; otherwise, read on.
Let’s use a concrete example: say I want to scrape my own API key from my account at https://www.scrapewebapp.com/. It lives on this page: https://app.scrapewebapp.com/account/api_key
1. The Login Page
First, you need to find the login page. Most websites will respond with a redirect (such as a 303) if you try to access a page that sits behind a login, so if you request https://app.scrapewebapp.com/account/api_key directly, you will automatically be sent to the login page at https://app.scrapewebapp.com/login. This makes it a good way to automate finding the login page when it is not provided already.
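As a quick illustration of that redirect trick, here is a minimal sketch: request the protected page with requests, let it follow redirects, and read back the URL you actually landed on. The URLs are the ones from this example; adjust them for your own target.

import requests

# Request the protected page and let requests follow redirects;
# for a login-protected page this usually ends on the login form.
protected_url = "https://app.scrapewebapp.com/account/api_key"
response = requests.get(protected_url, allow_redirects=True)

login_url = response.url  # the page we actually ended up on
print("Login page discovered at:", login_url)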
OK, now that we have the login page, we need to find where to enter the username or email and the password, as well as the actual sign-in button. The best way is to write a small script that finds the IDs of the inputs by their type ("email", "username", "password") and finds the button of type "submit". I wrote some code for you below:
from bs4 import BeautifulSoup


def extract_login_form(html_content: str):
    """
    Extracts the login form elements from the given HTML content and returns their CSS selectors.
    """
    soup = BeautifulSoup(html_content, "html.parser")

    # Finding the username/email field
    username_email = (
        soup.find("input", {"type": "email"})
        or soup.find("input", {"name": "username"})
        or soup.find("input", {"type": "text"})
    )  # Fallback to input type text if no email type is found

    # Finding the password field
    password = soup.find("input", {"type": "password"})

    # Finding the login button
    # Searching for buttons/input of type submit closest to the password or username field
    login_button = None

    # First try to find a submit button within the same form
    if password:
        form = password.find_parent("form")
        if form:
            login_button = form.find("button", {"type": "submit"}) or form.find(
                "input", {"type": "submit"}
            )
    # If no button is found in the form, fall back to finding any submit button
    if not login_button:
        login_button = soup.find("button", {"type": "submit"}) or soup.find(
            "input", {"type": "submit"}
        )

    # Extracting CSS selectors
    def generate_css_selector(element, element_type):
        if "id" in element.attrs:
            return f"#{element['id']}"
        elif "type" in element.attrs:
            return f"{element_type}[type='{element['type']}']"
        else:
            return element_type

    # Generate CSS selectors with the updated logic
    username_email_css_selector = None
    if username_email:
        username_email_css_selector = generate_css_selector(username_email, "input")

    password_css_selector = None
    if password:
        password_css_selector = generate_css_selector(password, "input")

    login_button_css_selector = None
    if login_button:
        login_button_css_selector = generate_css_selector(
            login_button, "button" if login_button.name == "button" else "input"
        )

    return username_email_css_selector, password_css_selector, login_button_css_selector


def main(html_content: str):
    # Call the extract_login_form function and return its result
    return extract_login_form(html_content)
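For reference, here is a hypothetical way to use extract_login_form: fetch the login page HTML with requests (using the login URL from this example) and print the selectors it finds.

import requests

html = requests.get("https://app.scrapewebapp.com/login").text
username_selector, password_selector, button_selector = extract_login_form(html)

# Exactly what comes back depends on the page's markup,
# e.g. "#email", "#password", "button[type='submit']"
print(username_selector, password_selector, button_selector)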
2. Using Selenium to Actually Log In
Now you need to create a Selenium WebDriver. We will run headless Chrome with Python. This is how to install it (the !-prefixed commands below are meant for a notebook environment such as Google Colab):
# Install selenium and chromium
!pip install selenium
!apt-get update
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin

import sys
sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')
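If you are running locally rather than in a notebook, a plainer setup is usually enough. The sketch below assumes Selenium 4.6+, which bundles Selenium Manager and downloads a matching chromedriver for you, so the Colab-style commands above are not needed.

# A minimal local setup sketch, assuming Selenium 4.6+ with Selenium Manager.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # modern headless mode
driver = webdriver.Chrome(options=options)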
Then we actually log into the website and save the cookies. We will save all of the cookies, but you could keep only the auth cookies if you wanted.
# Imports
from selenium import webdriver
from selenium.webdriver.common.by import By
import requests
import time

# Set up Chrome options
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

# Initialize the WebDriver
driver = webdriver.Chrome(options=chrome_options)

try:
    # Open the login page
    driver.get("https://app.scrapewebapp.com/login")

    # Find the email input field by ID and input your email
    email_input = driver.find_element(By.ID, "email")
    email_input.send_keys("******@gmail.com")

    # Find the password input field by ID and input your password
    password_input = driver.find_element(By.ID, "password")
    password_input.send_keys("*******")

    # Find the login button and submit the form
    login_button = driver.find_element(By.CSS_SELECTOR, "button[type='submit']")
    login_button.click()

    # Wait for the login process to complete
    time.sleep(5)  # Adjust this depending on your site's response time
except Exception:
    # If the login flow fails, close the browser before re-raising
    driver.quit()
    raise

# Do not quit the driver yet: the next step still needs its cookies.
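As a small optional refinement (not part of the original snippet), you can replace the fixed time.sleep(5) with an explicit wait for the URL to change after clicking login:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the browser to leave the login page
WebDriverWait(driver, 10).until(EC.url_changes("https://app.scrapewebapp.com/login"))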
3. Store Cookies
It is as simple as saving them into a dictionary from the driver.get_cookies() function.
def save_cookies(driver):
    """Save cookies from the Selenium WebDriver into a dictionary."""
    cookies = driver.get_cookies()
    cookie_dict = {}
    for cookie in cookies:
        cookie_dict[cookie['name']] = cookie['value']
    return cookie_dict
# Save the cookies from the WebDriver
cookies = save_cookies(driver)

# Now that the cookies are saved, the browser can be closed
driver.quit()
4. Get Data from Our Logged-in Session
In this part, we will use the simple requests library, but you could keep using Selenium too.
Now we want to get the actual API key from this page: https://app.scrapewebapp.com/account/api_key.
So we create a session with the requests library, add each cookie to it, then request the URL and print the response text.
def scrape_api_key(cookies):
    """Use cookies to scrape the /account/api_key page."""
    url = 'https://app.scrapewebapp.com/account/api_key'

    # Set up the session to persist cookies
    session = requests.Session()

    # Add cookies from Selenium to the requests session
    for name, value in cookies.items():
        session.cookies.set(name, value)

    # Make the request to the /account/api_key page
    response = session.get(url)

    # Check if the request is successful
    if response.status_code == 200:
        print("API Key page content:")
        print(response.text)  # Print the page content (could contain the API key)
    else:
        print(f"Failed to retrieve API key page, status code: {response.status_code}")
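Tying the steps together, the call below assumes cookies still holds the dictionary saved from the Selenium session in step 3.

# Use the cookie dictionary from step 3 to fetch the protected page
scrape_api_key(cookies)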
5. Get the Actual Data You Want (BONUS)
We got the page text we wanted, but it contains a lot of data we do not care about. We just want the api_key.
The best and easiest way to do that is to use an LLM such as ChatGPT (the GPT-4o model).
Prompt the model like this: “You are an expert scraper and you will extract only the information asked from the context. I need the value of my api-key from {context}”
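Here is a minimal sketch of that idea, assuming you use the official openai Python package and have OPENAI_API_KEY set in your environment; page_text is the response text fetched in step 4.

from openai import OpenAI

client = OpenAI()

def extract_api_key_with_llm(page_text: str) -> str:
    # Send the scraped page text as context and ask only for the api_key
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You are an expert scraper and you will extract only the information asked from the context.",
            },
            {
                "role": "user",
                "content": f"I need the value of my api-key from {page_text}",
            },
        ],
    )
    # The model's reply should contain just the api_key value
    return response.choices[0].message.content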
If you want all that in a simple and reliable API, please give my new product a try: https://www.scrapewebapp.com/
If you like this post, please give me claps and follow me. It does help a lot!
