


Beautiful Soup: parse a list of many entries and save them in a DataFrame
I'm collecting data on dioceses around the world. My approach uses bs4 and pandas, and I'm currently working on the scraping logic.
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "http://www.catholic-hierarchy.org/"

# Send a GET request to the website
response = requests.get(url)

# My approach to parse the HTML content of the page
soup = BeautifulSoup(response.text, 'html.parser')

# Find the relevant elements containing diocese information
diocese_elements = soup.find_all("div", class_="diocesan")

# Initialize empty lists to store data
dioceses = []
addresses = []

# Extract data from each diocese element
for diocese_element in diocese_elements:
    # Example: extracting the diocese name
    diocese_name = diocese_element.find("a").text.strip()
    dioceses.append(diocese_name)

    # Example: extracting the address
    address = diocese_element.find("div", class_="address").text.strip()
    addresses.append(address)

# To save all the data, create a DataFrame using pandas
data = {'Diocese': dioceses, 'Address': addresses}
df = pd.DataFrame(data)

# Display the DataFrame
print(df)
However, I've discovered something strange in PyCharm, and I'm trying to find a way to collect all of the data using pandas methods.
Correct Answer
This example will get you started: it parses the diocese listing pages, collects each diocese name and URL, and stores them in a pandas DataFrame.
You can then iterate over these URLs to pull any further information you need (a sketch of that second step follows the sample output below).
import pandas as pd
import requests
from bs4 import BeautifulSoup

chars = "abcdefghijklmnopqrstuvwxyz"
url = "http://www.catholic-hierarchy.org/diocese/la{char}.html"

all_data = []
for char in chars:
    u = url.format(char=char)
    while True:
        print(f"Parsing {u}")
        soup = BeautifulSoup(requests.get(u).content, "html.parser")

        # Collect every diocese link on the current listing page
        for a in soup.select("li a[href^=d]"):
            all_data.append(
                {
                    "Name": a.text,
                    "URL": "http://www.catholic-hierarchy.org/diocese/" + a["href"],
                }
            )

        # Follow the "[next page]" link, if there is one
        next_page = soup.select_one('a:has(img[alt="[next page]"])')
        if not next_page:
            break
        u = "http://www.catholic-hierarchy.org/diocese/" + next_page["href"]

df = pd.DataFrame(all_data).drop_duplicates()
print(df.head(10))
Prints:
...
Parsing http://www.catholic-hierarchy.org/diocese/lax.html
Parsing http://www.catholic-hierarchy.org/diocese/lay.html
Parsing http://www.catholic-hierarchy.org/diocese/laz.html
               Name                                                    URL
0          Holy See  http://www.catholic-hierarchy.org/diocese/droma.html
1   Diocese of Rome  http://www.catholic-hierarchy.org/diocese/droma.html
2            Aachen  http://www.catholic-hierarchy.org/diocese/da549.html
3            Aachen  http://www.catholic-hierarchy.org/diocese/daach.html
4    Aarhus (Århus)  http://www.catholic-hierarchy.org/diocese/da566.html
5               Aba  http://www.catholic-hierarchy.org/diocese/dabaa.html
6        Abaetetuba  http://www.catholic-hierarchy.org/diocese/dabae.html
8         Abakaliki  http://www.catholic-hierarchy.org/diocese/dabak.html
9           Abancay  http://www.catholic-hierarchy.org/diocese/daban.html
10        Abaradira  http://www.catholic-hierarchy.org/diocese/d2a01.html
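As a follow-up to the answer's last point, here is a minimal sketch of the second step: looping over the collected URLs and pulling extra detail from each diocese page. The fields extracted below (the page title and h2 headings) are only placeholder assumptions for illustration; inspect the real pages and swap in the tags or classes that hold the data you actually need.

import time

import pandas as pd
import requests
from bs4 import BeautifulSoup

# Assumes `df` is the DataFrame built above, with "Name" and "URL" columns.
details = []
for url in df["URL"].head(5):  # limit to a few pages while testing
    print(f"Fetching {url}")
    soup = BeautifulSoup(requests.get(url).content, "html.parser")

    details.append(
        {
            "URL": url,
            # Placeholder fields: the <title> tag and <h2> headings exist on most
            # pages, but they are assumptions here, not the site's documented layout.
            "PageTitle": soup.title.get_text(strip=True) if soup.title else "",
            "Headings": "; ".join(h.get_text(strip=True) for h in soup.find_all("h2")),
        }
    )
    time.sleep(1)  # be polite to the server between requests

detail_df = pd.DataFrame(details)
# Join the extra detail back onto the original name/URL table
print(detail_df.merge(df, on="URL", how="left").head())

Keeping the loop limited while testing and sleeping between requests avoids hammering the site; remove the .head(5) once the selectors are right.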