
Webscraping with Python: using CSV as a database

Dec 30, 2024, 09:09 AM


I had a very interesting request these days. A person was migrating data from one place to another using CSV. The data are book registrations for a reading project. At one point, she said to me: "well, now the rest of the work is for the robot. I'll have to get the ISBN of each title." As she said, it's a robot's job, so why not let a robot do it?

ISBN is short for International Standard Book Number.

A work can have several ISBNs, because each edition gets its own. In our case, any ISBN would work, as long as the media type is compatible. The following media types were registered in the CSV:
-> ebook
-> physical
-> audio

Let's get to the logic:
-> Load and open the CSV file.
-> Extract the column with the titles.
-> Extract the media column.
-> For each title, search Google for its ISBN.
-> Extract the title from the results page.
-> Extract the list of ISBNs.
-> Extract the list of media types.
-> Check the registered media type and pick the closest matching ISBN; if our criterion is not found, return the first item in the list.
-> Record which media type we took the ISBN from, for later verification.

Let's look at the necessary libs:

import requests                # to make the requests
from bs4 import BeautifulSoup  # to parse the HTML we receive
import pandas as pd            # to handle the CSV files
import time
import random                  # these two are for generating random access intervals

This list of books has more than 600 items, and since I don't want to be blocked by Google, we're going to make the requests at random, more human-like intervals. We'll also use a header to say that we want the browser version of the page. To find yours, open the "Network" tab in your browser's developer tools and look for "User-Agent".
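For reference, a minimal sketch of what such a header looks like; the User-Agent string below is just an example from a desktop Chrome, so substitute whatever your own browser reports:

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
}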

To search on Google, we use the following URL pattern:

url_base = "https://www.google.com/search?q=isbn" # everything after '=' is the search query
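As an aside, if your titles contain characters other than spaces that need escaping, the standard library can build the query for you. Here is a minimal sketch; quote_plus is a real function from Python's urllib.parse, but this helper is my own addition, not part of the original script:

from urllib.parse import quote_plus

def build_search_url(title):
    # quote_plus replaces spaces with '+' and escapes other special characters
    return "https://www.google.com/search?q=isbn+" + quote_plus(title)

print(build_search_url("this other eden"))  # ...search?q=isbn+this+other+eden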

Remember that URLs cannot contain spaces, so we will replace the spaces in the titles with "+". In pandas, "spreadsheets" are called DataFrames, and it is very common to use df as an abbreviation. Lastly, maybe you're on Windows like me, in which case path separators are inverted relative to Unix. Let's write a function that takes the path we paste and converts it to the other format.

path = r"C:\path\books.csv"

def invert_url_pattern(url):
    # note the escaped backslash: "\" on its own is a Python syntax error
    return url.replace("\\", "/")

path = invert_url_pattern(path)

def search_book(path):
    url_base = "https://www.google.com/search?q=isbn"
    headers = {
        "User-Agent": "your User-Agent string here"
    }

    df = pd.read_csv(path, encoding='utf-8')
    books = df["Name"].tolist()
    media = df["media"].tolist()
    # we collect the results here and add them all to the DataFrame at the end
    title_books = []
    isbn_books = []
    media_books = []

    for index, book in enumerate(books):
        time.sleep(random.uniform(60, 90))

        url = url_base + "+" + book.replace(" ", "+")
        req = requests.get(url, headers=headers)

        site = BeautifulSoup(req.text, "html.parser")
        # we use the CSS classes to find the content
        # (these class names are auto-generated by Google and change over time,
        # so check them in your browser's DevTools before running)
        title = site.find("span", class_="Wkr6U")
        isbns = site.find_all("div", class_="bVj5Zb")
        medias = site.find_all("div", class_="TCYkdd")
        # if anything is missing, store empty strings and move on
        if title is None or not isbns or not medias:
            title_books.append("")
            isbn_books.append("")
            media_books.append("")
            continue

        # In the loop below, the last item stored for a key wins, since we
        # walk the lists from top to bottom. Reversing both lists (keeping
        # the media/ISBN pairs aligned) ensures that the newest edition of
        # each category is processed last.
        isbns = isbns[::-1]
        medias = medias[::-1]
        unified_data = {}

        for m, i in zip(medias, isbns):
            unified_data[m.text] = i.text

        # fall back to the first ISBN found when the requested media is missing
        first_isbn = next(iter(unified_data.values()), "")

        # the keys below are the labels Google shows on Portuguese-language results
        match media[index]:
            case "ebook":
                isbn_books.append(unified_data.get("Livro digital", first_isbn))
                media_books.append("Livro digital")
            case "physical":
                isbn_books.append(unified_data.get("Livro capa dura", first_isbn))
                media_books.append("Livro capa dura")
            case "audio":
                isbn_books.append(unified_data.get("Audiolivro", first_isbn))
                media_books.append("Audiolivro")
            case _:
                isbn_books.append(first_isbn)
                media_books.append("")

        title_books.append(title.text)

    df["Titulo do Livro"] = title_books
    df["ISBN"] = isbn_books
    df["Tipo de Livro"] = media_books

    return df

Okay, everything is ready for us to test! I'll leave an example line of what I received so you can test it too.

Name,language,media
this other eden,english,audio

df = search_book(path)

df.to_csv(invert_url_pattern(r"C:\your\path\to\save\file_name.csv"), encoding='utf-8', index=False)
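One practical note: with more than 600 titles and 60 to 90 seconds between requests, a full run takes over ten hours, and an interruption midway would lose everything collected so far. A minimal sketch of one way to soften that, running the scraper over slices of the CSV and appending each slice's results as it finishes; the function name, chunk size, and temporary file are my own choices, not part of the original script:

def search_in_chunks(path, out, chunk_size=50):
    df = pd.read_csv(path, encoding="utf-8")
    for start in range(0, len(df), chunk_size):
        # write the current slice to a temporary CSV and run the scraper on it
        df.iloc[start:start + chunk_size].to_csv("chunk.csv", encoding="utf-8", index=False)
        result = search_book("chunk.csv")
        # append to the output file; write the header only for the first slice
        result.to_csv(out, mode="a", encoding="utf-8", index=False, header=(start == 0))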

I hope it was useful for you, and that you can automate something in your day-to-day life!

