I recently got an interesting request. Someone was migrating data from one place to another using CSV files. The data consisted of book records for a reading project. At one point she told me: "well, the rest of the work is a robot's job now. I'll have to look up the ISBN of each title." As she said, it's a robot's job, so why not let a robot do it?
ISBN stands for International Standard Book Number.
A single work can have several ISBNs, because each edition gets its own. In our case, any ISBN would do as long as the media type matches. The CSV records use the following media types:
-> ebook
-> physical
-> audio
Let's get to the logic:
-> Load and open the CSV file.
-> Extract the column with the titles.
-> Extract the media column.
-> For each title, search Google for its ISBN.
-> Extract the title from the page.
-> Extract a list of ISBNs.
-> Extract a list of media.
-> Check the media registered in the CSV and pick the closest matching ISBN; if no match is found, fall back to the first item in the list.
-> Record which media the ISBN came from, for later verification (a minimal sketch of this whole flow follows right after this list).
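To make the full script easier to follow, here is a minimal roadmap of that flow. It assumes the CSV has `Name` and `media` columns (shown later in the example row); the real implementation comes further down.

```python
import pandas as pd

def isbn_pipeline(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, encoding="utf-8")   # 1. load and open the CSV
    books = df["Name"].tolist()                # 2. column with the titles
    media = df["media"].tolist()               # 3. media column

    for index, book in enumerate(books):
        # 4. search Google for "isbn <title>"
        # 5. extract the title from the result page
        # 6. extract the list of ISBNs
        # 7. extract the list of media
        # 8. pick the ISBN matching media[index], or fall back to the first one
        # 9. record which media the ISBN came from
        pass

    return df
```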
Let's look at the necessary libs:
```python
import requests                 # to make the HTTP requests
from bs4 import BeautifulSoup   # to parse the HTML we receive
import pandas as pd             # to handle the CSV files
import time
import random                   # time + random let us generate random access intervals
```
This list of books has more than 600 items, and since I don't want to be blocked by Google, we're going to space the requests out randomly, at a more human pace. We'll also send a header saying that we want the browser version of the page. To get yours, open the "Network" tab in your browser's dev tools and look for "User-Agent".
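As a reference, this is roughly what that looks like. The User-Agent string below is just an example copied from a desktop browser; replace it with the one from your own "Network" tab.

```python
import time
import random

import requests

# Example only: paste the User-Agent from your own browser here.
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/120.0.0.0 Safari/537.36"
    )
}

# Wait somewhere between 60 and 90 seconds between requests,
# so the access pattern looks more human.
time.sleep(random.uniform(60, 90))

response = requests.get("https://www.google.com/search?q=isbn+some+title", headers=headers)
```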
To search on Google, we use the following URL pattern:
```python
url_base = "https://www.google.com/search?q=isbn"  # everything after '=' is the search query
```
Remember that URLs cannot contain spaces, so we will replace the spaces in the titles with "+". In pandas, "spreadsheets" are called DataFrames, and it is very common to use df as an abbreviation. Lastly, maybe you're on Windows like me, in which case the path separators are inverted relative to Unix. Let's write a function that takes the path we paste and converts it to the other format.
```python
path = r"C:\caminho\livros.csv"

def invert_url_pattern(url):
    return url.replace("\\", "/")

path = invert_url_pattern(path)

def search_book(path):
    url_base = "https://www.google.com/search?q=isbn"
    headers = {
        "User-Agent": "your User-Agent here"  # paste the string from your browser
    }

    df = pd.read_csv(path, encoding='utf-8')
    books = df["Name"].tolist()
    media = df["media"].tolist()

    # we collect the results here and then add them all to the DataFrame
    title_books = []
    isbn_books = []
    media_books = []

    for index, book in enumerate(books):
        time.sleep(random.uniform(60, 90))
        url = url_base + "+" + book.replace(" ", "+")
        req = requests.get(url, headers=headers)
        site = BeautifulSoup(req.text, "html.parser")

        # we use the CSS classes to find the content
        title = site.find("span", class_="Wkr6U")
        isbns = site.find_all("div", class_="bVj5Zb")
        medias = site.find_all("div", class_="TCYkdd")

        # if something fails, we store empty strings and move on
        if title is None or not isbns or not medias:
            title_books.append("")
            isbn_books.append("")
            media_books.append("")
            continue

        # Inside the loop the last item processed wins, since we walk the
        # lists top to bottom. Reversing both lists guarantees that the most
        # recent ISBN of each category is the one that ends up in the dict.
        isbns = isbns[::-1]
        medias = medias[::-1]

        unified_data = {}
        for media_tag, isbn_tag in zip(medias, isbns):
            unified_data[media_tag.text] = isbn_tag.text

        match media[index]:
            case "ebook":
                isbn_books.append(unified_data["Livro digital"])
                media_books.append("Livro digital")
            case "physical":
                isbn_books.append(unified_data["Livro capa dura"])
                media_books.append("Livro capa dura")
            case "audio":
                isbn_books.append(unified_data["Audiolivro"])
                media_books.append("Audiolivro")
            case _:
                # unknown media: fall back to the first ISBN we collected
                isbn_books.append(next(iter(unified_data.values()), ""))
                media_books.append("")

        title_books.append(title.text)

    df["Titulo do Livro"] = title_books
    df["ISBN"] = isbn_books
    df["Tipo de Livro"] = media_books
    return df
```
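One fragile point worth calling out: `unified_data[...]` raises a `KeyError` whenever Google's result box doesn't list the media we asked for. A small variation of the selection step, sketched below as an idea rather than part of the original script, uses `dict.get` to keep the "fall back to the first ISBN" behaviour from our logic list without crashing. The `labels` mapping and the `pick_isbn` helper are hypothetical names introduced here for illustration.

```python
# Hypothetical variation of the selection step, not part of the original script.
# labels maps the CSV media values to the labels Google shows (pt-BR in this case).
labels = {
    "ebook": "Livro digital",
    "physical": "Livro capa dura",
    "audio": "Audiolivro",
}

def pick_isbn(unified_data: dict, requested_media: str) -> tuple[str, str]:
    """Return (isbn, media_label); fall back to the first ISBN collected."""
    label = labels.get(requested_media)
    if label and label in unified_data:
        return unified_data[label], label
    first_isbn = next(iter(unified_data.values()), "")
    return first_isbn, ""
```

Inside the loop you would then call `isbn, found_media = pick_isbn(unified_data, media[index])` instead of the match block.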
Okay, everything is ready for us to test! I'll leave an example row of what I received so you can try it out.
| Name | language | media |
| --- | --- | --- |
| this other eden | english | audio |
```python
df = search_book(path)
df.to_csv(
    invert_url_pattern(r"C:\seu\caminho\para\salvar\nome_do_arquivo.csv"),
    encoding='utf-8',
    index=False,
)
```
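Since the "Tipo de Livro" column exists precisely for later verification, an optional extra step (just an idea, not part of the original flow) is to list the rows that came back empty so they can be reviewed by hand:

```python
# Optional: rows where no ISBN was found, for manual review.
missing = df[df["ISBN"] == ""]
print(f"{len(missing)} of {len(df)} books still need a manual ISBN lookup:")
print(missing[["Name", "media"]])
```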
I hope it was useful for you, and that you can automate something in your day-to-day life!