So I have a script that pulls the name and price of each mineral from 14 pages (so far) and saves them to a .txt file. I first tried it with only Page 1, then added more pages to get more data. But now the code grabs something it shouldn't - random names/strings. I didn't expect it to pick those up, but it does, and it assigns the wrong price to them! Once a mineral has such an "unexpected name", the entire rest of the list gets the wrong prices. See below:
Since such a string has a different format from the others, the code further down cannot split it and raises this error:
cutted2 = split2.pop(1)
          ^^^^^^^^^^^^^
IndexError: pop index out of range
I tried to ignore these errors using one of the methods suggested in other Stack Overflow answers:
try:
    cutted2 = split2.pop(1)
except IndexError:
    continue
It does work and no errors occur... but then it pairs minerals with the wrong prices (as I noticed)! How can I change the code to skip these "weird" names and continue with the list? Below is the full code; I remember it stopped at URL5 with the pop index error:
import requests
from bs4 import BeautifulSoup
import re


def collecter(URL):
    headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36"}
    soup = BeautifulSoup(requests.get(URL, headers=headers).text, "lxml")
    names = [n.getText(strip=True) for n in soup.select("table tr td font a")]
    prices = [
        p.getText(strip=True).split("Price:")[-1]
        for p in soup.select("table tr td font font")
    ]
    names[:] = [" ".join(n.split()) for n in names if not n.startswith("[")]
    prices[:] = [p for p in prices if p]
    with open("Minerals.txt", "a+", encoding='utf-8') as file:
        for name, price in zip(names, prices):
            # print(f"{name}\n{price}")
            # print("-" * 50)
            filename = str(name)+" "+str(price)+"\n"
            split1 = filename.split(' / ')
            cutted1 = split1.pop(0)
            split2 = cutted1.split(": ")
            try:
                cutted2 = split2.pop(1)
            except IndexError:
                continue
            two_prices = cutted2+" "+split1.pop(0)+"\n"
            file.write(two_prices)


URL1 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=0"
URL2 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=25"
URL3 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=50"
URL4 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=75"
URL5 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=100"
URL6 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=125"
URL7 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=150"
URL8 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=175"
URL9 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=200"
URL10 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=225"
URL11 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=250"
URL12 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=275"
URL13 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=300"
URL14 = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=325"

collecter(URL1)
collecter(URL2)
collecter(URL3)
collecter(URL4)
collecter(URL5)
collecter(URL6)
collecter(URL7)
collecter(URL8)
collecter(URL9)
collecter(URL10)
collecter(URL11)
collecter(URL12)
collecter(URL13)
collecter(URL14)
EDIT: Below is the fully working code, thanks to the helper!
import requests
from bs4 import BeautifulSoup
import re

for URL in range(0, 2569, 25):
    headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36"}
    soup = BeautifulSoup(requests.get(f'https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First={URL}', headers=headers).text, "lxml")
    names = [n.getText(strip=True) for n in soup.select("table tr td font>a")]
    prices = [p.getText(strip=True).split("Price:")[-1] for p in soup.select("table tr td font>font")]
    names[:] = [" ".join(n.split()) for n in names if not n.startswith("[")]
    prices[:] = [p for p in prices if p]
    with open("MineralsList.txt", "a+", encoding='utf-8') as file:
        for name, price in zip(names, prices):
            # print(f"{name}\n{price}")
            # print("-" * 50)
            filename = str(name)+" "+str(price)+"\n"
            split1 = filename.split(' / ')
            cutted1 = split1.pop(0)
            split2 = cutted1.split(": ")
            cutted2 = split2.pop(1)
            try:
                two_prices = cutted2+" "+split1.pop(0)+"\n"
            except IndexError:
                two_prices = cutted2+"\n"
            file.write(two_prices)
But after some changes it stops with a new error - it cannot find the string matching the given selector, hence the error "IndexError: pop from empty list"... even with soup.select("table tr td font>font"), which helps here just as font>a does for the names.
You can try the next example, which handles the pagination as well:
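A minimal sketch of that approach, assuming 25 results per page and the same 2569-result upper bound used in the asker's edit above; it steps the First query parameter and pairs names with prices via the child-combinator selectors:

import requests
from bs4 import BeautifulSoup

BASE = ("https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms="
        "&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange="
        "&checkbox=enventa&First={offset}")
HEADERS = {"User-Agent": "Mozilla/5.0"}

for offset in range(0, 2569, 25):  # one request per results page
    soup = BeautifulSoup(requests.get(BASE.format(offset=offset), headers=HEADERS).text, "lxml")
    # direct children only, so deeper-nested navigation links are skipped
    names = [n.getText(strip=True) for n in soup.select("table tr td font>a")]
    prices = [p.getText(strip=True).split("Price:")[-1]
              for p in soup.select("table tr td font>font")]
    names = [" ".join(n.split()) for n in names if not n.startswith("[")]
    prices = [p for p in prices if p]
    for name, price in zip(names, prices):
        print(name, price)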
Output: one line per mineral, with its cleaned-up name followed by the price.
You just need to make the CSS selector more specific so that only links that are directly inside the font element (rather than a few levels down) are recognized:
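For instance, a sketch against the first results page, keeping the question's selectors but switching from the descendant combinator (a space) to the child combinator (>):

import requests
from bs4 import BeautifulSoup

url = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=0"
soup = BeautifulSoup(requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text, "lxml")

# "font a" also matches anchors nested several levels down;
# "font>a" keeps only anchors sitting directly inside the font tag
names = [n.getText(strip=True) for n in soup.select("table tr td font>a")]
prices = [p.getText(strip=True).split("Price:")[-1]
          for p in soup.select("table tr td font>font")]

This is the same selector change that appears in the working code in the question's edit.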
Adding a further condition that the link points to a single item rather than the next/previous page links at the bottom of the page would also help:
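One way to express that condition is an attribute selector on the link's href; note that the "search_show" fragment below is only a hypothetical placeholder, since the exact URL pattern of the single-item pages isn't shown above:

import requests
from bs4 import BeautifulSoup

url = "https://www.fabreminerals.com/search_results.php?LANG=EN&SearchTerms=&submit=Buscar&MineralSpeciment=&Country=&Locality=&PriceRange=&checkbox=enventa&First=0"
soup = BeautifulSoup(requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text, "lxml")

# "search_show" is a made-up href fragment -- inspect one specimen link in the
# page source and substitute whatever its real URL contains
item_links = soup.select('table tr td font>a[href*="search_show"]')
names = [" ".join(a.getText(strip=True).split()) for a in item_links]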