Python is one of the most popular programming languages today and is widely used across many fields, such as data science, artificial intelligence, and network security. It is especially strong in the area of web scraping, and many companies and individuals use it for data collection and analysis. This article shows how to scrape book information from Douban with Python, giving readers a first look at how a Python web crawler is built.
To build the Douban book crawler we need two important Python libraries: urllib and beautifulsoup4. The urllib library handles network requests and reading the response data, while beautifulsoup4 parses structured documents such as HTML and XML so we can extract the information we need. Note that urllib is part of Python's standard library and needs no installation; only beautifulsoup4 has to be installed, which a single pip command takes care of. Once it is installed, we can start the hands-on work.
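The install step looks like this (assuming pip is available on your PATH; on some systems the command is `pip3`):

```shell
pip install beautifulsoup4
```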
When writing a crawler in Python, the first step is to define the scraping target clearly. For this article, the goal is to collect the basic details of Douban books: title, author, publisher, publication date, rating, and so on. In addition, we want to scrape multiple pages of book listings, not just one.
With the target settled, the next step is to analyze the HTML structure of the Douban book pages to locate the tags and attributes that carry the information we want. The developer tools built into browsers such as Chrome or Firefox let us inspect the page source. By studying the HTML structure, we can identify the elements to scrape and then write the Python code that extracts them.
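To make the structure concrete, here is a heavily simplified sketch of the kind of markup the developer tools reveal for one list entry. The tag and class names (`div.pl2`, `p.pl`, `span.rating_nums`) match what the full script below relies on, but the book shown is only an illustrative example:

```python
from bs4 import BeautifulSoup

# Simplified fragment of a single Douban Top 250 entry; the real page
# wraps each book in a <table> inside <div class="article">.
html = '''
<div class="article">
  <table>
    <div class="pl2"><a href="https://book.douban.com/subject/1007305/" title="红楼梦">红楼梦</a></div>
    <p class="pl">[清] 曹雪芹 著 / 人民文学出版社 / 1996-12 / 59.70元</p>
    <span class="rating_nums">9.6</span>
    <span class="pl">(412345人评价)</span>
  </table>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
table = soup.find('div', attrs={'class': 'article'}).find('table')

# The title lives in the <a> tag's "title" attribute inside div.pl2
title = table.find('div', attrs={'class': 'pl2'}).find('a').get('title')
# p.pl holds "author / publisher / year / price" as one slash-separated string
info = table.find('p', attrs={'class': 'pl'}).string.strip()
# span.rating_nums holds the numeric rating
rating = table.find('span', attrs={'class': 'rating_nums'}).string.strip()
print(title, rating, info)
```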
Next, we write the Douban book crawler in Python. The core of the code is:
```python
import urllib.request
from bs4 import BeautifulSoup

url = 'https://book.douban.com/top250'
books = []

def get_html(url):
    # Send the request with a browser User-Agent so Douban does not reject it
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'}
    req = urllib.request.Request(url, headers=headers)
    response = urllib.request.urlopen(req)
    html = response.read().decode('utf-8')
    return html

def parse_html(html):
    soup = BeautifulSoup(html, 'html.parser')
    book_list_soup = soup.find('div', attrs={'class': 'article'})
    for book_soup in book_list_soup.find_all('table'):
        book_title_soup = book_soup.find('div', attrs={'class': 'pl2'})
        book_title_link = book_title_soup.find('a')
        book_title = book_title_link.get('title')
        book_url = book_title_link.get('href')
        # p.pl holds "author / publisher / year / price"
        book_info_soup = book_soup.find('p', attrs={'class': 'pl'})
        book_info = book_info_soup.string.strip()
        book_rating_num_soup = book_soup.find('span', attrs={'class': 'rating_nums'})
        book_rating_num = book_rating_num_soup.string.strip()
        # span.pl holds "(12345人评价)"; slice off the parentheses and suffix
        book_rating_people_num_span_soup = book_soup.find('span', attrs={'class': 'pl'})
        book_rating_people_num = book_rating_people_num_span_soup.string.strip()[1:-4]
        # Split the info string on '/' to recover author, publisher and year
        book_author_and_publish = book_info.split('/')
        book_author = book_author_and_publish[0].strip()
        book_publish = book_author_and_publish[-3].strip()
        book_year = book_author_and_publish[-2].strip()
        books.append({
            'title': book_title,
            'url': book_url,
            'info': book_info,
            'author': book_author,
            'publish': book_publish,
            'year': book_year,
            'rating_num': book_rating_num,
            'rating_people_num': book_rating_people_num
        })

def save_data():
    with open('douban_top250.txt', 'w', encoding='utf-8') as f:
        for book in books:
            f.write('书名:{0}\n'.format(book['title']))
            f.write('链接:{0}\n'.format(book['url']))
            f.write('信息:{0}\n'.format(book['info']))
            f.write('作者:{0}\n'.format(book['author']))
            f.write('出版社:{0}\n'.format(book['publish']))
            f.write('出版年份:{0}\n'.format(book['year']))
            f.write('评分:{0}\n'.format(book['rating_num']))
            f.write('评分人数:{0}\n\n'.format(book['rating_people_num']))

if __name__ == '__main__':
    # The Top 250 list spans 10 pages of 25 books each
    for i in range(10):
        start = i * 25
        url = 'https://book.douban.com/top250?start={0}'.format(start)
        html = get_html(url)
        parse_html(html)
    save_data()
```
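The pagination arithmetic in the main loop is worth a quick sanity check: Douban's Top 250 shows 25 books per page, and the `start` query parameter is the offset of the first book on each page. The ten page URLs the loop visits can be generated up front:

```python
# 10 pages x 25 books per page; "start" is the zero-based offset
# of the first book on each page (0, 25, 50, ..., 225).
urls = ['https://book.douban.com/top250?start={0}'.format(i * 25) for i in range(10)]

print(urls[0])   # first page, offset 0
print(urls[-1])  # last page, offset 225
```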
That concludes this hands-on walkthrough of a Python crawler for Douban books. For more content like this, see the other related articles on PHP中文網!