To parse an HTML document with Python and BeautifulSoup, load the HTML document and create a BeautifulSoup object, then use that object to find and process tag elements:

- Find the first matching tag: soup.find(tag_name)
- Find all matching tags: soup.find_all(tag_name)
- Find tags with specific attributes: soup.find(tag_name, {'attribute': 'value'})

Finally, extract the text content or attribute values of the matched tags, adjusting the code as needed to obtain the specific information you want.
Objective:
Learn how to parse HTML documents using Python and the BeautifulSoup library.
Required knowledge:
Basic Python syntax and the BeautifulSoup library (installable with pip install beautifulsoup4).
## Code:
```python
from bs4 import BeautifulSoup

# Load the HTML document
html_doc = """
<html>
<head>
<title>HTML Document</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<h1>Heading</h1>
<p>Paragraph</p>
</body>
</html>
"""

# Create a BeautifulSoup object
soup = BeautifulSoup(html_doc, 'html.parser')

# Get the title tag
title_tag = soup.find('title')
print(title_tag.text)  # Output: HTML Document

# Get all paragraph tags
paragraph_tags = soup.find_all('p')
for paragraph in paragraph_tags:
    print(paragraph.text)  # Output: Paragraph

# Get the value of a specific attribute
link_tag = soup.find('link', {'rel': 'stylesheet'})
print(link_tag['href'])  # Output: style.css
```
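One pitfall worth knowing: find() returns None when no matching tag exists, so a lookup that might fail should be guarded before you access .text or an attribute. A minimal sketch (the tag names here are illustrative):

```python
from bs4 import BeautifulSoup

html_doc = "<html><body><p>Paragraph</p></body></html>"
soup = BeautifulSoup(html_doc, 'html.parser')

# find() returns None when the tag is absent; guard before using the result
img_tag = soup.find('img')
if img_tag is None:
    print('no <img> tag found')

# Tag.get() returns None (or a supplied default) instead of raising KeyError
p_tag = soup.find('p')
print(p_tag.get('class', 'no class attribute'))
```

Using .get() for attributes and an explicit None check for tags keeps the parser from crashing on documents whose structure differs from what you expect.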
Practical case: a simple practical case is a crawler that uses BeautifulSoup to extract specified information from a web page. For example, the following code pulls question summaries from Stack Overflow:
```python
import requests
from bs4 import BeautifulSoup

url = 'https://stackoverflow.com/questions/31207139/using-beautifulsoup-to-extract-specific-attribute'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Note: these class names depend on Stack Overflow's markup at the time of
# writing and may need updating if the site's HTML changes
questions = soup.find_all('div', {'class': 'question-summary'})
for question in questions:
    question_title = question.find('a', {'class': 'question-hyperlink'}).text
    question_body = question.find('div', {'class': 'question-snippet'}).text
    print(f'Question title: {question_title}')
    print(f'Question body: {question_body}')
    print('---')
```
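Because live pages change and network requests can fail, it helps to verify the extraction logic against a saved snippet before scraping the real site. A minimal sketch using CSS selectors via soup.select, with hypothetical markup mirroring the class names above:

```python
from bs4 import BeautifulSoup

# Hypothetical saved snippet mirroring the class names used above
snippet = """
<div class="question-summary">
  <a class="question-hyperlink">How do I parse HTML?</a>
  <div class="question-snippet">I want to extract an attribute from a tag.</div>
</div>
"""
soup = BeautifulSoup(snippet, 'html.parser')

# select()/select_one() take CSS selectors, often more concise than nested find() calls
for summary in soup.select('div.question-summary'):
    title = summary.select_one('a.question-hyperlink').get_text(strip=True)
    body = summary.select_one('div.question-snippet').get_text(strip=True)
    print(f'Question title: {title}')
    print(f'Question body: {body}')
```

Once the selectors work on the snippet, the same loop can be pointed at the parsed live page.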