This article gives a detailed introduction to BeautifulSoup in Python, with code examples. It should be a useful reference for anyone who needs it; I hope it helps you.
Beautiful Soup provides simple, Pythonic functions for navigating, searching, and modifying a parse tree. It is a toolbox that extracts the data users need by parsing documents; because it is so simple, you can write a complete application without much code. Beautiful Soup automatically converts input documents to Unicode and output documents to UTF-8, so you don't need to worry about encodings unless the document doesn't specify one, in which case Beautiful Soup cannot detect the encoding automatically and you just need to state the original encoding yourself. Together with excellent parsers like lxml and html5lib, Beautiful Soup gives users the flexibility to choose different parsing strategies or to trade flexibility for speed.
Installation
pip install beautifulsoup4
(or: easy_install BeautifulSoup4)
Create BeautifulSoup object
First, import the BeautifulSoup class from the library: from bs4 import BeautifulSoup
Before creating the object, to make the demonstration easier, let's first define an HTML snippet to work with:
html = """ <html><head><title>The Dormouse's story</title></head> <body> <p class="title" name="dromouse"><b>The Dormouse's story</b></p> <p class="story">Once upon a time there were three little sisters; and their names were <a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>, <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>; and they lived at the bottom of a well.</p> <p class="story">...</p> """
Create the object: soup = BeautifulSoup(html, 'lxml'). The 'lxml' here is the parsing library; for now, I personally consider it the best parser available, and it is the one I always use. Install it with: pip install lxml
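As a minimal, runnable sketch of the steps above (the stdlib html.parser is used here so the snippet works even without lxml installed; with lxml available you would pass 'lxml' instead):

```python
from bs4 import BeautifulSoup

html = "<html><head><title>The Dormouse's story</title></head><body></body></html>"

# 'html.parser' ships with Python; the article recommends 'lxml' once installed.
soup = BeautifulSoup(html, "html.parser")
print(soup.title)         # <title>The Dormouse's story</title>
print(soup.title.string)  # The Dormouse's story
```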
Tag
A Tag is an HTML tag; BeautifulSoup can parse out the specific content of a Tag. The format is soup.name, where name is a tag in the HTML. A concrete example:
print(soup.title) outputs the content of the title tag, including the tag itself. Likewise, print(soup.head) outputs the head tag and everything inside it.
Note:
This format only retrieves the first matching tag; how to get multiple tags is covered later. A Tag has two important attributes, name and attrs, representing its name and its attributes respectively:
name: a Tag's name is the tag itself; for example, soup.p.name is 'p'.
attrs is a dictionary mapping attributes to values. For example, print(soup.p.attrs) outputs {'class': ['title'], 'name': 'dromouse'}. You can also fetch a specific value: print(soup.p.attrs['class']) outputs ['title'], which is a list, because a single attribute may hold multiple values. Alternatively, use the get method, e.g. print(soup.p.get('class')), or index the tag directly: print(soup.p['class']).
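The name and attrs behaviour above can be checked with a short, self-contained sketch (html.parser is used so no extra install is needed):

```python
from bs4 import BeautifulSoup

html = '<p class="title" name="dromouse"><b>The Dormouse\'s story</b></p>'
soup = BeautifulSoup(html, "html.parser")

tag = soup.p
print(tag.name)          # p
print(tag.attrs)         # {'class': ['title'], 'name': 'dromouse'}
print(tag.get("class"))  # ['title'] -- class is multi-valued, so it is a list
print(tag["class"])      # same result as get()
```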
get
The get method retrieves an attribute value from a tag. Note that this is an important method that is useful in many situations. For example, if you want to get the image URL from an <img> tag, you can use soup.img.get('src'). A concrete example:
print(soup.p.get("class"))  # gets the class attribute of the first p tag
string
Gets the text content of a tag. The content is returned only when the tag has no child tags, or exactly one child tag; otherwise None is returned. For example:
print(soup.p.string)     # in the text above, the p tag has no child tags, so its text is returned correctly
print(soup.html.string)  # returns None, because this html tag has many child tags
get_text()
Gets all the text content inside a tag, including the content of descendant nodes. This is the most commonly used method.
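A small sketch contrasting .string and get_text() on hypothetical markup (html.parser used for self-containment):

```python
from bs4 import BeautifulSoup

html = "<div><p>Hello <b>world</b></p></div>"
soup = BeautifulSoup(html, "html.parser")

print(soup.b.string)      # world       -- exactly one text child
print(soup.p.string)      # None        -- <p> has mixed children
print(soup.p.get_text())  # Hello world -- all descendant text joined
```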
Search document tree
find_all( name , attrs , recursive , text , **kwargs )
find_all searches all of a node's descendant nodes for those matching the filter conditions.
1. The name parameter is a tag name, such as p, a, title, and so on.
soup.find_all("p") finds all p tags and returns them as a list; you can iterate over it to get each node, as follows:
ps = soup.find_all("p")
for p in ps:
    print(p.get('class'))  # get the class attribute under each p tag
Passing in a regular expression: soup.find_all(re.compile(r'^b')) finds all tags whose names start with b; here both the body and b tags will be found.
Passing in a list: if you pass a list parameter, BeautifulSoup returns content matching any element in the list. The following code finds all <a> tags and all <b> tags in the document:
soup.find_all(["a", "b"])
2. Keyword arguments pass an attribute and its corresponding value, or some other expression.
soup.find_all(id='link2') searches for all tags whose id attribute is link2. Passing a regular expression, soup.find_all(href=re.compile("elsie")), finds all tags whose href attribute matches the regular expression.
Passing multiple values: soup.find_all(id='link2', class_='title') finds tags satisfying both attributes. Note that class must be passed as class_, because class is a keyword in Python.
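These keyword filters can be sketched on a couple of hypothetical links (self-contained, using html.parser):

```python
import re
from bs4 import BeautifulSoup

html = ('<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>'
        '<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>')
soup = BeautifulSoup(html, "html.parser")

print(soup.find_all(id="link2"))                # only the second link matches
print(soup.find_all(href=re.compile("elsie")))  # regex against an attribute value
print(soup.find_all("a", class_="sister"))      # class_ sidesteps the keyword clash
```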
Some attributes cannot be searched directly in this way, such as the data-* attributes in HTML5, but you can pass a dictionary via the attrs parameter to search for tags with such special attributes, as follows:
data_soup.find_all(attrs={"data-foo": "value"})
# [<p data-foo="value">foo!</p>]
# Note that attrs can search not only special attributes but ordinary ones as well:
soup.find_all("p", attrs={'class': 'title', 'id': 'value'})
# equivalent to soup.find_all('p', class_='title', id='value')
3. The text parameter: text can be used to search for string content in the document. Like the name parameter, text accepts a string, a regular expression, a list, or True.
soup.find_all(text="Elsie")
# [u'Elsie']
soup.find_all(text=["Tillie", "Elsie", "Lacie"])
# [u'Elsie', u'Lacie', u'Tillie']
soup.find_all(text=re.compile("Dormouse"))
# [u"The Dormouse's story", u"The Dormouse's story"]
4. The limit parameter: find_all() returns all matching results, so searching a large document tree can be slow. If you don't need every result, the limit parameter caps how many are returned. It works like the LIMIT keyword in SQL: once the number of results reaches the limit, the search stops and the results are returned.
The document tree contains 3 tags matching this search, but only 2 are returned because we limited the count:
soup.find_all("a", limit=2)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
5. The recursive parameter: when you call a tag's find_all() method, BeautifulSoup searches all of the tag's descendants. If you only want to search the tag's direct children, pass recursive=False.
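The effect of recursive=False can be sketched with nested tags (hypothetical markup, html.parser):

```python
from bs4 import BeautifulSoup

html = "<div><p>direct child</p><section><p>grandchild</p></section></div>"
soup = BeautifulSoup(html, "html.parser")

div = soup.div
print(len(div.find_all("p")))                   # 2: all descendants are searched
print(len(div.find_all("p", recursive=False)))  # 1: only direct children
```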
find( name , attrs , recursive , text , **kwargs )
Its only difference from find_all() is that find_all() returns a list (even when it contains just one element), while find() returns the first matching element directly; it is not a list, so no iteration is needed, e.g. soup.find("p").get("class").
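A short sketch of the find / find_all contrast (hypothetical markup, html.parser):

```python
from bs4 import BeautifulSoup

html = '<p class="title">first</p><p class="story">second</p>'
soup = BeautifulSoup(html, "html.parser")

print(soup.find_all("p"))       # a list holding both <p> tags
print(soup.find("p"))           # the first <p> tag itself, not a list
print(soup.find("p")["class"])  # ['title'] -- usable directly, no indexing
print(soup.find("nosuchtag"))   # None when nothing matches
```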
CSS selectors
When we write CSS, tag names are left unadorned, class names are prefixed with a dot, and id names with #. We can filter elements here in the same fashion using soup.select(), which returns a list.
Finding by tag name
print(soup.select('title'))
# [<title>The Dormouse's story</title>]
print(soup.select('a'))
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
Finding by class name
print(soup.select('.sister'))
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
Finding by id
print(soup.select('#link1'))
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
Combined lookups
Anyone who has learned CSS knows its selectors: for example, p #link1 finds the tag whose id attribute is link1 underneath a p tag.
print(soup.select('p #link1'))  # find the tag with id link1 inside a p tag
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
print(soup.select("head > title"))  # find direct child tags
# [<title>The Dormouse's story</title>]
Attribute lookups
You can also add attributes to a lookup; attributes must be enclosed in square brackets. Note that an attribute and its tag belong to the same node, so there must be no space between them, otherwise nothing will match.
print(soup.select('a[class="sister"]'))
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
print(soup.select('a[href="http://example.com/elsie"]'))
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
Likewise, attribute selectors can be combined with the lookup styles above: parts that refer to different nodes are separated by spaces, while parts on the same node are written without spaces:
print(soup.select('p a[href="http://example.com/elsie"]'))
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
All of the select calls above return lists, which you can iterate over, then use get_text() to retrieve the content:
soup = BeautifulSoup(html, 'lxml')
print(type(soup.select('title')))
print(soup.select('title')[0].get_text())
for title in soup.select('title'):
    print(title.get_text())
Modifying the document tree
Beautiful Soup's strength is searching the document tree, but it can also modify the tree conveniently. Admittedly this is of little use to most crawlers, which only scrape page content and have no need to change the page source.
Modifying a tag's name and attributes
html = """<p><a href='#'>Modify the document tree</a></p>"""
soup = BeautifulSoup(html, 'lxml')
tag = soup.a              # get the a tag; print(tag.name) outputs the tag's name
tag['class'] = 'content'  # modify the a tag's class attribute
tag['p'] = 'nav'          # and its p attribute
Modifying .string
Note: if the tag contains nested descendant tags, assigning to .string directly will overwrite all of those descendants.
html = """<p><a href='#'>Modify the document tree</a></p>"""
soup = BeautifulSoup(html, 'lxml')
tag = soup.a
tag.string = 'blog'  # replaces "Modify the document tree" with the new content
print(tag)
soup.p.string = 'blog'  # modifying the p tag's content overwrites the a tag, leaving only the new text
print(soup)
append
The append method appends text after a tag's existing text, just like the append method of a Python list.
html = """<p><a href='#'>Modify the document tree</a></p>"""
soup = BeautifulSoup(html, 'lxml')
soup.a.append("blog")  # append text after the a tag's text, which becomes "Modify the document tree" followed by "blog"
print(soup)
print(soup.a.contents)  # output the a tag's contents: necessarily a list with two elements
Note that append can also insert a new tag after the text, as discussed next.
new_tag
Anyone who has learned JavaScript knows how to create a new element; the approach here is much the same, using new_tag:
html = """<p></p>"""
soup = BeautifulSoup(html, 'lxml')
tag = soup.p
new_tag = soup.new_tag('a')  # create a new a tag
new_tag['href'] = '#'        # add an attribute
new_tag.string = 'blog'      # add text
print(new_tag)
tag.append(new_tag)          # append the newly created tag inside the p tag
print(tag)
insert
Tag.insert() is similar to Tag.append(); the difference is that it does not append the new element to the end of the parent's .contents but inserts it at the specified position:
html = """<p></p>"""
soup = BeautifulSoup(html, 'lxml')
tag = soup.p
new_tag = soup.new_tag('a')
new_tag['href'] = '#'
new_tag.string = 'blog'
tag.append("Welcome to ")  # insert text into the p tag; its index in .contents is 0
tag.insert(1, new_tag)     # insert the new tag at index 1 of .contents; with 0, the a tag would come before the text
print(tag)
insert_before() 和 insert_after()
insert_before() inserts content before the current tag or text node; insert_after() inserts content after it:
soup = BeautifulSoup("<b>stop</b>", 'lxml')
tag = soup.new_tag("i")
tag.string = "Don't"
soup.b.string.insert_before(tag)
soup.b
# <b><i>Don't</i>stop</b>
soup.b.i.insert_after(soup.new_string(" ever "))
soup.b
# <b><i>Don't</i> ever stop</b>
soup.b.contents
# [<i>Don't</i>, u' ever ', u'stop']
clear
clear removes all of the current node's content, including descendant tags and text:
html = """<p></p>"""
soup = BeautifulSoup(html, 'lxml')
tag = soup.p
new_tag = soup.new_tag('a')
new_tag['href'] = '#'
new_tag.string = 'blog'
tag.append("Welcome to ")
tag.insert(1, new_tag)
tag.clear()  # this removes all of the tag's contents
print(tag)
The above is the detailed content of this introduction to BeautifulSoup in Python (with code). For more information, please see the related articles on the PHP Chinese website.