How to use Python for XPath, JsonPath, and bs4?


1. XPath

1.1 Using XPath

  • Install the XPath plugin in Chrome first; press ctrl + shift + x and a small black query box appears
  • Install the lxml library: pip install lxml -i https://pypi.douban.com/simple
  • Import etree: from lxml import etree
  • etree.parse() parses a local file: html_tree = etree.parse('XX.html')
  • etree.HTML() parses a server response: html_tree = etree.HTML(response.read().decode('utf-8'))
  • Query with html_tree.xpath(xpath_expression)
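A minimal sketch of the two entry points (the file name page.html is a placeholder, and passing etree.HTMLParser() is an extra safeguard for loose HTML, not part of the original steps):

import urllib.request
from lxml import etree

# Local file: etree.parse() reads an HTML file from disk.
# Pass an HTMLParser so loose HTML does not trip the default XML parser.
local_tree = etree.parse('page.html', etree.HTMLParser())
print(local_tree.xpath('//title/text()'))

# Server response: decode the bytes, then parse the string with etree.HTML().
response = urllib.request.urlopen('http://www.baidu.com')
remote_tree = etree.HTML(response.read().decode('utf-8'))
print(remote_tree.xpath('//title/text()'))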

1.2 Basic XPath syntax

1. Path queries

  • //: finds all descendant nodes, regardless of the hierarchy
  • /: finds direct child nodes

2. Predicate queries

//div[@id]
//div[@id="maincontent"]

3. Attribute queries

//@class

4. Fuzzy queries

//div[contains(@id, "he")]
//div[starts-with(@id, "he")]

5. Content queries

//div/h2/text()

6. Logical operators

//div[@id="head" and @class="s_down"]
//title | //price

1.3 Example

The local file xpath.html below is used to demonstrate these queries:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8"/>
    <title>Title</title>
</head>
<body>
    <ul>
        <li id="l1" class="class1">北京</li>
        <li id="l2" class="class2">上海</li>
        <li id="d1">广州</li>
        <li>深圳</li>
    </ul>
</body>
</html>

Parsing xpath.html with lxml:

from lxml import etree

# xpath parsing:
# local file:                                      etree.parse
# data from the server response  response.read().decode('utf-8')  etree.HTML()


tree = etree.parse('xpath.html')

# find the li elements under ul
li_list = tree.xpath('//body/ul/li')
print(len(li_list))  # 4

# get the text content of the tags
li_list = tree.xpath('//body/ul/li/text()')
print(li_list)  # ['北京', '上海', '广州', '深圳']

# get the li tags that have an id attribute
li_list = tree.xpath('//ul/li[@id]')
print(len(li_list))  # 3

# get the content of the tag whose id is l1
li_list = tree.xpath('//ul/li[@id="l1"]/text()')
print(li_list)  # ['北京']

# get the class attribute value of the tag whose id is l1
c1 = tree.xpath('//ul/li[@id="l1"]/@class')
print(c1)  # ['class1']

# get the tags whose id contains "l"
li_list = tree.xpath('//ul/li[contains(@id, "l")]/text()')
print(li_list)  # ['北京', '上海']
# get the tags whose id starts with "d"
li_list = tree.xpath('//ul/li[starts-with(@id,"d")]/text()')
print(li_list)  # ['广州']
# get the tag whose id is l2 and whose class is class2
li_list = tree.xpath('//ul/li[@id="l2" and @class="class2"]/text()')
print(li_list)  # ['上海']
# get the tags whose id is l2 or whose id is d1
li_list = tree.xpath('//ul/li[@id="l2"]/text() | //ul/li[@id="d1"]/text()')
print(li_list)  # ['上海', '广州']
1.4 Crawling the value of the Baidu search button

import urllib.request
from lxml import etree
url = 'http://www.baidu.com'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
}
request = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(request)
content = response.read().decode('utf-8')
tree = etree.HTML(content)
value = tree.xpath('//input[@id="su"]/@value')
print(value)
1.5 Crawling images from the webmaster material site

# Goal: download the images from the first ten pages
# page 1: https://sc.chinaz.com/tupian/qinglvtupian.html
# page n: https://sc.chinaz.com/tupian/qinglvtupian_n.html
import urllib.request
from lxml import etree
def create_request(page):
    if page == 1:
        url = 'https://sc.chinaz.com/tupian/qinglvtupian.html'
    else:
        url = 'https://sc.chinaz.com/tupian/qinglvtupian_' + str(page) + '.html'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
    }
    request = urllib.request.Request(url=url, headers=headers)
    return request
def get_content(request):
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    return content
def down_load(content):
    # download the images
    # urllib.request.urlretrieve(image_url, filename)
    tree = etree.HTML(content)
    name_list = tree.xpath('//div[@id="container"]//a/img/@alt')
    # sites that serve images usually lazy-load them,
    # so the real address lives in src2 rather than src
    src_list = tree.xpath('//div[@id="container"]//a/img/@src2')
    print(src_list)
    for i in range(len(name_list)):
        name = name_list[i]
        src = src_list[i]
        url = 'https:' + src
        urllib.request.urlretrieve(url=url, filename='./loveImg/' + name + '.jpg')
if __name__ == '__main__':
    start_page = int(input('Enter the start page: '))
    end_page = int(input('Enter the end page: '))

    for page in range(start_page, end_page + 1):
        # (1) build the request object
        request = create_request(page)
        # (2) fetch the page source
        content = get_content(request)
        # (3) download
        down_load(content)

2. JsonPath

2.1 pip installation

pip install jsonpath

JsonPath operates on a local JSON document: load it with json.load(), then query the resulting object:

obj = json.load(open('file.json', 'r', encoding='utf-8'))
ret = jsonpath.jsonpath(obj, 'jsonpath expression')

2.2 Using jsonpath

Comparison of JSONPath syntax elements with the corresponding XPath elements (the standard JSONPath mapping):

XPath    JSONPath            Description
/        $                   root object/element
.        @                   current object/element
/        . or []             child operator
//       ..                  recursive descent
*        *                   wildcard, all elements
[]       []                  subscript operator
|        [,]                 union operator
n/a      [start:end:step]    array slice
[]       ?()                 filter expression
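To make the mapping concrete, here is a tiny sketch on an inline structure (the data is illustrative, not the tutorial's jsonpath.json):

import jsonpath

# illustrative data, not the tutorial's jsonpath.json
data = {'store': {'book': [{'author': 'A'}, {'author': 'B'}]}}

# XPath //author corresponds to JSONPath $..author
print(jsonpath.jsonpath(data, '$..author'))  # ['A', 'B']

# XPath /store/book[1]/author corresponds to JSONPath $.store.book[0].author
print(jsonpath.jsonpath(data, '$.store.book[0].author'))  # ['A']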

Example, using the jsonpath.json file below:
{
  "store": {
    "book": [
      { "category": "修真",
        "author": "六道",
        "title": "坏蛋是怎样练成的",
        "price": 8.95
      },
      { "category": "修真",
        "author": "天蚕土豆",
        "title": "斗破苍穹",
        "price": 12.99
      },
      { "category": "修真",
        "author": "唐家三少",
        "title": "斗罗大陆",
        "isbn": "0-553-21311-3",
        "price": 8.99
      },
      { "category": "修真",
        "author": "南派三叔",
        "title": "星辰变",
        "isbn": "0-395-19395-8",
        "price": 22.99
      }
    ],
    "bicycle": {
      "author": "老马",
      "color": "黑色",
      "price": 19.95
    }
  }
}
Querying jsonpath.json:

import json
import jsonpath

obj = json.load(open('jsonpath.json', 'r', encoding='utf-8'))

# the authors of all books in the store
author_list = jsonpath.jsonpath(obj, '$.store.book[*].author')
print(author_list)  # ['六道', '天蚕土豆', '唐家三少', '南派三叔']

# all authors
author_list = jsonpath.jsonpath(obj, '$..author')
print(author_list)  # ['六道', '天蚕土豆', '唐家三少', '南派三叔', '老马']

# all elements under store
tag_list = jsonpath.jsonpath(obj, '$.store.*')
print(tag_list)  # [[{'category': '修真', 'author': '六道', 'title': '坏蛋是怎样练成的', 'price': 8.95}, {'category': '修真', 'author': '天蚕土豆', 'title': '斗破苍穹', 'price': 12.99}, {'category': '修真', 'author': '唐家三少', 'title': '斗罗大陆', 'isbn': '0-553-21311-3', 'price': 8.99}, {'category': '修真', 'author': '南派三叔', 'title': '星辰变', 'isbn': '0-395-19395-8', 'price': 22.99}], {'author': '老马', 'color': '黑色', 'price': 19.95}]

# the price of everything in the store
price_list = jsonpath.jsonpath(obj, '$.store..price')
print(price_list)  # [8.95, 12.99, 8.99, 22.99, 19.95]

# the third book
book = jsonpath.jsonpath(obj, '$..book[2]')
print(book)  # [{'category': '修真', 'author': '唐家三少', 'title': '斗罗大陆', 'isbn': '0-553-21311-3', 'price': 8.99}]

# the last book
book = jsonpath.jsonpath(obj, '$..book[(@.length-1)]')
print(book)  # [{'category': '修真', 'author': '南派三叔', 'title': '星辰变', 'isbn': '0-395-19395-8', 'price': 22.99}]
# the first two books
book_list = jsonpath.jsonpath(obj, '$..book[0,1]')
# book_list = jsonpath.jsonpath(obj, '$..book[:2]')
print(book_list)  # [{'category': '修真', 'author': '六道', 'title': '坏蛋是怎样练成的', 'price': 8.95}, {'category': '修真', 'author': '天蚕土豆', 'title': '斗破苍穹', 'price': 12.99}]

# condition filters need a ? in front of the ()
# filter out all books that have an isbn
book_list = jsonpath.jsonpath(obj, '$..book[?(@.isbn)]')
print(book_list)  # [{'category': '修真', 'author': '唐家三少', 'title': '斗罗大陆', 'isbn': '0-553-21311-3', 'price': 8.99}, {'category': '修真', 'author': '南派三叔', 'title': '星辰变', 'isbn': '0-395-19395-8', 'price': 22.99}]
# which books cost more than 10 yuan
book_list = jsonpath.jsonpath(obj, '$..book[?(@.price>10)]')
print(book_list)  # [{'category': '修真', 'author': '天蚕土豆', 'title': '斗破苍穹', 'price': 12.99}, {'category': '修真', 'author': '南派三叔', 'title': '星辰变', 'isbn': '0-395-19395-8', 'price': 22.99}]

3. bs4

3.1 Basic introduction

BeautifulSoup, bs4 for short, is a Python library for extracting data from HTML and XML documents.

3.2 Installation and creation

1. Install: pip install bs4
2. Import: from bs4 import BeautifulSoup
3. Create the object:
   from a server response: soup = BeautifulSoup(response.read().decode(), 'lxml')
   from a local file: soup = BeautifulSoup(open('1.html'), 'lxml')

Note: the default encoding for opening a file is gbk, so specify utf-8 when opening it.
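A minimal sketch of both creation modes (the inline HTML string is an illustrative stand-in for a real response):

from bs4 import BeautifulSoup

# from a string, standing in for response.read().decode()
soup = BeautifulSoup('<ul><li id="l1">北京</li><li>上海</li></ul>', 'lxml')
print(soup.li)  # <li id="l1">北京</li>

# from a local file: pass encoding='utf-8' explicitly, otherwise the
# platform default (gbk on Chinese Windows) can garble the text
# soup = BeautifulSoup(open('bs4.html', encoding='utf-8'), 'lxml')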
3.3 Node positioning

1. Find nodes by tag name
	soup.a  [note] only finds the first a
		soup.a.name
		soup.a.attrs
2. Functions
	(1) .find (returns one object)
		find('a'): only finds the first a tag
		find('a', title='name')
		find('a', class_='name')
	(2) .find_all (returns a list)
		find_all('a')  finds all the a tags
		find_all(['a', 'span'])  returns all the a and span tags
		find_all('a', limit=2)  only finds the first two a tags
	(3) .select (returns node objects via a selector) [recommended]
		1. element
			e.g. p
		2. .class
			e.g. .firstname
		3. #id
			e.g. #firstname
		4. attribute selectors
			[attribute]
				e.g. li = soup.select('li[class]')
			[attribute=value]
				e.g. li = soup.select('li[class="hengheng1"]')
		5. hierarchy selectors
			element element
				div p
			element>element
				div>p
			element,element
				div,p
					e.g. soup = soup.select('a,span')
3.5 Node information

(1) Getting node content: relevant when other tags are nested inside the tag
	obj.string
	obj.get_text()  [recommended]
(2) Node attributes
	tag.name  gets the tag name
		e.g. tag = find('li')
			print(tag.name)
	tag.attrs  returns the attribute values as a dictionary
(3) Getting node attributes
	obj.attrs.get('title')  [common]
	obj.get('title')
	obj['title']
3.6 Usage example

bs4.html:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>

    <div>
        <ul>
            <li id="l1">张三</li>
            <li id="l2">李四</li>
            <li>王五</li>
            <a href="" id=" rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow" " class="a1">google</a>
            <span>嘿嘿嘿</span>
        </ul>
    </div>


    <a href="" title=" rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow"  rel="external nofollow" a2">百度</a>

    <div id="d1">
        <span>
            哈哈哈
        </span>
    </div>

    <p id="p1" class="p1">呵呵呵</p>
</body>
</html>
from bs4 import BeautifulSoup
# walk through bs4's basic syntax by parsing a local file
# the default encoding when opening a file is gbk, so specify the encoding
soup = BeautifulSoup(open('bs4.html', encoding='utf-8'), 'lxml')
# find a node by tag name
# returns the first piece of data that matches
print(soup.a)  # <a class="a1" href="" id="">google</a>
# get the tag's attributes and attribute values
print(soup.a.attrs)  # {'href': '', 'id': '', 'class': ['a1']}
# some bs4 functions
# (1) find
# returns the first piece of data that matches
print(soup.find('a'))  # <a class="a1" href="" id="">google</a>
# find the matching tag object by the value of title
print(soup.find('a', title="a2"))  # <a href="" title="a2">百度</a>

# find the matching tag object by the value of class; note that class needs a trailing underscore
print(soup.find('a', class_="a1"))  # <a class="a1" href="" id="">google</a>

# (2) find_all  returns a list containing all the a tags
print(soup.find_all('a'))  # [<a class="a1" href="" id="">google</a>, <a href="" title="a2">百度</a>]

# to get data for several kinds of tags, pass a list to find_all
print(soup.find_all(['a', 'span']))  # [<a class="a1" href="" id="">google</a>, <span>嘿嘿嘿</span>, <a href="" title="a2">百度</a>, <span>哈哈哈</span>]

# limit restricts the search to the first few results
print(soup.find_all('li', limit=2))  # [<li id="l1">张三</li>, <li id="l2">李四</li>]

# (3) select (recommended)
# select returns a list and returns every match
print(soup.select('a'))  # [<a class="a1" href="" id="">google</a>, <a href="" title="a2">百度</a>]

# . stands for class; this is called a class selector
print(soup.select('.a1'))  # [<a class="a1" href="" id="">google</a>]

print(soup.select('#l1'))  # [<li id="l1">张三</li>]

# attribute selectors --- find the matching tags through their attributes
# find the li tags that have an id
print(soup.select('li[id]'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>]

# find the li tag whose id is l2
print(soup.select('li[id="l2"]'))  # [<li id="l2">李四</li>]

# hierarchy selectors
#   descendant selector
# finds the li elements under the div
print(soup.select('div li'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>]

# child selector
#   the first-level children of a tag
# note: in many programming languages omitting the spaces around > would fail,
# but bs4 accepts the selector with or without them
print(soup.select('div > ul > li'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>]

# find all the a tags and li tags
print(soup.select('a,li'))  # [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>, <a class="a1" href="" id="">google</a>, <a href="" title="a2">百度</a>]

# node information
#   getting node content
obj = soup.select('#d1')[0]
# if the tag object contains only text, both string and get_text() work
# if the tag object contains other tags besides text, string gets no data while get_text() still does
# in general, get_text() is recommended
print(obj.string)  # None
print(obj.get_text())  # 哈哈哈

# node attributes
obj = soup.select('#p1')[0]
# name is the tag's name
print(obj.name)  # p
# attrs returns the attribute values as a dictionary
print(obj.attrs)  # {'id': 'p1', 'class': ['p1']}

# getting node attributes
obj = soup.select('#p1')[0]
print(obj.attrs.get('class'))  # ['p1']
print(obj.get('class'))  # ['p1']
print(obj['class'])  # ['p1']

3.9 Example: crawling the Starbucks menu

import urllib.request
url = 'https://www.starbucks.com.cn/menu/'
response = urllib.request.urlopen(url)
content = response.read().decode('utf-8')
from bs4 import BeautifulSoup
soup = BeautifulSoup(content, 'lxml')
# //ul[@class="grid padded-3 product"]//strong/text()
# usually you first work out the xpath expression with the Chrome plugin,
# then translate it into a CSS selector
name_list = soup.select('ul[class="grid padded-3 product"] strong')
for name in name_list:
    print(name.get_text())
