
Basic usage of XPath for Python web scraping, explained in detail

不言
Published: 2018-04-27 11:01:56

This article introduces the basic usage of XPath for Python web scraping. It is shared here for your reference; let's take a look.

I. Introduction

XPath is a language for finding information in XML documents. It can be used to navigate through elements and attributes in an XML document. XPath is a major component of the W3C XSLT standard, and both XQuery and XPointer are built on XPath expressions.

II. Installation

pip3 install lxml
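
To confirm that the installation worked, a minimal check (assuming a standard Python 3 environment) is to import lxml and print its version:

# Prints the installed lxml version as a tuple, e.g. (4, 9, 3, 0)
from lxml import etree
print(etree.LXML_VERSION)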

III. Usage

1. Import

from lxml import etree

2. Basic usage

from lxml import etree
wb_data = """
    <div>
      <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html">third item</a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a>
       </ul>
     </div>
    """
html = etree.HTML(wb_data)
print(html)
result = etree.tostring(html)
print(result.decode("utf-8"))

As the output below shows, the html we printed is just a Python Element object, while etree.tostring(html) serializes it back to complete HTML, filling in the tags that were missing.

<Element html at 0x39e58f0>
<html><body><div>
      <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html">third item</a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a>
       </li></ul>
     </div>
    </body></html>

3. Get the content of a tag (basic usage). Note: when getting all the content of the a tags, do not add another forward slash after a, otherwise an error is raised.

Approach 1

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a')
print(html)
for i in html_data:
    print(i.text)

<Element html at 0x12fe4b8>
first item
second item
third item
fourth item
fifth item

Approach 2 (simply append /text() to the tag whose content you want)

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a/text()')
print(html)
for i in html_data:
    print(i)

<Element html at 0x138e4b8>
first item
second item
third item
fourth item
fifth item
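
One caveat: the two approaches are not always interchangeable. An Element's .text attribute only returns the text that appears before its first child element, whereas /text() selects text nodes, so the results differ once a tag has nested children. A minimal sketch:

from lxml import etree
node = etree.HTML('<div><a href="x.html">second item<span>vv</span></a></div>')
print(node.xpath('//a')[0].text)   # second item  (only the text before the nested <span>)
print(node.xpath('//a/text()'))    # ['second item']  (direct text nodes of <a>)
print(node.xpath('//a//text()'))   # ['second item', 'vv']  (all descendant text nodes)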

4. Open and read an HTML file

# Use parse to open an HTML file
html = etree.parse('test.html')
html_data = html.xpath('//*')  # the result is a list, so iterate over it
print(html_data)
for i in html_data:
    print(i.text)

html = etree.parse('test.html')
html_data = etree.tostring(html, pretty_print=True)
res = html_data.decode('utf-8')
print(res)

Output:

<div>
   <ul>
     <li class="item-0"><a href="link1.html">first item</a></li>
     <li class="item-1"><a href="link2.html">second item</a></li>
     <li class="item-inactive"><a href="link3.html">third item</a></li>
     <li class="item-1"><a href="link4.html">fourth item</a></li>
     <li class="item-0"><a href="link5.html">fifth item</a></li>
   </ul>
</div>
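
One caveat: etree.parse() uses an XML parser by default, so it only succeeds if test.html is well-formed. If the file is ordinary, possibly malformed HTML, you can pass an explicit HTML parser instead (a minimal sketch, assuming the same test.html as above):

from lxml import etree
# The lenient HTML parser tolerates missing or unclosed tags.
html = etree.parse('test.html', etree.HTMLParser())
print(etree.tostring(html, pretty_print=True).decode('utf-8'))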

5. Print the attributes of the a tags under a specified path (by iterating you can get the value of an attribute and then locate the tag content you need)

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a/@href')
for i in html_data:
    print(i)

Output:

link1.html
link2.html
link3.html
link4.html
link5.html

6. As we have seen, what xpath returns are Element objects, so if you need the content you still have to iterate over the returned list.

Find, via an absolute path, the content of the a tag whose href attribute equals link2.html.

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a[@href="link2.html"]/text()')
print(html_data)
for i in html_data:
    print(i)

Output:

['second item']
second item

7. All the queries above used absolute paths (each one starts from the root). Now let's use a relative path, for example to find the content of the a tags under all li tags.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a/text()')
print(html_data)
for i in html_data:
    print(i)

Output:

['first item', 'second item', 'third item', 'fourth item', 'fifth item']
first item
second item
third item
fourth item
fifth item

8. Above we used an absolute path (starting with /) to look up the href attribute values of all the a tags. Now let's do the same with a relative path: find the href attribute values of the a tags under the li tags. The expression below puts a double slash (//) after the a tag; note that a single /@href works just as well here.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a//@href')
print(html_data)
for i in html_data:
    print(i)

Output:

['link1.html', 'link2.html', 'link3.html', 'link4.html', 'link5.html']
link1.html
link2.html
link3.html
link4.html
link5.html

9. Looking up a specific attribute with a relative path works the same way as with an absolute path; you could even say it is identical.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a[@href="link2.html"]')
print(html_data)
for i in html_data:
    print(i.text)

Output:

[<Element a at 0x...>]
second item

10. Find the content of the a tag inside the last li tag

html = etree.HTML(wb_data)
html_data = html.xpath('//li[last()]/a/text()')
print(html_data)
for i in html_data:
    print(i)

Output:

['fifth item']
fifth item

11. Find the content of the a tag inside the second-to-last li tag

html = etree.HTML(wb_data)
html_data = html.xpath('//li[last()-1]/a/text()')
print(html_data)
for i in html_data:
    print(i)

Output:

['fourth item']
fourth item
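
If what you want is the href attribute of those same elements rather than their text, only the last step of the expression changes (a minimal sketch against the same wb_data):

html = etree.HTML(wb_data)
print(html.xpath('//li[last()]/a/@href'))     # ['link5.html']
print(html.xpath('//li[last()-1]/a/@href'))   # ['link4.html']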

12. If you need the XPath of a particular tag on a page, you can copy it from the browser's developer tools (the original article showed a screenshot at this point), which yields something like:

//*[@id="kw"]

Explanation: a relative path that searches all tags and selects those whose id attribute equals kw.
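
As an illustration of how such a copied expression could be used (the URL below is only an assumption for the sketch; kw is the id from the expression above):

import requests
from lxml import etree

# Fetch a page and apply the copied expression; the result is a list of matching Elements.
resp = requests.get('https://www.baidu.com')   # assumed example URL
node = etree.HTML(resp.content)
print(node.xpath('//*[@id="kw"]'))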

Commonly used patterns (Scrapy selectors)

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse
html = """<!DOCTYPE html>
<html>
  <head lang="en">
    <meta charset="UTF-8">
    <title></title>
  </head>
  <body>
    <ul>
      <li class="item-"><a id='i1' href="link.html">first item</a></li>
      <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
      <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
    </ul>
    <p><a href="llink2.html">second item</a></p>
  </body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath(r'//a[re:test(@id, "i\d+")]')
# print(hxs)
# hxs = Selector(response=response).xpath(r'//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath(r'//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)

# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#   v = item.xpath('./a/span')
#   # or
#   # v = item.xpath('a/span')
#   # or
#   # v = item.xpath('*/a/span')
#   print(v)
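
To actually run one of the queries above, uncomment it; for example, extracting the href values with the response defined in the block (a minimal sketch):

hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract()
print(hxs)  # expected: ['link.html', 'llink.html', 'llink2.html']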
