
Crawler parsing method five: XPath

Jun 05, 2019, 03:36 PM
python xpath crawler

Many languages can be used to write crawlers, but Python-based crawlers are more concise and convenient, and crawling has become an essential part of the Python language. There are also many ways to parse the pages a crawler fetches. The previous article covered the fourth parsing method, PyQuery. Today I bring you another method: XPath.


The basic use of XPath for Python crawlers

1. Introduction

XPath is a language for finding information in XML documents. XPath can be used to traverse elements and attributes in XML documents. XPath is a major element of the W3C XSLT standard, and both XQuery and XPointer are built on XPath expressions.
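
Before diving in, here is a minimal sketch (not from the original article) of the XPath constructs this tutorial relies on: / selects from the root (an absolute path), // matches anywhere in the document (a relative path), @ selects an attribute, text() selects a node's text, and square brackets hold predicates such as [@class="item-0"] or [last()].

from lxml import etree

doc = etree.HTML('<div><p class="a">hello</p><p class="b">world</p></div>')
print(doc.xpath('//p/text()'))              # ['hello', 'world']  - any p in the document
print(doc.xpath('//p[@class="b"]/text()'))  # ['world']           - predicate on an attribute
print(doc.xpath('//p[last()]/@class'))      # ['b']               - attribute of the last p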


2. Installation

pip3 install lxml
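
A quick way to confirm the installation worked (a minimal sketch, not part of the original article):

from lxml import etree

print(etree.__version__)      # installed lxml version string
print(etree.LXML_VERSION)     # the same version as a tuple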


3. Use

1. Import

from lxml import etree

2. Basic usage

from lxml import etree
wb_data = """
        <div>
            <ul>
                 <li class="item-0"><a href="link1.html">first item</a></li>
                 <li class="item-1"><a href="link2.html">second item</a></li>
                 <li class="item-inactive"><a href="link3.html">third item</a></li>
                 <li class="item-1"><a href="link4.html">fourth item</a></li>
                 <li class="item-0"><a href="link5.html">fifth item</a>
             </ul>
         </div>
        """
html = etree.HTML(wb_data)
print(html)
result = etree.tostring(html)
print(result.decode("utf-8"))

From the results below, we can see that the printed html is actually a Python object, and etree.tostring(html) is the basic way to write that incomplete HTML back out as a string: it fills in the tags that were missing arms and legs (note how the unclosed li is closed for us).

<Element html at 0x39e58f0>
<html><body><div>
            <ul>
                 <li class="item-0"><a href="link1.html">first item</a></li>
                 <li class="item-1"><a href="link2.html">second item</a></li>
                 <li class="item-inactive"><a href="link3.html">third item</a></li>
                 <li class="item-1"><a href="link4.html">fourth item</a></li>
                 <li class="item-0"><a href="link5.html">fifth item</a>
             </li></ul>
         </div>
        </body></html>

3. Get the content of a certain tag (basic use). Note that to get all the content of the a tags, you should not add a trailing slash after a, otherwise an error will be reported (as illustrated in the short sketch below).
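
A quick hedged illustration of that caveat (not from the original article), using the wb_data document defined in the basic-usage example above — a trailing slash leaves the expression incomplete, so lxml rejects it:

html = etree.HTML(wb_data)
try:
    html.xpath('/html/body/div/ul/li/a/')   # trailing slash after a: invalid expression
except etree.XPathEvalError as err:
    print('XPath error:', err)              # lxml raises XPathEvalError: Invalid expression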

Writing method one

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a')
print(html)
for i in html_data:
    print(i.text)

<Element html at 0x12fe4b8>
first item
second item
third item
fourth item
fifth item

Writing method two (just add /text() after the tag whose content you want)

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a/text()')
print(html)
for i in html_data:
    print(i)

<Element html at 0x138e4b8>
first item
second item
third item
fourth item
fifth item

4. Open and read an HTML file

# Use etree.parse to open an HTML file
html = etree.parse('test.html')
html_data = html.xpath('//*')
# The result is a list, so it needs to be traversed
print(html_data)
for i in html_data:
    print(i.text)

html = etree.parse('test.html')
html_data = etree.tostring(html, pretty_print=True)
res = html_data.decode('utf-8')
print(res)

Print:

<div>
     <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html">third item</a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
</div>
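
One caveat worth hedging here: etree.parse() uses an XML parser by default, so it will raise an error if test.html is not well-formed XML. A minimal sketch (assuming the same local test.html file) that parses the file leniently with lxml's HTMLParser:

from lxml import etree

parser = etree.HTMLParser()               # tolerant HTML parser; completes missing tags
html = etree.parse('test.html', parser)   # returns an ElementTree
print(etree.tostring(html, pretty_print=True).decode('utf-8'))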

5. Print an attribute of the a tags under the specified path (by traversing the result you can get the value of a given attribute, here href, and locate the tag's content from it).

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a/@href')
for i in html_data:
    print(i)

Print:

link1.html
link2.html
link3.html
link4.html
link5.html

6. We have seen that xpath() gives us Element objects one by one, so if you need the content, you still have to traverse the list of results.

Find, via the absolute path, the content of the a tag whose href attribute equals link2.html.

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a[@href="link2.html"]/text()')
print(html_data)
for i in html_data:
    print(i)

Print:

['second item']

second item

7. Above we always used absolute paths (each search starts from the root). Below we use relative paths, for example to find the content of the a tags under all li tags.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a/text()')
print(html_data)
for i in html_data:
    print(i)

Print:

['first item', 'second item', 'third item', 'fourth item', 'fifth item']
first item
second item
third item
fourth item
fifth item

8. Above we used an absolute path (/) to find the href attribute values of all the a tags. Next we use a relative path to find the href attribute values of the a tags under the li tags. The example below writes a double // after the a tag, although //li/a/@href works just as well (see the short sketch after the output).

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a//@href')
print(html_data)
for i in html_data:
    print(i)

Print:

['link1.html', 'link2.html', 'link3.html', 'link4.html', 'link5.html']
link1.html
link2.html
link3.html
link4.html
link5.html
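
For comparison, the shorter expression //li/a/@href returns the same list for this document (a small sketch, not from the original article):

html = etree.HTML(wb_data)
print(html.xpath('//li/a/@href'))
# ['link1.html', 'link2.html', 'link3.html', 'link4.html', 'link5.html']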

9. Matching a specific attribute with a relative path works the same way as with an absolute path.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a[@href="link2.html"]')
print(html_data)
for i in html_data:
    print(i.text)

Print:

[<Element a at 0x216e468>]
second item

10. Find the content of the a tag inside the last li tag

html = etree.HTML(wb_data)
html_data = html.xpath('//li[last()]/a/text()')
print(html_data)
for i in html_data:
    print(i)

Print:

['fifth item']
fifth item

11. Find the content of the a tag inside the second-to-last li tag

html = etree.HTML(wb_data)
html_data = html.xpath('//li[last()-1]/a/text()')
print(html_data)
for i in html_data:
    print(i)

Print:

['fourth item']
fourth item

12. If you need the XPath of a particular tag on a page, you can copy it straight from the browser's developer tools (right-click the element and choose Copy XPath), for example:

  //*[@id="kw"]

Explanation: use a relative path to search all tags and match the one whose id attribute equals kw.
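
As a hedged sketch of how you might use such a copied expression (the URL is illustrative and an element with id="kw" is an assumption, not something from the original example):

import requests
from lxml import etree

page = requests.get('https://www.example.com')   # illustrative URL only
doc = etree.HTML(page.text)
print(doc.xpath('//*[@id="kw"]'))                # list of matching Element objects (may be empty)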


#!/usr/bin/env python
# -*- coding:utf-8 -*-
# The same kind of XPath selections, written with Scrapy's Selector.
from scrapy.selector import Selector, HtmlXPathSelector  # HtmlXPathSelector is legacy; recent Scrapy versions may no longer ship it, prefer Selector
from scrapy.http import HtmlResponse
html = """<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <ul>
            <li><a id='i1' href="link.html">first item</a></li>
            <li><a id='i2' href="llink.html">first item</a></li>
            <li><a href="llink2.html">second item<span>vv</span></a></li>
        </ul>
        <div><a href="llink2.html">second item</a></div>
    </body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)

# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)
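
In newer Scrapy versions you can also build a Selector directly from a string instead of constructing an HtmlResponse by hand. A minimal sketch, assuming the same html string as above (not from the original article):

from scrapy.selector import Selector

sel = Selector(text=html)                                   # parse the html string directly
print(sel.xpath('//a/@href').extract())                     # all href values
print(sel.xpath('//a[@id="i1"]/text()').extract_first())    # 'first item'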

The above is the detailed content of Crawler parsing method five: XPath. For more information, please follow other related articles on the PHP Chinese website!
