What libraries are used to write crawlers in Python?

A Python crawler (in full, a Python web crawler) is a program or script that automatically harvests information from the World Wide Web according to certain rules. It is commonly used to collect securities trading data, weather data, website user data, image data, and so on. Python offers a large number of libraries, both built in and third party, that support web crawling. They fall into several main categories, which are introduced below.

1. Python crawler network libraries

The main network libraries used by Python crawlers include: urllib, requests, grab, pycurl, urllib3, httplib2, RoboBrowser, MechanicalSoup, mechanize, socket, Unirest for Python, hyper, PySocks, treq, and aiohttp.
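As a minimal sketch of the most commonly used of these, requests, fetching a single page might look like this (the URL below is a placeholder, not something the article specifies):

```python
# Minimal page fetch with requests; the URL is a placeholder.
import requests

response = requests.get("https://example.com", timeout=10)
response.raise_for_status()   # raise an exception on 4xx/5xx responses
print(response.status_code)   # e.g. 200
print(response.text[:200])    # first 200 characters of the HTML body
```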

2. Python web crawler framework

The main Python web crawler frameworks include: grab, scrapy, pyspider, cola, portia, restkit, and demiurge.
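Of these, scrapy is the most widely used. A minimal spider might look like the following sketch; the target site, selectors, and item fields are illustrative assumptions, not something the article specifies:

```python
# A minimal Scrapy spider sketch. The target site, selectors, and
# item fields are illustrative assumptions, not from the article.
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # yield one item per quote block found on the page
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # follow the pagination link, if present
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Saved as, say, quotes_spider.py (a hypothetical file name), it can be run with scrapy runspider quotes_spider.py -o quotes.json, which writes the yielded items to a JSON file.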

3. HTML/XML parsers

● lxml: an efficient HTML/XML processing library written in C, with XPath support (see the sketch after this list).

● cssselect: parses CSS selectors and translates them into XPath expressions.

● pyquery: a jQuery-like API for querying HTML documents.

● BeautifulSoup: an HTML/XML processing library implemented in pure Python; slower than lxml, but very tolerant of malformed markup.

● html5lib: builds the DOM of HTML/XML documents according to the WHATWG specification, the same parsing rules all modern browsers use.

● feedparser: parses RSS/Atom feeds.

● MarkupSafe: provides safely escaped strings for XML/HTML/XHTML.

● xmltodict: a Python module that makes working with XML feel like working with JSON.

● xhtml2pdf: converts HTML/CSS to PDF.

● untangle: easily converts XML files into Python objects.
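To make the lxml/BeautifulSoup trade-off concrete, here is a small sketch that parses the same invented HTML snippet with both:

```python
# Parsing the same (invented) HTML snippet with lxml and BeautifulSoup.
from lxml import html
from bs4 import BeautifulSoup

page = "<html><body><h1>Title</h1><a href='/next'>Next</a></body></html>"

# lxml: C-backed and fast, queried with XPath
tree = html.fromstring(page)
print(tree.xpath("//h1/text()"))   # ['Title']
print(tree.xpath("//a/@href"))     # ['/next']

# BeautifulSoup: pure Python, very forgiving of broken markup
soup = BeautifulSoup(page, "html.parser")
print(soup.h1.get_text())          # Title
print(soup.a["href"])              # /next
```

lxml is the usual choice when speed and XPath matter; BeautifulSoup is often preferred when the input markup is badly broken.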

4. Text processing

Libraries for parsing and manipulating plain text; a difflib sketch follows this list.

● difflib: (Python standard library) computes differences between sequences and texts.

● Levenshtein: quickly computes Levenshtein distance and string similarity.

● fuzzywuzzy: fuzzy string matching.

● esmre: a regular-expression accelerator.

● ftfy: automatically fixes garbled (mojibake) Unicode text.
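Here is the difflib sketch promised above; since difflib is part of the standard library, it needs no installation (the example strings are invented):

```python
# Similarity and diff with difflib (Python standard library);
# the example strings are invented.
import difflib

a, b = "web crawler", "web scraper"
ratio = difflib.SequenceMatcher(None, a, b).ratio()
print(f"similarity: {ratio:.2f}")

# a unified diff between two one-line texts
for line in difflib.unified_diff(["hello world\n"], ["hello there\n"]):
    print(line, end="")
```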

5. Processing files in specific formats

Libraries for parsing and processing files in specific formats; a tablib sketch follows this list.

● tablib: a module that exports data to XLS, CSV, JSON, YAML, and other formats.

● textract: extracts text from various files, such as Word, PowerPoint, and PDF documents.

● messytables: a tool for parsing messy tabular data.

● rows: a common data interface that supports many formats (currently CSV, HTML, XLS, and TXT; more will be added!).
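As a small tablib sketch (the sample rows are invented, and the constructor arguments are an assumption based on tablib's documented API):

```python
# Exporting the same dataset to several formats with tablib;
# the sample rows are invented.
import tablib

data = tablib.Dataset(headers=["library", "category"])
data.append(["requests", "network"])
data.append(["scrapy", "framework"])

print(data.export("csv"))    # also supports "json", "yaml", "xls", ...
```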
