Web crawlers (also known as web spiders or web robots, and in the FOAF community more commonly called web page chasers) are programs or scripts that automatically fetch information from the World Wide Web according to certain rules. Now let's learn about some of the most popular Python crawler frameworks together.
1.Scrapy
Scrapy is an application framework written to crawl website data and extract structured data. It can be used in a wide range of programs, including data mining, information processing, and archiving historical data. With this framework, you can easily scrape data such as Amazon product information.
Project address: https://scrapy.org/
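For a quick taste, here is a minimal Scrapy spider in the spirit of the official tutorial. It targets quotes.toscrape.com (a public demo site) rather than Amazon, and the CSS selectors match that site's markup:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    """A minimal spider that extracts quotes and follows pagination."""
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Each quote block on the page contains the text and the author.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow the "next page" link, if one exists.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Save it as quotes_spider.py and run it with `scrapy runspider quotes_spider.py -o quotes.json` to get the structured results as JSON.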
2.PySpider
pyspider is a powerful web crawler system implemented in Python. It lets you write scripts, schedule jobs, and view crawl results in real time through a browser-based interface. The backend stores crawl results in commonly used databases, and it also supports scheduled tasks, task priorities, and more.
Project address: https://github.com/binux/pyspider
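The sketch below follows the default handler template that pyspider's web UI generates; the seed URL is a placeholder demo site:

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {}

    @every(minutes=24 * 60)           # re-crawl the seed URL once a day
    def on_start(self):
        self.crawl("http://quotes.toscrape.com/", callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)    # treat fetched pages as fresh for 10 days
    def index_page(self, response):
        # Queue every outgoing link for detail scraping.
        for each in response.doc("a[href^='http']").items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc("title").text(),
        }
```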
3.Crawley
Crawley crawls website content at high speed, supports both relational and non-relational databases, and can export data to JSON, XML, and other formats.
Project address: http://project.crawley-cloud.com/
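As a rough sketch of the crawler/scraper pattern shown in Crawley's own examples: the class and attribute names below follow its documentation, but the project has not been updated in some time, so treat this as an assumption rather than a guaranteed API:

```python
# Sketch only: names follow Crawley's documented examples and may differ
# in the installed release.
from crawley.crawlers import BaseCrawler
from crawley.scrapers import BaseScraper
from crawley.extractors import XPathExtractor


class QuotesScraper(BaseScraper):
    # URL patterns this scraper is allowed to handle ("%" matches everything).
    matching_urls = ["%"]

    def scrape(self, response):
        # Pull the page title out of the parsed document.
        title = response.html.xpath("//title/text()")
        print(response.url, title)


class QuotesCrawler(BaseCrawler):
    start_urls = ["http://quotes.toscrape.com/"]
    scrapers = [QuotesScraper]
    max_depth = 1
    extractor = XPathExtractor
```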
4.Portia
Portia is an open source visual crawler tool that allows you to crawl websites without any programming knowledge! Simply annotate the pages that interest you and Portia will create a spider to extract data from similar pages.
Project address: https://github.com/scrapinghub/portia
5.Newspaper
Newspaper can extract news and articles and perform content analysis. It uses multi-threaded downloads and supports more than ten languages.
Project address: https://github.com/codelucas/newspaper
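A minimal sketch of Newspaper's Article API; the article URL here is a hypothetical placeholder, and the nlp() step additionally requires NLTK's tokenizer data:

```python
from newspaper import Article

url = "https://example.com/some-news-story"  # hypothetical article URL
article = Article(url, language="en")

article.download()   # fetch the raw HTML
article.parse()      # extract title, authors, body text, publish date...

print(article.title)
print(article.authors)
print(article.text[:200])

article.nlp()        # keyword extraction and summarization
print(article.keywords)
print(article.summary)
```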
6.Beautiful Soup
Beautiful Soup is a Python library that extracts data from HTML or XML files. Working with your favorite parser, it provides idiomatic ways of navigating, searching, and modifying a document tree. Beautiful Soup can save you hours or even days of work.
Project address: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
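A small self-contained example, using the standard library's html.parser backend so no extra parser needs to be installed:

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <h1>Frameworks</h1>
  <a href="https://scrapy.org/">Scrapy</a>
  <a href="https://www.crummy.com/software/BeautifulSoup/">Beautiful Soup</a>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

print(soup.h1.get_text())           # -> Frameworks
for link in soup.find_all("a"):     # iterate over every anchor tag
    print(link.get("href"), link.get_text())
```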
7.Grab
Grab is a Python framework for building web scrapers. With Grab, you can build web scrapers of varying complexity, from simple 5-line scripts to complex asynchronous website scrapers that handle millions of web pages. Grab provides an API for performing network requests and processing received content, such as interacting with the DOM tree of an HTML document.
Project address: http://docs.grablib.org/en/latest/#grab-spider-user-manual
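A minimal sketch of Grab's request-and-select workflow, again using a demo site as a placeholder:

```python
from grab import Grab

g = Grab()
resp = g.go("http://quotes.toscrape.com/")   # perform the network request

print(resp.code)                             # HTTP status code
# g.doc exposes the parsed document; select() takes an XPath expression.
print(g.doc.select("//title").text())
for quote in g.doc.select("//span[@class='text']"):
    print(quote.text())
```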
8.Cola
Cola is a distributed crawler framework. Users only need to write a few specific functions, without worrying about the details of distributed execution; tasks are automatically distributed across multiple machines, and the entire process is transparent to the user.
Project address: https://github.com/chineking/cola
Thank you for reading; I hope you benefit from it.
Reprinted from: https://www.toutiao.com/i6560240315519730190/