


Detailed explanation of Scrapy, a Python crawler framework, with examples
Generate Project
Scrapy provides a tool to generate projects. Some files are preset in the generated project, and users need to add their own code to these files.
Open the command line and execute scrapy startproject tutorial. The generated project has a structure similar to the following:

    tutorial/
        scrapy.cfg
        tutorial/
            __init__.py
            items.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
                ...
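Briefly: items.py is where you define the Item classes that describe the data you want to extract, pipelines.py holds post-processing code for extracted items, settings.py holds project-wide configuration, and the spiders/ directory is where your spider code lives. Each of these files is used later in this article.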
Write a spider
A spider is a Python file placed under the spiders/ directory, for example:

    from scrapy.spider import BaseSpider

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def parse(self, response):
            # Save the fetched page body to a file named after the
            # second-to-last segment of the URL path.
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)

The name attribute is important: different spiders cannot use the same name.
start_urls is the list of URLs from which the spider starts crawling; it can contain multiple URLs.
parse is the callback invoked by default after the spider fetches a page, so avoid using this name for your own methods.
When the spider has fetched the content of a URL, it calls parse and passes it a response argument containing the fetched page. Inside parse you can extract data from the page; the code above simply saves the page content to a file.
Start crawling
Open the command line, enter the generated project's root directory tutorial/, and execute scrapy crawl dmoz, where dmoz is the name of the spider.
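For the two start_urls above, response.url.split("/")[-2] evaluates to Books and Resources respectively, so after the crawl finishes you should find two files named Books and Resources in the directory where you ran the command, each containing the HTML of the corresponding page.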
Parse the page content
Scrapy provides HtmlXPathSelector, which uses XPath expressions to extract data from a page. A few XPath basics:
//ul/li selects all li tags that are children of ul tags
a/@href selects the href attribute of all a tags
a/text() selects the text of a tags
a[@href="abc"] selects all a tags whose href attribute is "abc"
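You can try XPath expressions interactively before putting them into a spider. In the scrapy versions this article targets, the scrapy shell command downloads a page and drops you into a Python console with an HtmlXPathSelector already constructed for the response (the hxs shortcut). A sketch of such a session, assuming that shell shortcut is available:

    $ scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
    >>> hxs.select('//ul/li/a/text()').extract()   # link texts inside list items
    >>> hxs.select('//ul/li/a/@href').extract()    # the corresponding URLs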
Modifying the parse method to extract data with these expressions looks like this:

    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            sites = hxs.select('//ul/li')
            for site in sites:
                title = site.select('a/text()').extract()
                link = site.select('a/@href').extract()
                desc = site.select('text()').extract()
                print title, link, desc

Use Item to save the data
We can save the parsed data in objects that scrapy understands, and then let scrapy save these objects for us instead of writing the data to a file ourselves. We need to add some classes to items.py; these classes describe the data we want to save:

    from scrapy.item import Item, Field

    class DmozItem(Item):
        title = Field()
        link = Field()
        desc = Field()

Then, in the spider's parse method, we store the parsed data in DmozItem objects:

    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    from tutorial.items import DmozItem

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            sites = hxs.select('//ul/li')
            items = []
            for site in sites:
                item = DmozItem()
                item['title'] = site.select('a/text()').extract()
                item['link'] = site.select('a/@href').extract()
                item['desc'] = site.select('text()').extract()
                items.append(item)
            return items
When executing scrapy on the command line, we can add two parameters to make scrapy export the items returned by the parse method to a JSON file:
scrapy crawl dmoz -o items.json -t json
items.json will be placed in the root directory of the project
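Because extract() returns a list, every field value in the export is a list. Each exported item becomes one JSON object in a JSON array, so the file looks roughly like this (the "..." values are placeholders for the actual page content):

    [{"title": ["..."], "link": ["..."], "desc": ["..."]},
     {"title": ["..."], "link": ["..."], "desc": ["..."]}]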
Let scrapy automatically crawl all links on the page
In the examples above, scrapy only fetches the two URLs in start_urls, but usually what we want is for scrapy to discover the links on a page by itself and then fetch their contents too. To achieve this, we can extract the links we need in the parse method, construct Request objects from them, and return them; scrapy will then crawl those links automatically. The code is similar to this (MyItem, item_urls and item_details_url are placeholders from the original sketch, standing for an Item class and for links extracted from the response):

    from scrapy.spider import BaseSpider
    from scrapy.http import Request
    from tutorial.items import MyItem  # MyItem is assumed to be defined in items.py

    class MySpider(BaseSpider):
        name = 'myspider'
        start_urls = (
            'http://example.com/page1',
            'http://example.com/page2',
        )

        def parse(self, response):
            # collect `item_urls` from the response (elided in the original)
            for item_url in item_urls:
                yield Request(url=item_url, callback=self.parse_item)

        def parse_item(self, response):
            item = MyItem()
            # populate `item` fields from the response
            yield Request(url=item_details_url, meta={'item': item},
                          callback=self.parse_details)

        def parse_details(self, response):
            item = response.meta['item']
            # populate more `item` fields, then hand the finished item back
            return item
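Note the pattern in the last two callbacks: a partially filled item travels to the next request through the meta dict, and parse_details retrieves it from response.meta to finish populating it before returning it.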
To make this kind of work easier, scrapy provides another spider base class, CrawlSpider, with which automatic link crawling is easy to implement.
Compared with BaseSpider, the new class has an additional rules attribute. This attribute is a list that can contain multiple Rule objects, each describing which links should be crawled and which should not. See the documentation for the Rule class: http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.contrib.spiders.Rule
A Rule may or may not specify a callback; when there is no callback, scrapy simply follows all of the extracted links. For example:

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import HtmlXPathSelector
    from tutorial.items import TorrentItem  # assumed to be defined in items.py

    class MininovaSpider(CrawlSpider):
        name = 'mininova.org'
        allowed_domains = ['mininova.org']
        start_urls = ['http://www.mininova.org/today']
        rules = [
            # No callback: scrapy just follows links matching /tor/\d+
            Rule(SgmlLinkExtractor(allow=['/tor/\d+'])),
            # With a callback: pages matching /abc/\d+ are parsed by parse_torrent
            Rule(SgmlLinkExtractor(allow=['/abc/\d+']), 'parse_torrent'),
        ]

        def parse_torrent(self, response):
            x = HtmlXPathSelector(response)
            torrent = TorrentItem()
            torrent['url'] = response.url
            torrent['name'] = x.select("//h1/text()").extract()
            torrent['description'] = x.select("//div[@id='description']").extract()
            torrent['size'] = x.select("//div[@id='info-left']/p[2]/text()[2]").extract()
            return torrent
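One caveat: CrawlSpider implements the parse method internally to drive the rules, so a CrawlSpider subclass must not override parse; that is why the callback in this example is named parse_torrent.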
Usage of pipelines.py
In pipelines.py we can add some classes to filter out the items we don’t want and save the items to the database.
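As an illustration, here is a minimal sketch of such a filtering pipeline, modeled on the FilterWordsPipeline referenced in the settings line below; the word list and the desc field name are assumptions for this example:

    from scrapy.exceptions import DropItem

    class FilterWordsPipeline(object):
        """A sketch: drop items whose desc field contains a forbidden word."""
        words_to_filter = ['politics', 'religion']  # hypothetical word list

        def process_item(self, item, spider):
            # scrapy calls process_item once for every item the spiders return
            for word in self.words_to_filter:
                if word in unicode(item['desc']):
                    raise DropItem("Contains forbidden word: %s" % word)
            return item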
If an item does not meet the requirements, the pipeline raises an exception and that item will not be exported to the JSON file.
To enable the pipeline, we also need to modify settings.py, adding a line (the module path here comes from scrapy's dirbot example project; in the tutorial project it would be tutorial.pipelines.FilterWordsPipeline):
ITEM_PIPELINES = ['dirbot.pipelines.FilterWordsPipeline']
Now execute scrapy crawl dmoz -o items.json -t json again; the items that do not meet the requirements are filtered out.
