An example of crawling a website with Scrapy, and the steps for implementing a web crawler (spider)
The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from cnbeta.items import CnbetaItem

class CBSpider(CrawlSpider):
    name = 'cnbeta'
    allowed_domains = ['cnbeta.com']
    # Start crawling from the site's front page; it must fall inside allowed_domains.
    start_urls = ['http://www.cnbeta.com']
    rules = (
        # Follow article links such as /articles/xxxx.htm and hand each page to parse_page.
        Rule(SgmlLinkExtractor(allow=(r'/articles/.*\.htm', )),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        item = CnbetaItem()
        sel = Selector(response)
        item['title'] = sel.xpath('//title/text()').extract()
        item['url'] = response.url
        return item
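Note that the imports above come from an older Scrapy release; scrapy.contrib and SgmlLinkExtractor have since been removed. If you are on a newer Scrapy version (1.x or later), a roughly equivalent sketch of the same spider, assuming the same cnbeta project and CnbetaItem item class, looks like this:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from cnbeta.items import CnbetaItem

class CBSpider(CrawlSpider):
    name = 'cnbeta'
    allowed_domains = ['cnbeta.com']
    start_urls = ['http://www.cnbeta.com']
    rules = (
        # Same article pattern as above, using the maintained LinkExtractor.
        Rule(LinkExtractor(allow=(r'/articles/.*\.htm',)),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        item = CnbetaItem()
        # response.xpath replaces the explicit Selector(response) wrapper.
        item['title'] = response.xpath('//title/text()').extract()
        item['url'] = response.url
        return item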
Steps to implement the spider
1. The initial goal of this example: scrape the article list from a site's list page and store it in a database. The database holds each article's title, link, and time.
First, generate a project: scrapy startproject fjsen
Next, define the items. Open items.py:
We start by modeling what we want to scrape from the site: the title, the address (link), and the time, so we define fields for these three attributes. To do this we edit items.py, found in the project directory. Our item class looks like this:
The code is as follows:
from scrapy.item import Item, Field

class FjsenItem(Item):
    # define the fields for your item here like:
    # name = Field()
    title = Field()
    link = Field()
    addtime = Field()
Step 2: define a spider, i.e. the crawler itself (note that it lives in the project's spiders folder). The spider determines the initial list of URLs to download, how to follow links, and how to parse page contents to extract items. (The site we want to scrape is http://www.fjsen.com/j/node_94962.htm; we want the links and times from all ten pages of this list.)
Create a new fjsen_spider.py with the following content:
The code is as follows:
#-*- coding: utf-8 -*-
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from fjsen.items import FjsenItem

class FjsenSpider(BaseSpider):
    name = "fjsen"
    allowed_domains = ["fjsen.com"]
    # Pages 2-10 of the list, plus the first page (node_94962.htm).
    start_urls = ['http://www.fjsen.com/j/node_94962_' + str(x) + '.htm' for x in range(2, 11)] + ['http://www.fjsen.com/j/node_94962.htm']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        items = []
        for site in sites:
            item = FjsenItem()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['addtime'] = site.select('span/text()').extract()
            items.append(item)
        return items
name: identifies the spider. It must be unique; that is, you cannot give different spiders the same name.
allowed_domains: exactly what it sounds like; the crawl is restricted to the domains in this list.
start_urls: a list of URLs where the spider starts crawling, so the first pages downloaded are the ones listed here; subsequent URLs are generated from data contained in those starting pages. Here I simply list the ten list pages directly.
parse(): a method of the spider that is called with the Response object returned for each downloaded start URL.
Inside parse(), for each list page I scrape the data under every <li> within the <ul>, namely the title, the link, and the time, and append each item to a list.
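The XPath expressions above assume that each list entry looks roughly like <li><a href="...">title</a><span>time</span></li>. Here is a minimal, self-contained way to sanity-check that assumption with Scrapy's newer Selector API; the sample HTML is made up for illustration, not taken from fjsen.com:
from scrapy.selector import Selector

# Made-up sample of what one list entry is assumed to look like.
sample_html = """
<ul>
  <li><a href="2011-09/01/content_123.htm">Sample article title</a><span>2011-09-01</span></li>
  <li><a href="2011-09/02/content_456.htm">Another title</a><span>2011-09-02</span></li>
</ul>
"""

sel = Selector(text=sample_html)
for li in sel.xpath('//ul/li'):
    title = li.xpath('a/text()').extract()       # -> ['Sample article title']
    link = li.xpath('a/@href').extract()         # -> ['2011-09/01/content_123.htm']
    addtime = li.xpath('span/text()').extract()  # -> ['2011-09-01']
    print(title, link, addtime)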
Step 3: store the scraped data in a database. This is done by editing pipelines.py. The code is as follows:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
from os import path
import sqlite3

from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher

class FjsenPipeline(object):
    # SQLite file created in the directory the crawl is run from.
    filename = 'data.sqlite'

    def __init__(self):
        self.conn = None
        dispatcher.connect(self.initialize, signals.engine_started)
        dispatcher.connect(self.finalize, signals.engine_stopped)

    def process_item(self, item, spider):
        # Links on the list page are relative, so prefix them with the site root.
        self.conn.execute('insert into fjsen values(?,?,?,?)',
                          (None, item['title'][0], 'http://www.fjsen.com/' + item['link'][0], item['addtime'][0]))
        return item

    def initialize(self):
        if path.exists(self.filename):
            self.conn = sqlite3.connect(self.filename)
        else:
            self.conn = self.create_table(self.filename)

    def finalize(self):
        if self.conn is not None:
            self.conn.commit()
            self.conn.close()
            self.conn = None

    def create_table(self, filename):
        conn = sqlite3.connect(filename)
        conn.execute("""create table fjsen(id integer primary key autoincrement, title text, link text, addtime text)""")
        conn.commit()
        return conn

I won't explain this in detail for now; let's keep going and get the spider running first.
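As a side note, the scrapy.xlib.pydispatch / signals wiring above comes from older Scrapy releases and has since been removed. On newer Scrapy versions, a rough equivalent of the same pipeline (same table and data.sqlite file; this is a sketch, not the original code) can be written with the open_spider/close_spider hooks:
import sqlite3

class FjsenPipeline(object):
    filename = 'data.sqlite'

    def open_spider(self, spider):
        # Called once when the spider starts; create the file/table if needed.
        self.conn = sqlite3.connect(self.filename)
        self.conn.execute("""create table if not exists fjsen
                             (id integer primary key autoincrement,
                              title text, link text, addtime text)""")

    def process_item(self, item, spider):
        self.conn.execute('insert into fjsen values(?,?,?,?)',
                          (None, item['title'][0],
                           'http://www.fjsen.com/' + item['link'][0],
                           item['addtime'][0]))
        return item

    def close_spider(self, spider):
        # Called once when the spider finishes; persist and clean up.
        self.conn.commit()
        self.conn.close()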
Step 4: modify settings.py by adding the following line:
The code is as follows:
ITEM_PIPELINES = ['fjsen.pipelines.FjsenPipeline']
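On newer Scrapy versions, ITEM_PIPELINES is a dict mapping each pipeline to an order value rather than a list, so the equivalent setting would look roughly like this:
ITEM_PIPELINES = {
    'fjsen.pipelines.FjsenPipeline': 300,  # lower numbers run earlier
}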
Next, let's get it running. Execute the following command:
scrapy crawl fjsen
A data.sqlite database file will be generated in the current directory, and all the scraped data will be stored there.
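To spot-check the result, you can query the generated file with Python's built-in sqlite3 module (a small verification snippet, not part of the original tutorial):
import sqlite3

conn = sqlite3.connect('data.sqlite')
# Print the first few scraped rows: id, title, link, addtime.
for row in conn.execute('select * from fjsen limit 5'):
    print(row)
conn.close()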
