For example, I crawled Douban's Top 100 page and stored the URL of each book's detail page in MongoDB, then used Redis for de-duplication, and finally fetched URLs from Redis to crawl the detailed data. Now I have a question:
In Scrapy, how do I move the values of the `url` field from MongoDB into Redis? Put differently, how does Scrapy fetch URLs from a database?
Thanks.
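One way to bridge the two stores is a small transfer step that runs before the spider: read each document's `url` field from MongoDB, use a Redis set for de-duplication, and push new URLs onto a Redis list that the spider later pops from. The sketch below is a minimal illustration, not the poster's actual code; key names like `book_urls:seen` and `book_urls:queue` are assumptions, and the connection objects are passed in so you can plug in `pymongo`/`redis-py` clients.

```python
# Sketch: move detail-page URLs from MongoDB documents into Redis.
# A Redis SET handles de-duplication; a Redis LIST acts as the crawl queue.
# Key names below are hypothetical examples, not from the original post.

def enqueue_urls(docs, r, seen_key='book_urls:seen', queue_key='book_urls:queue'):
    """docs: iterable of MongoDB documents containing a 'url' field.
    r: a redis.Redis-like connection (needs sadd and rpush).
    Returns the number of URLs newly queued."""
    added = 0
    for doc in docs:
        url = doc.get('url')
        if not url:
            continue
        # SADD returns 1 only when the member was not already in the set,
        # so duplicates are skipped without a separate lookup.
        if r.sadd(seen_key, url):
            r.rpush(queue_key, url)
            added += 1
    return added
```

With real clients this would be called roughly as `enqueue_urls(MongoClient().mydb.books.find({}, {'url': 1}), redis.Redis())`, where the database and collection names are placeholders for whatever you used when storing the Top 100 pages.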
Can't you just write it in start_requests? For example:

def start_requests(self):
    r = redis.Redis()
    while True:
        url = r.lpop('xxxx')
        if url is None:  # queue drained, stop yielding requests
            break
        # redis-py returns bytes by default, so decode before building the request
        yield scrapy.Request(url.decode('utf-8'))