python - I keep getting a 521 error when crawling a site with Scrapy. What's going on?
天蓬老师 2017-04-18 09:36:32

www.cnvd.org.cn is a strange site: it works fine if you visit it in a browser, but plain HTTP requests fail in all sorts of ways. For example:
wget http://www.cnvd.org.cn returns:
--2016-08-26 20:37:00-- http://www.cnvd.org.cn/
Resolving www.cnvd.org.cn (www.cnvd.org.cn)... 113.200.91.208, 42.48.109.207
Connecting to www.cnvd.org.cn (www.cnvd.org.cn)|113.200.91.208|:80... connected.
HTTP request sent, awaiting response... 521
2016-08-26 20:37:00 ERROR 521: (no description).
Running the same request with curl instead returns a block of JavaScript.
I looked into the JS a bit; it sets a cookie dynamically.
A month ago I had already scraped the whole site. Recently I noticed the data volume wasn't growing and realized the crawler had been banned. While debugging a while back, I copied all of the browser's request headers into the crawler and it ran fine, but in the last couple of days that trick has stopped working...
Can anyone give me some ideas? I suddenly feel like giving up!
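The dynamic cookie behaviour described above is an anti-bot challenge: the 521 response body is a script that computes a clearance cookie (the `__jsl_clearance` value seen later in this thread) and reloads the page. Here is a minimal sketch of the extraction step, using a simplified, made-up challenge body; the real script is obfuscated and usually needs a JS engine (e.g. PyExecJS) or a headless browser to evaluate:

```python
import re

def extract_challenge_cookie(js_body):
    """Pull the name=value pair out of a document.cookie='...' assignment
    in the challenge script returned with the 521 response."""
    m = re.search(r"document\.cookie\s*=\s*'([^=]+)=([^;']+)", js_body)
    if not m:
        return None
    return m.group(1), m.group(2)

# Simplified, hypothetical challenge body for illustration only; the real
# script computes the value at runtime instead of embedding it.
sample = ("var x; document.cookie='__jsl_clearance=1472619039.65|0|abc%3D;"
          " path=/'; location.reload();")
print(extract_challenge_cookie(sample))
# ('__jsl_clearance', '1472619039.65|0|abc%3D')
```

With the extracted pair you would retry the original request carrying both the `__jsluid` cookie from the first response and the freshly computed clearance cookie.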


Replies (2)
小葫芦

Your crawler has most likely been detected by the website. If copying the headers no longer works, then check whether you are being blocked by IP or by account. If the site doesn't require a login, try switching to a different IP first and see if things return to normal, or fetch the page manually from the machine the crawler runs on and see whether that succeeds.
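The "change the IP" suggestion can be sketched with a rotating proxy pool. The proxy addresses below are placeholders, not real servers:

```python
import random

# Hypothetical proxy pool; replace with addresses you actually control.
PROXY_POOL = [
    'http://10.0.0.1:8080',
    'http://10.0.0.2:8080',
    'http://10.0.0.3:8080',
]

def pick_proxies():
    """Return a requests-style proxies mapping using a random pool member."""
    proxy = random.choice(PROXY_POOL)
    return {'http': proxy, 'https': proxy}

# Usage with requests (not executed here):
# requests.get('http://www.cnvd.org.cn', proxies=pick_proxies(), timeout=10)
```

If requests start succeeding through a different exit IP, the block is IP-based rather than header-based.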

------Update-------

At the questioner's request, I'm posting my test code. The cookie here was obtained just now by visiting the site manually; carrying it along with the request works fine. As for the cookie's expiry time, I haven't looked at it closely. If the questioner can't work it out, I'll take a closer look.

import requests

url = 'http://www.cnvd.org.cn'
# Headers copied from a real browser session; the __jsl* cookies were
# obtained by visiting the site manually and will expire after a while.
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'AlexaToolbar-ALX_NS_PH': 'AlexaToolbar/alx-4.0',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Host': 'www.cnvd.org.cn',
    'Referer': 'http://www.cnvd.org.cn/',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    'Cookie': '__jsluid=224f8fc054404821896d6b6bd2415533; __jsl_clearance=1472619039.65|0|4opPNLlmaO6pFXwTMO%2BQ5UAhfEA%3D; JSESSIONID=AE8735BE6328B81C7CD4352B75F25316; bdshare_firstime=1472619047205',
}

cont = requests.get(url, headers=headers).text
print(cont)

Hope it helps.
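A variant of the snippet above uses a `requests.Session`, so the manually obtained cookies are stored once and sent automatically on every later request; preparing the request lets you inspect the outgoing headers without touching the network:

```python
import requests

session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    'Referer': 'http://www.cnvd.org.cn/',
})
# Cookie values copied from a manual browser visit (as in the answer above);
# they expire periodically and must be refreshed.
session.cookies.set('__jsluid', '224f8fc054404821896d6b6bd2415533',
                    domain='www.cnvd.org.cn')
session.cookies.set('__jsl_clearance',
                    '1472619039.65|0|4opPNLlmaO6pFXwTMO%2BQ5UAhfEA%3D',
                    domain='www.cnvd.org.cn')

# Inspect what would be sent, without making a network call:
prepared = session.prepare_request(
    requests.Request('GET', 'http://www.cnvd.org.cn/'))
print(prepared.headers.get('Cookie'))
```

Once the clearance cookie is in the jar, any `session.get(...)` against the same domain carries it automatically.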

Ty80

Friend, I've been crawling this site recently too, so let me offer a few observations; if you'd like to discuss further, feel free to add me as a friend.
cnvd can be crawled under normal circumstances.
www.cnvd.org.cn is quite obnoxious: many header combinations come back with a 521.

# -*- coding: utf-8 -*-
import re
import random
import urllib2
import cookielib
import MySQLdb as mdb

pagenumber = 0
url = 'http://ics.cnvd.org.cn/?max=100&offset=' + str(pagenumber)
# url = 'http://www.cnvd.org.cn/flaw/show/CNVD-2016-05694'

# A single opener with a shared CookieJar, so cookies the site sets
# persist across requests
cookie_support = urllib2.HTTPCookieProcessor(cookielib.CookieJar())
opener = urllib2.build_opener(cookie_support, urllib2.HTTPHandler)
urllib2.install_opener(opener)

user_agents = [
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
    'Opera/9.25 (Windows NT 5.1; U; en)',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
    'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
    'Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/1.2.9',
    'Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.7 (KHTML, like Gecko) Ubuntu/11.04 Chromium/16.0.912.77 Chrome/16.0.912.77 Safari/535.7',
    'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0',
    'Mozilla-Firefox-Spider(Wenanry)',
]

# Rotate the User-Agent on each run
agent = random.choice(user_agents)
opener.addheaders = [('User-agent', agent),
                     ('Accept', '*/*'),
                     ('Referer', 'http://www.miit.gov.cn/')]

con = mdb.connect('127.0.0.1', 'root', 'root', 'test', port=3307, charset='utf8')
with con:
    cur = con.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS "
                "BUGnews(Id INT PRIMARY KEY AUTO_INCREMENT, WebUrl VARCHAR(50))")
    try:
        res = opener.open(url)
        content = res.read()
        print content

        pattern = re.compile('<a.*?href="(.*?)" title=', re.S)
        items = re.findall(pattern, content)
        sql = "INSERT INTO BUGnews(WebUrl) VALUES (%s)"
        for item in items:
            # Pass parameters as a tuple so the driver does the escaping
            cur.execute(sql, (item.encode('utf-8'),))
    except urllib2.URLError, e:
        if hasattr(e, "code"):
            print e.code
        if hasattr(e, "reason"):
            print e.reason
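The extract-and-insert flow above can be checked offline by swapping MySQL for the standard library's sqlite3, so it runs without a database server. The HTML snippet below is a made-up stand-in for a CNVD listing page; note the parameters are passed as a tuple so the driver handles the quoting (sqlite uses `?` placeholders, MySQLdb uses `%s`, but the tuple form is the same idea):

```python
import re
import sqlite3

# Made-up stand-in for a CNVD listing page, for illustration only
html = '''
<a href="http://www.cnvd.org.cn/flaw/show/CNVD-2016-05694" title="flaw A">A</a>
<a href="http://www.cnvd.org.cn/flaw/show/CNVD-2016-05695" title="flaw B">B</a>
'''

# Same pattern as the crawler above
pattern = re.compile('<a.*?href="(.*?)" title=', re.S)
items = re.findall(pattern, html)

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute("CREATE TABLE BUGnews("
            "Id INTEGER PRIMARY KEY AUTOINCREMENT, WebUrl VARCHAR(50))")
for item in items:
    # Tuple parameters: the driver quotes the value, never string-format SQL
    cur.execute("INSERT INTO BUGnews(WebUrl) VALUES (?)", (item,))

cur.execute("SELECT WebUrl FROM BUGnews ORDER BY Id")
print([row[0] for row in cur.fetchall()])
```

Running it prints the two extracted flaw URLs, confirming both the regex and the parameterized insert behave as intended before pointing the crawler at the live site.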
                
                
            