Solving the garbled-character problem in Python web crawlers

高洛峰
Release: 2017-02-11 13:13:46

This article presents, in some detail, solutions to the garbled-character problem in Python web crawlers. It has some reference value; interested readers can refer to it.

Garbled characters in crawlers can arise in many ways and take many forms. This is not only about garbled Chinese text and its encoding conversion: Japanese, Korean, Russian, Tibetan, and other scripts can be garbled in exactly the same way, and because the solution is the same for all of them, it is explained once here.

Why a web crawler produces garbled text

The encoding of the source web page is inconsistent with the encoding used after crawling.
If the source page is a byte stream encoded in GBK, and after grabbing it the program writes it straight out to a file as UTF-8 without converting, the stored page will inevitably be garbled. When the encoding the program uses matches the source page's encoding, or when all characters are first converted to one unified encoding, no garbling occurs.
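To make the cause concrete, here is a minimal sketch (Python 2.7; the sample text is only an illustration) of GBK bytes being read back as if they were UTF-8:

# -*- coding:utf-8 -*-
# bytes exactly as a GBK-encoded source page would deliver them
source = u'中文'.encode('gbk')
# the program wrongly assumes UTF-8 when decoding: garbled output
print source.decode('utf-8', 'replace')
# correct: decode with the real source encoding, then re-encode as UTF-8
print source.decode('gbk').encode('utf-8')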

Distinguish carefully between:

  • encoding A of the source web page,

  • encoding B used directly by the program,

  • encoding C to which all characters are uniformly converted.

How to solve the garbling

First determine encoding A of the source web page. Encoding A usually appears in one of three places in the page:

1. The Content-Type field of the HTTP response header
The server uses this header to tell the browser some information about the page content; the Content-Type entry is written like "text/html; charset=utf-8".

2. The meta charset declaration inside the page itself, for example:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>

3. The document charset set by JavaScript in the page header:

<script type="text/javascript">
// IE exposes document.charset; standards-based browsers use document.characterSet
if (document.charset) {
  alert(document.charset + "!!!!");
  document.charset = 'GBK';        // switch the document to GBK
  alert(document.charset);
} else if (document.characterSet) {
  alert(document.characterSet + "????");
  document.characterSet = 'GBK';
  alert(document.characterSet);
}
</script>

When determining the source page's encoding, it is enough to check these three places in order, from front to back; that order is also their priority. If none of the three carries encoding information, a third-party encoding-detection tool such as chardet is generally used.
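Putting the pieces together, here is a minimal sketch (Python 2.7, assuming chardet is installed; the URL is only a placeholder) that checks the first two locations in priority order and falls back to chardet when neither declares an encoding:

# -*- coding:utf-8 -*-
import re
import urllib2
import chardet

def detect_source_encoding(url):
    response = urllib2.urlopen(url)
    html = response.read()
    # 1. charset parameter of the Content-Type response header
    charset = response.info().getparam('charset')
    if charset:
        return charset
    # 2. meta charset declaration inside the page itself
    m = re.search(r"<meta[^>]*charset=['\"]?([\w-]+)", html, re.I)
    if m:
        return m.group(1)
    # 3. no declaration found: fall back to chardet's statistical guess
    return chardet.detect(html)['encoding']

print detect_source_encoding('http://www.php.cn/')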

Installation: pip install chardet

Detecting character encodings with chardet

Using chardet, it is easy to detect the encoding of a string or file. Although HTML pages carry a charset tag, it is sometimes wrong, and in those cases chardet helps a great deal.
A chardet example:

import urllib
import chardet

# fetch the raw page bytes and let chardet guess their encoding
rawdata = urllib.urlopen('http://www.php.cn/').read()
print chardet.detect(rawdata)
# {'confidence': 0.99, 'encoding': 'GB2312'}

chardet's detect function takes a byte string and returns a dictionary with two entries: the detection confidence and the detected encoding.
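The same call works on bytes read from disk; a small sketch (assuming a local file named page.html exists):

import chardet

# detection must run on the raw bytes, so open the file in binary mode
with open('page.html', 'rb') as f:
    raw = f.read()
print chardet.detect(raw)
# e.g. {'confidence': 0.99, 'encoding': 'utf-8'}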

How should Chinese character encodings be handled when developing your own crawler?
Everything below targets Python 2.7. If the collected characters are not processed at all, they come out garbled; the solution is to convert all fetched HTML into a single unified UTF-8 encoding. When chardet reports windows-1252, that is a misdetection on very short input, a case its encoding-recognition training does not handle well.

import chardet

a = 'abc'
type(a)
# <type 'str'>
chardet.detect(a)
# {'confidence': 1.0, 'encoding': 'ascii'}

a = "我"  # utf-8 bytes in a utf-8 source file
chardet.detect(a)
# {'confidence': 0.73, 'encoding': 'windows-1252'}  <- misdetected: the sample is too short
a.decode('windows-1252')
# u'\xe6\u02c6\u2018'  <- wrong characters, confirming the misdetection
type(a.decode('windows-1252'))
# <type 'unicode'>
type(a.decode('windows-1252').encode('utf-8'))
# <type 'str'>
chardet.detect(a.decode('windows-1252').encode('utf-8'))
# {'confidence': 0.87625, 'encoding': 'utf-8'}

a = "我是中国人"  # a longer sample is detected correctly
type(a)
# <type 'str'>
chardet.detect(a)
# {'confidence': 0.9690625, 'encoding': 'utf-8'}
A complete example that fetches a page, detects its encoding, and normalizes it to UTF-8:

# -*- coding:utf-8 -*-
import chardet
import urllib2

# fetch the page HTML
html = urllib2.urlopen('http://www.jb51.net/').read()
print html
mychar = chardet.detect(html)
print mychar
bianma = mychar['encoding']
if bianma == 'utf-8' or bianma == 'UTF-8':
    html = html.decode('utf-8', 'ignore').encode('utf-8')
else:
    html = html.decode('gb2312', 'ignore').encode('utf-8')
print html
print chardet.detect(html)

Encoding of the Python code file itself
A .py file defaults to ASCII encoding. When it contains Chinese, a conversion from ASCII to the system default encoding is attempted and fails with SyntaxError: Non-ASCII character. The fix is to add an encoding declaration on the first line of the file:

# -*- coding:utf-8 -*- 
 
print '中文'

A string literal entered as above is processed according to the file's declared 'utf-8' encoding. To store the text as unicode instead, use the u prefix:

s1 = u'中文'  # the u prefix means the information is stored as unicode

decode is a method available on every string; it converts the string to unicode, and its parameter names the encoding of the source string.
encode, likewise available on every string, converts it to the encoding named by its parameter.
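A minimal sketch (Python 2.7) of the str/unicode round trip these two methods provide:

# -*- coding:utf-8 -*-
s = '中文'               # str: utf-8 bytes, per the file's coding declaration
u = s.decode('utf-8')    # str -> unicode; the parameter is the source encoding
g = u.encode('gbk')      # unicode -> str; the parameter is the target encoding
print type(s), type(u), type(g)
# <type 'str'> <type 'unicode'> <type 'str'>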

The above is the entire content of this article. I hope it will be helpful to everyone's learning. I also hope that everyone will support the PHP Chinese website.

For more related articles on solutions to garbled code problems in Python web crawlers, please pay attention to the PHP Chinese website!
