
What kind of data can the crawler obtain and the specific analysis method?

爱喝马黛茶的安东尼
Release: 2019-06-05 13:12:32

With the rapid development of the Internet, more and more data floods this era. Obtaining and processing data has become an essential part of our lives, and crawlers have emerged to meet that need.

Many languages can be used to write crawlers, but crawlers written in Python are especially concise and convenient, and crawling has become an indispensable part of the Python ecosystem. So what kind of data can we obtain through a crawler, and what parsing methods are available?

The previous article introduced the basic flow of Request and Response; this article covers what kind of data a crawler can obtain and the specific parsing methods.



What kind of data can be captured?

Web page text: HTML documents, JSON-formatted text loaded via Ajax, and so on;

Pictures, videos, etc.: the binary response body, saved in the corresponding image or video format;

Anything else that can be requested can be obtained.
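The key distinction among these data types is text versus binary. The sketch below illustrates it offline: the byte strings stand in for response bodies (no real request is made), since an HTTP body is always bytes on the wire, and "text" is just bytes plus an encoding.

```python
import json

# Stand-ins for three kinds of response bodies a crawler might receive.
html_body = b"<html><head><title>Demo</title></head></html>"
json_body = b'{"status": 0, "items": [1, 2, 3]}'
gif_body = b"GIF89a..."  # binary: the first bytes of a GIF file

# Text payloads are decoded with the response encoding (often UTF-8).
page = html_body.decode("utf-8")

# Ajax responses are usually JSON and can be parsed directly.
data = json.loads(json_body)
print(data["items"])  # [1, 2, 3]

# Binary payloads are written to disk as-is, without decoding.
print(gif_body[:6])   # b'GIF89a' magic number identifies the format
```

With the `requests` library used below, the same distinction appears as `resp.text` (decoded string) versus `resp.content` (raw bytes).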

Demo

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'}
resp = requests.get('http://www.baidu.com/img/baidu_jgylogo3.gif', headers=headers)
print(resp.content)  # use .content for binary data
# save the image
with open('logo.gif', 'wb') as f:
    f.write(resp.content)
    print('Ok')

After a successful run, you can see the binary data of the image printed, followed by "Ok" once the save completes; open the folder and you will find the downloaded image. These few lines of code demonstrate the basic process of a crawler saving a file.



What are the parsing methods?

Direct processing: for simple page documents, just strip some whitespace;

JSON parsing: for pages loaded via Ajax;

Regular expressions;

BeautifulSoup library;

PyQuery;

XPath.
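BeautifulSoup, PyQuery, and XPath (via lxml) are third-party libraries, so as a minimal standard-library-only sketch, the snippet below demonstrates two of the approaches above on an inline HTML fragment: regular expressions for quick extraction, and an `html.parser` subclass as a simplified stand-in for the DOM-style parsing those libraries provide.

```python
import re
from html.parser import HTMLParser

html = ('<div class="item"><a href="/page1">First</a></div>'
        '<div class="item"><a href="/page2">Second</a></div>')

# 1) Regular expression: quick extraction of all href attributes.
links = re.findall(r'href="([^"]+)"', html)
print(links)  # ['/page1', '/page2']

# 2) A small HTMLParser subclass: collects the text inside <a> tags,
#    closer in spirit to what BeautifulSoup or PyQuery do with a DOM.
class LinkTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_a = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_a = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_a = False

    def handle_data(self, data):
        if self.in_a:
            self.texts.append(data)

parser = LinkTextParser()
parser.feed(html)
print(parser.texts)  # ['First', 'Second']
```

Regular expressions are fast for simple, well-structured snippets, but for nested or messy real-world HTML a proper parser such as BeautifulSoup is far more robust.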


Summary

Having read this far, do you now have a clear understanding of how crawlers work? Of course, Rome was not built in a day; as long as you accumulate enough experience, you will surely become a crawler master. I believe everyone will succeed after reading the material I have shared.

The above is the detailed content of What kind of data can the crawler obtain and the specific analysis method?. For more information, please follow other related articles on the PHP Chinese website!

source:csdn.net