
Introduction to the basic writing method of Python web crawler function

高洛峰
Release: 2017-03-13 18:12:29

This article introduces the basics of writing a web crawler in Python. A web crawler, also called a Web spider, is a very vivid name: if the Internet is compared to a spider's web, then the spider is a program crawling around on that web.


1. The definition of a web crawler

A web spider finds pages by following their link addresses. Starting from some page of a website (usually the home page), it reads the page's content, extracts the other link addresses it contains, fetches the pages those links point to, and repeats this loop until every page of the site has been crawled. If the entire Internet is treated as one website, a web spider can in principle use this approach to crawl every page on the Internet. In this sense, a web crawler is simply a program that fetches web pages, and fetching pages is its basic operation.
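The crawl loop described above can be sketched as a breadth-first traversal over a link graph. In this sketch, SITE is a hypothetical in-memory "site" standing in for real downloading and link extraction, so the code runs without any network access:

```python
# A schematic of the crawl loop: start from a seed page, follow links,
# and stop once every reachable page has been visited.
from collections import deque

SITE = {  # hypothetical link graph: page -> pages it links to
    "/index": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": [],
    "/c": ["/index"],
}

def crawl(start):
    visited, queue = set(), deque([start])
    while queue:
        page = queue.popleft()
        if page in visited:          # skip pages already crawled
            continue
        visited.add(page)
        for link in SITE.get(page, []):  # "download page, extract links"
            if link not in visited:
                queue.append(link)
    return visited

print(sorted(crawl("/index")))  # → ['/a', '/b', '/c', '/index']
```

The visited set is what stops the spider from looping forever when pages link back to each other, as /c links back to /index here.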

2. The process of browsing a webpage

Crawling a webpage is essentially the same as what happens when a reader browses with a web browser such as IE. For example, you type the address www.baidu.com into the browser's address bar.

Opening a web page means that the browser, acting as the browsing "client", sends a request to the server, "grabs" the server-side files to the local machine, and then interprets and displays them.

HTML is a markup language: it marks up content with tags so that the pieces can be parsed and told apart. The browser's job is to parse the HTML it receives and render that source code into the page we actually see.

3. A web crawler based on Python

1). Getting an HTML page with Python

In fact, the most basic page fetch takes just two lines:


import urllib2  # Python 2 standard library; replaced by urllib.request in Python 3
content = urllib2.urlopen('http://XXXX').read()

This gives you the entire HTML document. The real problem is that we usually need only the useful pieces of information from that document, not all of it, and that means parsing HTML that is filled with tags of every kind.
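The two-line fetch above uses Python 2's urllib2. For readers on Python 3, a minimal equivalent sketch using its successor, urllib.request, might look like this (the fetch_html helper name and the UTF-8 fallback are my additions, not from the original):

```python
# Python 3 equivalent of the Python 2 urllib2 fetch above.
from urllib.request import urlopen

def fetch_html(url):
    """Download url and decode the body to text, falling back to UTF-8."""
    with urlopen(url) as resp:
        # Use the charset declared in the Content-Type header when present.
        charset = resp.headers.get_content_charset() or "utf-8"
        return resp.read().decode(charset, errors="replace")
```

urlopen also understands data: URLs, which is handy for trying the helper without touching the network, e.g. fetch_html("data:text/html;charset=utf-8,<p>hello</p>").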

2). How to parse the HTML once the crawler has fetched a page

HTML parsing with the SGMLParser library

Python 2 ships with parsers such as HTMLParser and SGMLParser by default (note that sgmllib, which provides SGMLParser, was removed in Python 3). The former is really awkward to use, so here is a sample program written with SGMLParser:


import urllib2
from sgmllib import SGMLParser

class ListName(SGMLParser):
    def __init__(self):
        SGMLParser.__init__(self)
        self.is_h4 = ""
        self.name = []
    def start_h4(self, attrs):
        # Called when an opening <h4> tag is encountered
        self.is_h4 = 1
    def end_h4(self):
        # Called when the closing </h4> tag is encountered
        self.is_h4 = ""
    def handle_data(self, text):
        if self.is_h4 == 1:
            self.name.append(text)

content = urllib2.urlopen('http://169it.com/xxx.htm').read()
listname = ListName()
listname.feed(content)
for item in listname.name:
    print item.decode('gbk').encode('utf8')

It is quite simple: a class called ListName is defined, inheriting from SGMLParser. A variable is_h4 serves as a flag marking whether we are inside an h4 tag of the HTML file; while an h4 tag is open, the text inside it is appended to the list variable name. A note on the start_h4() and end_h4() methods: their prototypes are



start_tagname(self, attrs)
end_tagname(self)

In SGMLParser, tagname is the tag name: for example, when <pre> is encountered, start_pre is called, and end_pre is called when </pre> is encountered. attrs holds the tag's attributes, returned as a list of the form [(attribute, value), (attribute, value), ...].
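Since sgmllib no longer exists in Python 3, the same event-driven idea can be expressed with the standard library's html.parser.HTMLParser, which dispatches through generic handle_starttag/handle_endtag callbacks instead of per-tag start_h4/end_h4 methods. A sketch of the ListName example rewritten that way (the sample HTML string here is made up for illustration):

```python
# Python 3 rewrite of the ListName example using html.parser.
from html.parser import HTMLParser

class ListName(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_h4 = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        if tag == "h4":           # opening <h4> seen: start collecting
            self.in_h4 = True

    def handle_endtag(self, tag):
        if tag == "h4":           # closing </h4> seen: stop collecting
            self.in_h4 = False

    def handle_data(self, data):
        if self.in_h4:
            self.names.append(data)

parser = ListName()
parser.feed("<div><h4>First</h4><p>x</p><h4>Second</h4></div>")
print(parser.names)  # → ['First', 'Second']
```

The flag-plus-list pattern is identical to the SGMLParser version; only the callback names differ.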

HTML parsing with the pyQuery library

pyQuery is a Python implementation of jQuery: it lets you query and parse HTML documents very conveniently using jQuery-style syntax. It needs to be installed before use, with easy_install pyquery, or on Ubuntu:


sudo apt-get install python-pyquery

The following example:


from pyquery import PyQuery as pyq
doc = pyq(url=r'http://169it.com/xxx.html')
cts = doc('.market-cat')

for i in cts:
    print '====', pyq(i).find('h4').text(), '===='
    for j in pyq(i).find('.sub'):
        print pyq(j).text(),
    print '\n'

HTML parsing with the BeautifulSoup library

One headache is that most web pages are not written in full compliance with the standards; they contain all kinds of baffling errors that make you want to track down whoever wrote the page and give them a piece of your mind. To deal with this, we can choose the famous BeautifulSoup to parse HTML documents: it has good fault tolerance.
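A minimal sketch of that fault tolerance, assuming the bs4 package is installed (pip install beautifulsoup4); the broken sample markup below is made up, with an unclosed <b> tag that BeautifulSoup repairs while building the tree:

```python
# BeautifulSoup tolerates malformed HTML: note the unclosed <b> below.
from bs4 import BeautifulSoup

broken = "<p>First</p><p>Second <b>bold</p>"
soup = BeautifulSoup(broken, "html.parser")

# find_all still locates both paragraphs despite the broken markup.
texts = [p.get_text() for p in soup.find_all("p")]
print(texts)
```

The same tag-and-text queries work regardless of how sloppy the input was, which is exactly why BeautifulSoup is the usual choice for real-world pages.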

That is the whole of this article: a detailed analysis and introduction to implementing a basic web crawler in Python. I hope it is helpful to everyone's learning.
