
[Python] Web Crawler (4): Introduction and practical applications of Opener and Handler

Jan 21, 2017 01:50 PM

Before getting into the main content, let's first introduce two methods in urllib2: info() and geturl().

The response object (or HTTPError instance) returned by urlopen has two very useful methods: info() and geturl().

1.geturl():

This returns the real URL that was fetched. It is useful because urlopen (or the opener object you use) may follow redirects, so the URL you end up with can differ from the URL you requested.

Take a shortened hyperlink from Renren as an example.


We create urllib2_test10.py to compare the original URL with the redirected one:

from urllib2 import Request, urlopen, URLError, HTTPError

old_url = 'http://rrurl.cn/b1UZuP'        # shortened link that redirects
req = Request(old_url)
response = urlopen(req)                    # urlopen follows the redirect
print 'Old url :' + old_url
print 'Real url :' + response.geturl()     # the URL we actually ended up at

After running it, you can see the real URL that the link redirects to:

(Screenshot: program output showing the original URL and the real, redirected URL)

2.info():

This returns a dictionary-like object that describes the page that was fetched, typically the headers sent by the server. It is currently an instance of httplib.HTTPMessage.

Typical headers include "Content-Length", "Content-Type", and so on.


We create urllib2_test11.py to test the use of info():

from urllib2 import Request, urlopen, URLError, HTTPError

old_url = 'http://www.baidu.com'
req = Request(old_url)
response = urlopen(req)
print 'Info():'
print response.info()    # the headers sent by the server for this page

The output is shown below; you can see the page's header information:

(Screenshot: the response headers printed by response.info())
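If you only need a particular header rather than the whole block, the object returned by info() can also be queried by name. A minimal sketch (the header names here are only examples):

from urllib2 import urlopen

response = urlopen('http://www.baidu.com')
headers = response.info()                      # httplib.HTTPMessage instance
print headers.getheader('Content-Type')        # look up a single header by name
print headers.get('Server', 'unknown')         # dict-style access with a default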

Let’s talk about two important concepts in urllib2: Openers and Handlers.

1.Openers:

When you fetch a URL, you use an opener (an instance of urllib2.OpenerDirector).

Normally we use the default opener, via urlopen, but you can also create customized openers.

2.Handlers:

Openers use handlers; all the "heavy lifting" is done by the handlers.

Each handler knows how to open URLs over a specific protocol, or how to handle various aspects of opening a URL.

For example, HTTP redirection or HTTP cookies.


You will want to create an opener when you need to fetch URLs with specific handlers installed, for example an opener that handles cookies, or one that does not follow redirects.


To create an opener, instantiate an OpenerDirector,

and then call .add_handler(some_handler_instance).
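As a minimal sketch of this manual approach (the handler set and the target URL are only examples; in practice build_opener, described next, does this wiring for you):

import urllib2

# Build a bare opener and attach handlers by hand
opener = urllib2.OpenerDirector()
opener.add_handler(urllib2.HTTPHandler())              # teaches the opener to speak HTTP
opener.add_handler(urllib2.HTTPRedirectHandler())      # follows redirects
opener.add_handler(urllib2.HTTPErrorProcessor())       # routes non-2xx responses to error handlers
opener.add_handler(urllib2.HTTPDefaultErrorHandler())  # raises HTTPError for unhandled codes

response = opener.open('http://www.baidu.com/')
print response.geturl()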

Alternatively, you can use build_opener, a more convenient function for creating opener objects; it requires only one function call.
build_opener adds several handlers by default and provides a quick way to add more handlers and/or override the default ones.
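For instance, here is a minimal sketch that adds cookie handling on top of the defaults (the cookielib jar and target URL are just example choices):

import urllib2
import cookielib

cookie_jar = cookielib.CookieJar()     # holds cookies between requests

# build_opener keeps the default handlers and adds our cookie processor
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))

response = opener.open('http://www.baidu.com/')
print len(cookie_jar), 'cookie(s) received'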

Other handlers can deal with proxies, authentication, and other common but somewhat special situations.
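Routing requests through a proxy, for example, is just another handler. A minimal sketch, assuming a local HTTP proxy is listening at 127.0.0.1:8087 (the address is hypothetical):

import urllib2

# ProxyHandler maps URL schemes to proxy addresses
proxy_handler = urllib2.ProxyHandler({'http': 'http://127.0.0.1:8087'})
opener = urllib2.build_opener(proxy_handler)

response = opener.open('http://www.baidu.com/')
print response.geturl()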


install_opener can be used to install a (global) default opener. This means that calls to urlopen will then use the opener you installed.

The Opener object has an open method.

This method can be used to fetch URLs directly, just like the urlopen function, so calling install_opener is usually not necessary, except as a convenience.
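The two usage styles side by side, as a minimal sketch (the target URL is only an example):

import urllib2

opener = urllib2.build_opener()            # an opener with the default handlers

# Either call the opener directly ...
response = opener.open('http://www.baidu.com/')

# ... or install it globally, after which plain urlopen() uses it
urllib2.install_opener(opener)
response = urllib2.urlopen('http://www.baidu.com/')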


With those two concepts covered, let's look at basic authentication, which uses the Openers and Handlers described above.

Basic Authentication

To demonstrate creating and installing a handler, we will use HTTPBasicAuthHandler.

When basic authentication is required, the server sends a header (along with a 401 error code) asking for authentication. It specifies the scheme and a 'realm', and looks like this: WWW-Authenticate: SCHEME realm="REALM".

For example:
WWW-Authenticate: Basic realm="cPanel Users"

The client must then send a new request with the correct username and password in the request headers.

This is "basic authentication". In order to simplify this process, we can create an instance of HTTPBasicAuthHandler and let opener use this handler.


HTTPBasicAuthHandler uses a password management object to handle the mapping of URLs and realms to usernames and passwords.

If you know what the realm is (from the header sent by the server), you can use HTTPPasswordMgr.


Usually people don't care what the realm is. In that case, the convenient HTTPPasswordMgrWithDefaultRealm can be used.

This lets you specify a default username and password for a URL.

They will be supplied when you do not provide a different combination for a specific realm.

We signal this by passing None as the realm argument to add_password.


The top-level URL is the first URL that requires authentication. URLs "deeper" than the URL you pass to .add_password() will also match.

Enough theory; let's demonstrate all of the above with an example.


We create urllib2_test12.py to test basic authentication:

# -*- coding: utf-8 -*-
import urllib2

# Create a password manager
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password

top_level_url = "http://example.com/foo/"

# If we knew the realm, we could use it instead of ``None``.
# password_mgr.add_password(None, top_level_url, username, password)
password_mgr.add_password(None, top_level_url, 'why', '1223')

# Create a new handler
handler = urllib2.HTTPBasicAuthHandler(password_mgr)

# Create the "opener" (an OpenerDirector instance)
opener = urllib2.build_opener(handler)

a_url = 'http://www.baidu.com/'

# Use the opener to fetch a URL
opener.open(a_url)

# Install the opener.
# From now on, all calls to urllib2.urlopen will use our opener.
urllib2.install_opener(opener)

Note: in the example above we only supplied our HTTPBasicAuthHandler to build_opener.

The default openers have handlers for the normal situations: ProxyHandler, UnknownHandler, HTTPHandler, HTTPDefaultErrorHandler, HTTPRedirectHandler, FTPHandler, FileHandler, HTTPErrorProcessor.

The top_level_url in the code can actually be a complete URL (including "http:", as well as the host name and optional port number).


For example: http://example.com/.

It can also be an "authority" (i.e. a hostname and an optional port number).

For example: "example.com" or "example.com:8080".

The latter contains the port number.

The above is the content of [Python] Web Crawler (4): Introduction and practical applications of Opener and Handler. For more related content, please follow the PHP Chinese website (www.php.cn)!

