python web crawler tutorial
When browsing the Internet we often come across pictures we would like to save and download, whether as desktop wallpaper or as design material. This article introduces how to implement the simplest web crawler in Python; readers who need it can refer to it. Let's take a look together.
Preface
A web crawler (also known as a web spider or web robot, and in the FOAF community often simply called a crawler) is a program or script that automatically fetches information from the World Wide Web according to certain rules. I have recently become very interested in Python crawlers, and I would like to share my learning path here; suggestions are welcome so that we can learn from each other and make progress together. Without further ado, let's get into the details:
1. Development tools
My tool of choice is Sublime Text 3; its simplicity appeals to me very much, and I recommend it to everyone. Of course, if your computer is powerful enough, PyCharm may suit you better.
For setting up a Python development environment in Sublime Text 3, this article is recommended:
[sublime builds a python development environment](http://www.jb51.net/article/51838.htm)
2. Introduction to crawlers
As the name suggests, a crawler crawls across the Internet like a bug, collecting pages as it goes. In this way, we can get what we want.
Since we want to crawl the Internet, we need to understand the URL, formally called the "Uniform Resource Locator" and informally known as a "link". Its structure consists mainly of three parts:
(1) Protocol: such as the HTTP protocol we commonly see in URLs.
(2) Domain name or IP address: Domain name, such as: www.baidu.com, IP address, that is, the corresponding IP after domain name resolution.
(3) Path: directory or file, etc.
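The standard library's urllib.parse module can split a URL into exactly these parts. A minimal sketch (the URL here is just an illustration):

```python
from urllib.parse import urlparse

# split a URL into its main components
url = "http://www.baidu.com/index.html"
parts = urlparse(url)
print(parts.scheme)  # the protocol: "http"
print(parts.netloc)  # the domain name: "www.baidu.com"
print(parts.path)    # the path: "/index.html"
```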
3. Developing the simplest crawler with urllib
| Module | Introduction |
|---|---|
| urllib.parse | parsing URLs into components |
| urllib.request | opening and reading URLs |
| urllib.response | response classes used by urllib |
| urllib.robotparser | parsing robots.txt files |
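Of the four modules above, urllib.robotparser can be tried without any network access by feeding it robots.txt lines directly. A minimal sketch (the rules and the example.com URLs are made up for illustration):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# feed robots.txt rules directly instead of fetching them
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
# ask whether a crawler may fetch a given URL
print(rp.can_fetch("*", "http://example.com/private/page"))  # False
print(rp.can_fetch("*", "http://example.com/public/page"))   # True
```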
The Baidu homepage is simple and elegant, which makes it well suited for our first crawler.
The crawler code is as follows:
```python
from urllib import request

def visit_baidu():
    URL = "http://www.baidu.com"
    # open the URL
    req = request.urlopen(URL)
    # read the response body (bytes)
    html = req.read()
    # decode the bytes as utf-8
    html = html.decode("utf-8")
    print(html)

if __name__ == '__main__':
    visit_baidu()
```
The result is as shown below:
You can compare this with the running result by right-clicking a blank area of the Baidu homepage and choosing "Inspect Element".
Of course, request can also build a Request object first, which can then be opened with the urlopen method.
The code is as follows:
```python
from urllib import request

def visit_baidu():
    # create a Request object
    req = request.Request('http://www.baidu.com')
    # open the Request object
    response = request.urlopen(req)
    # read and decode the response body
    html = response.read()
    html = html.decode('utf-8')
    print(html)

if __name__ == '__main__':
    visit_baidu()
```
The running result is the same as before.
(3) Error handling
Error handling is done through the urllib.error module, which mainly defines the URLError and HTTPError exceptions. HTTPError is a subclass of URLError, so an HTTPError can also be caught as a URLError.
An HTTPError carries the HTTP status in its code attribute.
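The subclass relationship can be verified directly in the interpreter:

```python
from urllib import error

# HTTPError is a subclass of URLError, so an "except URLError"
# clause also catches HTTPError
print(issubclass(error.HTTPError, error.URLError))  # True
```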
The code for handling HTTPError is as follows:
```python
from urllib import request
from urllib import error

def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        # print the HTTP status code, e.g. 404
        print(e.code)

if __name__ == '__main__':
    Err()
```
A URLError describes its cause through its reason attribute.
```python
from urllib import request
from urllib import error

def Err():
    url = "https://segmentf.com/"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.URLError as e:
        # print the cause of the failure
        print(e.reason)

if __name__ == '__main__':
    Err()
```
The running result is as shown in the figure:
To catch both, check for HTTPError first (the more specific subclass) and fall back to URLError. The code is as follows:
```python
from urllib import request
from urllib import error

# catch HTTPError first, then fall back to URLError
def Err():
    url = "https://segmentfault.com/zzz"
    req = request.Request(url)
    try:
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        print(html)
    except error.HTTPError as e:
        print(e.code)
    except error.URLError as e:
        print(e.reason)

if __name__ == '__main__':
    Err()
```
The above is the detailed content of this python web crawler tutorial.
