


A Simple Example of How to Implement an Image Crawler in Python
This article introduces a simple implementation of a Python image crawler. Readers who need such a tool can use it for reference.
Simple Implementation of a Python Image Crawler
I often browse Zhihu, and sometimes I want to save the pictures attached to certain questions, so I wrote this program. It is a very simple image crawler and can only grab the images that have already been loaded into the page (Zhihu loads more images as you scroll). Since I am not deeply familiar with this area, I will keep the explanation brief and simply record the code. If you are interested, you can use it directly; I have personally tested it on sites such as Zhihu.
A previous article showed how to open an image from a URL. The idea is to first see what a crawled image looks like, and then decide whether to filter it out or save it.
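As a quick refresher on that step, here is a minimal sketch of fetching a single image over HTTP and opening it with PIL; the image URL below is a placeholder for illustration, not one from the article:

import requests
from io import BytesIO
from PIL import Image

# Hypothetical image URL used only for illustration
img_url = "https://example.com/sample.jpg"
headers = {'User-Agent': 'Mozilla/5.0'}

resp = requests.get(img_url, headers=headers)
image = Image.open(BytesIO(resp.content))  # decode the downloaded bytes in memory
print(image.size)  # (width, height) tuple, used later for filtering
image.show()       # open the picture in the default viewer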
The requests library is used here to fetch the page. Note that the request needs a header that disguises the program as a browser; otherwise the server may reject it. BeautifulSoup then strips away the extra markup and extracts the image addresses. After downloading a picture, small images such as avatars and emoticons are filtered out based on their dimensions. Finally, there are several options for opening or saving the images, including OpenCV, skimage, and PIL.
The complete program is as follows:
# -*- coding: utf-8 -*-
import os
from io import BytesIO

import cv2
import numpy as np
import requests as req
from bs4 import BeautifulSoup
from PIL import Image

url = "https://www.zhihu.com/question/37787176"
# Disguise the program as a browser, otherwise the server may reject the request
headers = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/58.0.3029.96 Mobile Safari/537.36'}

response = req.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'lxml')
images = soup.find_all('img')
print("Found %d images in total" % len(images))

if not os.path.exists("images"):
    os.mkdir("images")

for i, img in enumerate(images):
    print("Processing image %d..." % (i + 1))
    img_src = img.get('src')
    if not img_src or not img_src.startswith("http"):
        continue
    img_path = "images/" + str(i + 1) + ".jpg"

    ## Option 1: use PIL
    # response = req.get(img_src, headers=headers)
    # image = Image.open(BytesIO(response.content))
    # w, h = image.size
    # print(w, h)
    # if w >= 500 and h >= 500:  # skip small images such as avatars and emoticons
    #     # image.show()
    #     image.save(img_path)

    ## Option 2: use OpenCV
    resp = req.get(img_src, headers=headers)
    image = np.asarray(bytearray(resp.content), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    if image is None:  # not a format OpenCV can decode
        continue
    h, w = image.shape[:2]  # shape is (rows, cols), i.e. (height, width)
    print(w, h)
    if w >= 400 and h >= 400:
        cv2.imshow("Image", image)
        cv2.waitKey(3000)
        # cv2.imwrite(img_path, image)

    ## Option 3: use skimage
    # from skimage import io
    # image = io.imread(img_src)
    # h, w = image.shape[:2]
    # print(w, h)
    # if w >= 500 and h >= 500:
    #     # io.imshow(image)
    #     # io.show()
    #     io.imsave(img_path, image)

print("Done!")
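As noted above, the program only sees the img tags already present in the HTML the server returned; pages that lazy-load pictures as you scroll often keep the real address in a data attribute rather than in src. A hedged sketch of how one might also check such an attribute (the attribute name data-original is an assumption and varies from site to site):

for img in images:
    # On lazy-loading pages the real URL may sit in a data attribute;
    # 'data-original' is an assumed name here and differs between sites.
    img_src = img.get('data-original') or img.get('src')
    if img_src and img_src.startswith("http"):
        print(img_src)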
