Table of Contents
What is concurrent programming
Application of concurrent programming in crawlers
Single-threaded version
Multi-threaded version
Asynchronous I/O version

How to apply concurrent programming in Python crawlers

May 14, 2023, 02:34 PM
python

What is concurrent programming

Concurrent programming is a style of program design in which multiple tasks can make progress within the same period of time: several tasks are started together and run without blocking one another. The benefit is improved program performance and responsiveness.

Application of concurrent programming in crawlers

Crawler programs are typical I/O-intensive tasks. For I/O-intensive tasks, multi-threading and asynchronous I/O are both good choices: when one part of the program is blocked on an I/O operation, other parts can keep running, so we don't waste a lot of time waiting.
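To see why threads help with I/O-bound work, consider a minimal sketch in which `time.sleep` stands in for a blocking network call: three half-second "downloads" overlap in time, so the total wall-clock time stays close to 0.5 seconds rather than 1.5.

```python
import threading
import time


def task(name, delay):
    # time.sleep stands in for a blocking I/O call such as a network request
    time.sleep(delay)


start = time.perf_counter()
workers = [threading.Thread(target=task, args=(f'task-{i}', 0.5)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.perf_counter() - start
# The three 0.5-second waits overlap, so elapsed is close to 0.5s, not 1.5s
print(f'elapsed: {elapsed:.2f}s')
```

While a thread sleeps (or waits on a socket), the GIL is released, so the other threads run; this is exactly the situation a crawler is in while waiting for HTTP responses.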

Single-threaded version

Let's first look at the single-threaded version of the crawler. It uses the requests library to fetch JSON data and saves each image locally with the built-in open function.


```python
"""
example04.py - single-threaded version of the crawler
"""
import os

import requests


def download_picture(url):
    filename = url[url.rfind('/') + 1:]
    resp = requests.get(url)
    if resp.status_code == 200:
        # Save the image under its original file name
        with open(f'images/beauty/{filename}', 'wb') as file:
            file.write(resp.content)


def main():
    if not os.path.exists('images/beauty'):
        os.makedirs('images/beauty')
    for page in range(3):
        resp = requests.get(f'https://image.so.com/zjl?ch=beauty&sn={page * 30}')
        if resp.status_code == 200:
            pic_dict_list = resp.json()['list']
            for pic_dict in pic_dict_list:
                download_picture(pic_dict['qhimg_url'])


if __name__ == '__main__':
    main()
```


On macOS or Linux, we can use the time command to measure the execution time and CPU utilization of the code above, as shown below.

time python3 example04.py

The following is the result of the single-threaded crawler code executed on my computer.

python3 example04.py 2.36s user 0.39s system 12% cpu 21.578 total

Here we only need to pay attention to two numbers: the total wall-clock time of 21.578 seconds, and the CPU utilization of 12%.
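The 12% figure follows from how `time` reports CPU utilization: it is (user + system) CPU time divided by wall-clock time. A quick check with the numbers above:

```python
# CPU utilization as reported by `time`: (user + sys) / wall-clock total
user_time, sys_time, total_time = 2.36, 0.39, 21.578
utilization = (user_time + sys_time) / total_time
print(f'{utilization:.0%}')
```

This comes out to roughly 13%, in line with the ~12% reported: the process spent almost 90% of the wall-clock time blocked waiting on I/O rather than doing work on the CPU.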

Multi-threaded version

We can use the thread pool technique introduced earlier to turn the code above into a multi-threaded version.


```python
"""
example05.py - multi-threaded version of the crawler
"""
import os
from concurrent.futures import ThreadPoolExecutor

import requests


def download_picture(url):
    filename = url[url.rfind('/') + 1:]
    resp = requests.get(url)
    if resp.status_code == 200:
        # Save the image under its original file name
        with open(f'images/beauty/{filename}', 'wb') as file:
            file.write(resp.content)


def main():
    if not os.path.exists('images/beauty'):
        os.makedirs('images/beauty')
    with ThreadPoolExecutor(max_workers=16) as pool:
        for page in range(3):
            resp = requests.get(f'https://image.so.com/zjl?ch=beauty&sn={page * 30}')
            if resp.status_code == 200:
                pic_dict_list = resp.json()['list']
                for pic_dict in pic_dict_list:
                    # Hand each download off to a worker thread
                    pool.submit(download_picture, pic_dict['qhimg_url'])


if __name__ == '__main__':
    main()
```


Execute the command shown below.

time python3 example05.py

The execution result of the code is as follows:

python3 example05.py 2.65s user 0.40s system 95% cpu 3.193 total
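One caveat with `pool.submit` as used above: the returned `Future` objects are discarded, so an exception raised inside `download_picture` is silently swallowed. A sketch of how results and errors could be collected with `as_completed` (here `download` is a hypothetical stand-in for the article's `download_picture`):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def download(url):
    # Hypothetical stand-in for download_picture; raises on a "bad" URL
    if 'bad' in url:
        raise ValueError(f'failed: {url}')
    return url


urls = ['https://example.com/a.jpg', 'https://example.com/bad.jpg']
results, errors = [], []
with ThreadPoolExecutor(max_workers=4) as pool:
    future_to_url = {pool.submit(download, u): u for u in urls}
    for future in as_completed(future_to_url):
        try:
            results.append(future.result())
        except Exception as exc:
            # The exception surfaces here instead of being silently dropped
            errors.append((future_to_url[future], exc))
```

Calling `future.result()` re-raises any exception from the worker thread, which makes failed downloads visible and retryable.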

Asynchronous I/O version

We can use aiohttp to rewrite the code above as an asynchronous I/O version. To perform both the network requests and the file writes asynchronously, we first need to install the third-party libraries aiohttp and aiofile.

pip install aiohttp aiofile

The following is the asynchronous I/O version of the crawler code.


```python
"""
example06.py - asynchronous I/O version of the crawler
"""
import asyncio
import os

import aiofile
import aiohttp


async def download_picture(session, url):
    filename = url[url.rfind('/') + 1:]
    async with session.get(url, ssl=False) as resp:
        if resp.status == 200:
            data = await resp.read()
            # Write the file asynchronously so the event loop is not blocked
            async with aiofile.async_open(f'images/beauty/{filename}', 'wb') as file:
                await file.write(data)


async def main():
    if not os.path.exists('images/beauty'):
        os.makedirs('images/beauty')
    async with aiohttp.ClientSession() as session:
        tasks = []
        for page in range(3):
            resp = await session.get(f'https://image.so.com/zjl?ch=beauty&sn={page * 30}')
            if resp.status == 200:
                pic_dict_list = (await resp.json())['list']
                for pic_dict in pic_dict_list:
                    tasks.append(asyncio.ensure_future(download_picture(session, pic_dict['qhimg_url'])))
        # Wait for all download tasks to finish
        await asyncio.gather(*tasks)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
```

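A side note on the entry point: `asyncio.get_event_loop()` followed by `run_until_complete` works, but on Python 3.7+ the same thing is usually written with `asyncio.run`, which creates and closes the event loop automatically (a minimal sketch with a stand-in coroutine, not tied to the crawler above):

```python
import asyncio


async def fetch_label():
    # Stand-in coroutine; in example06.py this would be main()
    await asyncio.sleep(0)
    return 'done'


if __name__ == '__main__':
    # asyncio.run manages event-loop creation and shutdown (Python 3.7+)
    result = asyncio.run(fetch_label())
    print(result)
```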

Execute the command shown below.

time python3 example06.py

The execution result of the code is as follows:

python3 example06.py 0.92s user 0.27s system 290% cpu 0.420 total

Compared with the single-threaded version, the multi-threaded and asynchronous I/O versions of the crawler run dramatically faster, and the asynchronous I/O version performs best of all.
