Table of Contents
1. The structure of the training data pipeline
2. TensorFlow tf.data API
3. Use multiprocessing to parallelize the data pipeline
4. Use Ray to parallelize the data pipeline

Create efficient deep learning data pipelines with Ray

Nov 02, 2023, 8:17 PM

The GPUs required for deep learning model training are powerful but expensive. To utilize them fully, developers need an efficient data pipeline that can deliver the next batch to the GPU as soon as it is ready for the next training step. Using Ray can significantly improve the efficiency of this pipeline.

1. The structure of the training data pipeline

First, let's take a look at the pseudocode of a model training loop:

for step in range(num_steps):
    sample, target = next(dataset)  # step 1
    train_step(sample, target)      # step 2

In step 1, we fetch the samples and labels of the next mini-batch. In step 2, they are passed to the train_step function, which copies them to the GPU, performs a forward and a backward pass to compute the loss and the gradients, and applies the optimizer update to the model's weights.
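
The article never shows train_step itself; the following is a minimal sketch of what such a function could look like in TensorFlow 2. The toy model, loss, and optimizer below are placeholders supplied purely for illustration, not part of the original article.

import tensorflow as tf

# Toy model, loss, and optimizer: placeholders for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(sample, target):
    # forward pass, loss, backward pass, and optimizer update
    with tf.GradientTape() as tape:
        prediction = model(sample, training=True)
        loss = loss_fn(target, prediction)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss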

Let's take a closer look at step 1. When the dataset is too large to fit in memory, step 1 fetches the next mini-batch from disk or over the network. Step 1 also includes a certain amount of preprocessing: input data must be converted into numeric tensors or collections of tensors before being fed to the model. In some cases, other transformations are performed on the tensors before they reach the model, such as normalization, rotation, random shuffling, and so on.
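
For example, a preprocessing function for image data might look like the sketch below; the normalization constants and the choice of augmentation are assumptions made for illustration, not taken from the original.

import tensorflow as tf

def preprocess(image, label):
    # convert raw input to a float tensor in [0, 1]
    image = tf.cast(image, tf.float32) / 255.0
    # normalize (illustrative constants)
    image = (image - 0.5) / 0.5
    # random augmentation
    image = tf.image.random_flip_left_right(image)
    return image, label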

If the workflow were executed strictly in sequence, that is, step 1 first and then step 2, the model would always be waiting for the next batch's I/O and preprocessing operations. The GPU would not be used efficiently: it would sit idle while the next mini-batch of data is loaded.

To solve this problem, the data pipeline can be viewed as a producer-consumer problem. The pipeline produces mini-batches of data and writes them to a bounded buffer. The model/GPU consumes mini-batches from the buffer, performs forward and backward passes, and updates the model weights. If the data pipeline can produce mini-batches as fast as the model/GPU consumes them, training is very efficient.
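
A minimal thread-based sketch of this producer-consumer pattern follows; make_batch and train_step are hypothetical stand-ins for the data-loading and model-update steps, not functions from the original article.

import queue
import threading

num_steps = 100
buffer = queue.Queue(maxsize=8)        # bounded buffer

def producer_loop():
    # make_batch is a hypothetical data-loading function
    for _ in range(num_steps):
        buffer.put(make_batch())       # blocks while the buffer is full

threading.Thread(target=producer_loop, daemon=True).start()

for _ in range(num_steps):
    sample, target = buffer.get()      # blocks while the buffer is empty
    train_step(sample, target)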


2. TensorFlow tf.data API

The TensorFlow tf.data API provides a rich set of functions for building data pipelines efficiently, and it uses background threads to prefetch mini-batches so that the model does not have to wait. Prefetching alone, however, is not enough: if producing mini-batches is slower than the GPU can consume them, you need parallelization to speed up reading and transforming the data. For this, TensorFlow provides interleave functionality to read data in parallel across multiple threads, and parallel mapping functionality to transform mini-batches on multiple threads.
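
A hedged sketch of both mechanisms is shown below; the file pattern and the parse_example function are placeholders for illustration, not from the original article.

import tensorflow as tf

def parse_example(record):
    # placeholder transform; a real pipeline would decode `record` here
    return record

dataset = (
    tf.data.Dataset.list_files("data/*.tfrecord")      # placeholder file pattern
    .interleave(tf.data.TFRecordDataset,               # read files in parallel
                cycle_length=4,
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .map(parse_example,                                # transform in parallel
         num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .prefetch(tf.data.experimental.AUTOTUNE)
)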

Because these APIs are based on multithreading, they may be restricted by Python's Global Interpreter Lock (GIL), which allows only one thread to execute Python bytecode at a time. If your pipeline consists of pure TensorFlow code, this limitation generally does not apply, because the TensorFlow core execution engine works outside the scope of the GIL. However, if a third-party library you depend on does not release the GIL, or performs a large amount of computation in Python, relying on multithreading to parallelize the pipeline is not feasible.

3. Use multiprocessing to parallelize the data pipeline

Consider the following generator function, which simulates loading data and performing some computation to produce mini-batches of samples and labels.

import time

import numpy as np

def data_generator():
    for _ in range(10):
        # simulate fetching from disk/network
        time.sleep(0.5)
        # simulate computation
        for _ in range(10000):
            pass
        yield (np.random.random((4, 1000000, 3)).astype(np.float32),
               np.random.random((4, 1)).astype(np.float32))

Next, use the generator in a dummy training pipeline and measure the average time it takes to generate mini-batches of data.

import tensorflow as tf

# output_types must match what the generator yields (float32)
generator_dataset = tf.data.Dataset.from_generator(
    data_generator,
    output_types=(tf.float32, tf.float32),
    output_shapes=((4, 1000000, 3), (4, 1))
).prefetch(tf.data.experimental.AUTOTUNE)

st = time.perf_counter()
times = []
for _ in generator_dataset:
    en = time.perf_counter()
    times.append(en - st)
    # simulate a training step
    time.sleep(0.1)
    st = time.perf_counter()
print(np.mean(times))

It was observed that the average time taken was about 0.57 seconds (measured on a Mac laptop with an Intel Core i7 processor). If this were a real training loop, GPU utilization would be quite low: the GPU would spend only 0.1 seconds on computation and then sit idle for 0.57 seconds waiting for the next batch of data.

To speed up data loading, you can use a multiprocessing-based generator.

import time
from multiprocessing import Process, Queue, cpu_count

import numpy as np

def mp_data_generator():
    def producer(q):
        for _ in range(10):
            # simulate fetching from disk/network
            time.sleep(0.5)
            # simulate computation
            for _ in range(10000000):
                pass
            q.put((np.random.random((4, 1000000, 3)).astype(np.float32),
                   np.random.random((4, 1)).astype(np.float32)))
        q.put("DONE")

    queue = Queue(cpu_count() * 2)
    num_parallel_processes = cpu_count()
    producers = []
    for _ in range(num_parallel_processes):
        p = Process(target=producer, args=(queue,))
        p.start()
        producers.append(p)

    # consume until every producer has signaled completion
    done_counts = 0
    while done_counts < num_parallel_processes:
        item = queue.get()
        if item == "DONE":
            done_counts += 1
        else:
            yield item
    for p in producers:
        p.join()

Now, if we measure the time spent waiting for the next mini-batch of data, we get an average of about 0.08 seconds, almost 7 times faster, although ideally this time would be close to 0.

Profiling reveals that a lot of time is spent serializing and deserializing the data: the producer processes return large NumPy arrays, which have to be pickled, pushed through the queue, and then unpickled in the main process. So how can large arrays be passed between processes more efficiently?

4. Use Ray to parallelize the data pipeline

This is where Ray comes into play. Ray is a framework for running distributed computations in Python. It comes with a shared-memory object store that transfers objects between processes efficiently; in particular, NumPy arrays in the object store can be shared between workers on the same node without any serialization or deserialization. Ray also makes it easy to scale data loading across multiple machines, and it uses Apache Arrow to serialize and deserialize large arrays efficiently.

Ray provides a utility function, from_iterators, that creates parallel iterators, which developers can use to wrap the data_generator function.

from multiprocessing import cpu_count

import ray

def ray_generator():
    num_parallel_processes = cpu_count()
    return ray.util.iter.from_iterators(
        [data_generator] * num_parallel_processes
    ).gather_async()
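
The article does not show the measurement loop for ray_generator. A plausible sketch, mirroring the earlier tf.data timing code, follows; the ray.init() call and the loop itself are assumptions, not code from the original.

import time

import numpy as np

ray.init()  # start Ray locally; assumed setup, not shown in the original

st = time.perf_counter()
times = []
for _ in ray_generator():
    en = time.perf_counter()
    times.append(en - st)
    # simulate a training step
    time.sleep(0.1)
    st = time.perf_counter()
print(np.mean(times))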

Using ray_generator, the measured time spent waiting for the next mini-batch of data is about 0.02 seconds, 4 times faster than the multiprocessing approach.

