


Lao Huang gives H100 a boost: Nvidia launches large model acceleration package, doubling Llama2 inference speed
The inference speed of large models has doubled in just one month!
Recently, NVIDIA announced a performance "booster pack" built specifically for the H100, designed to speed up LLM inference.
With it, you may no longer need to wait for the GH200 to ship next year.
GPU computing power has long constrained large-model performance; both hardware vendors and users want faster speeds.
As the largest hardware supplier behind large models, NVIDIA has been studying how to make that hardware run them faster.
Through cooperation with a number of AI companies, NVIDIA has now released the large-model inference optimization program TensorRT-LLM (referred to below simply as TensorRT).
TensorRT can not only double the inference speed of large models, but is also very convenient to use.
Without in-depth knowledge of C++ or CUDA, you can quickly customize optimization strategies and run large models faster on the H100.
NVIDIA scientist Jim Fan retweeted the news, commenting that NVIDIA's "other advantage" is the supporting software that squeezes the most out of GPU performance.
NVIDIA keeps breathing new life into its products through software, living up to Lao Huang's motto that "the more you buy, the more you save." That hasn't stopped some people from complaining that the products are too expensive, though.
Beyond the price, some netizens also questioned how well it actually works:
We've seen these "N-times faster" claims before, but when I run Llama 2 myself I still only get a few dozen tokens per second.
Whether TensorRT really delivers will take further testing to confirm. For now, let's take a closer look at it.
Double the inference speed of large models
TensorRT-LLM optimized H100, how fast is it for running large models?
Nvidia’s announcement provides data for two models, Llama 2 and GPT-J-6B.
On the optimized H100, Llama 2 inference runs 4.6 times faster than on the A100 and 1.77 times faster than on the unoptimized H100 from August.
For GPT-J-6B, inference is 8 times faster than on the A100 and 2 times faster than on the unoptimized August version.
TensorRT also provides an open-source, modular Python API for quickly customizing optimization schemes to different LLMs' requirements.
The API ties together the deep learning compiler, kernel optimizations, pre/post-processing, and multi-node communication.
It also ships pre-adapted versions of common models such as GPT (2/3) and Llama that can be used out of the box.
Through the latest open-source AI kernels in TensorRT, developers can also optimize the models themselves, including FlashAttention, the attention algorithm that dramatically speeds up Transformers.
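As a rough illustration of how little code the Python API asks for, here is a minimal sketch using the high-level LLM class from recent TensorRT-LLM releases; the model name and exact arguments are assumptions, and details may differ between versions.

```python
# Minimal sketch of TensorRT-LLM's high-level Python API (the LLM class from
# recent releases). Model name and arguments are illustrative; check the
# version you install for the exact interface.
from tensorrt_llm import LLM, SamplingParams

# Build (or load) an optimized engine for a Hugging Face Llama 2 checkpoint.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

params = SamplingParams(max_tokens=64, temperature=0.8)
outputs = llm.generate(["What does TensorRT-LLM optimize?"], params)

for out in outputs:
    print(out.outputs[0].text)
```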
At its core, TensorRT is a high-performance engine for optimizing deep learning inference. It speeds up LLM inference through a combination of techniques: mixed-precision computation, which lowers floating-point precision to cut both compute and memory-bandwidth requirements; dynamic graph optimization, which picks a better execution plan based on the characteristics of the input; and layer fusion, which merges multiple layers into a single, more efficient kernel to reduce compute and memory-access overhead. Together, these optimizations substantially improve the speed and efficiency of LLM inference.
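The memory side of reduced precision is easy to see concretely; a toy PyTorch illustration (not TensorRT code) of how halving precision halves the bytes that have to move:

```python
import torch

w_fp32 = torch.randn(4096, 4096)   # full-precision weights: 4 bytes per value
w_fp16 = w_fp32.half()             # half precision: 2 bytes per value

print(w_fp32.nbytes / 2**20, "MiB in FP32")   # ~64 MiB
print(w_fp16.nbytes / 2**20, "MiB in FP16")   # ~32 MiB
```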
The first factor is TensorRT's optimization of multi-GPU, multi-node cooperation.
A model as large as Llama cannot run on a single card; it takes multiple GPUs working together.
In the past, making that work meant splitting the model apart by hand.
With TensorRT, the system can automatically split the model and run it efficiently across multiple GPUs over NVLink.
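With the high-level Python API sketched above, asking for this split is (in recent TensorRT-LLM releases) roughly a one-argument change; the parameter name below follows the documented API but is worth verifying against the version you have installed.

```python
from tensorrt_llm import LLM

# Shard a 70B-class checkpoint across 4 GPUs linked by NVLink; TensorRT-LLM
# splits the weights and handles the inter-GPU communication itself.
llm = LLM(model="meta-llama/Llama-2-70b-hf", tensor_parallel_size=4)
```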
Secondly, TensorRT uses an optimized scheduling technique called dynamic batching.
During inference, an LLM actually produces its output by running the model iteration after iteration.
Dynamic batching evicts a finished sequence from the batch immediately and admits the next request, rather than waiting for the whole batch of tasks to complete before processing the next set.
In actual tests, dynamic batching roughly halved the GPU cost of serving LLM requests, significantly cutting operating costs.
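The scheduling idea is easier to see in a toy sketch than in prose. The following is not TensorRT-LLM's actual scheduler, just an illustration of in-flight batching: finished sequences leave the batch at once and waiting requests take their slots.

```python
from collections import deque

def serve(requests, max_batch=4):
    """Toy in-flight batching loop; requests = [(request_id, tokens_to_generate)]."""
    waiting, active, step = deque(requests), {}, 0
    while waiting or active:
        # Before every decoding step, fill any free slots from the queue.
        while waiting and len(active) < max_batch:
            rid, length = waiting.popleft()
            active[rid] = length
        # One iteration: each active sequence produces one token.
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:          # finished -> evicted immediately
                print(f"step {step}: request {rid} done")
                del active[rid]
        step += 1

serve([("a", 2), ("b", 5), ("c", 3), ("d", 4), ("e", 1)])
```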
Another key point is converting 16-bit floating-point values to 8-bit floating point (FP8), which reduces memory consumption.
Compared with the FP16 used during training, FP8 consumes fewer resources, and it is more accurate than INT8, so performance improves without hurting model accuracy.
Using the Hopper Transformer Engine, the system completes the FP16-to-FP8 conversion and compilation automatically, with no need to modify any model code by hand.
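The software side of that path is NVIDIA's Transformer Engine library. A minimal sketch of its documented FP8 autocast pattern follows; it is separate from TensorRT-LLM's own build flow, and the recipe options may differ across versions.

```python
# FP8 sketch with NVIDIA Transformer Engine (the library behind the Hopper
# Transformer Engine path). Illustrative only; not TensorRT-LLM's build flow.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

layer = te.Linear(4096, 4096, bias=True)      # FP8-capable linear layer
x = torch.randn(128, 4096, device="cuda")     # FP32 activations

# HYBRID recipe: E4M3 for forward tensors, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)                              # the matmul runs in FP8 on Hopper
```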
The early-access version of TensorRT-LLM is available for download now; the official release will arrive in a few weeks and will be integrated into the NeMo framework.
One More Thing
Whenever big news breaks, the internet's eagle-eyed "Leeuwenhoeks" are never far behind.
NVIDIA's announcement mentions cooperation with leading AI companies such as Meta, but OpenAI is nowhere on the list.
Some netizens spotted this and posted it on the OpenAI forum:
Let's see who didn't get a shout-out from Lao Huang (just kidding).
What other "surprises" are you expecting Lao Huang to bring us?