
'Squeezing' a 33-billion-parameter model onto a single consumer-grade GPU: 15% faster without sacrificing performance

Jun 07, 2023, 10:33 PM

The performance of pre-trained large language models (LLMs) on specific tasks keeps improving, and with appropriate prompting they generalize to an increasingly wide range of tasks. Many attribute this to the growth in training data and parameter counts. Recent trends, however, show that researchers are focusing more on smaller models trained on more data, which makes them easier and cheaper to use at inference time.

For example, LLaMA with 7B parameters was trained on 1T tokens; although its average performance is slightly below GPT-3's, it has only 1/25 as many parameters. On top of that, current compression techniques can shrink these models further, significantly reducing memory requirements while maintaining performance. With such improvements, well-performing models could be deployed on end-user devices such as laptops.

This, however, raises another challenge: how to compress these models to a size small enough to fit on such devices while preserving generation quality. Research shows that although compressed models produce answers with acceptable accuracy, existing 3-4-bit quantization techniques still degrade accuracy. Because LLM generation is sequential and depends on previously generated tokens, small relative errors accumulate and can severely corrupt the output. To ensure reliable quality, it is critical to design low-bit-width quantization methods that do not degrade prediction performance compared to 16-bit models.

However, quantizing every parameter to 3-4 bits often causes moderate or even severe accuracy loss, especially for smaller models in the 1-10B parameter range that are otherwise well suited for edge deployment.

To address this accuracy problem, researchers from the University of Washington, ETH Zurich, and other institutions proposed a new compressed format and quantization technique, SpQR (Sparse-Quantized Representation), achieving near-lossless compression of LLMs across model scales for the first time while reaching compression levels similar to previous methods.

SpQR works by identifying and isolating outlier weights that cause particularly large quantization errors, storing them at higher precision while compressing all other weights to 3-4 bits. This achieves less than 1% relative loss in perplexity accuracy for LLaMA and Falcon LLMs, and it allows a 33B-parameter LLM to run on a single 24GB consumer GPU with no performance degradation while being 15% faster.
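To make the idea concrete, here is a minimal sketch of the outlier-isolation step, assuming per-tensor scaling and a simple relative-error threshold (both are illustrative choices, not the authors' implementation): weights that the 3-bit grid cannot represent well are kept in fp16, everything else stays quantized.

```python
import torch

def split_outliers(w: torch.Tensor, bits: int = 3, tau: float = 0.1):
    """Quantize w to `bits` bits; keep high-error weights ("outliers") in fp16.

    The relative-error threshold `tau` is a hypothetical choice for illustration.
    """
    levels = 2 ** bits - 1
    zero = w.min()
    scale = ((w.max() - zero) / levels).clamp_min(1e-8)   # per-tensor scale (sketch only)
    q = torch.clamp(torch.round((w - zero) / scale), 0, levels)
    dequant = q * scale + zero                            # low-precision approximation
    err = (w - dequant).abs()
    outlier_mask = err > tau * w.abs().clamp_min(1e-8)    # weights hurt most by rounding
    return q.to(torch.uint8), scale, zero, outlier_mask, w[outlier_mask].half()
```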

The SpQR algorithm is efficient: it can both encode weights into this format and decode them efficiently at runtime. Specifically, the study provides an efficient GPU inference algorithm for SpQR that runs inference faster than 16-bit baseline models while delivering more than 4x memory compression.
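At inference time the format has to be decoded back into usable weights. The sketch below is a plain PyTorch stand-in for that decode path, not the custom CUDA kernel: it dequantizes the dense low-bit part, scatters the stored fp16 outliers back in, and runs an ordinary matmul (the actual kernel is far more efficient).

```python
import torch

def dequantize_and_matmul(q, scale, zero, outlier_mask, outlier_vals, x):
    """Decode low-bit codes back to a weight matrix and apply it to input x.

    Arguments follow the toy split_outliers() sketch above, not the real format.
    """
    w = q.float() * scale + zero            # dequantize the dense 3-bit part
    w[outlier_mask] = outlier_vals.float()  # restore the fp16 outlier weights
    return x @ w.t()                        # x: [batch, in_features]
```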


  • Paper address: https://arxiv.org/pdf/2306.03078.pdf
  • Project address: https://github.com/Vahe1994/SpQR

Method

This research proposes a new hybrid sparse-quantization format, Sparse-Quantized Representation (SpQR), which can compress accurately pre-trained LLMs to 3-4 bits per parameter while remaining nearly lossless.

Specifically, the study divides the process into two steps. The first is outlier detection: the outlier weights are isolated, and it is shown that quantizing them leads to high error, so these weights are kept at high precision while all other weights are stored at low precision (e.g., in a 3-bit format). The study then implements a variant of grouped quantization with very small group sizes and shows that the quantization scales themselves can be quantized into a 3-bit representation.
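The following is a hedged sketch of those two ingredients: grouped quantization with very small groups, plus a second level of quantization applied to the per-group scales. The group size of 16 and the 3-bit widths are illustrative assumptions rather than the exact configuration from the paper.

```python
import torch

def quantize_grouped(w_row: torch.Tensor, group_size: int = 16, bits: int = 3):
    """First-level quantization: one scale/zero per small group of weights."""
    levels = 2 ** bits - 1
    groups = w_row.reshape(-1, group_size)
    zeros = groups.min(dim=1).values
    scales = ((groups.max(dim=1).values - zeros) / levels).clamp_min(1e-8)
    q = torch.clamp(torch.round((groups - zeros[:, None]) / scales[:, None]), 0, levels)
    return q.to(torch.uint8), scales, zeros

def quantize_scales(scales: torch.Tensor, bits: int = 3):
    """Second-level ("secondary") quantization: compress the per-group scales to 3 bits."""
    levels = 2 ** bits - 1
    zero = scales.min()
    step = ((scales.max() - zero) / levels).clamp_min(1e-8)
    q_scales = torch.clamp(torch.round((scales - zero) / step), 0, levels)
    return q_scales.to(torch.uint8), step, zero
```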

SpQR greatly reduces the memory footprint of LLMs without sacrificing accuracy, while generating output 20%-30% faster than 16-bit inference.

In addition, the study found that the positions of sensitive weights in the weight matrix are not random but follow a specific structure. To highlight this structure during quantization, the study computes the sensitivity of each weight and visualizes these sensitivities for the LLaMA-65B model. Figure 2 below depicts the output projection of the last self-attention layer of LLaMA-65B.

[Figure 2: per-weight sensitivities in the output projection of the last self-attention layer of LLaMA-65B]
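As a rough illustration of what such a sensitivity score can look like (this is a simple proxy, not the paper's exact definition), one option is to weight each weight's rounding error by the second moment of the corresponding input feature, estimated from a small calibration batch:

```python
import torch

def sensitivity_proxy(w: torch.Tensor, x: torch.Tensor, bits: int = 3):
    """w: [out_features, in_features] weights; x: [n_samples, in_features] calibration inputs."""
    levels = 2 ** bits - 1
    zero = w.min()
    scale = ((w.max() - zero) / levels).clamp_min(1e-8)
    w_hat = torch.clamp(torch.round((w - zero) / scale), 0, levels) * scale + zero
    feat_sq = (x ** 2).mean(dim=0)        # E[x_j^2] for each input feature j
    return (w - w_hat) ** 2 * feat_sq     # larger value => more sensitive weight
```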

The study makes two changes to the quantization procedure: one to capture small groups of sensitive weights, and another to capture individual outliers. Figure 3 below shows the overall architecture of SpQR:

[Figure 3: overall architecture of SpQR]

The figure below shows the SpQR quantization algorithm. The code fragment on the left describes the overall procedure, while the snippets on the right contain the subroutines for secondary quantization and for finding outliers:

[SpQR quantization algorithm: main procedure with subroutines for secondary quantization and outlier detection]

Experiments

The study compares SpQR with two other quantization schemes, GPTQ and RTN (round-to-nearest), and uses two metrics to evaluate the quantized models. The first is perplexity, measured on datasets including WikiText2, Penn Treebank, and C4; the second is zero-shot accuracy on five tasks: WinoGrande, PiQA, HellaSwag, ARC-easy, and ARC-challenge.
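For reference, perplexity is the exponentiated average next-token cross-entropy over a held-out corpus. The sketch below assumes a Hugging Face-style causal LM whose forward pass returns `.logits`; the non-overlapping window evaluation is a simplification.

```python
import math
import torch

@torch.no_grad()
def perplexity(model, input_ids: torch.Tensor, window: int = 2048) -> float:
    """input_ids: [1, seq_len] token ids of the evaluation corpus."""
    total_nll, total_tokens = 0.0, 0
    for start in range(0, input_ids.size(1) - 1, window):
        chunk = input_ids[:, start:start + window + 1]
        logits = model(chunk[:, :-1]).logits        # predict each next token
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            chunk[:, 1:].reshape(-1),
            reduction="sum",
        )
        total_nll += loss.item()
        total_tokens += chunk.size(1) - 1
    return math.exp(total_nll / total_tokens)       # lower is better
```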

Main results. Figure 1 shows that at similar model sizes, SpQR performs significantly better than GPTQ (and the corresponding RTN), especially on smaller models. This improvement comes from SpQR achieving more compression while also reducing loss degradation.

[Figure 1: SpQR compared with GPTQ and RTN at similar model sizes]

The results in Table 1 and Table 2 show that for 4-bit quantization, SpQR's error relative to the 16-bit baseline is half that of GPTQ.

[Table 1 and Table 2: 4-bit quantization results relative to the 16-bit baseline]

Table 3 reports perplexity results for the LLaMA-65B model on different datasets.

[Table 3: LLaMA-65B perplexity on different datasets]

Finally, the study evaluates SpQR inference speed by comparing a specially designed sparse matrix multiplication algorithm with the one implemented in PyTorch (cuSPARSE); the results are shown in Table 4. Although standard sparse matrix multiplication in PyTorch is no faster than 16-bit inference, the sparse matrix multiplication algorithm designed in this work improves speed by about 20-30%.

[Table 4: inference speed of the custom sparse matmul kernel vs. standard PyTorch (cuSPARSE) sparse matmul]
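The PyTorch baseline in this comparison can be approximated along the following lines; the matrix size and the roughly 1% outlier density are illustrative assumptions, and the custom SpQR kernel itself is not reproduced here.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Dense weights with a small fraction treated as "outliers" (illustrative ~1% density).
w = torch.randn(4096, 4096, device=device)
mask = torch.rand_like(w) < 0.01
outliers = (w * mask).to_sparse()        # COO sparse tensor holding only the outliers
x = torch.randn(4096, 8, device=device)

y_sparse = torch.sparse.mm(outliers, x)  # PyTorch/cuSPARSE sparse matmul baseline
y_dense = w @ x                          # dense baseline for comparison
```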
