
Microsoft's 6-page paper explodes: ternary LLM, so delicious!


This is the conclusion put forward in a new study from Microsoft and the University of Chinese Academy of Sciences:

All LLMs will be 1.58 bits.


Specifically, the method proposed in the study is called BitNet b1.58, and it goes to the very "root" of a large language model: the parameters themselves.

Weights traditionally stored as 16-bit floating-point numbers (such as FP16 or BF16) are converted into ternary values, i.e., {-1, 0, 1}.


It should be noted that "1.58-bit" does not mean each parameter occupies 1.58 bytes of storage; rather, each parameter can be encoded with log2(3) ≈ 1.58 bits of information.
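To see where that number comes from: a parameter with three possible states carries log2(3) bits of information, as this one-liner confirms.

```python
import math

# three possible states {-1, 0, 1} -> log2(3) bits per parameter
print(math.log2(3))  # 1.5849625007211562
```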

After this conversion, the matrix computations involve only integer additions, allowing large models to keep a given level of accuracy while significantly reducing the storage space and computing resources they require.
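As a toy illustration of why (this is my own sketch, not an optimized kernel, and `ternary_matvec` is a hypothetical name): with ternary weights, every multiply in a matrix-vector product degenerates into adding an input, subtracting it, or skipping it.

```python
import torch

def ternary_matvec(w_q: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """w_q: (out, in) tensor with entries in {-1, 0, 1}; x: (in,) vector."""
    out = torch.empty(w_q.shape[0], dtype=x.dtype)
    for i, row in enumerate(w_q):
        # weight +1 -> add the input, weight -1 -> subtract it, weight 0 -> skip
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

w_q = torch.tensor([[1, 0, -1], [0, 1, 1]], dtype=torch.float32)
x = torch.tensor([2.0, 3.0, 4.0])
print(ternary_matvec(w_q, x))  # tensor([-2., 7.])
```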

For example, compared with LLaMA at the 3B model size, BitNet b1.58 is 2.71 times faster while using roughly a quarter of the GPU memory.

And when the model gets larger (for example, 70B), the speed improvement and memory savings become even more significant!

This tradition-subverting idea has made netizens' eyes light up, and the paper has drawn a lot of attention on X:


Amazed netizens called it a "game changer" while dusting off the old joke about Google's attention paper:

1 bit is all YOU need.


So how is BitNet b1.58 implemented? Let's continue reading.

Making the parameters ternary

This research is in fact an optimization the original team built on top of its previously published paper: an extra value, 0, is added to the original BitNet.


Overall, BitNet b1.58 is still based on the BitNet architecture (a Transformer), replacing nn.Linear with BitLinear.
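A minimal sketch of what that swap might look like in PyTorch (illustrative only, not the paper's code; `swap_linear_for_bitlinear` is a hypothetical helper, and the quantization logic inside BitLinear is sketched in the sections below):

```python
import torch.nn as nn

class BitLinear(nn.Linear):
    """Stand-in for the paper's BitLinear: an nn.Linear whose weights and
    activations would be quantized in forward() (see the sketches below)."""
    pass

def swap_linear_for_bitlinear(module: nn.Module) -> None:
    # Recursively replace every nn.Linear with a BitLinear of the same shape.
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and not isinstance(child, BitLinear):
            bit = BitLinear(child.in_features, child.out_features,
                            bias=child.bias is not None)
            bit.load_state_dict(child.state_dict())
            setattr(module, name, bit)
        else:
            swap_linear_for_bitlinear(child)
```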

As for the detailed optimizations, the first is the "adding a 0" just mentioned, i.e., weight quantization.

The weights of the BitNet b1.58 model are quantized to the ternary values {-1, 0, 1}, which is equivalent to using 1.58 bits per weight in the binary system. This quantization reduces the model's memory footprint and simplifies the computation.


Second, regarding the design of the quantization function: to constrain the weights to -1, 0, or 1, the researchers adopted a quantization function called absmean.


This function first scales the weight matrix by its average absolute value, then rounds each value to the nearest integer among {-1, 0, 1}.
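Following that description, absmean quantization might look like this in PyTorch (a sketch under the paper's stated formula; the function name is my own):

```python
import torch

def absmean_quantize(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # gamma: average absolute value of the whole weight matrix
    gamma = w.abs().mean()
    # scale by gamma, round to the nearest integer, clip into {-1, 0, 1}
    return (w / (gamma + eps)).round().clamp_(-1, 1)

w_q = absmean_quantize(torch.randn(4, 4))  # entries are now -1, 0, or 1
```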

The next step is activation quantization.

The quantization of activations follows the implementation in BitNet, except that the activations are no longer scaled to the range [0, Qb] before nonlinear functions; instead, all activations are scaled to [−Qb, Qb] to eliminate zero-point quantization.
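A sketch of per-token absmax activation quantization under that scheme (following BitNet's formulation; the exact details and the function name here are my assumptions, not the paper's code):

```python
import torch

def quantize_activations(x: torch.Tensor, bits: int = 8,
                         eps: float = 1e-5) -> torch.Tensor:
    Qb = 2 ** (bits - 1)  # e.g. 128 for 8-bit activations
    # per-token scale: the max absolute value along the feature dimension
    gamma = x.abs().max(dim=-1, keepdim=True).values
    # scale symmetrically into [-Qb, Qb]; no zero-point shift is needed
    return (x * Qb / (gamma + eps)).clamp(-Qb + eps, Qb - eps)
```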

It is worth mentioning that, to make BitNet b1.58 compatible with the open-source community, the team adopted LLaMA components such as RMSNorm and SwiGLU, so it can be easily integrated into mainstream open-source software.

Finally, for the experimental comparison, the team evaluated BitNet b1.58 against FP16 LLaMA LLM across different model sizes.


The results show that BitNet b1.58 starts to match the full-precision LLaMA LLM in perplexity at the 3B model size, while delivering significant improvements in latency, memory usage, and throughput.

And as the model size grows, this performance gap widens further.

Netizens: a 120B model could run on a consumer-grade GPU

As mentioned above, the study's unusual approach has sparked plenty of heated discussion online.

DeepLearning.scala author Yang Bo said:

Compared with the original BitNet, the biggest feature of BitNet b1.58 is that it allows 0-valued parameters. I think that by slightly modifying the quantization function, we may be able to control the proportion of 0 parameters. When that proportion is large, the weights can be stored in a sparse format, so that the average GPU memory occupied per parameter is even less than 1 bit. This is equivalent to a weight-level MoE, and I find it more elegant than regular MoE.
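A back-of-the-envelope check of that point: if a fraction p of the weights are exactly 0 and the rest split evenly between -1 and +1, the information content per weight is the entropy of that distribution, which does dip below 1 bit once zeros dominate (the helper below is my own illustration).

```python
import math

def ternary_entropy(p_zero: float) -> float:
    q = (1 - p_zero) / 2  # probability of -1 and of +1 each
    return -sum(p * math.log2(p) for p in (p_zero, q, q) if p > 0)

print(ternary_entropy(0.5))  # 1.5 bits per weight
print(ternary_entropy(0.9))  # ~0.57 bits per weight
```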

At the same time, he also pointed out BitNet's shortcomings:

The biggest shortcoming of BitNet is that, although it reduces memory overhead at inference time, the optimizer state and gradients still use floating-point numbers, so training remains very memory-intensive. I think if BitNet can be combined with techniques that save memory during training, then compared with traditional half-precision networks it could support more parameters under the same compute and memory budget, which would be a great advantage.

The current way to reduce the optimizer state's memory overhead is offloading. A way to save memory on gradients may be ReLoRA. However, the ReLoRA paper only experimented with a one-billion-parameter model, and there is no evidence it generalizes to models with tens or hundreds of billions of parameters.

[Image source: Zhihu, quoted with permission]

Meanwhile, some netizens did the math:

If the paper holds up, then we can run a 120B model on a 24GB consumer-grade GPU.
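A quick sanity check of that claim, counting weight storage only (activations and the KV cache are not included, and packing overhead is ignored):

```python
params = 120e9
bits_per_param = 1.58  # log2(3)
gib = params * bits_per_param / 8 / 2**30
print(f"{gib:.1f} GiB")  # ~22.1 GiB -> a tight fit on a 24GB card
```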


So what do you think of this new approach?
