Musk promised to open source Grok-1, and the open source community was ecstatic.
But modifying or commercializing on top of Grok-1 is still somewhat difficult:
Grok-1 is built with Rust + JAX, so for users accustomed to mainstream software ecosystems such as Python + PyTorch + HuggingFace, the barrier to entry is high.
The Colossal-AI team's latest work addresses exactly this need: it provides an easy-to-use Python + PyTorch + HuggingFace version of Grok-1 that reduces inference latency by nearly 4x!
Now, the model has been published on HuggingFace and ModelScope.
HuggingFace download link:
https://www.php.cn/link/335396ce0d3f6e808c26132f91916eae
ModelScope download link:
https://www.php.cn/link/7ae7778c9ae86d2ded133e891995dc9e
Drawing on Colossal-AI's extensive experience in system optimization for large AI models, the team quickly added tensor-parallelism support for Grok-1.
On a single server with 8×H800 80GB GPUs, inference latency is nearly 4x lower than with JAX, HuggingFace's auto device map, and other baseline approaches.
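To make the idea of tensor parallelism concrete: large weight matrices are split across GPUs so each device stores and computes only a shard. The following is a minimal, illustrative PyTorch sketch of a column-parallel linear layer; it is not Colossal-AI's actual implementation, and it assumes a `torch.distributed` process group has already been initialized (e.g. via torchrun).

```python
import torch
import torch.nn as nn
import torch.distributed as dist

class ColumnParallelLinear(nn.Module):
    """Illustrative column-parallel linear layer (inference-oriented sketch).

    The weight matrix of shape (out_features, in_features) is split along the
    output dimension across the ranks; each rank computes its own output shard,
    and the shards are gathered and concatenated at the end.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        world_size = dist.get_world_size()
        assert out_features % world_size == 0
        self.out_per_rank = out_features // world_size
        # Each rank only materializes its own shard of the weight.
        self.weight = nn.Parameter(torch.empty(self.out_per_rank, in_features))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local matmul on this rank's shard: (..., in) -> (..., out / world_size)
        local_out = x @ self.weight.t()
        # Gather the shards from all ranks and concatenate along the output dim.
        gathered = [torch.empty_like(local_out) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, local_out)
        return torch.cat(gathered, dim=-1)
```

In practice, a framework like Colossal-AI handles the sharding, communication, and kernel-level optimization automatically; the sketch only shows the basic idea of splitting one linear layer.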
After downloading and installing Colossal-AI, simply run the inference script:
./run_inference_fast.sh hpcaitech/grok-1
Model weights are downloaded and loaded automatically, and the inference results stay aligned with the original. The figure below shows a test run of Grok-1 greedy search.
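For reference, the general shape of loading a HuggingFace-hosted checkpoint like this in plain PyTorch looks roughly like the snippet below. This is a sketch only, assuming the `hpcaitech/grok-1` repository exposes a standard `AutoModelForCausalLM` interface with `trust_remote_code=True`; the Colossal-AI script above additionally sets up tensor parallelism, which is not shown here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hpcaitech/grok-1"

# Assumption: a tokenizer is bundled with the checkpoint; otherwise the
# original Grok-1 SentencePiece tokenizer would need to be loaded separately.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# The checkpoint ships custom modeling code, so trust_remote_code is assumed
# to be required; dtype and device placement here are illustrative only.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # naive layer-wise placement; slower than tensor parallelism
)

inputs = tokenizer(
    "The answer to life the universe and everything is", return_tensors="pt"
).to(model.device)

# Greedy search, matching the test shown in the figure.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```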
For more details, please refer to the grok-1 usage example:
https://www.php.cn/link/e2575ed7d2c481c414c10e688bcbc4cf
In this open-source release, xAI published the base model weights and network architecture of Grok-1.
Specifically, it is the original base model from the pre-training phase, which concluded in October 2023, and it has not been fine-tuned for any specific application (such as dialogue).
Architecturally, Grok-1 uses a mixture-of-experts (MoE) design with 8 experts and 314B (314 billion) total parameters. When processing each token, two of the experts are activated, for 86B active parameters.
The active parameter count alone already exceeds the 70B of the dense Llama 2 model; for an MoE architecture, it is no exaggeration to call a model of this size a behemoth.
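To make the "two of eight experts per token" idea concrete, here is a minimal top-2 MoE routing sketch in PyTorch. It illustrates the general mixture-of-experts pattern, not Grok-1's actual layer; the expert sizes, gating scheme, and class name are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTop2MoE(nn.Module):
    """Illustrative top-2 mixture-of-experts feed-forward layer."""
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                 # x: (tokens, d_model)
        scores = self.gate(x)             # (tokens, num_experts)
        top_w, top_idx = scores.topk(self.top_k, dim=-1)
        top_w = F.softmax(top_w, dim=-1)  # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e  # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += top_w[mask, k:k + 1] * expert(x[mask])
        return out

# Only 2 of the 8 experts run for each token, which is why the active parameter
# count (86B for Grok-1) is far smaller than the total count (314B).
layer = TinyTop2MoE()
tokens = torch.randn(5, 64)
print(layer(tokens).shape)  # torch.Size([5, 64])
```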
More parameter details are shown in the figure below.
On the GitHub page, the maintainers note that, due to the model's size (314B parameters), a machine with sufficient GPU memory is required to run Grok-1.
The MoE layer in the repository is not implemented efficiently; this implementation was chosen deliberately to avoid the need for custom kernels while verifying the model's correctness.
The model weights are distributed via a magnet link, with a total file size of close to 300GB.
It is worth mentioning that Grok-1 is released under the Apache 2.0 license, which is commercially friendly.
The Grok-1 repository currently has 43.9k stars on GitHub.
QbitAI has learned that Colossal-AI will soon release further optimizations for Grok-1, such as parallel acceleration and quantization to reduce GPU memory cost. Stay tuned.
Colossal-AI open source address: https://www.php.cn/link/b9531e7d2a8f38fe8dcc73f58cae9530