The largest open source model in China is here:
With 65 billion parameters trained on 2.6 to 3.2 trillion tokens, it is second in scale only to "Falcon" and the "Llama" series. Its performance is comparable to GPT-3.5, and it is now unconditionally free for commercial use.
The model demonstrates three key abilities:

1. Strong basic abilities: understanding, generation, reasoning and memory, with performance ranging from excellent to powerful in diversity, creativity and precision;
2. Expanded capabilities such as tool calling, code interpretation, and reflection and correction, laying a technical foundation for building AI agents and improving the model's practicality;
3. Significantly alleviated the hallucination problems that are common, and can be serious, in 7B and 13B models, reducing large models' "nonsense" and improving accuracy and professionalism.

The Yuanxiang (XVERSE) large model series is entirely self-developed and covers a number of key technologies and R&D innovations:

1. Complex distributed system design:
Drawing on the team's rich experience building large systems such as Tencent's Go AI "Jue Yi" and the Honor of Kings AI "Jue Wu", Yuanxiang self-developed key technologies including efficient operators, memory optimization, parallel scheduling strategies, data-computation-communication overlap, and platform-framework collaboration to create an efficient and stable training system. The peak compute utilization of its thousand-GPU cluster reaches 58.5%, among the best in the industry.

2. Comprehensively improved performance:
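The 58.5% figure is a measure of model FLOPs utilization (MFU): the fraction of the cluster's theoretical peak compute actually spent on the model. As an illustration only (the formula is the common 6-FLOPs-per-parameter-per-token approximation, and the throughput and hardware numbers below are hypothetical assumptions, not Yuanxiang's published figures), MFU can be estimated from observed training throughput:

```python
def mfu(n_params, tokens_per_sec, n_gpus, peak_flops_per_gpu):
    """Estimate model FLOPs utilization (MFU).

    Uses the common approximation that one training step costs about
    6 FLOPs per parameter per token (forward + backward pass).
    """
    achieved = 6 * n_params * tokens_per_sec   # FLOP/s actually spent on the model
    available = n_gpus * peak_flops_per_gpu    # theoretical cluster peak FLOP/s
    return achieved / available

# Hypothetical example: a 65B-parameter model on 1,000 GPUs,
# each with a 312 TFLOP/s peak, processing 450,000 tokens/s
ratio = mfu(65e9, 4.5e5, 1000, 312e12)  # ~0.56
```

The same formula makes clear why utilization is hard at scale: any time lost to communication, stragglers, or restarts lowers `tokens_per_sec` and drags the ratio down.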
The 65B training uses FlashAttention2 to accelerate computation and applies virtual pipeline technology on top of 3D parallelism, reducing the excessive bubble rate produced by long pipelines and improving compute efficiency. The context window is progressively extended from 8K to 16K, so the model not only handles complex tasks well, including long-text understanding, long-text generation and very long dialogues, but also supports tool calling, code interpretation, and reflection and correction, making it better suited to building AI agents.
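Virtual (interleaved) pipelining reduces idle "bubble" time by assigning each device several smaller model chunks instead of one large contiguous stage. A minimal sketch of the standard 1F1B bubble estimate (this is the well-known Megatron-LM approximation; the stage and microbatch counts are illustrative assumptions, not XVERSE's actual configuration):

```python
def bubble_fraction(pipeline_stages, microbatches, virtual_chunks=1):
    # 1F1B schedule estimate: bubble = (p - 1) / (v * m). Interleaving the
    # model into v virtual chunks per device shrinks each pipeline slot,
    # so the same warm-up/drain overhead wastes less total time.
    p, m, v = pipeline_stages, microbatches, virtual_chunks
    return (p - 1) / (v * m)

# Illustrative: 8 pipeline stages, 32 microbatches per batch
plain = bubble_fraction(8, 32)           # ~21.9% of time idle
interleaved = bubble_fraction(8, 32, 2)  # ~10.9% idle with 2 virtual chunks
```

The trade-off is extra communication: each virtual chunk boundary adds a point-to-point transfer, which is why the technique pays off mainly on long pipelines like a 65B run.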
3. Greatly improved training stability:
Because of the enormous amount of computation, communication congestion, chip overheating and compute-node failures became the norm for the 65B training; in the initial stage there were as many as eight failures in a single week. Through continuous optimization of cluster infrastructure operation, resource scheduling, and the collaboration between the training framework and the scheduling platform, Yuanxiang built a highly stable, low-interruption, strongly fault-tolerant training system, raising the weekly effective training rate to 98.6%.

In addition, midway through training, at nearly 1.6 trillion tokens, the loss function produced NaN values, which could have interrupted training. The industry practice is usually to delete the relevant data intervals after analysis. Based on experience, the team judged this to be a natural evolution of the model, chose not to delete the data, and instead skipped the affected parameter updates; the NaN problem was eventually resolved. Later analysis of intermediate states such as parameter values, activation values and gradient values suggested that the problem was related to changes in the maximum activation value of the model's last transformer block, and that it would gradually diminish and resolve itself as that maximum value decreased.

To give the industry a comprehensive, objective and long-term understanding of the Yuanxiang model's performance, the researchers drew on a series of authoritative academic evaluations and built an evaluation system covering 11 mainstream authoritative benchmarks across six dimensions, including Q&A, understanding, knowledge, reasoning, mathematics and code, which will continue to be used and iterated.
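The skip-on-NaN strategy described above can be sketched in a few lines. This is a simplified illustration of the general idea only (a toy SGD update, not Yuanxiang's training code): when the loss comes back NaN, the step's gradient update is discarded while the data itself is kept, and training continues from unchanged weights.

```python
import math

def sgd_step(params, grads, lr, loss):
    """Apply one SGD update, skipping it entirely if the loss is NaN."""
    if math.isnan(loss):
        # Skip this update: keep weights unchanged, keep the data in
        # the corpus, and let training continue at the next step.
        return list(params), False
    updated = [p - lr * g for p, g in zip(params, grads)]
    return updated, True
```

Compared with deleting the offending data intervals, skipping the update preserves the training distribution, at the cost of a few wasted steps if the instability persists.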
There is no domestic model of the same scale for XVERSE-65B to compare against. In comparative evaluations against foreign benchmarks, it surpasses GPT-3.5 on some metrics and is comparable overall; it comprehensively surpasses the open source benchmarks Llama2-70B and Falcon-180B; a gap with GPT-4 remains.
The fully upgraded XVERSE-13B-2 adds a large amount of high-quality data compared with models of the same size; its training data reaches 3.2 trillion tokens, greatly raising the capability ceiling of "small" models.
It excels in both liberal arts and sciences: it maintains its advantages in liberal arts, with Q&A improving by 18%, and makes great strides in the sciences, with coding improving by 149% and mathematics by 198%. In evaluations it comprehensively surpasses domestic and foreign open source benchmarks such as Llama2 and Baichuan2.
The Yuanxiang models can now be downloaded by searching for "XVERSE" on platforms such as GitHub, Hugging Face and ModelScope. After a simple registration, they can be used unconditionally and free of charge for commercial purposes, meeting most application and iteration needs of small and medium-sized enterprises, research institutions and individual developers.
Yuanxiang also provides a full range of technical services covering model training, inference, deployment and fine-tuning, empowering industries such as entertainment, finance and healthcare, and supporting scenarios such as intelligent customer service, creative writing and precise recommendation to create an industry-leading user experience.
In October 2023, Tencent Music became the first to announce a strategic partnership with Yuanxiang, jointly launching the lyraXVERSE accelerated model and comprehensively upgrading its music assistant "AI Xiaoqin"; the two companies will continue to explore cutting-edge AI and 3D technologies.