
The largest open source model in China is released for unconditional free commercial use! 65 billion parameters, trained on 2.6 trillion tokens

PHPz
Release: 2023-11-06 14:29:21

The largest open source model in China is here:

with 65 billion parameters, trained on 2.6 to 3.2 trillion tokens.

In parameter scale it ranks second only to Falcon and Llama; its performance is comparable to GPT-3.5, and it is now unconditionally free for commercial use.


It is XVERSE-65B, from Shenzhen-based Yuanxiang (XVERSE) Technology.

Developers can freely modify or distill it according to their computing power, resource constraints, and specific task requirements.

In addition to its large scale, it offers a 16K context window, supports more than 40 languages, and is accompanied by 7B and 13B versions.

So what exactly is behind it?

China's largest open source, commercially usable large model is here

Research shows that as parameter counts grow and high-quality training data accumulates, the performance of large models keeps improving.

The general industry consensus is that only past a threshold of 50 to 60 billion parameters do large models exhibit "emergent intelligence" and demonstrate strong performance across many tasks.

However, training a model of this magnitude is expensive and technically demanding, so such models have mostly been offered as closed source, paid services.

In the overseas open source ecosystem, benchmark models such as Llama2-70B and Falcon-180B are only "conditionally" open source, with commercial caps on monthly active users or revenue; and because their training data contains little Chinese, they have obvious shortcomings in Chinese-language capability.

To promote the development of a domestic open source ecosystem and industrial applications for large models, Yuanxiang (XVERSE) announced it is open sourcing XVERSE-65B, a high-performance general-purpose model with 65 billion parameters, for unconditional free commercial use. The 13B model has also been fully upgraded, raising the capability ceiling of "small" models.

Yao Xing, founder of Yuanxiang, presenting a "promising" 65B model.

The XVERSE-65B base model was trained from scratch on 2.6 trillion tokens of high-quality data. Its context window has been expanded to 16K, and it supports more than 40 languages, including Chinese, English, Russian, and French.

Three abilities have been significantly improved:

1. Basic capabilities such as understanding, generation, reasoning, and memory: the model's diversity, creativity, and precision have gone from excellent to powerful;

2. Expanded capabilities such as tool calling, code interpretation, and reflection-and-correction, laying a technical foundation for building AI agents and improving the model's practicality (a minimal sketch of the tool-calling pattern follows this list);

3. Significantly alleviated the hallucination problems that are common, and can be severe, in the 7B and 13B models, reducing large models' tendency to "talk nonsense" and improving accuracy and professionalism.
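To make the tool-calling idea concrete, here is a minimal sketch of the general pattern such capabilities enable. The JSON protocol, the "calculator" tool, and the model_generate callable are all illustrative assumptions, not XVERSE's documented interface:

```python
import json

# Hypothetical tool registry; the names and JSON protocol below are
# illustrative assumptions, not XVERSE's documented interface.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool, demo only
}

def run_agent_step(model_generate, user_query):
    """One tool-use round: ask the model, run any requested tool, then
    feed the result back so the model can produce a final answer."""
    prompt = (
        'You may call a tool by replying with JSON such as '
        '{"tool": "calculator", "input": "2+2"}.\n'
        f"Question: {user_query}"
    )
    reply = model_generate(prompt)
    try:
        call = json.loads(reply)
        result = TOOLS[call["tool"]](call["input"])
        # Second pass: let the model turn the tool output into an answer.
        return model_generate(f"{prompt}\n{reply}\nTool result: {result}\nFinal answer:")
    except (json.JSONDecodeError, KeyError, TypeError):
        return reply  # the model answered directly without a tool call
```

Any model exposing a plain text-in, text-out generate function could be dropped in as model_generate here; reflection-and-correction follows the same loop shape, with the model critiquing its own previous reply instead of a tool result.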

The Yuanxiang large model series is entirely self-developed and covers a number of key technologies and R&D innovations:

1. Complex distributed system design:

Drawing on the team's rich experience building large systems such as Tencent's Go AI "Jue Yi" and the Honor of Kings AI "Jue Wu", Yuanxiang self-developed key technologies including efficient operators, memory optimization, parallel scheduling strategies, data-compute-communication overlap, and platform-framework collaboration to create an efficient and stable training system. Peak compute utilization on its thousand-GPU cluster reaches 58.5%, among the best in the industry.

2. Comprehensive performance improvements:

The 65B training uses FlashAttention2 to accelerate computation and, on top of 3D parallelism, applies virtual pipeline technology to reduce the excessive bubble rate produced by long pipelines, improving compute efficiency (a generic illustration of the attention acceleration follows below). The context window was gradually extended from 8K to 16K, so the model not only handles complex tasks well, including long-text understanding, long-text generation, and ultra-long dialogues, but also gains tool calling, code interpretation, and reflection-and-correction abilities, making it better suited for building AI agents.
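As a generic illustration of the kind of fused-kernel attention FlashAttention2 provides (this is not XVERSE's actual training code), PyTorch's scaled_dot_product_attention can dispatch to a FlashAttention-style kernel on supported GPUs, avoiding materializing the full attention matrix even at 16K context:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes only; a fused kernel never materializes the
# (seq_len x seq_len) attention matrix, which is what makes 16K feasible.
batch, heads, seq_len, head_dim = 2, 16, 16384, 64
q = torch.randn(batch, heads, seq_len, head_dim, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Force the FlashAttention-style backend (PyTorch 2.x API).
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False,
                                    enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```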

3. Greatly improved training stability:

Because of the enormous amount of computation involved, communication congestion, chip overheating, and compute node failures were the norm during 65B training; in the initial stage there were as many as eight failures in a single week.

Through continuous optimization of cluster infrastructure operations, resource scheduling, the training framework, and scheduling platform collaboration, Yuanxiang created a highly stable, low-interruption, strongly fault-tolerant training system, raising the weekly effective training rate to 98.6%.

In addition, midway through training, at nearly 1.6 trillion tokens, the loss function produced NaN values, which could have interrupted training.

In this situation, the usual industry practice is to analyze and then delete the affected data intervals.

Based on experience, the team judged this to be a natural stage in the model's evolution, chose not to delete the data, and instead directly skipped the affected parameter updates; the NaN problem was eventually resolved (a minimal sketch of this skip-on-NaN pattern appears below).

Later analysis of intermediate states such as parameter values, activation values, and gradient values showed that the problem was likely related to changes in the maximum activation value of the model's last transformer block, and that it would resolve itself as that maximum gradually decreased.
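The XVERSE training loop itself is not public, but the skip-on-NaN update described above can be sketched in generic PyTorch as follows; treat this as an illustrative assumption rather than the team's actual code:

```python
import math
import torch

def train_step(model, batch, optimizer, loss_fn):
    """One training step that skips the parameter update when the loss
    is NaN/Inf, rather than deleting the offending data from the corpus."""
    optimizer.zero_grad(set_to_none=True)
    loss = loss_fn(model(batch["input_ids"]), batch["labels"])
    if not math.isfinite(loss.item()):
        # Skip this update entirely; the data stays in the training set,
        # and training simply proceeds to the next batch.
        return None
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice here matches the article's point: the data interval is preserved, and only the gradient step for the offending batch is dropped.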


Performance comparable to GPT-3.5

To ensure that the industry can form a comprehensive, objective, and long-term view of the Yuanxiang model's performance, the researchers drew on a series of authoritative academic evaluations and built an assessment system of 11 mainstream authoritative benchmarks covering six dimensions: question answering, comprehension, knowledge, reasoning, mathematics, and code. The system will continue to be used and iterated.

There is no domestic model at the same level for XVERSE-65B to compare against. In comparative evaluations against foreign benchmarks, it surpassed GPT-3.5 on some metrics and matched it overall; it comprehensively surpassed the open source benchmarks Llama2-70B and Falcon-180B; a gap with GPT-4 remains.


The fully upgraded XVERSE-13B-2 adds a large amount of high-quality data compared with models of the same size; its training data reaches 3.2 trillion tokens, greatly raising the capability ceiling of "small" models.

It is strong in both "liberal arts" and "sciences": it keeps its advantage in liberal-arts tasks, with question answering improving by 18%, and makes great strides in science tasks, with coding improving by 149% and mathematics by 198%. In evaluations it comprehensively surpassed domestic and foreign open source benchmarks such as Llama2 and Baichuan2.


The Yuanxiang models can now be downloaded by searching for "XVERSE" on platforms such as GitHub, Hugging Face, and ModelScope (Moda). After a simple registration they are unconditionally free for commercial use, and they can meet most of the application and iteration needs of small and medium-sized enterprises, research institutions, and individual developers (a loading sketch follows).
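For illustration, here is a minimal sketch of loading one of the released checkpoints with Hugging Face transformers. The repository id "xverse/XVERSE-13B" is an assumption based on the "search for XVERSE" hint above; verify the exact name on the Hub before use:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the "search for XVERSE" hint above; verify on the Hub.
repo = "xverse/XVERSE-13B"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

# Simple completion to sanity-check the download.
inputs = tokenizer("北京的景点:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```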

Yuanxiang also provides a full range of technical services covering model training, inference, deployment, and fine-tuning, serving industries such as entertainment, finance, and healthcare, and helping create industry-leading user experiences in scenarios such as intelligent customer service, creative writing, and precise recommendation.

In October 2023, Tencent Music was the first to announce a strategic partnership with Yuanxiang, jointly launching the lyraXVERSE accelerated model and comprehensively upgrading its music assistant "AI Xiaoqin"; the two companies will continue to explore cutting-edge AI and 3D technologies.
