Several large models with 7 to 13 billion parameters have previously been open-sourced in China; practical deployments have begun to appear, and an open-source ecosystem has taken initial shape. As the complexity and data volume of tasks such as agent workloads grow, industry and community demand for larger models has become increasingly urgent.
Research shows that model performance continues to improve as parameter counts grow and more high-quality training data is used. The general industry consensus is that once a model reaches the 50-to-60-billion-parameter threshold, capabilities can "emerge," yielding powerful multi-task performance. However, training a model of this scale is expensive and technically demanding, so such models are currently offered mainly as closed-source paid services.
In the overseas open-source ecosystem, benchmark models such as Llama2-70B and Falcon-180B are only conditionally open source, with commercial caps on monthly active users or revenue, and they show clear weaknesses in Chinese-language capability due to a lack of Chinese training data. In addition, the recently promulgated US AI chip export ban may further constrain the pace of China's large-model industry. The industry urgently needs a high-performance domestic large model to fill this ecosystem gap and provide Chinese-language applications with stronger understanding, reasoning, and long-text generation capabilities.
In this context, Yuanxiang (XVERSE) announced the open-source release of a 65-billion-parameter high-performance general-purpose large model. In addition, its 13B model has been fully upgraded, raising the capability ceiling of smaller models. This allows small and medium-sized enterprises, researchers, and AI developers to achieve "large-model freedom" sooner: they can freely use, modify, or distill Yuanxiang's models according to their compute budgets, resource constraints, and specific task requirements, driving breakthroughs in research and application innovation.
The model supports more than 40 languages. Adhering to its high-performance positioning, Yuanxiang has significantly improved the 65B model in three areas:
- Fundamental capabilities, from understanding, generation, reasoning, and memory to model diversity, creativity, and precision, upgraded from excellent to powerful;
- Expanded capabilities such as tool calling, code interpretation, and reflection and correction, laying a technical foundation for building intelligent agents and improving the model's practicality;
- Significantly alleviated the common and potentially serious hallucination problems seen in 7B and 13B models, improving accuracy and professionalism.
The Yuanxiang large-model series is self-developed across the entire chain, covering a number of key technologies and R&D innovations:
2. Comprehensive performance improvements: FlashAttention2 accelerates computation in 65B training, and virtual-pipeline technology built on 3D parallelism reduces the excessive bubble rate produced by long pipelines, improving compute and inference efficiency. The context window was gradually extended from 8K to 16K, enabling the model not only to complete complex tasks such as long-text understanding, long-text generation, and very long dialogues, but also to expand its tool-calling, code-interpretation, and reflection-and-correction capabilities, better supporting the construction of intelligent agents (AI Agents).
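The bubble-rate reduction from virtual (interleaved) pipelining can be illustrated with the standard 1F1B pipeline-schedule formula. The stage, microbatch, and chunk counts below are hypothetical examples, not XVERSE's actual configuration:

```python
# Pipeline-parallel bubble fraction for a Megatron-style 1F1B schedule.
# Illustrative sketch only; the parameter values are hypothetical.

def bubble_fraction(p: int, m: int, v: int = 1) -> float:
    """Idle (bubble) time relative to ideal compute time.

    p: number of pipeline stages
    m: number of microbatches per batch
    v: virtual (interleaved) model chunks per stage; v=1 is plain 1F1B
    """
    return (p - 1) / (v * m)

# A long pipeline with few microbatches wastes a lot of time in bubbles...
plain = bubble_fraction(p=8, m=16)             # 7/16
# ...while interleaving v chunks per stage shrinks the bubble v-fold.
interleaved = bubble_fraction(p=8, m=16, v=4)  # 7/64

print(f"plain 1F1B bubble fraction:    {plain:.3f}")
print(f"virtual pipeline (v=4) bubble: {interleaved:.3f}")
```

The design intuition: interleaving splits each stage's layers into `v` smaller chunks, so each pipeline "slot" is shorter and the startup/drain bubble shrinks proportionally, at the cost of more frequent inter-stage communication.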
3. Greatly improved training stability: given the enormous amount of computation, communication congestion, chip overheating, and compute-node failures were the norm in 65B training, with up to eight failures a week in the early stages. Through continuous optimization of cluster infrastructure operations, resource scheduling, the training framework, and coordination with the scheduling platform, Yuanxiang built a highly stable, low-interruption, fault-tolerant training system, raising the weekly effective training rate to 98.6%.
In addition, midway through training on nearly 1.6 trillion tokens, the loss function produced NaN values, which could have interrupted training. The common industry practice is to analyze and then delete the relevant data intervals. Based on experience, the team judged this to be a natural evolution of the model, chose not to delete the data, and instead simply skipped the affected parameter updates; the NaN problem was ultimately resolved. Later analysis of intermediate states such as parameter values, activation values, and gradients suggested the problem was related to a change in the maximum activation value of the model's last transformer block, and that it would resolve itself as that maximum gradually decreased.
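The "skip the affected parameter updates" strategy amounts to a guard in the training loop: when the loss is non-finite, discard that step's update instead of applying it. A minimal, framework-agnostic sketch (the toy scalar parameter, loss stream, and update rule are hypothetical, not XVERSE's actual training code):

```python
import math

def train_steps(steps, lr=0.1):
    """Toy scalar training loop that skips updates on non-finite loss.

    steps: iterable of (loss, gradient) pairs
    Returns (final_param, number_of_skipped_steps).
    """
    param, skipped = 1.0, 0
    for loss, grad in steps:
        if not math.isfinite(loss):   # NaN or inf loss: drop this update,
            skipped += 1              # but keep the data in the corpus
            continue
        param -= lr * grad            # normal SGD-style update
    return param, skipped

# The two NaN steps are skipped; the remaining three updates are applied.
stream = [(0.9, 1.0), (float("nan"), 5.0), (0.7, 1.0),
          (float("nan"), -3.0), (0.5, 1.0)]
param, skipped = train_steps(stream)
print(f"param ≈ {param:.3f}, skipped = {skipped}")
```

Note the contrast with the conventional fix mentioned above: data intervals are left untouched, and only the optimizer step is suppressed for the anomalous iterations.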
# Comprehensive evaluation: 65B performance comparable to GPT-3.5
To ensure the industry can gain a comprehensive, objective, and long-term understanding of the Yuanxiang model's performance, the researchers drew on a series of authoritative academic evaluations to develop an evaluation system covering 11 mainstream authoritative benchmarks across six dimensions, including question answering, comprehension, knowledge, reasoning, mathematics, and code, which will continue to be used and iterated on.
XVERSE-65B has no domestic model of the same scale to compare against. In comparative evaluations against foreign benchmarks, it surpassed GPT-3.5 on some metrics and matched it in overall performance; it comprehensively surpassed the open-source benchmarks Llama2-70B and Falcon-180B; a gap with GPT-4 remains.
## 13B fully upgraded, greatly raising the capability ceiling of small models

A large amount of high-quality data was added, bringing the training data to 3.2 trillion tokens. The model is strong in both the humanities and the sciences: it maintains its advantage in humanities tasks, with question answering improving by 18%, and makes great strides in the sciences, with code improving by 149% and mathematics by 198%. In evaluations it comprehensively surpassed domestic and foreign open-source benchmarks such as Llama2 and Baichuan2.
## Opening a new era of large-model applications
The Yuanxiang models can be downloaded by searching for "XVERSE" on GitHub, Hugging Face, ModelScope, and other platforms. After simple registration, they are unconditionally free for commercial use, meeting most of the application and iteration needs of small and medium-sized enterprises, research institutions, and individual developers.
Yuanxiang also provides a full range of technical services covering model training, inference, deployment, and fine-tuning, empowering industries such as entertainment, finance, and healthcare, and helping create industry-leading user experiences in scenarios such as intelligent customer service, creative writing, and precise recommendation. In October 2023, Tencent Music became the first to announce a strategic partnership with Yuanxiang, jointly launching the lyraXVERSE accelerated model and comprehensively upgrading its music assistant "AI Xiaoqin"; the two will continue to explore cutting-edge AI and 3D technologies to lead innovation in music entertainment. Yuanxiang regards the continued exploration of cutting-edge technology as its driving force: the XVERSE open-source series is committed to promoting domestic substitution and continuous technological innovation in large models, injecting strong momentum into the real economy and the digital economy. "We look forward to joining hands with enterprises and developers to jointly open a new era of large-model applications!"
About Yuanxiang: Yuanxiang (XVERSE) was founded in Shenzhen in early 2021. It is a leading domestic AI and 3D technology company committed to building an AI-driven one-stop platform for 3D content production and consumption, with the vision of "defining your world."
The above is the detailed content of "Yuanxiang XVERSE-65B: China's largest open-source model arrives, high-performance and unconditionally free for commercial use."