
Tsinghua University and Zhipu AI open source GLM-4: launching a new revolution in natural language processing

WBOY
Release: 2024-06-12 20:38:02

Since the launch of ChatGLM-6B on March 14, 2023, the GLM series models have received widespread attention and recognition. Especially after ChatGLM3-6B was open-sourced, developers have been eagerly anticipating the fourth-generation model from Zhipu AI. That expectation has now been fully met with the release of GLM-4-9B.

The birth of GLM-4-9B

To give small models (10B parameters and below) more powerful capabilities, the GLM technical team spent nearly half a year of exploration before launching this new fourth-generation GLM open source model: GLM-4-9B. The model greatly compresses model size while preserving accuracy, delivering faster inference and higher efficiency. The team's exploration does not end here, and it will continue working to release even more competitive open source models.

Innovative pre-training technology

During pre-training, we introduced a large language model to screen the data, ultimately obtaining 10T of high-quality multilingual data, more than three times the amount used for the ChatGLM3-6B model. In addition, we adopted FP8 precision for efficient pre-training, improving training efficiency by 3.5 times over the third-generation model. Taking users' memory constraints into account, the parameter count of GLM-4-9B was increased from 6B to 9B. Ultimately, we increased the pre-training compute by 5 times to maximize capability under limited memory conditions.
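The article does not describe the screening pipeline itself. Purely as an illustration of the general shape of corpus filtering, the sketch below scores documents with a simple heuristic and keeps only those above a threshold; the real GLM pipeline used an LLM as the scorer, and the heuristics and threshold here are invented:

```python
def quality_score(doc: str) -> float:
    """Toy heuristic quality score in [0, 1].

    Stands in for the LLM-based scorer mentioned in the article;
    the actual GLM filter is not public.
    """
    if not doc:
        return 0.0
    words = doc.split()
    # Penalize very short documents and documents dominated by symbol noise.
    length_term = min(len(words) / 100.0, 1.0)
    alnum_ratio = sum(ch.isalnum() or ch.isspace() for ch in doc) / len(doc)
    return length_term * alnum_ratio


def screen(corpus: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents whose quality score clears the threshold."""
    return [doc for doc in corpus if quality_score(doc) >= threshold]
```

At 10T-token scale this kind of filter would of course run as a distributed batch job; the sketch only shows the score-then-threshold structure.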

Excellent performance display

GLM-4-9B is a comprehensive technical upgrade, with stronger inference performance, better context handling, multi-language support, multi-modal processing, and full All Tools calling. These upgrades give users more stable, reliable, and accurate technical support, improving work efficiency and quality.

The GLM-4-9B series includes multiple versions:

  • Basic version: GLM-4-9B (8K)
  • Conversational version: GLM-4-9B-Chat (128K)
  • Extra-long context version: GLM-4-9B-Chat-1M (1M)
  • Multi-modal version: GLM-4V-9B-Chat (8K)
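The variants above differ mainly in context window. As a toy illustration of choosing a chat variant for a given input size (model names come from the list above; treating "128K" and "1M" as round token counts is my simplification):

```python
def pick_chat_variant(prompt_tokens: int) -> str:
    """Return the smallest chat variant whose context window fits the prompt.

    Illustrative helper only; windows are taken as round numbers
    (128K -> 128,000 tokens, 1M -> 1,000,000 tokens).
    """
    if prompt_tokens <= 128_000:
        return "GLM-4-9B-Chat"
    if prompt_tokens <= 1_000_000:
        return "GLM-4-9B-Chat-1M"
    raise ValueError("prompt exceeds the largest available context window")
```

In practice you would count tokens with the model's own tokenizer before choosing, since character counts and token counts differ.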

GLM-4-9B’s powerful abilities

Basic abilities

Built on strong pre-training, GLM-4-9B's overall capability in Chinese and English improved by 40% over ChatGLM3-6B. Significant gains were achieved in particular on Chinese alignment (AlignBench), instruction following (IFEval), and engineering code (NaturalCodeBench). Even against the Llama 3 8B model, which was trained on more data, GLM-4-9B is not inferior at all and leads in English performance. On Chinese subject-knowledge tasks, GLM-4-9B improved by up to 50% [performance evaluation chart].

Long text processing capability


The context length of the GLM-4-9B model is expanded from 128K to 1M tokens, meaning it can process input of up to 2 million words at once, equivalent to the length of two copies of "Dream of Red Mansions" or 125 academic papers. The GLM-4-9B-Chat-1M model demonstrated its ability to process long text input losslessly in the "needle in a haystack" experiment [illustration of long-text experiment].
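The 2-million-word figure follows from a back-of-envelope conversion from tokens to Chinese characters. The ratio of roughly 2 characters per token and the length of one copy of the novel are assumptions for illustration, chosen to be consistent with the article's "two copies" claim:

```python
def capacity_in_copies(context_tokens: int, chars_per_token: float,
                       chars_per_copy: int) -> float:
    """Rough estimate of how many copies of a text fit in a context window."""
    return context_tokens * chars_per_token / chars_per_copy


# 1M-token window, ~2 Chinese characters per token (assumed),
# ~1M characters per copy of "Dream of Red Mansions" (rough).
print(capacity_in_copies(1_000_000, 2, 1_000_000))  # → 2.0
```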

The following are two demo video cases showing long text processing capabilities:

  1. GLM-4-9B-Chat model: Input 5 PDF files with a total length of about 128K tokens, with a prompt to write a detailed research report on the development of large models in China. The model quickly generates a high-quality research report (video not accelerated).
  2. GLM-4-9B-Chat-1M model: Input the complete "The Three-Body Problem" trilogy, about 900,000 words, and ask the model to write a sequel outline for the novel. The model plans reasonably and produces a continuation framework (video accelerated 10x).

Multi-language support

GLM-4-9B supports up to 26 languages, including Chinese, English, and Russian. We expanded the tokenizer vocabulary from 65K to 150K tokens, improving encoding efficiency by 30%. On multilingual understanding and generation tasks, GLM-4-9B-Chat outperforms Llama-3-8B-Instruct [multilingual performance comparison chart].
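"Encoding efficiency" here means the same text is represented in fewer tokens. One common way to quantify such a gain, given token counts for the same text under two tokenizers (the counts below are invented for illustration):

```python
def efficiency_gain(old_tokens: int, new_tokens: int) -> float:
    """Relative reduction in token count: 0.30 means 30% fewer tokens."""
    return 1 - new_tokens / old_tokens


# Hypothetical counts for one text under the 65K and 150K vocabularies.
print(round(efficiency_gain(1000, 700), 2))  # → 0.3
```

Fewer tokens per text means more content fits in a fixed context window and each forward pass covers more of the input.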

Function Call Capability

The function calling capability of GLM-4-9B has improved by 40% compared to the previous generation. On the Berkeley Function-Calling Leaderboard, its function calling ability is comparable to GPT-4 [function-call performance comparison chart].
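Function calling generally works by describing tools to the model in a JSON schema, letting the model emit a structured call, and then executing that call on the application side. A minimal sketch of that loop; the schema shape follows common OpenAI-style conventions and is not necessarily GLM-4's exact wire format, and `get_weather` is a made-up stub:

```python
import json

# Tool description shown to the model (OpenAI-style schema; GLM's exact
# format may differ).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]


def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub implementation for the sketch


def dispatch(call_json: str) -> str:
    """Execute a model-emitted call like {"name": ..., "arguments": {...}}."""
    call = json.loads(call_json)
    fn = {"get_weather": get_weather}[call["name"]]
    return fn(**call["arguments"])


print(dispatch('{"name": "get_weather", "arguments": {"city": "Beijing"}}'))
# → Sunny in Beijing
```

The tool result is normally fed back to the model as a new message so it can compose the final answer.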

All Tools full tool call

The "All Tools" capability means the model can understand and use various external tools (such as code execution, web browsing, and drawing) to help complete tasks. At the Zhipu DevDay on January 16, the GLM-4 model was fully upgraded with All Tools capability, able to intelligently call the web browser, code interpreter, CogView, and other tools to complete complex requests [All Tools task diagram].
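In GLM-4 the choice of tool is learned by the model itself. Purely to make the idea concrete, a keyword-based toy router below maps a request to one of the tool backends named above; real All Tools routing is nothing like this simple:

```python
def route_tool(request: str) -> str:
    """Toy router: guess which All Tools backend a request would need.

    Real GLM-4 routing is decided by the model, not by keyword matching;
    the keyword lists here are invented.
    """
    text = request.lower()
    if any(k in text for k in ("draw", "paint", "image")):
        return "CogView"           # image generation
    if any(k in text for k in ("compute", "calculate", "run", "plot")):
        return "code_interpreter"  # execute code in a sandbox
    if any(k in text for k in ("search", "latest", "news", "browse")):
        return "web_browser"       # fetch live information
    return "chat"                  # answer directly, no tool
```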

Multimodal processing

GLM-4V-9B, the open source multimodal model built on the GLM-4 base, can process high-resolution input. By training directly on mixed visual and text data, it achieves strong multimodal performance comparable to GPT-4V, and performs very well on complex multimodal recognition and processing tasks [multimodal application example diagram].


Future Outlook

GLM-4-9B has demonstrated powerful performance across a variety of tasks and is a major breakthrough in the field of natural language processing. Whether for academic research or industrial applications, GLM-4-9B is an excellent choice.

We sincerely invite you to join the ranks of GLM-4 users and explore the possibilities brought by this excellent model:

  • GitHub repository
  • Hugging Face model page
  • ModelScope (魔搭) community


Source: 51cto.com