


360 reaches strategic cooperation with Zhipu AI to jointly develop the large language model 360GLM
DoNews reported on May 16 that 360 announced on the same day that it had reached a strategic cooperation with Zhipu AI. The 100-billion-parameter large model "360GLM" jointly developed by the two parties has reached the level of a new-generation general cognitive-intelligence model.
The joint research and development between the two parties is also an effective combination of a foundation model with application scenarios. Zhou Hongyi, founder of 360 Group, has said many times that Microsoft, as an industrial company, complemented OpenAI with engineering, scenario, productization and commercialization capabilities, and that it was precisely this division of labor between Microsoft and OpenAI that brought about the current turning point in artificial intelligence.
Zhou Hongyi believes that China should establish an industry-research collaborative innovation model between large technology companies and key scientific research institutions, creating China's own "Microsoft + OpenAI" combination to lead large-model technology research. He said that this cooperation with Zhipu AI draws on that kind of industry-research partnership.
Zhipu AI is a top artificial intelligence technology company in China and has been described as the Chinese AI company with the "most OpenAI-like temperament and level". In November 2022, Stanford University's large model center conducted a comprehensive evaluation of 30 mainstream large models worldwide; GLM-130B, the bilingual 100-billion-parameter pre-trained model developed by Zhipu AI, was the only model from Asia selected. The evaluation showed that its accuracy and other key metrics are close to or on par with large models from companies such as OpenAI, Google Brain, Microsoft and NVIDIA. More than 1,000 institutions in 70 countries have applied to use it.
ChatGLM, developed by the Zhipu AI team, achieves alignment with human intent on top of GLM-130B through supervised fine-tuning and related techniques. It supports training and inference on NVIDIA GPUs as well as on domestic chips such as Huawei Ascend, Hygon and Sunway. The open-source ChatGLM-6B model has been downloaded more than 1.6 million times worldwide and topped the Hugging Face global model trending list for two consecutive weeks.
Regarding the cooperation, Zhipu AI CEO Zhang Peng said that Zhipu AI has always adhered to its vision of making machines think like humans and realizing the concept of Model as a Service (MaaS).
Zhang Peng said that 360 Group has a domestically leading multimodal R&D team with long-term accumulation in AI technology; combined with its advantages in search, browser and other scenarios, it will be a strong R&D partner for Zhipu AI, bringing useful complements in training data, reinforcement learning, engineering optimization, user scenarios and commercialization. Close cooperation between the two parties will bring large-model technology to broader and deeper scenarios and empower more industries.
Through this cooperation, 360 has formed a "dual-engine" large-model layout driven by both independent R&D and cooperative R&D. In March, 360's self-developed 100-billion-parameter large model "360GPT" achieved outstanding results in evaluations of intelligent search, AI image generation and other scenarios.
360GLM and 360GPT, both 100-billion-parameter models, have complementary strengths and will in the future be seamlessly connected at the application layer to give users a smooth experience. Building on this cooperation, Zhipu AI will further promote and deepen the application of large-model technology, helping more industries improve efficiency and user experience.