


Another large AI model is released, and Qishang Online provides a solid intelligent computing power foundation for it
On June 3, Zhongke Wenge, a leading domestic data and decision-making intelligence service provider under the Chinese Academy of Sciences, released the Yayi large model and launched large-model applications in media, finance, publicity, and other fields. As Zhongke Wenge's strategic computing power service partner, Qishang Online provides reliable underlying intelligent computing power support for the Yayi large model.
AI computing power has entered the era of large models. The rapid development of domestic large models depends on massive intelligent computing power to support large-scale data processing, complex model training, and inference tasks for AI applications. As a leading digital-computing integration service provider in China, Qishang Online has been deeply engaged in the data industry for more than 20 years. As early as 2020, it set AI intelligent computing power services as a strategic direction and began building and operating intelligent computing centers.
In 2021, Qishang Online and Zhongke Wenge signed a strategic cooperation agreement to cooperate in depth in cloud computing, big data, artificial intelligence, and other fields and to jointly develop new business models. Zhongke Wenge's "Digital Intelligence Supercomputing Cloud Computing Center" was officially launched in Qishang Online's Sanlitun Capital Financial Computing Power Center.
Qishang Online provides construction, operation, maintenance, and other services for the Zhongke Wenge Digital Intelligence Supercomputing Cloud Platform. Following the data middle platform model, it builds a big data resource pool, establishes data standards, integrates cross-modal data resources, and implements hierarchical, classified storage and authorized use of data. This improves the level of data utilization and achieves "one-pool integration" of data assets.
The newly released Yayi large model is a phased result of Zhongke Wenge's years of exploration and technical research in next-generation artificial intelligence, and an important achievement of the "Digital Intelligence Supercomputing Cloud Computing Center". Qishang Online's cloud computing resources and hosting services accelerated product development and sped up model training.
The Yayi large model is a safe and reliable enterprise-level exclusive large model. It offers five core capabilities: real-time online Q&A, domain knowledge Q&A, multilingual content understanding, complex-scene information extraction, and multi-modal content generation, with more than 100 distinctive skills in total. It can quickly connect to government and enterprise data and generate exclusive large-model application services with one click.
The Yayi large model supports three usage modes: cloud-based use, local all-in-one machine deployment, and independent private training deployment. It covers scenario-based businesses in finance, media, governance, security, and other directions, and can be generalized to industries such as home furnishing, medical care, and education.
Zhongke Wenge hopes to use the Yayi large model to provide comprehensive large-model solutions for enterprises across industries. As a computing power infrastructure service provider, Qishang Online will continue to pursue deep and broad industrial collaboration with upstream and downstream partners, building an AI public computing power service platform that provides computing power guarantees for more AI scenarios and allows artificial intelligence to bring greater dividends to the development of all walks of life.
The above is the detailed content of Another large AI model is released, and Qishang Online builds a solid base of intelligent computing power for it. For more information, please follow other related articles on the PHP Chinese website!

