


The development of large models has a ceiling and requires establishing explainable AI theory
The limitation of GPT-4 is that it can only interact with the digital world, while we ultimately need to interact with the physical world. This is why robots are so important: they represent the realization of embodied intelligence. Zhang Bo pointed out that it is not necessary to build humanoid robots, nor does the hardware need to be overly complex; he advocates conducting reinforcement learning research on top of a given hardware platform.
The whole world is astonished by the powerful capabilities and potential of large models but cannot explain them, attributing them only to the phenomenon of "emergence". For the artificial intelligence industry to develop healthily, scientific research, technological innovation, and industrial development must be integrated. To develop third-generation artificial intelligence, explainable and robust theories and methods must be established; otherwise, AI technology will never be trustworthy.
There is a ceiling for large language models.
Although the road to artificial general intelligence remains difficult, large language models have opened a broad path for the AI industry. At the Zhipu AI 2024 Annual Technology Open Day, Academician Zhang Bo said that large models provide an opportunity to develop general-purpose hardware and software.
The traditional AI paradigm uses task-specific algorithms and rules to complete specific tasks. The generative AI paradigm is built on a general-purpose foundation model trained on massive open-domain text data; it can generate text, images, and other content at near-human quality, and it can be adapted to a wide range of downstream tasks through fine-tuning and similar methods. Generative AI is the first step toward general AI; the second step is AI agents, and the third is embodied intelligence. Zhang Bo said that GPT-4 can only deal with the digital world, but we must ultimately deal with the physical world, which requires robots, that is, embodied intelligence. Embodied intelligence helps build a complete intelligent agent, one that can both perceive and think. "You do not have to build a humanoid robot; in many cases you only need hands or feet, and the hardware does not need to be very complicated." He advocates conducting reinforcement learning research on top of a given hardware platform.
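To make the "pre-train, then fine-tune" paradigm described above concrete, here is a minimal sketch assuming the Hugging Face transformers and datasets libraries. The base model (bert-base-uncased), the downstream dataset (imdb), and all hyperparameters are illustrative placeholders, not anything referenced in the talk.

```python
# Minimal sketch: adapting a general-purpose base model to one downstream task.
# The pretrained encoder supplies general language representations; only a small
# classification head is newly initialized, and everything is fine-tuned briefly.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")                      # placeholder downstream task
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    # A small subset is enough to illustrate transfer; real runs use more data.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```

The point of the sketch is the division of labor: the expensive open-domain pre-training is done once, and each downstream task only needs a comparatively cheap fine-tuning pass.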
Generative AI large models have three major strengths and one major weakness. The first is powerful generation: they can produce coherent text that takes the context and previous turns of a conversation into account, which amazes people. The second is strong transfer: through training on proxy tasks and subsequent fine-tuning, they can be applied to downstream tasks of interest. The third is powerful interaction, including human-computer interaction, interaction among multiple agents, and interaction with the environment, allowing AI to demonstrate human-level intelligence in many fields. However, these large models also suffer from one drawback: hallucination. They sometimes generate fabricated or nonsensical answers that nevertheless appear plausible.
Artificial intelligence helps promote economic growth. Trades such as construction, maintenance, and installation are difficult to automate, whereas white-collar work such as administrative management may be replaced by AI. AI can improve the quality and efficiency of most human jobs, yet only a few jobs can be completely replaced by it. The reason AI cannot yet replace most jobs is that large models still face insurmountable ceilings. Zhang Bo said that everything a large model does is triggered by external prompts rather than initiated proactively, and it completes tasks through probabilistic prediction under those prompts, whereas human work is driven by internal intention. The language generated by large language models resembles human language only in behavior; the internal mechanisms are fundamentally different. Large language models face ceilings such as lack of self-awareness, uncontrollable output quality, untrustworthiness, and lack of robustness: given different prompts, a large model will output different answers, and it can also hallucinate. "No matter how big the model is, the shortcoming of hallucination always exists."
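As an illustration of what "probabilistic prediction under an external prompt" means mechanically, here is a minimal sketch assuming PyTorch and the Hugging Face transformers library; the model name "gpt2" stands in for any causal language model and is not something cited in the talk.

```python
# Minimal sketch: a causal language model does nothing until it receives a
# prompt, and then it only assigns probabilities to candidate next tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")    # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Artificial intelligence will"              # the external prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                  # [batch, seq_len, vocab]
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Decoding simply picks (or samples) from this distribution, token by token.
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for p, tok_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([tok_id.item()])!r:>12}  p={p.item():.3f}")
```

Nothing in this loop involves intention: the model scores every possible continuation and a decoding strategy chooses among them, which is also why different prompts yield different answers and why plausible-sounding hallucinations can be sampled.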
He proposed that to develop third-generation artificial intelligence, we must establish explainable and robust AI theories and methods, develop safe, controllable, credible, reliable, and scalable AI technology, and promote the innovative application and industrialization of AI. If an explainable and robust theory of artificial intelligence cannot be established, AI technology will remain unreliable and will never be trustworthy. "So far, this theory has not been established, which is why the development of artificial intelligence has been slow and tortuous. The theory could not be established because it was constrained by three kinds of specificity: in the past, specific models could only solve specific tasks in specific fields, so how could a general theory be established? The emergence of large models makes establishing such a theory possible."
Zhang Bo said that large models give us the opportunity to develop general-purpose hardware and software. Artificial intelligence is entering a stage of steady development and is having a huge impact on every industry, and we must seize the opportunity to develop the AI industry. But many uncertainties remain, because AI is still unpredictable and uncontrollable. The whole world is astonished by the powerful generation, transfer, and interaction capabilities of large models but cannot explain them, attributing them only to "emergence". Therefore, for the healthy development of the artificial intelligence industry, scientific research, technological innovation, and industrial development must be combined.