


How to solve the "sourceless water" problem of domestically produced large models: experts weigh in at the 2023 World Artificial Intelligence Conference.
At the "General Artificial Intelligence Industry Development Opportunities and Risks in the Big Model Era" forum of the 2023 World Artificial Intelligence Conference, experts in general artificial intelligence held in-depth discussions centered on large models, covering basic innovation, applied technology, and future prospects.
Dai Qionghai, an academician of the Chinese Academy of Engineering, emphasized in his keynote speech: "Our country should deepen talent training and basic research in artificial intelligence through policy, mechanisms and investment, strengthen original innovation, and avoid falling into the dilemma of 'water without a source'."
Wang Yu, tenured professor and chair of the Department of Electronic Engineering at Tsinghua University, pointed out that Shanghai already has many chip companies and algorithm teams, but deploying these algorithms on chips efficiently and uniformly remains an open question. He emphasized that this is a key challenge Shanghai faces in the field of artificial intelligence.
From the perspective of basic research, Dai Qionghai believes that China's breakthrough achievements in large-model innovation remain relatively limited. In his view, China's artificial intelligence talent is concentrated mainly in applications, so there is huge development potential in application scenarios and engineering. At the foundational level, however, China is clearly at a disadvantage and lacks original innovation.
Dai Qionghai said that the innovative development of artificial intelligence rests on three pillars: algorithms, data and computing power. Algorithms determine the level of intelligence, data determines the scope of intelligence, and computing power determines the efficiency of intelligence. He expects that within roughly the next five years, large algorithmic models will become the core foundational platform for artificial intelligence applications.
Dai Qionghai also pointed out that brain-inspired intelligence is the new direction for the future: artificial intelligence algorithms that integrate brain science and cognition will lead the development of the next generation of intelligence. He suggested that the government encourage enterprises to lead the construction of large models, explore combining biological mechanisms with machine characteristics, and further promote basic research and application development. He predicted that artificial intelligence with cognitive intelligence at its core will begin to see wide use within ten years.
In addition, Dai Qionghai reminded the audience to be wary of security issues in large-model applications. Large models are not yet able to verify the credibility of their own outputs and can, for example, generate deceptive content. He stressed that failures of large-model applications are not as simple as computer viruses on a network: once problems occur, they can have disruptive impact. Safety and trustworthiness should therefore be addressed explicitly as large models are deployed.
The forum also identified four pain points that must be solved before domestic large models can be put into practice. First, long-text processing needs to be addressed. Second, the cost-effectiveness of large models needs to improve. Third, large models need to be applied across multiple vertical domains. Finally, there is a new requirement for one-stop deployment. Participants emphasized that meeting these needs will drive the development of the entire industry chain.
Participants put forward further opinions and suggestions on the development of large models. Some experts argued that dependence in the chip field can be offset by strengthening the development and adoption of domestic high-compute-power chips. They stressed that although a number of chip companies have emerged in China, the ability to deploy algorithms on chips efficiently and uniformly still needs to be strengthened.
Experts also discussed applying large models in different vertical fields. In domains such as medicine and finance, obtaining large-scale corpus data is a major problem. Building a general-purpose base model and then fine-tuning it carefully for each domain will therefore help raise baseline performance across industries.
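The base-model-plus-fine-tuning approach the experts describe is attractive in part because parameter-efficient methods such as LoRA train only a small low-rank correction rather than the full weight matrix. As a rough, illustrative sketch (the layer dimensions and rank below are assumptions for illustration, not figures from the forum), the parameter savings can be computed directly:

```python
def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Compare a full dense weight update with a low-rank (LoRA-style) update.

    A full update touches every entry of the d_out x d_in weight matrix.
    A LoRA update instead trains two small factors, A (rank x d_in) and
    B (d_out x rank), and applies W + B @ A at inference time.
    """
    full_params = d_out * d_in
    lora_params = rank * d_in + d_out * rank
    return full_params, lora_params

# Hypothetical layer size typical of a mid-sized transformer block.
full, lora = lora_param_counts(4096, 4096, rank=8)
print(full, lora, full // lora)  # 16777216 65536 256
```

At these assumed sizes, the low-rank update trains roughly 1/256 of the parameters per layer, which is why fine-tuning a shared base model for each vertical can be far cheaper than training separate models for medicine, finance, and other domains.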
Participants generally agreed that packaging the deployment and optimization of large models into automated, integrated solutions is an important trend: a layered approach spanning software-hardware co-design, compilation optimization, and hardware infrastructure deployment can improve overall efficiency and deliver more cost-effective results. Experts called for further exploration of efficient fine-tuning algorithms to meet the needs of large models in different vertical fields.
Participants reached a consensus, emphasizing that the development of large models requires the joint efforts of governments, enterprises and academia. The government should strengthen policy guidance and promote basic research and talent training. Enterprises should assume a leading role and increase investment and promotion in the construction of large models. The academic community should strengthen cooperation with industry to promote the transformation and application of scientific and technological achievements.
Experts emphasized the need to strengthen research on the security and trustworthiness of large models, and advocated establishing corresponding norms and standards to ensure that large-model applications do not introduce adverse effects or risks.
Finally, participants agreed that the development of large models will bring huge opportunities to the artificial intelligence industry, while warning of potential risks and challenges. They encouraged all parties to cooperate deeply on the research, deployment and application of large models to jointly advance the healthy development of artificial intelligence and social progress.
Experts conducted extensive discussions and exchanges on the development and application of large-scale models at the forum of the World Artificial Intelligence Conference. They provided valuable insights and suggestions on basic innovation, technology applications and future prospects in the field of artificial intelligence, pointing out the direction for the development of the artificial intelligence industry.




