


Let the machine be anthropomorphized: from "artificial stupidity" to "artificial intelligence"
On May 27, Entrepreneurship Dark Horse held the "2023 Leap · Dark Horse AIGC Summit" in Beijing under the theme "Foreseeing a New World, Building a New Pattern." Attendees included Justine Cassell, former associate dean of the School of Computer Science at Carnegie Mellon University and former chair of the World Economic Forum's Global Future Council on Computing in Davos, along with senior executives from 360 Group, Zhiyuan Research Institute, Kunlun Wanwei, Yunzhisheng, BlueFocus, Wondershare Technology, Zhizhichuangyu, and other companies in the industry, who held in-depth exchanges with thousands of participants.
At the summit, Huang Wei, founder and CEO of Yunzhisheng, delivered a keynote titled "The Road to an Intelligent Future."
The following is an edited transcript of his remarks:
In the early days we wanted machines to work the way experts do, hoping to hand methodologies over to them directly. About ten years ago, machines began instead to learn from error feedback. These, broadly speaking, have been the stages and paths of artificial intelligence technology so far.
Then OpenAI launched ChatGPT and its pre-trained models, and the whole of machine intelligence became more anthropomorphic. First, enormous computing power is used to read essentially all the known text in the world and train a large model, one with perhaps tens or hundreds of billions of parameters. It is rather like a baby's brain, except that where a baby inherits at most its parents' looks and temperament, the large model's brain inherits knowledge. That is only the initial state: through fine-tuning and other methods, much as a child receives various kinds of education while growing up, the evolution of the large model becomes ever more human-like.
This is a change in artificial intelligence as a whole.
What is the essential difference between today's AGI and what came before? Before December 2022, artificial intelligence was still discriminative AI: it answered judgment-style questions, with specialized systems and intelligence modules built for specific tasks. On the one hand, this kind of AI did not perform very intelligently and was often mocked as delivering "artificial stupidity" rather than artificial intelligence, so the ceiling of AI capability was low.
On the other hand, customer needs vary widely across scenarios, and because AI capabilities were not that strong, many companies and teams relied on all manner of customization to meet them. AI companies did not operate like high-tech companies; for the past decade, discriminative AI kept them in the era of the manual workshop. Now, with large models and much stronger general capabilities, artificial intelligence has begun to enter the industrial era.
With new generative and emergent capabilities, one model can solve different problems across many scenarios. In today's era, the large model is the engine of artificial intelligence. Before the engine was invented, the countries of the Middle East were not that wealthy and oil was not worth much; in the same way, large models today turn data into fuel and capability, and that capability can empower thousands of industries.
Why was Yunzhisheng able to launch a self-developed large model in such a short time?
In 2016, when AlphaGo appeared, we were deploying medical products in hospitals, helping doctors at Peking Union Medical College Hospital work far more efficiently. But in the hospital scenario, efficiency tools alone are not enough; the real intelligence in artificial intelligence is cognitive intelligence. The Transformer was proposed in 2017, and cognitive intelligence demands relatively powerful computing power.
On these foundations we accumulated a great deal of academic and engineering experience. For an individual, such experience is how you make a living; for a company, it is the core competitiveness that wins in the market. When we examined the ChatGPT framework, we found none of it was new: it was a combination of existing engineering techniques. We quickly assembled those capabilities and invested them in developing a large model of our own.
Three days ago, we released our commercial large model, Shanhai. After running through pre-training, instruction fine-tuning, and reinforcement learning from human feedback, we saw the long-awaited emergent capabilities. When the team was choosing a name, I was traveling frequently at the time and thought Shanhai ("mountains and seas") fit well. The sea is vast and holds everything, reflecting the boundless generative ability of large models; the mountain stands high and knows what may and may not be said. The name emphasizes both the generative power of large models and their security and compliance.
There is a very interesting phenomenon. Everyone talks about large models now, but domestic attention only began after the Spring Festival; before that, nobody discussed them and everyone was unsure. Even today some hold the view that this cannot be done on technology alone: even with the right people in place, training costs are very high and the whole endeavor is extremely expensive. Large models are not a scientific revolution or the invention of new algorithms; they are a combination of existing algorithms made larger. Most of the difficulty lies in cost and in the many engineering projects involved, and on that point the skeptics are right.
On the other hand, if you believe large models are the big opportunity of the next 10 to 20 years but give up because you cannot out-invest BAT, I think there is still a chance.
In the past few years, Yunzhisheng has not needed particularly brilliant scientists. I would even argue this is not a scientist's job: scientists have never worked with this much computing power and do not know where the application scenarios are, so the results are bound to be poor. Vendors who own the scenarios are the most likely to succeed.
The name Shanhai carries another meaning as well: "the one you love is separated from you by mountains and seas, yet mountains and seas can be leveled."
Shanhai's capabilities are an all-around decathlon. Generative ability is rather subjective; language understanding is what matters when landing in real scenarios. What earned past systems the "artificial stupidity" label was precisely their lack of understanding and coding ability. Improving coding capability also helps improve a large model's reasoning capability, and the output must comply with domestic laws, regulations, and even moral values. We also adopt a GPT-4-style plug-in architecture and provide enterprise customers with one-stop service spanning data selection, model training, and model deployment.
Why do large models possess complex logical reasoning capabilities? We have achieved it, but we do not really know why. It is hard to say whether 50 billion or 100 billion parameters is better; perhaps the neurons in the 100-billion-parameter model simply have not been activated yet.
Then there is medical care. When we started building large models, many people assumed Yunzhisheng was building a vertical industry model. No: we build industry applications, and we chose to take on one of the most demanding scenarios, medical care. In the pre-training stage we collected large quantities of medical literature, monographs, books, and medical records, and we have accumulated tens of millions of genuinely annotated records that can be converted into fine-tuning data.
In addition, we won the first prize of the Beijing Science and Technology Progress Award in 2019 for a project on key technologies and applications of large-scale knowledge graph construction. We hold one of the largest medical knowledge graphs in China, and we decompose it into knowledge plug-ins embedded into the large language model, turning the large model into an expert in the medical field.
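The speech does not explain how a knowledge graph becomes a "knowledge plug-in," but a common pattern is to retrieve the triples relevant to a query and inject them into the model's context before generation. Below is a minimal sketch of that pattern in Python; the toy graph, the retrieve_triples helper, and the prompt format are illustrative assumptions, not Yunzhisheng's actual implementation.

```python
# Sketch: a knowledge-graph "plug-in" that grounds an LLM prompt with
# retrieved facts. The tiny in-memory graph, the keyword-overlap
# retrieval, and the prompt layout are all assumptions for illustration.

# Toy medical knowledge graph as (subject, relation, object) triples.
KG = [
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "contraindicated_in", "severe renal impairment"),
    ("insulin", "treats", "type 1 diabetes"),
]

def retrieve_triples(question: str, kg=KG, k: int = 3):
    """Naive retrieval: rank triples by keyword overlap with the question."""
    words = set(question.lower().replace("?", " ").split())
    scored = []
    for s, r, o in kg:
        triple_words = set(f"{s} {r.replace('_', ' ')} {o}".split())
        overlap = len(words & triple_words)
        if overlap:
            scored.append((overlap, (s, r, o)))
    scored.sort(key=lambda pair: -pair[0])
    return [triple for _, triple in scored[:k]]

def build_prompt(question: str) -> str:
    """Prepend retrieved facts so the model answers from grounded context."""
    facts = "\n".join(f"- {s} {r.replace('_', ' ')} {o}"
                      for s, r, o in retrieve_triples(question))
    return f"Known medical facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What drug treats type 2 diabetes?"))
```

A production system would replace the keyword overlap with graph queries or embedding search, but the shape is the same: retrieve from the graph, then ground the prompt.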
MedQA is an authoritative medical question-answering benchmark on which Google's Med-PaLM, ChatGPT, and GPT-4 have all published evaluation results. In a recent evaluation, Shanhai scored 81, well above GPT-4's 71: after domain enhancement, a large model can become an expert in a given field. Another number allows a horizontal comparison. On the clinical practitioner examination that medical school graduates must pass, the highest previously known AI score was 456 points; Shanhai scored about 511. This is the super-capability that large models gain through domain enhancement.
Building a large model is still quite difficult, and the threshold is very high. Beyond plenty of money, excellent algorithm engineers, and strong algorithms, it demands many other capabilities, which we summarize as the power of mountains and seas. Intuitively, large models mean large data sets, and building them is an engineering job. Why could Yunzhisheng produce authoritative, objective evaluation results within a few months? In our internal evaluations, not only in the medical domain but in general domains as well, Yunzhisheng ranks among the best.
A computing platform is not just a matter of buying cards and plugging them in. Yunzhisheng has nearly 200P of computing power, and our cluster utilization efficiency has reached the top level in the industry, letting us train our models quickly with relatively few cards.
Our current GPU cluster utilization can reach 50%, against an industry level of about 42%. Large models require many cards, and training must use 3D hybrid parallelism. What is 3D? Model parallelism, data parallelism, and pipeline parallelism: tasks are split across different cards on many different machines to compute in parallel, so results come back quickly. We have also made many optimizations in model inference, raising inference speed fivefold. How do we separate training cards from inference cards? Training runs on the A800, while inference achieves fast performance on a single A6000.
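To make the "3D" concrete: the GPU fleet is factored along three axes, model (tensor) parallelism within a layer, pipeline parallelism across layer stages, and data parallelism across batch replicas. The sketch below only illustrates the device-grid arithmetic, with cluster sizes that are assumptions for illustration; it is not Yunzhisheng's training code.

```python
# Sketch of the device-grid arithmetic behind 3D hybrid parallelism.
# All sizes below are assumed for illustration.

NUM_GPUS = 64          # total cards across machines (assumed)
TENSOR_PARALLEL = 4    # each layer's weights split across 4 cards
PIPELINE_PARALLEL = 4  # the layer stack divided into 4 sequential stages
DATA_PARALLEL = NUM_GPUS // (TENSOR_PARALLEL * PIPELINE_PARALLEL)

# The three axes must exactly factor the fleet.
assert TENSOR_PARALLEL * PIPELINE_PARALLEL * DATA_PARALLEL == NUM_GPUS

def gpu_coordinates(rank: int):
    """Map a flat GPU rank to its (data, pipeline, tensor) grid position."""
    tensor = rank % TENSOR_PARALLEL
    pipeline = (rank // TENSOR_PARALLEL) % PIPELINE_PARALLEL
    data = rank // (TENSOR_PARALLEL * PIPELINE_PARALLEL)
    return data, pipeline, tensor

# Each model replica occupies TENSOR_PARALLEL * PIPELINE_PARALLEL cards,
# and DATA_PARALLEL replicas each train on a different slice of the batch.
for rank in (0, 5, 21, 63):
    d, p, t = gpu_coordinates(rank)
    print(f"GPU {rank}: data-parallel replica {d}, "
          f"pipeline stage {p}, tensor shard {t}")
```

Frameworks such as Megatron-LM and DeepSpeed implement this decomposition in practice; how well the three axes overlap computation with communication is what drives cluster utilization figures like the 50% quoted above.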
Data also matters enormously: data scale, data diversity, and data quality. We can now support fast deduplication at the 10T level. ChatGPT's raw training corpus was 45T, yet after cleaning only a few hundred gigabytes of data were actually used for training.
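The talk does not say which deduplication method is used, but exact deduplication by content hashing is the usual first pass at this scale, typically followed by near-duplicate detection such as MinHash. Below is a minimal hash-based sketch; the normalization rule and the sample corpus are assumptions for illustration.

```python
# Sketch of first-pass corpus deduplication: normalize each document,
# hash it, and keep only the first occurrence of each hash. At multi-TB
# scale this would run sharded and streaming, with a second pass
# (e.g., MinHash/LSH) for near-duplicates; the normalization rule and
# sample corpus here are assumptions.
import hashlib

def normalize(doc: str) -> str:
    """Cheap normalization so trivial formatting differences still collide."""
    return " ".join(doc.lower().split())

def dedup(docs):
    seen = set()
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).digest()
        if digest not in seen:  # first occurrence wins
            seen.add(digest)
            yield doc

corpus = [
    "Metformin is a first-line therapy.",
    "metformin   is a first-line therapy.",  # duplicate after normalization
    "Insulin regimens vary by patient.",
]
print(list(dedup(corpus)))  # two documents survive
```

Storing fixed-size digests rather than the documents themselves is what keeps the memory footprint manageable as the corpus grows.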
With these capabilities, plus Atlas and UniDataOps, we can serve Shanhai's industry customers all the better.
The smart Internet of Things is also an important business for the company, with many deployments, though frankly the results achieved in the past were not very good. Now that Shanhai exists, we hope to rebuild all of our existing IoT products on large models.
Medical care is the direction we are most optimistic about. In the past, our medical products mainly did two things. First, instead of typing on a keyboard, doctors could speak directly into a microphone, which greatly improved their efficiency and cut the time spent entering medical records from 3 hours to 1 hour. Second, once the records exist, an AI "brain" reviews them to check for errors. What more can be done now that AI has large-model capabilities?
Shanhai's vision is to use artificial intelligence to create an interconnected, intuitive world. In the past, artificial intelligence was defined as making machines obey people; today we hope machines will become more human-like, that communication between people and things will become more intuitive, and that new capabilities will bring new products and new business models. I am delighted to welcome the new era of large models together with everyone here.