Recently, Stanford HAI (the Stanford Institute for Human-Centered Artificial Intelligence), co-directed by Fei-Fei Li, released a perspectives report on generative AI.
The report points out that most of today's generative AI is driven by foundation models.
The opportunities these models bring to our lives, communities, and societies are enormous, but so are the risks.
On the one hand, generative AI can make humans more productive and creative. On the other, it can amplify social biases and even undermine our trust in information.
We believe that collaboration across disciplines is essential to ensuring these technologies benefit us all. Here is what leaders in medicine, science, engineering, the humanities, and the social sciences have to say about how generative AI will affect their fields and our world.
In this article, we have selected Fei-Fei Li's and Percy Liang's views on current generative AI.
For the complete report, see: https://hai.stanford.edu/generative-ai-perspectives-stanford-hai

Fei-Fei Li: The Great Turning Point of Artificial Intelligence
Fei-Fei Li, co-director of Stanford HAI, contributed a piece titled "The Great Turning Point of Artificial Intelligence."
The human brain can recognize patterns in the world and build conceptual models from them, or generate new concepts based on them. Giving machines this generative ability has been the dream of generations of AI scientists, who have long labored on the development of generative-model algorithms.
In 1966, researchers at MIT launched the "Summer Vision Project," which aimed to construct a significant part of a visual system in a single summer. This was the beginning of research in computer vision and image generation.
Recently, thanks to the combination of deep learning and big data, we appear to have reached an important turning point: machines are on the verge of being able to generate language, images, audio, and more.
Although computer vision was inspired by the goal of building AI that can see what humans see, the discipline now aims far beyond that: the AI we build in the future should see what humans cannot.
How can generative AI enhance human vision?
For example, deaths caused by medical errors are a serious concern in the United States, and generative AI can help healthcare providers see potential problems.
When errors arise in rare circumstances, generative AI can create simulated versions of similar data, either to further train AI models or to train medical personnel.
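As a minimal sketch of that idea, the hypothetical code below interpolates between a handful of real rare-case feature vectors to produce synthetic variants, a SMOTE-style stand-in for a learned generative model such as a GAN or diffusion model. The function name and data shapes are illustrative assumptions, not anything from the report.

```python
import numpy as np

def synthesize_rare_cases(rare_samples, n_new, rng=None):
    """Create synthetic variants of rare cases by interpolating between
    pairs of real examples (a SMOTE-style stand-in for a learned
    generative model)."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        # Pick two distinct real rare cases and blend them at a random ratio.
        i, j = rng.choice(len(rare_samples), size=2, replace=False)
        t = rng.uniform(0.0, 1.0)
        synthetic.append(rare_samples[i] + t * (rare_samples[j] - rare_samples[i]))
    return np.stack(synthetic)

# Example: oversample a rare complication class before retraining a classifier.
rare = np.random.default_rng(1).normal(size=(5, 8))  # 5 real rare-case feature vectors
augmented = synthesize_rare_cases(rare, n_new=20)
print(augmented.shape)  # (20, 8)
```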
Before developing a new generative tool, focus on what people actually want to gain from it.
In a recent project to benchmark robotic tasks, the research team conducted a large-scale user study before starting work, asking people how much they would benefit from robots completing particular tasks; the tasks that would benefit people most became the focus of the project.
To seize the significant opportunities generative AI creates, the associated risks must also be properly assessed.
Joy Buolamwini led a study called "Gender Shades," which found that facial-analysis AI frequently fails on women and people of color. Similar biases against underrepresented groups will continue to surface in generative AI.
The ability to determine whether an image was generated by AI is also vitally important. Human society is built on trust among citizens, and without this ability our sense of trust will erode.
Advances in machines' generative capabilities are extremely exciting, as is the potential for AI to see things humans cannot.
However, we need to stay alert to the ways these capabilities can disrupt our daily lives, our environment, and our role as global citizens.
Percy Liang: The New Cambrian Era

Percy Liang, director of the Stanford Center for Research on Foundation Models (CRFM) at Stanford HAI and associate professor of computer science, contributed a piece titled "The New Cambrian Era: The Excitement and Anxiety of Science."
Throughout human history, creating new things has been difficult, an ability possessed almost exclusively by experts.
But with recent advances in foundation models, an AI "Cambrian explosion" is under way: AI will be able to create almost anything, from videos to proteins to code.
This ability lowers the barrier to creation, but it also erodes our ability to tell what is real.
Foundation models rest on deep neural networks and self-supervised learning, techniques that have existed for decades. What is new is the sheer volume of data these models can be trained on, which has driven rapid advances in their capabilities.
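To illustrate what "self-supervised" means here, the sketch below masks random tokens in a batch of text and trains a tiny Transformer to predict them, so the raw data supplies its own labels. The model size, vocabulary, and masking rate are illustrative assumptions, not any real architecture.

```python
import torch
import torch.nn as nn

# Toy masked-token objective: the raw text supervises itself.
vocab, dim, seq_len = 1000, 64, 32
embed = nn.Embedding(vocab, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(dim, vocab)

tokens = torch.randint(1, vocab, (8, seq_len))   # a batch of token ids ("text")
mask = torch.rand(tokens.shape) < 0.15           # hide ~15% of positions
inputs = tokens.masked_fill(mask, 0)             # id 0 plays the role of [MASK]

logits = head(encoder(embed(inputs)))            # predict a token at every position
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()                                  # no human labels required
```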
A paper released in 2021 details the opportunities and risks of foundation models; their emergent capabilities can be "a source of excitement for the scientific community" but can also lead to "unintended consequences."
The paper also discusses homogenization: the same few models are reused as the basis for many applications, which lets researchers concentrate their efforts on a small set of models, but this centralization also makes those models single points of failure, with potential harms propagating to many downstream applications.
Benchmarking foundation models is also very important, so that researchers can better understand their capabilities and shortcomings and set sounder development strategies.
HELM (Holistic Evaluation of Language Models) was developed for this purpose: it evaluates more than 30 prominent language models across a range of scenarios, using metrics such as accuracy, robustness, and fairness.
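The sketch below shows the general shape of such an evaluation: one model scored on several scenarios against a simple exact-match metric. The model, scenario, and metric here are hypothetical stand-ins to illustrate the idea, not HELM's actual API.

```python
# Hypothetical HELM-style harness: score a model across scenarios.
def exact_match(pred, gold):
    return float(pred.strip().lower() == gold.strip().lower())

def evaluate(model, scenarios):
    """Return {scenario_name: mean exact-match accuracy} for one model."""
    return {name: sum(exact_match(model(q), a) for q, a in examples) / len(examples)
            for name, examples in scenarios.items()}

# Toy "model" and scenario; a real harness would also track robustness,
# fairness, calibration, efficiency, and more for each scenario.
toy_model = lambda q: "Paris" if "France" in q else "unknown"
scenarios = {"closed_book_qa": [("What is the capital of France?", "Paris"),
                                ("What is the capital of Peru?", "Lima")]}
print(evaluate(toy_model, scenarios))  # {'closed_book_qa': 0.5}
```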
New models, new application scenarios, and new evaluation metrics will continue to appear, and everyone is welcome to contribute to HELM's development.