


Stanford University's '2023 AI Index' examines the prospects of artificial intelligence
Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) has released the 2023 AI Index, a data-driven report that analyzes the progress and impact of artificial intelligence across topics such as research, ethics, policy, public opinion, and economics.
Key findings include the expansion of AI research into specialized areas such as pattern recognition, machine learning, and computer vision. The report notes that the number of AI publications has more than doubled since 2010. At the same time, industry has overtaken academia in producing AI systems: industry released 32 significant machine learning models in 2022, compared with only 3 from academia. The report attributes this gap to the massive resources required to train large models.
Traditional AI benchmarks, such as the image-classification benchmark ImageNet and the reading-comprehension test SQuAD, are no longer sufficient to measure the technology's rapid progress, leading to the emergence of new benchmarks such as BIG-bench and HELM. Vanessa Parli, deputy director of HAI and a member of the AI Index Steering Committee, explained in a Stanford University article that many AI benchmarks have reached a saturation point, with little improvement from year to year, and that researchers must develop new benchmarks that reflect how society wants to interact with AI. She gave the example of ChatGPT, which passes many benchmarks yet still frequently produces incorrect information.
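To make the idea of benchmark saturation concrete, here is a minimal sketch of how it can be quantified as shrinking year-over-year gains in a benchmark's best reported score. The scores below are hypothetical placeholders for illustration, not figures from the AI Index.

```python
# Minimal sketch: benchmark "saturation" as shrinking year-over-year
# improvement in the best reported score. All numbers are hypothetical.
from typing import Dict


def yearly_gains(best_scores: Dict[int, float]) -> Dict[int, float]:
    """Absolute improvement of the best score over the previous year."""
    years = sorted(best_scores)
    return {y: best_scores[y] - best_scores[prev]
            for prev, y in zip(years, years[1:])}


def is_saturated(best_scores: Dict[int, float], threshold: float = 0.5) -> bool:
    """Treat a benchmark as saturated if the last two yearly gains fall below `threshold` points."""
    gains = yearly_gains(best_scores)
    recent = [gains[y] for y in sorted(gains)[-2:]]
    return all(g < threshold for g in recent)


if __name__ == "__main__":
    # Hypothetical top scores (%) per year on some benchmark.
    sota = {2018: 85.0, 2019: 88.5, 2020: 90.2, 2021: 90.6, 2022: 90.8}
    print(yearly_gains(sota))   # gains shrink over time
    print(is_saturated(sota))   # True: little headroom left
```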
Ethical issues such as bias and misinformation are another aspect of AI examined in the report. With the rise of popular generative AI models such as DALL-E 2, Stable Diffusion, and of course ChatGPT, the ethical misuse of AI is increasing. The report notes that the number of AI incidents and controversies has increased 26-fold since 2012, according to AIAAIC, an independent database that tracks AI misuse. Concern about AI ethics is also growing rapidly: submissions to FAccT, an AI ethics conference, have more than doubled since 2021 and increased tenfold since 2018.
Large language models keep growing in scale, and their costs have become enormous. Taking Google's PaLM model, released in 2022, as an example, the report points out that it cost roughly 160 times more than OpenAI's GPT-2 from 2019 and is about 360 times larger. In general, the larger the model, the higher the training cost. The study estimates the training costs of DeepMind's Chinchilla and Hugging Face's BLOOM at about $2.1 million and $2.3 million, respectively.
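As a rough illustration of why training cost scales with model size, the sketch below uses the common approximation of about 6 FLOPs per parameter per training token. The GPU throughput, utilization, and hourly price are assumed values for illustration only, not figures from the report; under these assumptions, a Chinchilla-like run (~70B parameters, ~1.4T tokens) lands in the same low-millions-of-dollars range as the report's estimate.

```python
# Back-of-the-envelope training-cost sketch using FLOPs ≈ 6 * params * tokens.
# Hardware throughput, utilization, and price below are assumptions, not AI Index data.

def training_cost_usd(params: float, tokens: float,
                      gpu_flops: float = 312e12,   # assumed peak per-GPU throughput (e.g. A100 BF16)
                      utilization: float = 0.4,    # assumed fraction of peak actually achieved
                      usd_per_gpu_hour: float = 2.0) -> float:
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / (gpu_flops * utilization)
    return gpu_seconds / 3600 * usd_per_gpu_hour


if __name__ == "__main__":
    # Chinchilla-like configuration: ~70B parameters trained on ~1.4T tokens.
    print(f"${training_cost_usd(70e9, 1.4e12):,.0f}")  # roughly $2-3 million under these assumptions
```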
Globally, private investment in AI fell 26.7% from 2021 to 2022, and funding for AI startups has also slowed. Over the past decade, however, investment in AI has grown dramatically: the report shows that private investment in AI in 2022 was 18 times higher than in 2013. Corporate adoption of new AI initiatives has also plateaued. According to the report, the proportion of companies adopting AI doubled between 2017 and 2022 but has recently leveled off at about 50-60%.
Another topic of interest is the growing government focus on artificial intelligence. The AI Index analyzed the legislative records of 127 countries and found that 37 bills containing the term "artificial intelligence" became law in 2022, compared with just one in 2016. The study also found that the U.S. government has increased spending on AI-related contracts roughly 2.5-fold since 2017. Courts are seeing a surge in AI-related cases as well: in 2022, there were 110 such cases spanning civil, intellectual-property, and contract law.
The AI Index also examines a Pew Research Center survey on Americans' views of artificial intelligence. Of more than 10,000 respondents, 45% said they had mixed feelings about the use of AI in their daily lives, 37% said they were more concerned than excited, and only 18% were more excited than concerned. Among the main hesitations, 74% said they were very or somewhat concerned about AI being used to make important decisions for people, and 75% were uncomfortable with AI being used to understand people's thoughts and behaviors.