
US media: Musk and others are right to call for a pause on AI training; development needs to slow down for safety

Apr 13, 2023, 09:16 AM


According to news on March 30, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and more than 1,000 others recently signed an open letter calling for a moratorium on training AI systems more powerful than GPT-4. Business Insider (BI), a mainstream American online media outlet, argues that for the benefit of society as a whole, AI development needs to slow down.

In the open letter, Wozniak, Musk, and the other signatories request that, as AI technology becomes increasingly powerful, safety guardrails be put in place and the training of more advanced AI models be suspended. They argue that powerful AI models like OpenAI's GPT-4 "should be developed only once we are confident that their effects will be positive and their risks will be manageable."

Of course, this is not the first time people have called for safety guardrails for AI. However, as AI becomes more complex and advanced, calls for caution are rising.

James Grimmelmann, a professor of digital and information law at Cornell University, said: "Slowing down the development of new AI models is a very good idea. If AI ends up being beneficial to us, there is no harm in waiting a few months or years; we will reach the end anyway. And if it is detrimental, we buy ourselves extra time to work out the best way to respond and to understand how to fight against it."

The rise of ChatGPT highlights the potential dangers of moving too fast

Last November, when OpenAI's chatbot ChatGPT was launched for public testing, it caused a huge sensation. Understandably, people began pushing the limits of ChatGPT's capabilities, and its disruptive effect on society quickly became apparent. ChatGPT began passing medical licensing exams, giving instructions on how to make bombs, and even created an alter ego for itself.

The more we use AI, especially so-called generative AI (AIGC) tools like ChatGPT or the text-to-image tool Stable Diffusion, the more we see its shortcomings, its potential to create bias, and how powerless we humans appear to be in harnessing its power.

BI editor Hasan Chowdhury wrote that AI has the potential to "become a turbocharger, accelerating the spread of our mistakes." Like social media, it taps into both the best and the worst of humanity. But unlike social media, AI will be far more deeply integrated into people's lives.

ChatGPT and other similar AI products already tend to distort information and make mistakes, something Wozniak has spoken about publicly. They are prone to so-called "hallucinations" (confidently stated untrue information), and even OpenAI CEO Sam Altman has admitted that the company's models can produce racist, sexist, and otherwise biased answers. Stable Diffusion has also run into copyright issues and been accused of stealing inspiration from the work of digital artists.

As AI becomes integrated into more everyday technologies, we may introduce misinformation into the world on a much larger scale. Even tasks that seem benign for an AI, such as helping plan a vacation, may not yield completely trustworthy results.

It’s difficult to develop AI technology responsibly when the free market demands rapid development

To be clear, AI is an incredibly transformative technology, especially generative AI (AIGC) like ChatGPT. There is nothing inherently wrong with developing machines to do most of the tedious work that people hate.

While the technology has created an existential crisis among the workforce, it has also been hailed as an equalizing tool for the tech industry. There is also no evidence that ChatGPT is preparing to lead a bot insurgency in the coming years.

Many AI companies employ ethicists to help develop this technology responsibly. But if shipping products quickly outweighs concern for their social impact, the teams focused on building AI safely cannot get the job done in peace.

Speed seems to be a factor that cannot be ignored in this AI craze. OpenAI believes that if it moves fast enough, it can fend off the competition and become the leader in the AIGC space. That has prompted Microsoft, Google, and just about every other company to follow suit.

Releasing powerful AI models for public use before they are ready does not make the technology better. The best use cases for AI have yet to be found, because developers have to cut through the noise generated by the technology they create, while users are distracted by that same noise.

Not Everyone Wants to Slow Down

The open letter from Musk and the others has also drawn criticism from those who believe it misses the point.

Emily M. Bender, a professor at the University of Washington, said on Twitter that Musk and the other tech leaders focus only on how powerful AI is made out to be in the hype cycle, rather than on the actual damage it can cause.

Grimmelmann, the Cornell digital and information law professor, added that the tech leaders who signed the open letter were "arriving belatedly" and opening a Pandora's box that could bring trouble upon themselves. He said: "Now that they have signed this letter, they can't turn around and object to the same policy being applied to other technologies, such as self-driving cars."

Suspending development or imposing more regulation may well not achieve results either. But now the conversation seems to have turned. AI has been around for decades; perhaps we can wait a few more years.
