


Former Google CEO: AI is like nuclear weapons, and major powers need an 'AI deterrence' regime akin to mutually assured destruction
Former Google CEO Eric Schmidt compared artificial intelligence to nuclear weapons in an interview and called for a deterrence mechanism similar to mutually assured destruction, to keep the world's most powerful countries from destroying one another over AI.
Mutually assured destruction (MAD, also known as the principle of mutual destruction) is an "everyone loses" doctrine: if either of two opposing sides launches a full-scale attack, both sides are destroyed. The resulting standoff is known as the "balance of terror."
Comparing AI to nuclear weapons, Schmidt suggested that China and the United States may one day conclude a treaty around AI similar to the bans on nuclear testing, to keep AI from destroying the world.
Schmidt: I was still very young and naive at the time
On July 22, Schmidt spoke at the Aspen Security Forum during a panel on national security and artificial intelligence, where the dangers of AI were discussed.
Responding to a question about ethical values in technology, Schmidt explained that in Google's early days he himself had been naive about the power of information.
He then called for technology to better align with the morals and ethics of those it serves, and made a bizarre comparison between artificial intelligence and nuclear weapons.
Schmidt envisions that in the near future, China and the United States will need to sign some treaties around artificial intelligence.
Schmidt said: "In the 1950s and 1960s, we ended up with a 'to be expected' rule about nuclear testing, and ultimately nuclear testing was banned."
Schmidt believes that "this is an example of the balance of trust or lack of trust, and it is a 'no surprises' rule." He is very worried about the beginning of some misunderstandings and misunderstandings between the United States and China, the two artificial intelligence powers. Something that leads to triggering a dangerous event.
Schmidt said that no one is currently working on such rules, even though artificial intelligence is that powerful.
Eric Schmidt served as CEO of Google from 2001 to 2011, executive chairman of Google from 2011 to 2015, and executive chairman of Alphabet from 2015 to 2017. From 2017 to 2020, he served as a technical advisor to Alphabet.
In 2008, while serving as chairman of Google, Schmidt campaigned for Barack Obama, and he later worked with Eric Lander as a member of President Obama's Council of Advisors on Science and Technology.
From 2019 to 2021, Schmidt chaired the National Security Commission on Artificial Intelligence, with Robert O. Work as vice chair.
Is AI really that dangerous?
Artificial intelligence and machine learning are impressive yet often misunderstood technologies. For the most part, they are not as smart as people think.
AI can produce masterpiece-level artwork, beat humans at StarCraft II, and make basic phone calls for users. Attempts at more complex tasks, however, such as fully autonomous driving, have not gone well.
Schmidt envisions that in the near future, security concerns will push both China and the United States toward some kind of deterrence treaty on artificial intelligence. He pointed to the 1950s and 1960s, when countries used diplomacy to orchestrate a series of controls on the deadliest weapons on Earth. But it took decades of nuclear detonations, including those at Hiroshima and Nagasaki, before the world arrived at landmark agreements such as the Nuclear Test Ban Treaty and SALT II.
The United States' destruction of two Japanese cities with nuclear weapons at the end of World War II killed more than a hundred thousand people and demonstrated to the world the enduring horror of nuclear weapons.
The Soviet Union and China subsequently developed nuclear weapons of their own, and out of that arms race came mutually assured destruction (MAD), a deterrence theory built on a "balance of danger": if one country launches a nuclear strike, the others will launch theirs in return.
To date, humanity has refrained from using the most destructive weapons on the planet because doing so could destroy civilization across the globe.
Does artificial intelligence currently have such power?
So far, artificial intelligence has not proven itself anywhere near as destructive as nuclear weapons. But many people in power fear the new technology, and some have even suggested handing control of nuclear weapons to AI, arguing that it would be a better arbiter of their use than humans.
The problem with AI, then, may not be that it harbors the world-ending power of nuclear weapons. Rather, AI is only as good as its designers, and it reflects the values of its creators.
Artificial intelligence suffers from the classic "garbage in, garbage out" problem: algorithms trained on racist data produce racist machines, and biased inputs yield biased AI.
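To make "garbage in, garbage out" concrete, here is a minimal, hypothetical Python sketch (not from the article; the scenario and data are invented): a trivial model fit to skewed hiring labels simply reproduces the skew in its predictions.

```python
# Toy illustration of "garbage in, garbage out": a model "trained" on
# biased labels reproduces the bias. All data here is fabricated.
from collections import defaultdict

# Hypothetical hiring records as (group, hired) pairs; the labels are
# skewed against group "B" for reasons unrelated to qualifications.
training_data = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
                 ("B", 0), ("B", 0), ("B", 0), ("B", 1)]

# "Training": estimate P(hired | group) directly from the biased labels.
counts = defaultdict(lambda: [0, 0])  # group -> [hired_count, total]
for group, hired in training_data:
    counts[group][0] += hired
    counts[group][1] += 1

def predicted_hire_rate(group: str) -> float:
    hired, total = counts[group]
    return hired / total

# The model has learned nothing about merit, only the historical skew.
print(predicted_hire_rate("A"))  # 0.75
print(predicted_hire_rate("B"))  # 0.25
```

Real systems are far more complex, but the failure mode is the same: a model optimizes for whatever patterns its training data contains, including the prejudices baked into it.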
DeepMind CEO Demis Hassabis understands this better than Schmidt.
DeepMind has developed an AI capable of defeating top StarCraft II players. In a July interview on the Lex Fridman Podcast, Fridman asked Hassabis how such a powerful technology can be controlled, and how Hassabis himself avoids being corrupted by that power.
Hassabis answered: "Artificial intelligence is too big an idea. What matters is who creates (the AI), what culture they come from, and what values they hold, because they are the ones building the AI systems. An AI system will learn on its own... but the culture of the system and the values of its creators will remain within it."
Artificial intelligence is a reflection of its creators: it cannot level a city with a 1.2-megaton blast unless humans teach it to do so.
