


Don't reveal ourselves, give up AI, and leave Earth as soon as possible: what did Hawking mean by this advice?
Don't actively search for aliens. Leave Earth as quickly as possible. Give up the development of artificial intelligence, or it will bring destruction upon the world. These are the three pieces of advice the late physicist Stephen Hawking left to humanity.
Perhaps you find his warnings exaggerated, even alarmist. But have you ever considered what the world would look like if his fears came true?
If you are interested in extraterrestrial civilizations, you have probably heard of SETI@home, a distributed-computing project launched in 1999 that harnesses networked computers around the world to search for signs of extraterrestrial intelligence. Since then it has tirelessly scanned the universe for suspicious signals, hoping one day to stumble upon a distant alien civilization.
Hawking, however, considered this far too dangerous. Any extraterrestrial civilization capable of reaching Earth would possess technology and intelligence far beyond ours. Their arrival would be like Columbus landing in the Americas centuries ago: it would bring nothing but death and destruction.
Hawking also believed that we cannot confine ourselves to Earth. Pressing problems such as climate change, resource depletion, and population growth will become key constraints on human development. In his view, we should therefore leave as soon as possible and spread the seeds of civilization to other planets through interstellar migration; this, he argued, is the best way to ensure humanity's long-term survival.
Beyond that, he urged us not to develop artificial intelligence, because it could ultimately destroy humanity. According to Hawking, as AI systems iterate they may eventually develop self-awareness; once they slip out of control, the horrific scenes we now watch in science-fiction films could become reality.
Today's artificial intelligence is nowhere near that capable. But through continuous self-learning and improvement, it could one day surpass human intelligence, and at that point control over Earth's future would change hands.
Of course, Hawking's advice has not halted human exploration. Today, both the search for extraterrestrial civilizations and the development of artificial intelligence continue apace, and Musk has announced plans to work with NASA on a Mars migration program.
One can only wonder which will arrive first: destruction, or our departure.

