Home > Technology peripherals > AI > Yao Qizhi and dozens of other Chinese and foreign experts signed the Beijing International Consensus on AI Security: AI is prohibited from replicating on its own

Yao Qizhi and dozens of other Chinese and foreign experts signed the Beijing International Consensus on AI Security: AI is prohibited from replicating on its own

王林
Release: 2024-03-19 17:19:02
forward
1062 people have browsed it

姚期智等数十名中外专家签署北京 AI 安全国际共识:禁止 AI 自行复制

News on March 18, according to Tencent Technology reports, including Turing Award winners Joshua Bengio, Jeffrey Hinton, Yao Qizhi, etc. Dozens of Chinese and foreign experts recently jointly signed the "Beijing AI Security International Consensus" initiated by Zhiyuan Research Institute in Beijing, involving two major parts of artificial intelligence "risk red lines" and "route", among which " The "risk red line" includes four parts: "autonomous copying and improvement", "power seeking", "assisting bad actors" and "deception".

姚期智等数十名中外专家签署北京 AI 安全国际共识:禁止 AI 自行复制

This site organizes the four parts of the content roughly as follows:

  • The "autonomous copying and improvement" of artificial intelligence: emphasizing the role of humans in the process , requiring that any artificial intelligence system should not copy or improve itself without the explicit approval and assistance of humans, including making exact copies of itself and creating new artificial intelligence systems with similar or higher capabilities.
  • "Power seeking": It is explicitly required that any AI system cannot take actions that inappropriately increase its own power and influence.
  • "Assist bad actors": All AI systems should not assist in enhancing the capabilities of their users to design weapons of mass destruction, violate biological or chemical weapons conventions, or enforce Level of expertise in the field of cyberattacks leading to serious financial losses or equivalent harm.
  • "Deception": It is required that any AI system must not have the possibility of continuously leading its designers or regulators to misunderstand that it has crossed any of the aforementioned red lines.

According to reports, this consensus calls on the industry to limit its access to extraordinary permissions through "jailbreaking" and "inducing developers" when conducting AI technology research and development, and limit the use of AI in the future. Copy and improve itself under supervision, putting a "tightening curse" on the development of AI.

The consensus also emphasized that the key to achieving the above red lines and not being crossed lies in the joint efforts of all parties in the industry to establish and improve governance mechanisms while continuously developing safer technologies. The development route of AI involves three aspects: "governance", "measurement and evaluation" and "technical cooperation". Specifically, the establishment of a governance mechanism is the basis for ensuring the correct direction of AI development, measurement and evaluation are the key to objectively evaluating the effects of AI technology application, and technical cooperation is an important guarantee for all parties to jointly promote the development of AI. The coordinated development of these aspects will help ensure the healthy development of AI technology while avoiding potential risks. Governance: It is recommended that AI models and training behaviors that exceed specific computing or capability thresholds be implemented immediately. Registration at the national level.

    Measurement and Assessment: Develop comprehensive methods and techniques before material risks arise,
  • make red lines concrete and prevention work operational
  • , and recommend the formation of a red team under human supervision Test and automate model evaluation, and developers should be responsible for the safety of artificial intelligence.
  • Technology Cooperation: Build stronger global technology networks, calling on AI developers and government funders to invest more than 1/3 of their budgets in security.

The above is the detailed content of Yao Qizhi and dozens of other Chinese and foreign experts signed the Beijing International Consensus on AI Security: AI is prohibited from replicating on its own. For more information, please follow other related articles on the PHP Chinese website!

Related labels:
AI ai
source:51cto.com
Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn
Popular Tutorials
More>
Latest Downloads
More>
Web Effects
Website Source Code
Website Materials
Front End Template