The development of superintelligence is a topic of endless debate among scientists. The concept evolved from the more traditional notion of artificial general intelligence (AGI). Over the next decade, this powerful technology could emerge, capable both of helping solve important global problems and of leading to the disempowerment or even extinction of humanity.
The strategy of OpenAI, the American artificial intelligence research company, is to build an automated alignment researcher with roughly human-level capabilities, then leverage massive computing resources to iteratively train and align superintelligence. The process, which OpenAI calls superintelligence alignment, requires innovation, extensive validation, and adversarial stress testing of AI alignment techniques.
To address this challenge, OpenAI is investing significant research resources and actively encouraging outstanding researchers and engineers to join the effort. However, it remains to be seen whether the shift in terminology from artificial general intelligence to superintelligence will have a meaningful impact on the ongoing debate around the risks and benefits of artificial intelligence.
OpenAI highlights the potential of superintelligence, which it considers one of the most impactful technologies ever invented, with the ability to help solve major global problems. Still, it acknowledges the enormous risks: superintelligence could lead to the disempowerment or even extinction of humanity.
OpenAI believes that while superintelligence may seem far off, it could arrive within the next decade. To address the challenge of aligning superintelligence with human intent, new governance institutions will need to be established to manage the risks. Strikingly, the term OpenAI uses is not the traditional artificial general intelligence (AGI). Its stated reasons are as follows:
We focus on superintelligence rather than artificial general intelligence to highlight a much higher level of capability. We have a lot of uncertainty about how quickly this technology will develop over the next few years, so we choose to aim at the more difficult target: aligning a much more capable system.
Current AI alignment techniques cannot yet fully control a potential superintelligent AI, although there are methods such as reinforcement learning from human feedback (RLHF) for steering models. Humans cannot reliably supervise systems that are much smarter than themselves, and existing techniques do not scale into the realm of superintelligence. OpenAI emphasizes the need for scientific and technical breakthroughs to overcome these challenges.
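To make the RLHF idea concrete, here is a minimal, hypothetical sketch (not OpenAI's code) of the reward-modeling step at its core: a reward model is trained so that human-preferred responses score higher than rejected ones, via the pairwise Bradley-Terry preference loss.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred response beats the rejected one."""
    # Bradley-Terry model: P(chosen > rejected) = sigmoid(r_chosen - r_rejected)
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the reward model rates the preferred answer higher,
# pushing the model's scores to agree with human judgments.
print(preference_loss(2.0, 0.5) < preference_loss(0.5, 2.0))  # True
```

Minimizing this loss over many human-labeled comparison pairs yields the reward signal used to fine-tune the policy, which is exactly where the scaling problem arises: the scheme presupposes humans can judge which response is better.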
OpenAI’s approach aims to develop an automated alignment researcher with roughly human-level capabilities. Vast computing resources would then be used to scale this effort and iteratively align superintelligence. Key steps include developing a scalable training method, validating the resulting models, and stress testing the entire alignment pipeline. The title of OpenAI’s announcement names this concept “superalignment.”
To provide a training signal on tasks that are too difficult for humans to evaluate, AI systems themselves can assist with evaluation (scalable oversight). To verify alignment, supervision must generalize to tasks humans cannot directly oversee, and researchers must be able to detect problematic behavior and problematic model internals. Deliberately training misaligned models is an adversarial testing method that helps validate the effectiveness of alignment techniques.
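The scalable-oversight idea can be illustrated with a deliberately simplified, hypothetical sketch (all names are invented for illustration): an AI assistant pre-screens model answers, and a human overseer only reviews the cases the assistant flags, so limited human judgment stretches over many more evaluations.

```python
from typing import Callable

def oversee(answers: list[str],
            ai_checker: Callable[[str], bool],
            human_review: Callable[[str], bool]) -> list[bool]:
    """Approve or reject each answer; the AI assistant pre-screens,
    and only flagged items are escalated to the human overseer."""
    verdicts = []
    for ans in answers:
        if ai_checker(ans):              # assistant finds no problem: approve
            verdicts.append(True)
        else:                            # suspicious: escalate to the human
            verdicts.append(human_review(ans))
    return verdicts

# Toy example: the assistant flags answers containing "unsafe"; the human rejects them.
print(oversee(["ok", "unsafe plan"],
              lambda a: "unsafe" not in a,
              lambda a: False))  # [True, False]
```

The open research question, of course, is whether such AI-assisted evaluation stays trustworthy once the systems being checked are smarter than both the assistant and the human.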
OpenAI expects its research focus to shift as more is learned about the problem, and it plans to share its roadmap in the future. It has formed a team of top machine learning researchers and engineers dedicated to solving superintelligence alignment. Over the next four years, OpenAI will devote 20% of the compute it has secured to date to this work.
Although success is not guaranteed, OpenAI remains optimistic that the problem can be solved with a focused, concerted effort. Its goal is to produce evidence and arguments that convince the machine learning and safety community that the problem has been solved, and it is actively working with experts across disciplines to account for broader human and societal concerns.
OpenAI welcomes outstanding researchers and engineers, even those who have not previously worked on alignment, to join the effort. It regards superintelligence alignment as one of the most important unsolved technical problems, and as a tractable machine learning problem where newcomers can make significant contributions.
It seems the raging debate over artificial intelligence, artificial general intelligence, and the tangled questions of practical benefit versus existential risk is opening new divisions. The vocabulary has changed somewhat, but it is not yet clear whether this is science or semantics.