After ChatGPT became popular, major companies such as Microsoft, Google, and Meta entered the race.
At the same time, concerns have been raised about its widespread application.
Former Alphabet executive chairman Eric Schmidt and his co-authors wrote in an article published in the Wall Street Journal that generative artificial intelligence poses philosophical and practical challenges on a scale not experienced since the Enlightenment.
Just yesterday, OpenAI CEO Sam Altman published a post sharing OpenAI's current and follow-up plans for artificial general intelligence (AGI).
The article emphasizes that OpenAI’s mission is to ensure that AGI benefits all mankind.
OpenAI Vision: Ensure AGI benefits all mankind
The post puts forward the three principles OpenAI cares about most.
If AGI is successfully built, the technology could not only create new possibilities and boost the global economy, but also accelerate the discovery of new scientific knowledge and raise human living standards across the board.
AGI can give everyone incredible new abilities.
In a world where AGI is within reach, everyone could get help with almost any cognitive task, making AGI a huge amplifier of human intelligence and creativity.
On the other hand, as some people worry, AGI also brings serious risks of misuse, accidents, and social disruption.
Yet the potential benefits of AGI are so great that society cannot simply halt its development forever; instead, society and AGI developers must figure out how to get it right.
What life alongside AGI will look like is hard to predict, and today's AI progress may run into new obstacles. But with ChatGPT's success as a starting point, OpenAI lists the principles the company cares about most:
1. We want AGI to empower humanity to flourish to the greatest extent possible in the universe. We don't expect the future to be a false utopia, but we want to maximize the good in the technology and minimize the bad, so that AGI becomes an amplifier of human goodwill.
2. We want the benefits of, access to, and governance of AGI to be shared widely and fairly.
3. We want to navigate the potential risks correctly. Facing these risks, what seems right in theory often proves harder to control in practice than expected. We must keep learning and adapting by deploying less powerful versions of the technology, minimizing any "point of no return".
In the short term, OpenAI plans to do the following.
First, as the company continues to build more powerful AI systems, it wants to deploy them quickly to accumulate hands-on experience.
In OpenAI's view, the best way to manage AGI carefully is a gradual transition to a world where AGI is ubiquitous. In the future it envisions, powerful AI technology will accelerate the pace of progress worldwide.
A gradual approach gives the public, policymakers, and research institutions time to understand the changes AGI brings, to experience the systems' advantages and disadvantages first-hand, to adapt how the economy is organized, and to put effective regulation in place.
At the same time, gradual development lets society and AI progress together, allowing people to work out what they want while the stakes are relatively low.
OpenAI believes the best way to meet the challenges of AI deployment is a tight feedback loop of rapid learning and careful iteration. Under the impact of the new technology, society will face major questions such as what AI systems should be allowed to do, how to eliminate bias, and how to deal with job displacement.
Wider use of AI will help a great deal, and OpenAI hopes to promote the technology by offering models through its API and by open-sourcing them.
OpenAI said that as the systems it develops get closer to AGI, the company has become increasingly cautious about both creating and deploying models.
OpenAI needs to weigh the pros and cons of using large models. On the one hand, the use of advanced large-scale models marks important scientific and technological progress; on the other hand, after using the models, companies and institutions also need to consider issues such as how to restrict malicious actors and avoid adverse impacts on society and the economy.
Second, OpenAI is working to create more aligned and controllable models. The progression from the first version of GPT-3 to InstructGPT and ChatGPT illustrates OpenAI's efforts in AI safety.
Notably, society will need to agree on extremely broad bounds for how artificial intelligence can be used, and as models become more powerful, OpenAI will need to develop new alignment techniques.
OpenAI's short-term plan is to use AI to help humans evaluate the outputs of more complex models and to monitor complex systems; in the long term, it will use AI to help develop better alignment techniques.
OpenAI believes AI safety and AI capability are equally important and should not be discussed separately; it says its best safety work has come from its most capable models. In other words, improving AI safety matters greatly to the progress of AI research.
Third, OpenAI hopes to see three key questions addressed globally: how to govern AI systems, how to fairly distribute the benefits they generate, and how to share access to them.
In addition, under its charter, OpenAI commits to assisting other organizations in improving safety rather than racing against competitors in late-stage AGI development.
OpenAI's investment rules cap the returns shareholders can earn, so that the organization is not tempted to capture unlimited value or to risk deploying a potentially catastrophically dangerous technology.
OpenAI is governed by a nonprofit, ensuring the institution is run for the benefit of humanity and that this mission overrides any for-profit interest.
Finally, OpenAI believes governments around the world should maintain oversight of machine-learning training runs above a certain scale.
Compared with these short-term plans, OpenAI's long-term vision for AGI is more ambitious.
OpenAI believes that the future of humanity should be determined by humans themselves, and sharing information about progress with the public is crucial. Therefore, all AGI development projects should be rigorously scrutinized and the public consulted on major decisions.
In OpenAI's view, the first AGI will be only a small node in the continued development of artificial intelligence, with new progress flowing from that point. The company predicts that, for a long time to come, AI may advance at roughly the pace we have seen over the past decade.
Perhaps one day the world will be transformed beyond recognition, and technological progress may also pose great risks to humanity: a misaligned superintelligent AGI could cause grievous harm to the world.
OpenAI therefore believes a slower takeoff of AGI is easier to make safe. Even though technical advances would allow AGI to be developed rapidly, a deliberate slowdown gives society enough time to adapt.
The successful transition to a world with superintelligence may be the most important, most hopeful, and most frightening project in human history. No one can guarantee when that day will come, but the stakes are clear enough to bring everyone together.
No matter what, it will be a world that is prosperous beyond imagination. And OpenAI hopes to contribute to the world a general artificial intelligence that is consistent with this prosperity.
Recently, former US Secretary of State Henry Kissinger, former Alphabet executive chairman Eric Schmidt, and Daniel Huttenlocher, inaugural dean of the MIT Schwarzman College of Computing, co-wrote the article "ChatGPT Heralds an Intellectual Revolution".
The article revealed their concerns about current generative artificial intelligence:
Generative artificial intelligence poses philosophical and practical challenges not experienced since the Enlightenment.
The article begins with an explanation of the current impact of ChatGPT on humans.
A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing.
The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable broadly and rapidly. But today's new technology reverses that process.
Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration.
In the process, it creates a gap between human knowledge and human understanding.
If we are to successfully navigate this transition, we will need to invent new concepts of the human mind and interaction with machines. This is the fundamental challenge in the era of artificial intelligence.
This new technology is called generative artificial intelligence, and the most representative one is ChatGPT developed by OpenAI Research Laboratory.
As its capabilities broaden, they will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society.
Like the Enlightenment, generative artificial intelligence opens a revolutionary path for human reason and new horizons for the consolidation of knowledge.
But there are clear differences between the two. Enlightenment knowledge was achieved progressively, step by step, with each step testable and teachable.
AI systems such as ChatGPT, by contrast, store and distill huge amounts of existing information and can output results without being able to explain how they got there, which no human can do.
Moreover, AI's capabilities are not static; they expand exponentially as the technology advances.
We urgently need to develop a sophisticated dialectic that enables people to challenge generative AI interactively, not merely to justify or explain its answers but to interrogate them.
With sustained skepticism, we should learn to probe AI methodically and assess whether and to what extent its answers are trustworthy. This will require deliberate effort to reduce our unconscious biases, rigorous training, and extensive practice.
The question remains: Can we learn quickly enough to challenge rather than obey? Or will we eventually have to obey? Are what we perceive to be errors part of intentional design? What if there are malicious elements in artificial intelligence?
Another key task is to reflect on which questions must remain the province of human thought and which can be risked by handing them to automated systems.
However, even with heightened skepticism and interrogation techniques, ChatGPT proves that the genie of generative technology has left the bottle. We must be thoughtful about the questions we ask.
As the technology becomes more widely understood, it will have a profound impact on international relations. Unless knowledge of the technology is universally shared, imperialism may focus on acquiring and monopolizing data to achieve the latest advances in AI.
Models may produce different results depending on the data they gather, and societies may evolve divergently, on the basis of increasingly different knowledge bases and therefore different perceptions of the challenges they face.
At the end of the article, two questions are raised that make people think deeply:
What will happen if this technology cannot be fully controlled?
What if the technology can always be used to tell lies and create fake pictures and videos, and people never learn to distrust what they see and hear?
Meta's chief AI scientist Yann LeCun replied:
- People will learn to better track sources and evaluate the reliability of what they see and hear; this will most likely happen with the help of new technology.
- Current autoregressive LLMs are notoriously uncontrollable, but new AI systems will be controllable, factual, and non-toxic when required.
Some netizens quoted Andy Grove as saying, "There are two choices: adapt or die."
What would you do if it were you?
References:
https://openai.com/blog/planning-for-agi-and-beyond/
https://www.wsj.com/articles/chatgpt-heralds-an-intellectual-revolution-enlightenment-artificial-intelligence-homo-technicus-technology-cognition-morality-philosophy-774331c6
https://twitter.com/ericschmidt/status/1629361652574621701