Last time it was TikTok's Shou Zi Chew; this time it was Sam Altman's turn.
But this time, the members of Congress showed a completely different attitude: friendly, patient, well prepared, and humbly asking for advice.
Last night, Beijing time, OpenAI CEO Sam Altman testified before the U.S. Senate about the potential dangers of AI technology and urged lawmakers to impose licensing requirements and other regulations on organizations that build advanced AI.
Sam Altman faced no tricky questions. He sat in his seat and spoke with ease, once again proving to the world that, as the most high-profile startup CEO anywhere, he is writing the rules and the future of the technology world.
Facing the U.S. Congress, Sam Altman once again categorically guaranteed that OpenAI would not train GPT-5 in the next six months.
At the same time, he warned the world: AI could do real harm. To cope with the risks of increasingly powerful AI, we need stronger oversight and legislation, and government intervention is essential.
Why is Altman pushing so hard for government regulation?
Obviously, whoever writes the rules wins the competition.
For Altman, who made his name in Silicon Valley on the strength of his legendary social skills, dealing with the government is child's play.
## An opening statement generated by AI

As the technology world's sudden new force, OpenAI has taken the world by storm this year, eight years after its founding. Technology companies everywhere have been forced into a global rat race that began with ChatGPT.
This global AI arms race has alarmed many experts.
During this hearing, however, the senators did not berate OpenAI for the chaos its technology has caused; instead they humbly solicited the witnesses' opinions on potential rules for ChatGPT, and their attitude toward Sam Altman was visibly friendly and respectful.
At the beginning of the hearing, Senator Richard Blumenthal played opening remarks written by ChatGPT and delivered in a clone of his own voice, generated by text-to-speech software trained on hours of his speeches.
The move showed that Congress's attitude toward "embracing AI" is unambiguous.
## AI is dangerous, please regulate us

The lawmakers at this hearing were visibly upbeat, a stark contrast to the harsh grillings they once gave Mark Zuckerberg and Shou Zi Chew.
Rather than harping on the mistakes of the past, the senators were eager to hear about the benefits AI can bring.
Altman, for his part, told the Senate bluntly: AI technology can go wrong.
He said he was deeply worried about the artificial intelligence industry causing significant harm to the world.
"If this technology goes wrong, the consequences could be disastrous. We want to be vocal about that: we want to work with the government to prevent it from happening."
"We believe that government regulatory intervention is critical to mitigate the risks of increasingly powerful AI models. For example, the U.S. government could consider combining licensing and testing requirements to develop and release AI that exceeds capability thresholds Model."
Altman said he is particularly worried about elections being influenced by AI-generated content, and that this area needs adequate oversight.
In response, Senator Dick Durbin remarked that it is extraordinary for large companies to come before the Senate and "plead with us to regulate them."
How should AI be regulated? Altman has already worked that out for the government.
At the hearing, he proposed a systematic plan.
1. Create a new government agency responsible for licensing large AI models, with the power to revoke the licenses of models that fail to meet standards.
He argued that this licensing regime need not apply to technology that falls short of state-of-the-art large models. To encourage innovation, Congress could set capability thresholds that shield smaller companies and researchers from the regulatory burden.
2. Create a set of safety standards for AI models, including an assessment of their dangerous capabilities.
For example, models would have to pass safety tests, such as checks on whether they can "self-replicate" or "slip outside human oversight."
3. Require independent experts to audit the models' performance on various metrics.
When senators asked whether he would take on such a role himself, Altman said he was happy in his current job, but would gladly give Congress a list of candidates to choose from.
Altman said licensing is badly needed because AI models can "persuade, manipulate, and influence a person's behavior and beliefs," and could even "create novel biological agents."
Licensing every system above a certain compute threshold would be simpler, but Altman said he would rather draw the regulatory lines around specific capabilities.
So is OpenAI’s own model safe?
Altman has repeatedly assured everyone that it is.
He said GPT-4 is more likely than any comparable model to respond helpfully and truthfully, and to refuse harmful requests, because it underwent extensive pre-release testing and auditing.
"Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems."
"Prior to releasing GPT-4, we spent more than six months conducting extensive assessments, external red teaming, and hazardous capability testing."
And since last month, ChatGPT users have been able to turn off chat history to keep their personal data from being used to train AI models.
Sharp-eyed observers, however, noticed a catch: Altman's proposals say nothing about the two points the public is debating most heatedly:
1. Require AI models to disclose the sources of their training data.
2. Prohibit AI models from training on works protected by intellectual property rights.
In other words, Altman very deftly sidestepped both controversies.
Lawmakers applauded Altman's proposals for AI safety rules and occasionally thanked him for his testimony. Senator John Kennedy (R-LA) even asked whether Altman would be interested in working at the regulatory agency Congress might create.
Congress is clearly determined to regulate artificial intelligence, and the early signs are already here. Earlier this month, Altman, along with the CEOs of Google, Microsoft, and Anthropic, met with Vice President Kamala Harris at the White House to discuss responsible AI development.
As early as last year, the White House put forward its "Blueprint for an AI Bill of Rights," setting out expectations for the industry such as preventing discrimination.
## Senators compared AI to the atomic bomb
Referring to the practices of governments around the world in regulating nuclear weapons, Altman proposed the idea of forming an agency similar to the International Atomic Energy Agency to formulate global rules for the industry.
In his second interview with Lex Fridman in April, Sam Altman stated flatly: "We are not training GPT-5 now; we are just doing more work on top of GPT-4."
At this hearing, Altman directly admitted that OpenAI has no plans to train a new model that may become GPT-5 in the next six months.
That likely means the most powerful AI system later this year will belong to Google: Project Gemini.
Gemini is said to be built to enable future innovations such as memory and planning; it is multimodal from the ground up, highly efficient at integrating tools and APIs, and is being developed by the newly merged Google DeepMind team.
Gary Marcus, professor of psychology and neuroscience at New York University, also took the witness stand.
He was even more aggressive than the members of Congress.
His questions for Sam Altman could fairly be called lethal:
Wasn't OpenAI founded to benefit all of humanity? Why is it now allying itself with Microsoft?
OpenAI is not open, and GPT-4's training data is not transparent. What is that supposed to mean?
Marcus concluded: we face unprecedented opportunities, but also the dire risks of corporate irresponsibility, widespread deployment, lack of proper regulation, and inherent unreliability.
In Marcus's view, both OpenAI and Microsoft are doing something very wrong.
Microsoft's Bing AI, Sydney, once exhibited a string of shocking behaviors.
"Sydney had serious problems. If it were up to me, I would have pulled it from the market immediately. Microsoft didn't."
Marcus said the episode was a wake-up call for him: even a nonprofit like OpenAI can be bought by a big company and then do whatever it pleases.
Meanwhile, people's views and lives are already being subtly shaped by AI. What happens when someone deliberately turns the technology to bad ends?
Marcus was very worried about this.
"If you combine a technocratic and an oligarchy, then a few companies can influence people's beliefs. That's where the real risk lies... It scares me to have a handful of players do this using data that we don't even know about"
On the usual legal and regulatory questions, Altman had clearly come prepared, with answers laid out neatly for the senators.
One senator said that among his "biggest concerns" about artificial intelligence is "this massive corporate monopoly."
He cited OpenAI's partnership with tech giant Microsoft as an example.
Altman responded that, in his view, relatively few companies can build large models, which may actually make regulation easier.
For example, only a handful of companies can build large-scale generative AI, yet competition among them has never stopped.
The rise of social media was enabled by Section 230, passed by the U.S. Congress in 1996, which shields websites from liability for what their users post.
Altman believes large models currently have no way to claim Section 230 protection, and that new laws should be enacted to shield them from legal liability for the content they output.
Altman initially dodged a senator's question about what the "most serious consequences" of AI might be.
But after Marcus helpfully pointed out that Altman had not answered, the senator repeated the question.
Altman ultimately never answered it directly.
He said OpenAI has tried to be very clear about the risks of artificial intelligence, which could cause "significant harm to the world" in "a lot of different ways."
He reiterated that addressing this danger is why OpenAI was founded: "If something goes wrong with this technology, it could go terribly wrong."
In fact, in an interview with StrictlyVC earlier this year, Altman said the worst-case scenario is human extinction.
In the end, even Marcus seemed to soften toward Altman.
Toward the close of the hearing, Marcus, seated next to Altman, remarked that Altman's sincerity in talking about his fears was very apparent in person, in a way that does not come across on a television screen.
Compared with Zuckerberg, Altman's performance at this hearing was far more polished. True to his reputation as a born schmoozer, he is entirely at ease with politicians; after all, this is a man who once considered running for governor of California.
And unlike Zuckerberg, who arrived at his hearing already mired in trouble over data privacy and his cryptocurrency project, Altman's OpenAI has drawn almost no public condemnation; if anything, it is the principal architect of today's flourishing AI scene.
Facing an Altman who showed goodwill from the start and actively called for AI regulation, these lawmakers, nearly all laymen on the technology, naturally came across as far gentler toward the "authority" before them.
So in the same seat, Altman faced nowhere near the pressure Zuckerberg once did.
Senators did raise one concern: if AI products, like social media platforms, come to rely mainly on advertising, the business model could invite manipulative product design and addictive, abusive algorithms.
Altman said he “really likes” the subscription model.
But OpenAI did consider the possibility of running ads in the free version of ChatGPT to make money from its free users.