On May 29, OpenAI announced the establishment of a Safety and Security Committee, responsible for making recommendations to the board on critical safety and security decisions for OpenAI projects and operations.
The committee's first task is to evaluate and further develop OpenAI's development processes and safeguards over the next 90 days. At the end of that period, the committee will share its recommendations with the full board.
According to the announcement, the committee will be led by board members Bret Taylor (Chair), Adam D'Angelo, and Nicole Seligman, along with OpenAI CEO Sam Altman.
In addition, OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also join the committee.
OpenAI also said it will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials Rob Joyce and John Carlin, who already advise OpenAI on security matters.
OpenAI additionally revealed that it has recently begun training its next frontier model, which it expects to bring its systems to the next level of capability on the path to AGI.