
OpenAI CEO responds to the 'hush agreement'; the dispute again comes down to equity. Altman: It's my fault

Jun 09, 2024, 5:07 PM

Since the resignations of Ilya Sutskever and Jan Leike, the co-leads of the Superalignment team, OpenAI has had no peace: more and more people have resigned, and more conflicts have surfaced.

Yesterday, the focus of the controversy shifted to a strict "hush agreement."

Journalist Kelsey Piper broke the news that every employee's onboarding documents state: "Within sixty days of leaving the company, you must sign separation documents containing a 'general release.' If you do not complete this within 60 days, your equity benefits will be cancelled." A screenshot of the document caused an uproar and prompted OpenAI CEO Sam Altman to respond quickly:

We have never clawed back anyone's vested equity, and we will not do so if people do not sign a separation agreement (or do not agree to a non-disparagement agreement). Vested equity is vested equity, period.

Sam Altman also addressed other questions about how OpenAI handles equity.

Just 15 minutes later, Piper pushed back again and asked bluntly: now that you know about it, will the restrictive agreements already signed by former employees be cancelled?


Because most people want a concrete solution, not just an apology.


Kelsey Piper also said: "As for whether what I did was unfair to Sam, I mean, I think this is one of the responsibilities of being a CEO. Sometimes an apology is not enough; people also want clarification and want to see evidence that the policy has changed."

It was reported last year that the most common OpenAI compensation package is a fixed base salary of $300,000 plus an annual PPU (Profit Participation Unit) grant worth about $500,000, a form of equity compensation. In other words, over the four-year PPU vesting period, most OpenAI employees can expect to receive at least $2 million in equity-based compensation.


If the reports are true, most former employees who were "asked to resign" would presumably want to "hold on to the end."

Beyond this dispute, another controversy is unfolding at the same time: how OpenAI will handle safety and future risks.

According to multiple media reports, following the recent departures of its two co-leads, Ilya Sutskever and Jan Leike, OpenAI's Superalignment team has been disbanded. Jan Leike also published a series of posts on Friday, criticizing OpenAI and its leadership for letting "safety" take a back seat to "shiny products."

Earlier today, OpenAI co-founder Greg Brockman wrote a lengthy response to the issue.

In this post, signed "Sam and Greg," Brockman argued that OpenAI has taken measures to ensure the safe development and deployment of AI technology:


We are very grateful for everything Jan has done for OpenAI, and we know he will continue to contribute to its mission from the outside. In light of the issues raised by his departure, we would like to explain how we are thinking about our overall strategy.

First, we have increased awareness of the risks and opportunities of AGI so that the world can be better prepared for it. We have repeatedly demonstrated the amazing possibilities of scaling deep learning and analyzed its impact; we called for international governance of AGI before such calls were popular and helped pioneer the science of assessing the catastrophic risks of AI systems.


Second, we have been laying the foundations needed to safely deploy increasingly capable systems. Using a new technology safely for the first time is not easy. For example, our team did a lot of work to bring GPT-4 to the world in a safe way, and we have since continued to improve model behavior and abuse monitoring based on lessons learned during deployment.


Third, the future will be harder than the past. We need to keep improving our safety work to match the risks of each new model. Last year, we adopted the Preparedness Framework to help systematize this work.


Now is a good time to talk about how we see the future.


As models continue to become more powerful, we expect they will begin to integrate more deeply with the world. Users will increasingly interact with systems composed of many multimodal models and tools that can take actions on their behalf, rather than talking to a single model with only textual input and output.


We believe these systems will be extremely beneficial and helpful to people, and that it is possible to deliver them safely, but doing so will require a great deal of groundwork. This includes careful consideration of what they are connected to during training, solutions to hard problems such as scalable oversight, and other new kinds of safety work. As we move in this direction, we are not yet sure when we will meet our safety bar for release, and it is fine if that pushes back release dates.


We know we cannot imagine every possible future scenario. So we need a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and a harmonious combination of safety and capability. We will continue to conduct safety research across different time horizons. We will also continue to work with governments and many stakeholders on safety.


There is no proven playbook for the road to AGI. We believe that empirical understanding can help light the way forward. We believe in delivering tremendous upside while working to mitigate serious risks; we take our role here very seriously and carefully weigh feedback on our actions.


— Sam and Greg

But the response seems to have fallen flat, and even drew ridicule.


Gary Marcus, an active scholar in the field of AI, also said: Transparency speaks louder than words.


It seems that Greg Brockman does not intend to provide a more specific response in terms of policies or commitments.

After the departures of Jan Leike and Ilya Sutskever, another OpenAI co-founder, John Schulman, has taken over the work previously done by the Superalignment team. But it is no longer a dedicated department; instead, it is a loosely connected group of researchers embedded in various parts of the company. OpenAI describes this as "integrating (the team) more deeply."

What is the truth behind the controversy? Perhaps Ilya Sutskever knows best, but he has chosen to exit gracefully and may never speak about it again. After all, he already has "a project that is very personally meaningful" to him.
