


OpenAI CEO responds to the 'hush agreement'; the dispute again comes down to equity. Altman: "It's my fault"
Since Ilya Sutskever and Superalignment lead Jan Leike resigned, OpenAI has had no peace: more and more employees have left, and more conflicts have surfaced.
Yesterday, the controversy shifted to a strict "hush agreement."
Journalist Kelsey Piper broke the news that the onboarding documents every OpenAI employee signs include this instruction: within sixty days of leaving the company, you must sign a separation document containing a "general release"; if you do not complete it within 60 days, your equity will be cancelled. A screenshot of the document caused a stir and prompted OpenAI CEO Sam Altman to respond quickly:
We have never clawed back anyone's vested equity, nor will we do so if people do not sign the separation agreement (or do not agree to a non-disparagement agreement). Vested equity is vested equity, full stop.
Just 15 minutes later, Piper pressed again with a pointed question: now that you know about it, will the restrictive agreements that former employees already signed be cancelled?
## Most people want a clear solution, not just an apology
Kelsey Piper also said: "As for whether what I did was unfair to Sam, I mean, I think this is one of the responsibilities of being a CEO. Sometimes an apology is not enough: people also want clarity, and they want to see evidence that the policy has actually changed."
It was reported last year that the most common OpenAI compensation package is a fixed base salary of $300,000 plus an annual PPU (Profit Participation Unit) grant of about $500,000, a form of equity compensation. Over a four-year PPU grant period, most OpenAI employees can therefore expect at least $2 million in equity-based compensation. If the report is accurate, most former employees who "were resigned" would understandably want to hold on to that equity to the end.
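As a rough illustration of those figures (a minimal sketch; the salary and PPU numbers are the press estimates quoted above, not confirmed OpenAI data):

```python
# Back-of-the-envelope calculation using the reported compensation figures.
# All numbers are press estimates, not official OpenAI data.

base_salary = 300_000        # reported fixed annual base salary, USD
annual_ppu_grant = 500_000   # reported annual PPU (Profit Participation Unit) grant, USD
vesting_years = 4            # grant period cited in the report

equity_total = annual_ppu_grant * vesting_years   # 2,000,000
salary_total = base_salary * vesting_years        # 1,200,000

print(f"Equity over {vesting_years} years: ${equity_total:,}")
print(f"Base salary over {vesting_years} years: ${salary_total:,}")
```

On those numbers, the PPUs would make up well over half of a typical employee's four-year compensation, which is why the threat of cancelling equity carries so much weight.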
In a post signed "Sam and Greg," Brockman argued that OpenAI has taken concrete steps to ensure the safe development and deployment of AI technology.
But the response seems to have fallen flat, and it was even mocked. The post reads: First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We have repeatedly demonstrated the remarkable possibilities of scaling up deep learning and analyzed their implications; we called for international governance of AGI before such calls were popular, and we helped pioneer the science of assessing AI systems for catastrophic risks.
Second, we have been laying the foundations needed to safely deploy increasingly capable systems. Using a new technology safely for the first time is not easy. For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and we have since continued to improve model behavior and abuse monitoring based on lessons learned from deployment.
Third, the future will be harder than the past. We need to keep improving our safety work to match the risks of each new model. Last year we adopted the Preparedness Framework to help systematize this work.
Now is a good time to talk about how we see the future.
As models continue to become more powerful, we expect they will begin to integrate more deeply with the world. Users will increasingly interact with systems composed of many multimodal models and tools that can take actions on their behalf, rather than talking to a single model with only textual input and output.
We believe such systems will be tremendously beneficial and helpful to people, and that it is possible to deliver them safely, but it will require an enormous amount of foundational work. That includes being thoughtful about what they are connected to as they train, solutions to hard problems such as scalable oversight, and other new kinds of safety work. As we move in this direction, we are not yet sure when we will meet our safety bar for release, and it is okay if that delays release.
We know that we cannot imagine every possible future scenario. So we need a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony between safety and capabilities. We will continue to conduct safety research across different time horizons, and we will continue working with governments and many stakeholders on safety.
There is no proven playbook for navigating the path to AGI. We believe that empirical understanding can help point the way forward. We believe in both delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.
Signed: Sam and Greg
