Sudden! OpenAI fires Ilya ally for suspected information leakage

Apr 15, 2024, 09:01 AM

Breaking: OpenAI has fired two employees, reportedly for suspected information leaks.

One is Leopold Aschenbrenner, an ally of chief scientist Ilya Sutskever (who has not been seen in public for months) and a core member of the Superalignment team.

The other is no lightweight either: Pavel Izmailov, a researcher on the LLM reasoning team who also previously worked on the Superalignment team.

It is unclear what information the two men leaked.

After the news broke, many netizens said they were "quite shocked":

I saw Aschenbrenner's post not long ago and thought his career was on the rise. I didn't expect a turn like this.

Some netizens believe:

With Aschenbrenner gone and Ilya Sutskever marginalized, OpenAI's promise to build safe AI looks even less credible.

Talent flowing between OpenAI, Google, and Meta, with mutual poaching, is nothing new, but a departure in the form of a dismissal is a first since the board revolt last November.

Both of them were once Ilya’s subordinates

Leopold Aschenbrenner reportedly joined OpenAI about a year ago, after working at the Future Fund. Going back further, he graduated from Columbia University at 19 and conducted research on economic growth at Oxford University.

Pavel Izmailov is also a member of the CILVR group at New York University and has said he will join NYU Tandon CSE and Courant CS as an assistant professor in the fall of 2025.

As of now, both of their X profiles still list their affiliation with OpenAI.

Neither has posted any new tweets in the past month, but their pinned posts are eye-catching.

Yes: both pinned the Superalignment team's first paper, and both are authors on it.

Which brings us to the team itself. Formed in July last year, the Superalignment team is one of three major safety groups OpenAI has set up to handle the risks large models may pose on different time horizons.

The Superalignment team is responsible for the most distant horizon, laying the groundwork for the safety of superhuman intelligence, and is co-led by Ilya Sutskever and Jan Leike.

Although OpenAI appears to take safety seriously, it is no secret that there are deep internal disagreements over how to develop AI safely.

This disagreement is even considered the biggest cause of the OpenAI board turmoil last November:

Online reports claimed that Ilya Sutskever led the "coup" because something he had seen made him uneasy.

Many members of the Superalignment team, which Ilya led, sided with him; in the subsequent chain of posts supporting Sam Altman, members of the team largely stayed silent.

With the infighting now settled, Altman has emerged as the winner, Ilya Sutskever has left the board and barely appeared in public since, and the current state of the Superalignment team is unknown.

Back to the firing of the two researchers.

Both once reported to Ilya, and The Information also described Aschenbrenner as an ally of Ilya, connected to a movement that prioritizes addressing the dangers of AI.

Some netizens therefore speculate that the firings are Altman settling scores.

Is it related to the Q* leak?

Beyond that, netizens are even more curious about what exactly was leaked.

Speculation currently centers on Q*, one of the rumored triggers of the board revolt.

According to netizen comments, the account Jimmy Apples appeared to have posted a related tweet, but by the time we opened the link, the tweet had already been deleted...

Judging from the clues left behind, it does seem the leak may be related to Q*.

Q* has by now gone from rumor to officially acknowledged to exist, but Altman himself is keeping quiet about everything concerning it.

In short, ever since Q*, there have been ideological rifts inside OpenAI.

After the ensuing turmoil, 93% of employees signed a joint letter vowing to stand or fall with Altman, and once the investigation results were announced, Altman returned to the board cleared of any wrongdoing.

On the surface all is calm, but behind the scenes of everyone working together toward AGI, the cracks remain.

Reference links:
[1] https://www.theinformation.com/articles/openai-researchers-including-ally-of-sutskever-fired-for-alleged-leaking
[2] https://x.com/BenjaminDEKR/status/1778536771837694353
[3] https://twitter.com/leopoldasch
[4] https://twitter.com/Pavel_Izmailov
[5] https://www.reddit.com/r/singularity/comments/1c1qo04/openai_superalignment_researcher_fired_for/
