OpenAI is set to launch an AI image detection tool.
The latest news is that the company is developing a tool for detecting AI-generated images.
According to Chief Technology Officer Mira Murati:
The tool is highly reliable, with an accuracy rate of up to 99%.
It is currently in internal testing and will be released to the public soon.
That figure is genuinely exciting, especially given that OpenAI’s previous effort in AI text detection ended in failure with an accuracy rate of just 26%.
OpenAI has ventured into AI content detection before.
In January this year, it released an AI text detector meant to distinguish AI-generated content from human writing and prevent the abuse of AI text.
However, the tool was quietly retired in July: with no announcement, its page simply began returning a 404.
The reason: its accuracy was far too low, "almost like guessing."
According to data published by OpenAI itself:
It could only correctly identify 26% of AI-generated text, while incorrectly flagging 9% of human-written text as AI-generated.
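For a rough sense of what those two figures mean in practice, here is a back-of-the-envelope sketch (the 50/50 corpus split below is our own assumption, not part of OpenAI's report): the detector misses nearly three quarters of the AI-written text it is supposed to catch.

```python
# Hypothetical illustration, not OpenAI's evaluation: what a 26% detection rate
# on AI text and a 9% false-positive rate on human text imply for an assumed,
# evenly split corpus of 1,000 AI-written and 1,000 human-written samples.
ai_samples, human_samples = 1_000, 1_000   # assumed corpus sizes
recall, false_positive_rate = 0.26, 0.09   # figures reported by OpenAI

true_positives = ai_samples * recall                    # AI text correctly flagged: 260
false_negatives = ai_samples - true_positives           # AI text missed: 740
false_positives = human_samples * false_positive_rate   # human text wrongly flagged: 90
true_negatives = human_samples - false_positives        # human text correctly cleared: 910

precision = true_positives / (true_positives + false_positives)
accuracy = (true_positives + true_negatives) / (ai_samples + human_samples)

print(f"AI text missed: {false_negatives:.0f} of {ai_samples}")     # 740 of 1000
print(f"precision of a positive flag: {precision:.0%}")             # ~74%
print(f"overall accuracy on this assumed corpus: {accuracy:.1%}")   # 58.5%
```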
After this abrupt ending, OpenAI said it would take user feedback on board, keep improving, and research more effective text provenance techniques.
At the same time, it announced that tools to determine whether images, audio, and video are AI-generated would also be developed.
Now, with the arrival of DALL-E 3 and the continuous iteration of similar tools such as Midjourney, AI image generation keeps getting stronger.
The biggest concern is that it will be used to fabricate fake news images around the world.
For example, an AI-forged "scene" of a "2001 Cascadia 9.1 earthquake and tsunami" circulated on Reddit, where it received more than 1.2k upvotes.
Compared with AI text detection tools, AI image detection tools are clearly more urgently needed (probably because nobody much cares whether a speech was written by the speaker or a secretary, whereas "a picture as proof" is hard for many people not to believe).
However, just as when OpenAI's AI text detector was taken offline, some netizens have pointed out:
Developing generation and detection tools at the same time is contradictory: if one side works well, the other by definition does not, and there may also be a conflict of interest.
The more straightforward idea is to hand detection over to a third party.
But third-party detectors for AI text have not performed well so far either.
As far as the technology itself is concerned, another feasible approach is to embed a hidden watermark at the moment the AI generates content.
That’s what Google does.
Recently (at the end of August this year), Google beat OpenAI to the punch with an AI image detection technology:
SynthID.
It currently works with Google's text-to-image model Imagen, embedding an imperceptible "this was generated by AI" identifier into every image the model produces - even if the image is later cropped, filtered, recolored, or put through lossy compression, the identifier can still be recognized.
In internal testing, SynthID accurately identified a large number of edited AI images, but the specific accuracy rate was not disclosed.
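To make the general idea of pixel-level watermarking concrete, here is a deliberately naive sketch in Python. It hides an identifier in the least significant bits of an image's red channel; this is emphatically not SynthID's (undisclosed) method, and unlike SynthID such a simple mark would not survive cropping, filters, or lossy compression.

```python
# Toy illustration only, not SynthID: Google has not disclosed SynthID's actual
# algorithm, and a naive least-significant-bit (LSB) mark like this one would be
# destroyed by the edits SynthID is said to survive. The point is just to show
# the idea of hiding an identifier in pixel values rather than in metadata.
import numpy as np

def embed_mark(pixels: np.ndarray, mark_bits: np.ndarray) -> np.ndarray:
    """Hide a repeating bit pattern in the lowest bit of the red channel."""
    marked = pixels.copy()
    red = marked[..., 0]
    bits = np.resize(mark_bits, red.shape).astype(np.uint8)  # tile the mark across the image
    marked[..., 0] = (red & 0xFE) | bits                     # overwrite the least significant bit
    return marked

def read_mark(pixels: np.ndarray, mark_len: int) -> np.ndarray:
    """Read back the first `mark_len` hidden bits from the red channel."""
    return pixels[..., 0].reshape(-1)[:mark_len] & 1

# Hypothetical example: a random 8x8 RGB "image" and a 16-bit identifier.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
identifier = rng.integers(0, 2, size=16, dtype=np.uint8)

marked = embed_mark(image, identifier)
assert np.array_equal(read_mark(marked, 16), identifier)          # identifier recovered
assert np.abs(marked.astype(int) - image.astype(int)).max() <= 1  # change is imperceptible
```

The gap between this toy and SynthID is precisely robustness: the real value of a production watermark is that the hidden signal survives the kinds of edits listed above.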
We don't yet know what technology OpenAI's upcoming tool will use, or whether it will turn out to be the most accurate one on the market.
The above news comes from remarks made by OpenAI's CTO and Sam Altman at the Wall Street Journal's Tech Live conference this week.
At the meeting, the two also revealed more news about OpenAI.
For example, the next generation of large models may be launched soon.
The name is not disclosed, but OpenAI did apply for the GPT-5 trademark in July this year.
Some people are concerned about "GPT-5's" accuracy and asked whether it will stop producing errors or false content.
On this point, the CTO was more cautious and would only say "maybe."
She explained:
We have made great progress on the hallucination problem of GPT-4, but it is not yet where we need to be.
Altman, for his part, talked about the rumored chip-making plan.
Judging from his exact words, there is no definitive confirmation, but they leave plenty of room for imagination:
We will definitely not do this if we follow the default path. But I would never rule it out.
Compared with the chip-making plan, Altman's reply to the rumors about building a phone was much more direct.
In September this year, it was reported that former Apple chief design officer Jony Ive (who spent 27 years at Apple) was in contact with OpenAI. Sources said Altman wanted to develop a hardware device that would offer a more natural and intuitive way to interact with AI - something that could be called the "iPhone of AI."
Now, he told everyone:
I am not sure yet what I want to do; I just have some vague ideas.
And:
No AI device will overshadow the popularity of the iPhone, and I have no interest in competing with any smartphone.
Reference link:
[1] https://finance.yahoo.com/news/openai-claims-tool-detect-ai-051511179.html?guccounter=1
[2] https://gizmodo.com/openais-sam-altman-says-he-has-no-interest-in-competing-1850937333