OpenAI has reportedly developed a tool that can detect when someone uses ChatGPT to generate content with 99.9% accuracy, but the company has no plans to release it to the public.
The California company has long hinted that it was researching technology that can detect artificial intelligence (AI) content, but it has led its clients to believe that this technology was years away. However, according to insiders who spoke to the Wall Street Journal (WSJ), this tool has been available for months.
Detecting AI-generated content has become a significant challenge as adoption has soared. Legislators have drafted laws that would require AI developers to embed watermarks and other distinctive features in such content, but none has taken hold.
The challenge is especially acute in some fields, such as education, where a recent study found that 60% of middle- and high-school students use AI to help with schoolwork.
According to OpenAI insiders, this challenge was solved over a year ago by a team that was able to achieve 99.9% accuracy in detecting ChatGPT-generated content. However, the company doesn’t plan on releasing the tool to the public.
“It’s just a matter of pressing a button,” said one of the sources.
OpenAI says the delay is necessary to protect users, as the tool presents “important risks.”
“We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI,” a company spokesperson told WSJ.
The firm also claimed that if the technology is available to everyone, bad actors could decipher the technique and develop workarounds.
However, sources say the real motive is user retention. A company survey last year found that 70% of ChatGPT users opposed the tool, with one in three saying they would quit the chatbot for a rival.
Since then, senior executives have suppressed the tool, claiming it wasn’t ready for a public launch. In a meeting two months ago, the top brass stated that this tool, which relies on watermarking outputs, was too controversial and that the company must explore other options.
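OpenAI has not disclosed how its watermarking scheme works. For readers unfamiliar with the idea, below is a minimal, generic sketch of the “green list” approach described in the academic literature on LLM watermarking: the generator seeds a pseudo-random partition of the vocabulary from the previous token and prefers tokens from the “green” half, and a detector then checks whether a suspicious text lands in the green half far more often than chance. All function names here are hypothetical, and this is an illustration of the general technique, not OpenAI’s actual method.

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Seed a PRNG from the previous token and mark a fraction of the
    vocabulary as 'green'; a watermarking generator prefers green tokens."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: count how often a token falls in the green list keyed by its
    predecessor. Unwatermarked text scores near `fraction` (0.5 here);
    watermarked text scores well above it."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)


if __name__ == "__main__":
    vocab = [f"w{i}" for i in range(100)]
    rng = random.Random(0)
    tokens = ["w0"]
    for _ in range(50):  # toy "generator": always pick a green token
        tokens.append(rng.choice(sorted(green_list(tokens[-1], vocab))))
    print(green_fraction(tokens, vocab))  # 1.0 — every step chose a green token
```

In a real system the generator only nudges token probabilities toward the green list (so quality barely suffers), and the detector computes a z-score over thousands of tokens; that statistical framing is what makes accuracy figures like 99.9% plausible on long texts while short snippets remain hard to classify.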
OpenAI rivals, led by Google (NASDAQ: GOOGL), have not fared any better. The search engine giant, whose Gemini LLM is one of the industry leaders, has developed a similar tool, dubbed SynthID, but it has yet to launch it publicly.
For AI to operate lawfully and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data secure while also guaranteeing its immutability.