Artificial intelligence needs training data, but that data is limited. So, how else can we train AI so that it continues to grow and be useful to us?
You might think the internet and its data are inexhaustible resources, but AI tools are running out of data to mine. Before you worry, this won't stop AI development; there's still plenty of data available to train AI systems.
In short, AI research institute Epoch estimates that the high-quality data on which AI is trained could run out by 2026.
The key word there is "could." The amount of data added to the internet grows every year, so the picture may change drastically before 2026. Still, it's a fair estimate; either way, AI systems will run out of good data at some point.
We should remember, however, that some 147 zettabytes of data are added online every year (as per Exploding Topics). A single zettabyte is 1,000,000,000,000,000,000,000 bytes of data. In more tangible terms, one zettabyte holds more than 30 billion 4K movies. It's a startling amount of information for AI to sift through.
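To put that figure in perspective, here's a rough back-of-the-envelope calculation in Python; the ~30 GB file size for a 4K movie is an assumption, not a fixed standard:

```python
# Back-of-the-envelope: how many 4K movies fit in one zettabyte?
ZETTABYTE = 10**21            # one zettabyte in bytes
MOVIE_SIZE = 30 * 10**9       # assumed ~30 GB per 4K movie

movies = ZETTABYTE / MOVIE_SIZE
print(f"{movies:,.0f} movies per zettabyte")  # ~33,333,333,333 (over 30 billion)
```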
Nonetheless, AI consumes data faster than humanity can create it…
Not all of those 147 zettabytes are good data, of course; much of it is noise, duplication, and spam. But even the web's low-quality language data is estimated to be used up by 2050.
Reuters reported that Photobucket, once one of the world's largest picture repositories, was in talks to license its extensive library to AI training firms. Image data has trained systems like DALL-E and Midjourney, but even image data could run out by 2060. There's a bigger issue here, too: Photobucket houses images from 2000s-era social media platforms like Myspace, so many are lower in resolution and quality than modern photography. That makes for low-quality training data.
Photobucket isn't alone. In February 2024, Google struck a deal with Reddit, allowing the search giant to use the social media platform's user data in its AI training. Other social media platforms are also providing user data for AI training purposes; some are using it to train in-house AI models, such as Meta's Llama.
However, while some information can be gleaned from low-quality data, Microsoft is reportedly developing a way for AI to selectively "unlearn" data. Primarily, this would be used for IP issues, but it could also mean tools can forget what they've learned from low-quality data sets.
We could feed AI more data without being too selective; those AI systems could then pick and choose what's most beneficial to learn from.
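As a minimal sketch of what that selection might look like, here's an illustrative quality filter over a scraped text corpus. The scoring heuristic is purely hypothetical; real training pipelines typically rely on trained quality classifiers and deduplication:

```python
# Illustrative sketch: filter a raw corpus down to higher-quality documents.

def quality_score(doc: str) -> float:
    """Crude heuristic: favor longer documents with more varied vocabulary."""
    words = doc.split()
    if len(words) < 50:                  # too short to be useful training text
        return 0.0
    return len(set(words)) / len(words)  # lexical variety as a rough proxy

def filter_corpus(docs: list[str], threshold: float = 0.4) -> list[str]:
    """Keep only documents whose score clears the threshold."""
    return [doc for doc in docs if quality_score(doc) > threshold]

raw_corpus = ["...scraped documents go here..."]  # placeholder input
training_docs = filter_corpus(raw_corpus)
```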
Data fed to AI tools has so far consisted largely of text and, to a lesser extent, images. That will undoubtedly change, and likely already has, as speech recognition software means the wealth of available videos and podcasts can also be used to train AI.
Notably, OpenAI developed its open-source automatic speech recognition (ASR) neural network, Whisper, using 680,000 hours of multilingual and multitask training data. OpenAI then reportedly transcribed over a million hours of YouTube videos to help train its large language model, GPT-4.
This is an ideal template for other AI systems: use speech recognition to transcribe video and audio from numerous sources, then run that text through the model's training pipeline.
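As a hedged sketch of that workflow using OpenAI's open-source Whisper package (the audio filename here is a stand-in):

```python
# Turning audio into text training data with the open-source Whisper model.
# Requires: pip install openai-whisper (plus ffmpeg installed on the system).
import whisper

model = whisper.load_model("base")                # small checkpoint, runs on CPU
result = model.transcribe("podcast_episode.mp3")  # hypothetical local file
print(result["text"])                             # transcript, usable as text data
```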
According to Statista, over 500 hours of video are uploaded to YouTube every minute, a number that's remained fairly consistent since 2019. That's without mentioning other video and audio platforms like Dailymotion and Podbean. If AI can turn its attention to new data sets like these, there's a huge amount of information still to be mined.
That's not all we can learn from Whisper. OpenAI trained the model using 117,000 hours of non-English audio data. This is especially interesting because many AI systems have been trained primarily on English-language sources, viewing other cultures through a Western lens.
In essence, most tools are limited by the culture of their creators.
Take ChatGPT as an example. Shortly after its release in 2022, Jill Walker Rettberg, professor of Digital Culture at the University of Bergen, Norway, tried ChatGPT out and concluded:
“ChatGPT doesn’t know much about Norwegian culture. Or rather, whatever it knows about Norwegian culture is presumably mostly learned from English language sources… ChatGPT is explicitly aligned with US values and laws. In many cases these are close to Norwegian and European values, but presumably this will not always be the case.”
AI systems, then, can become more multinational as more diverse people interact with them, and as more diverse languages and cultures are used to train them. Right now, many artificial intelligences are confined to a single library; they can grow if given the keys to libraries across the world.
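For illustration, the same open-source Whisper model shown earlier can transcribe non-English audio, or translate it straight into English, which is one small step toward the multilingual data described above. The filename and language choice below are assumptions:

```python
# Whisper handles multilingual audio out of the box.
import whisper

model = whisper.load_model("base")

# Transcribe Norwegian speech as Norwegian text...
norwegian_text = model.transcribe("interview.mp3", language="no")["text"]

# ...or translate the same audio directly into English text.
english_text = model.transcribe("interview.mp3", task="translate")["text"]
```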
IP is obviously a massive issue, but some publishers could help develop AI by striking licensing agreements. This would mean giving tools high-quality (that is, reliable) data from books rather than the potentially low-quality information gleaned from online sources.
In fact, Meta, the owner of Facebook, Instagram, and WhatsApp, reportedly considered buying Simon & Schuster, one of the "Big Five" publishing houses. The idea was to use literature published by the firm to train Meta's own AI. The deal ultimately fell through, perhaps owing to the ethical gray area of the company using authors' intellectual property without their prior consent.
Another option apparently considered was buying individual licensing rights to new titles. This should concern creatives greatly, but it would still be an interesting way for AI tools to develop if usable data runs out.
Every other solution is still limited, but one option could see AI thrive far into the future: synthetic data. And it's already being investigated as a very real possibility.
So, what is synthetic data? In this context, it's data created by AI itself: just as humans create data, this method would see artificial intelligence generate data for training purposes.
For example, an AI could create a convincing deepfake video. That deepfake video could then be fed back into an AI so it can learn from what is essentially an imaginary scenario. This is, after all, one major way humans learn: we read or watch something in order to understand the world around us.
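As a minimal sketch of the text side of this idea, here's how synthetic documents might be generated with a small open model via Hugging Face Transformers; the prompts and settings are illustrative only:

```python
# Generating synthetic text that could later be filtered and reused for training.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small demo model

prompts = ["The history of the printing press", "How photosynthesis works"]
synthetic_docs = []
for prompt in prompts:
    output = generator(prompt, max_new_tokens=100, do_sample=True)
    synthetic_docs.append(output[0]["generated_text"])

# synthetic_docs would then be quality-filtered before rejoining a training set
```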
AI systems are likely to have consumed synthetic information already. Deepfakes circulating online spread misinformation and disinformation, so as AI systems scrape the internet, some will inevitably have ingested faked content.
Yes, there's an insidious side to this. Synthetic data could also damage or limit AI models by reinforcing and spreading their own mistakes, a failure mode researchers call model collapse. Companies are working to guard against it; still, "AIs learning from each other and making errors" is a plot point of many sci-fi nightmare scenarios.
AI is controversial. There are plenty of downsides to it, but its detractors often overlook its benefits. For instance, audit and advisory network PwC [PDF] suggests AI could contribute up to $15.7 trillion to the world's economy by 2030.
What's more, AI is already being used all over the world. You've likely used it today in some form or another, perhaps without even realizing it. Now that the genie is out of the bottle, the key is surely to train AI on reliable, quality data so we can make proper use of it.
AI has its positives and its negatives. There's a balance to be found.