Over the past 10 years, big tech companies have become very good at many things: language, prediction, personalization, archiving, text parsing, and data processing. But they are still surprisingly bad at catching, flagging, and removing harmful content. One need only recall the election and vaccine conspiracy theories that have spread in the United States over the past two years to understand the real-world harm this failure causes.
This gap raises some questions. Why aren't tech companies getting better at content moderation? Can they be forced to improve? And will new advances in artificial intelligence improve our ability to catch bad information?
Most often, when tech companies are asked by the U.S. Congress to explain their role in spreading hate and misinformation, they blame their failures on the complexity of language itself. Executives say that understanding and preventing hate speech across different languages and contexts is a difficult task.
One of Mark Zuckerberg’s favorite sayings is that technology companies should not be responsible for solving all the world’s political problems.
(Image credit: Stephanie Arnett/MITTR | Getty Images)
Most companies currently use a combination of technology and human content moderators, though the moderators' work is undervalued, as reflected in their meager pay.
For example, AI is currently responsible for 97% of all content removed on Facebook.
However, AI isn't good at interpreting nuance and context, so it's unlikely to fully replace human content moderators, even though humans aren't always good at interpreting those things either, said Renee DiResta, research manager at the Stanford Internet Observatory.
Cultural context and language also pose challenges, because automated content moderation systems are typically trained on English data and handle content in other languages less effectively.
Hany Farid, a professor at the University of California, Berkeley's School of Information, offers a more straightforward explanation. According to Farid, content moderation has not kept up with the risks because it is not in tech companies' financial interest. "It's all about greed. Stop pretending it's not about money," he said.
In the absence of federal regulation, it is difficult for victims of online abuse to hold platforms financially accountable.
Content moderation is a never-ending war between tech companies and bad actors. When tech companies roll out moderation rules, bad actors often use emojis or intentional misspellings to avoid detection. The companies then try to close those loopholes, people find new ones, and the cycle continues, as the sketch below illustrates.
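One common first line of defense against this kind of evasion is to normalize text before it reaches keyword filters or classifiers. The Python sketch below is a minimal, hypothetical illustration: the substitution table, the `normalize` helper, and the example string are invented for this article, and production systems combine much larger, continually updated rules with learned models.

```python
import re
import unicodedata

# Hypothetical substitution map: common character swaps used to dodge keyword filters.
# Real systems maintain far larger, continually updated tables.
SUBSTITUTIONS = {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}

# Zero-width characters are sometimes inserted to break up flagged words.
ZERO_WIDTH = dict.fromkeys(map(ord, "\u200b\u200c\u200d\ufeff"), None)

def normalize(text: str) -> str:
    """Reduce a post to a canonical form before running keyword or classifier checks."""
    text = unicodedata.normalize("NFKC", text)          # fold look-alike Unicode forms
    text = text.translate(ZERO_WIDTH)                   # strip zero-width characters
    text = text.lower()
    text = "".join(SUBSTITUTIONS.get(ch, ch) for ch in text)
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)          # collapse long character repeats
    return text

print(normalize("fr\u200bee m0n3y!!!"))  # -> "free money!!"
```

Every normalization rule of this kind invites a new workaround, which is exactly the cycle described above.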
Now come the large language models...
The situation is already difficult, and with the emergence of generative AI and large language models such as ChatGPT, it may get even worse. Generative technology has its problems, notably its tendency to confidently make things up and present them as fact, but one thing is clear: AI is getting much better at language.
Both DiResta and Farid are cautious, saying it is too early to tell how things will play out. Although many large models such as GPT-4 and Bard have built-in content moderation filters, they can still produce toxic output, such as hate speech or instructions for building a bomb.
Generative AI enables bad actors to conduct disinformation campaigns at greater scale and speed. This is a dire situation given that methods for identifying and labeling AI-generated content are woefully inadequate.
On the other hand, the latest large language models interpret text much better than previous AI systems did. In theory, they could be used to support automated content moderation.
To achieve this, tech companies would need to invest in adapting large language models specifically for content moderation. While companies like Microsoft have begun looking into it, there has yet to be significant activity.
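To make the idea concrete, here is a minimal sketch of what LLM-assisted moderation could look like in Python. The `call_llm` function, the policy list, and the prompt are hypothetical placeholders rather than any platform's actual system, and a real deployment would still escalate ambiguous cases to human reviewers.

```python
import json

# Hypothetical policy categories; real platforms define far more granular policies.
POLICIES = ["hate_speech", "harassment", "violent_threat", "health_misinformation"]

PROMPT_TEMPLATE = """You are a content-moderation assistant.
Classify the post below against these policy categories: {policies}.
Respond with JSON: {{"violations": [...], "rationale": "..."}}.

Post:
{post}
"""

def moderate(post: str, call_llm) -> dict:
    """Ask a large language model to flag policy violations in a post.

    `call_llm` is a placeholder for whatever completion API is available;
    it takes a prompt string and returns the model's text response.
    """
    prompt = PROMPT_TEMPLATE.format(policies=", ".join(POLICIES), post=post)
    raw = call_llm(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models sometimes return malformed JSON; route to human review instead of guessing.
        return {"violations": [], "rationale": "unparseable", "needs_human_review": True}
```

Returning structured output with a rationale makes the model's decision easier to audit and route, but it does nothing by itself to solve the problems of context and cultural nuance discussed next.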
"While we have seen many technological advances, I am skeptical about any improvements in content moderation," Farid said.
While large language models are advancing rapidly, they still struggle with contextual understanding, which may keep them from interpreting the subtleties of posts and images as accurately as human moderators do. Scalability and specificity across cultures also pose problems. "Do you deploy a model for a specific type of niche? Do you do it by country? Do you do it by community? It's not a one-size-fits-all question," DiResta said.
New tools based on new technologies
Whether generative AI ultimately harms or helps the online information landscape may depend largely on whether tech companies can come up with good, widely adopted tools that tell us whether content was generated by AI.
DiResta told me that detecting synthetic media is likely the technical challenge that needs to be prioritized, because it is so difficult. One approach is digital watermarking, which embeds a piece of code in content as a permanent mark that it was produced by artificial intelligence. Automated tools for detecting AI-generated or manipulated posts are attractive because, unlike watermarks, they do not require the creator of AI-generated content to proactively label it. But current versions of these tools do not identify machine-generated content well enough.
Some companies have even proposed cryptographic signatures that securely record information about a piece of content, such as how it was generated, but like watermarking, these rely on voluntary disclosure.
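As an illustration of what such a signature might record, the Python sketch below signs a small provenance record describing a piece of content. It uses a shared-secret HMAC purely for brevity; real provenance proposals such as C2PA use public-key signatures and certificate chains, and the field names and key here are invented. The limitation noted above still applies: the record only exists if the content's creator chooses to attach it.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real provenance schemes use public-key signatures
# and certificate chains rather than a shared secret like this.
SIGNING_KEY = b"demo-key-not-for-production"

def sign_provenance(content: bytes, generator: str, model: str) -> dict:
    """Attach a tamper-evident record describing how a piece of content was made."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # e.g. the tool or service that produced the content
        "model": model,           # e.g. which model version was used
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and has not been altered."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    if hashlib.sha256(content).hexdigest() != claimed.get("content_sha256"):
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```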
The latest draft of the European Union's Artificial Intelligence Act (AI Act), proposed just last week, requires companies that use generative AI to notify users when content is machine-generated. We'll likely hear much more about these emerging tools in the coming months, as demand for transparency around AI-generated content grows.
Original article: https://www.technologyreview.com/2023/05/15/1073019/catching-bad-content-in-the-age-of-ai/