On May 23, local time, Microsoft launched an AI-driven content moderation product, Azure AI Content Safety, aimed at creating a safer online environment. The product provides a set of trained AI models that can detect negative content related to bias, hate, violence, and the like in images or text.
According to reports, the product will be built into the Azure OpenAI Service and opened to third-party developers. The Azure OpenAI Service is an enterprise-focused offering managed by Microsoft, designed to give businesses access to technology from the AI lab OpenAI with added governance features. Notably, Azure AI Content Safety is not limited to the Azure OpenAI Service; it can also be applied to non-AI platforms such as gaming platforms and online communities.
The Azure AI Content Safety service is reportedly "proficient" in eight languages: English, Spanish, German, French, Chinese, Japanese, Portuguese, and Italian. That is, it can understand and screen images or text in these languages, assigning a severity score to flagged content to indicate to human reviewers which items require action.
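The flow described above can be sketched in a few lines: score incoming text per category, then use a severity threshold to decide whether a human reviewer needs to act. This is a hypothetical toy illustration, not Microsoft's API or model; the keyword scorer, category names, and severity scale (0 to 6 here) are stand-in assumptions for how a real trained classifier would behave.

```python
# Toy sketch of severity-based moderation routing. The analyze_text()
# scorer is a keyword-matching stand-in for the service's trained models.

CATEGORIES = ["hate", "violence"]  # illustrative subset of harm categories

def analyze_text(text: str) -> dict:
    """Assign a toy severity score per category: 0 = safe, 6 = most severe."""
    keywords = {"hate": ["slur"], "violence": ["attack", "weapon"]}
    scores = {}
    for category in CATEGORIES:
        hits = sum(word in text.lower() for word in keywords[category])
        scores[category] = min(6, hits * 3)
    return scores

def route(scores: dict, threshold: int = 3) -> str:
    """Flag content for a human reviewer when any severity meets the threshold."""
    flagged = [c for c, s in scores.items() if s >= threshold]
    return f"review: {', '.join(flagged)}" if flagged else "allow"

print(route(analyze_text("a friendly greeting")))            # allow
print(route(analyze_text("plans to attack with a weapon")))  # review: violence
```

In a real deployment the scores would come from the service's API rather than a local function, but the routing decision (auto-allow versus escalate to a human reviewer based on severity) is the part the article describes.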
In addition, according to a Microsoft spokesperson, the product understands text and cultural context better than earlier products of the same type. "Previous products of this kind could not capture context and might flag content incorrectly," the spokesperson said, adding: "We have a team of language and fairness experts committed to developing guidelines that take culture, language, and context into account." The report also noted that Microsoft acknowledges current AI is not perfect and hopes that humans will improve it through use.
The report pointed out that, like all AI moderation programs, Azure AI Content Safety ultimately relies on human reviewers to label data and content, which means its fairness ultimately depends on humans, and bias introduced during manual labeling can be a persistent problem. A case in point: in 2015, Google faced controversy when reports emerged that its image-recognition software had mislabeled Black people as gorillas. US media noted that eight years on, today's technology giants still worry about "making the same mistake again."
In fact, how to handle the risks posed by AI has long been a hot topic among the public and in tech circles. On the 22nd of this month, a fake image of an "explosion near the Pentagon," believed to be AI-synthesized, circulated online and briefly caused panic; a US Department of Defense spokesperson quickly debunked it, and US media analysis later said the image bore the telltale characteristics of AI generation. Earlier this month, OpenAI CEO Sam Altman called for government regulation of AI at a US Senate subcommittee hearing, stressing that if this technology goes wrong, the consequences could be severe and cause great harm to the world.
Red Star News reporter: Li Jinrui
Editors: He Xianfeng, Li Binbin