IT House News, May 24 — Microsoft has launched an AI-driven content moderation service, Azure AI Content Safety, aimed at reducing the risk that harmful or negative information degrades online community environments.
The service provides a set of trained AI models that detect negative content related to bias, hatred, violence, and similar themes in images or text. It can understand and flag images and text in eight languages, assigning each flagged item a severity score that indicates to human reviewers which content requires action.
▲ Image source: Azure official website
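As a rough illustration of the workflow described above, the sketch below sends a piece of text to the Content Safety text-analysis REST endpoint and prints the per-category severity scores the service returns. This is a minimal sketch, not official sample code: the resource URL and key are placeholders, and the endpoint path, api-version string, and response field names are assumptions that may differ by service version, so the Azure documentation should be checked for the values that match your deployment.

```python
# Minimal sketch of calling the Azure AI Content Safety text-analysis REST endpoint.
# <resource> and the key are placeholders; the api-version and response schema are
# assumptions and may differ depending on the service version you are using.
import requests

ENDPOINT = "https://<resource>.cognitiveservices.azure.com"  # your Content Safety resource endpoint
API_KEY = "<your-content-safety-key>"                        # placeholder credential


def analyze_text(text: str) -> dict:
    """Send text to the Content Safety service and return the parsed JSON verdict."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # assumed api-version; adjust to your deployment
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    result = analyze_text("Example user comment to screen before it is posted.")
    # The response typically lists a severity score per category (hate, violence,
    # sexual content, self-harm); higher severities are the items a human moderator
    # would look at first.
    print(result)
```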
Azure AI Content Safety is built into the Azure OpenAI Service and is also open to third-party developers. It supports eight languages: English, Spanish, German, French, Chinese, Japanese, Portuguese, and Italian. The Azure OpenAI Service is an enterprise-focused managed offering from Microsoft that gives businesses access to OpenAI's technology along with added governance capabilities. Compared with similar products, Microsoft says, Azure AI Content Safety better understands text and cultural context and is more accurate when processing data and content.
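Because the service is exposed to third-party developers alongside Azure OpenAI, one plausible arrangement is to use it as a pre-filter in front of a model call. The sketch below shows that idea under stated assumptions rather than Microsoft's documented integration: the Azure OpenAI endpoint and deployment name are placeholders, the severity threshold is illustrative, the response-shape handling is a guess at possible schemas, and analyze_text() is reused from the previous sketch.

```python
# Hedged sketch: screen user input with Content Safety first, and only forward it to an
# Azure OpenAI deployment when every category's severity falls below a chosen threshold.
# Endpoint, deployment name, api versions, and the threshold are illustrative assumptions.
import openai  # openai<1.0-style Azure configuration

openai.api_type = "azure"
openai.api_base = "https://<aoai-resource>.openai.azure.com/"  # placeholder Azure OpenAI endpoint
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-openai-key>"

SEVERITY_THRESHOLD = 2  # illustrative cutoff; tune to your moderation policy


def is_safe(analysis: dict) -> bool:
    """Return True when no category in a Content Safety response exceeds the threshold.

    Accepts either a list-style ("categoriesAnalysis") or per-category response shape,
    since the exact schema depends on the service api-version (an assumption here).
    """
    entries = analysis.get("categoriesAnalysis")
    if entries is None:
        entries = [v for v in analysis.values() if isinstance(v, dict) and "severity" in v]
    return all(entry.get("severity", 0) <= SEVERITY_THRESHOLD for entry in entries)


def answer(user_text: str) -> str:
    analysis = analyze_text(user_text)  # analyze_text() from the previous sketch
    if not is_safe(analysis):
        return "This message was flagged for human review."
    completion = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # name of your Azure OpenAI deployment (assumed)
        messages=[{"role": "user", "content": user_text}],
    )
    return completion.choices[0].message["content"]
```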
Microsoft said that, compared with similar products, this service shows significant improvements in impartiality and in understanding context, but it still relies on human reviewers to label data and content. Its fairness therefore ultimately depends on people: reviewers can bring their own biases to the data and content they process, so complete neutrality and caution cannot be guaranteed.
Machine vision has a history of such failures: in 2015, Google's AI image-recognition software labeled Black people as gorillas, causing a huge controversy. Eight years later, today's technology giants remain wary of such missteps and of repeating the mistakes of the past. Earlier this month, OpenAI CEO Sam Altman called on the government to regulate artificial intelligence at a U.S. Senate subcommittee hearing, highlighting the technology's potential for error and the serious, globally damaging consequences that could follow. IT House notes that even the most advanced AI can be used maliciously or incorrectly, which makes regulation and oversight of artificial intelligence crucial.