[CNMO News] After months of testing, Microsoft has officially released its AI content moderation tool, Azure AI Content Safety. The product, which Microsoft began testing in May this year, consists of a set of trained AI models that detect negative content related to bias, hate, violence and similar categories in images or text, and it can understand and analyze content in eight languages.
Additionally, the moderation tool assigns a severity score to flagged content and guides human reviewers toward the content that requires action. Initially the tool was integrated into the Azure OpenAI Service, but it is now being rolled out as a standalone system.
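To give a concrete sense of how this works in practice, below is a minimal sketch using the Azure AI Content Safety Python SDK (the azure-ai-contentsafety package). The endpoint, key, sample text and review threshold are placeholders, and the exact response field names may vary between SDK versions:

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder resource values: replace with your own Content Safety endpoint and key.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
key = "<your-api-key>"

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Analyze a piece of user-generated text across the built-in harm categories.
response = client.analyze_text(AnalyzeTextOptions(text="Some user-generated text to screen."))

# Each category (e.g. Hate, Violence, Sexual, SelfHarm) comes back with a severity score.
# Hypothetical policy for this sketch: route anything at or above a threshold to a human reviewer.
REVIEW_THRESHOLD = 4

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
    if result.severity is not None and result.severity >= REVIEW_THRESHOLD:
        print(f"  -> flag for human review ({result.category})")
```

The threshold here is purely illustrative; in a real deployment the severity cutoff for escalating to human review would be set according to the platform's own moderation policy.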
Microsoft's official blog post notes: "This means users can apply it to AI-generated content from open-source models and other companies' models, and can also use it on user-generated content, further enhancing its practicality."
Microsoft said that although the product has gotten better at handling data and content so as to understand context more impartially, it still relies on human reviewers to flag data and content, which means the final judgment rests with humans. However, human reviewers may not be entirely neutral and prudent when handling data and content, because of their personal biases.