
Say no to prejudice and inequality! Microsoft stops offering controversial facial recognition service

王林
Release: 2023-05-26 13:58:06

Microsoft Corp. said it would phase out access to some of its artificial intelligence-powered facial recognition tools, including a service that recognizes emotions based on videos and images.


When Microsoft announced this decision a few days ago, it also released a 27-page "Responsible Artificial Intelligence Standard," which lays out the company's goals for fair and trustworthy artificial intelligence. To meet these standards, Microsoft has restricted access to some facial recognition tools, including those provided by its Azure Face API, Computer Vision, and Video Indexer services.

Microsoft said that new users will not be able to access these features, and existing customers will have to stop using them before the end of this year.

Facial recognition technology has become a major concern for civil rights and privacy groups. Previous research has shown that the technology is far from perfect, misidentifying female subjects and people with darker skin at disproportionately high rates. This can lead to serious problems when AI is used to identify criminal suspects and in other surveillance situations.

The use of artificial intelligence tools to detect emotions is even more controversial. Earlier this year, Zoom Video Communications Inc. said it was considering adding "emotional AI" capabilities, and privacy group Fight for the Future launched a campaign urging Zoom not to do so, out of concern that the technology could be misused.

The controversy surrounding facial recognition has gained attention from technology companies, with Amazon Web Services Inc. and Facebook parent Meta Platforms Inc. both scaling back their use of such tools.

Natasha Crampton, Microsoft's Chief Responsible AI Officer, said in a blog post that Microsoft recognizes that for an artificial intelligence system to be trustworthy, it must be an appropriate solution to the problem it is designed to solve. Facial recognition was deemed an inappropriate solution, Crampton said, and Microsoft will shelve the Azure capabilities that infer "emotional state and identity attributes such as gender, age, smile, facial hair, hair, and makeup."
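For context, the attributes named above were exposed through the Face API's detect operation as optional request parameters. The following is a minimal, hypothetical Python sketch of the kind of call the article describes, assuming a classic Face API v1.0 resource; the endpoint, key, and helper name are placeholders, and the attribute list (emotion, gender, age, smile, facialHair, hair, makeup) matches the fields Microsoft said it would stop returning.

import requests

# Placeholder endpoint and key, for illustration only.
ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"
KEY = "<your-face-api-key>"

def detect_face_attributes(image_url: str) -> list:
    # Call the Face v1.0 detect operation, requesting the now-retired attributes.
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion,gender,age,smile,facialHair,hair,makeup"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    # Returns one entry per detected face, each with a faceAttributes dictionary.
    return response.json()

Under the change described above, new customers can no longer request these attribute fields, and existing customers must stop using them by the end of the year.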

She also said, "AI systems have the potential to exacerbate social bias and inequality, arguably one of the most widely recognized harms associated with these systems. Our laws have not yet caught up with the unique risks of AI or the needs of society. We are seeing signs that government action on artificial intelligence is expanding, but we also recognize that we have a responsibility to act."

Analysts are divided on whether Microsoft's decision is a good one. Charles King of Pund-IT told reporters that, beyond the controversy over facial recognition technology, AI classification and profiling tools often produce unsatisfactory results and rarely deliver what their creators claim. "Equally important," King said, "refugees and people of color seeking a better life are under attack in so many places, and the potential for profiling tools to be abused is very high. Therefore, I believe Microsoft is right to restrict the use of this type of tool."

Rob Enderle of the Enderle Group said it was disappointing to see Microsoft back away from facial recognition, a technology that has come a long way from its many early mistakes. He said the negative publicity surrounding facial recognition has forced some major companies to stay away from the field.

Enderle said: "Artificial intelligence-based facial recognition technology is too valuable for catching criminals, terrorists and spies, so government agencies will not stop using it. Microsoft's retreat, however, means they will end up using tools from specialist defense companies or foreign vendors, which are likely not to work as well and lack the same kinds of controls. The genie is out of the bottle, and pushing facial recognition technology further underground will only make things worse; society as a whole will not benefit."

Microsoft said that its Responsible Artificial Intelligence Standard is not limited to facial recognition. It will also apply the standard to Azure AI Custom Neural Speech, a speech-to-text service that can power transcription tools. Microsoft explained that a March 2020 study found the software produced higher error rates for speakers from African-American and Black communities, and in light of this, Microsoft has taken steps to improve it.
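As a point of reference, the sketch below shows a minimal speech-to-text transcription call of the kind the article describes, written with the Azure Speech SDK for Python. The subscription key, region, and audio file name are placeholders, and this is an illustrative example of such a transcription service rather than the exact product configuration evaluated in the study Microsoft cites.

import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials and audio file; substitute your own resource values.
speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="interview.wav")

# Recognize a single utterance from the audio file and print the transcript.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)
else:
    print(f"Recognition failed: {result.reason}")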

Source: 51cto.com