Microsoft sets new AI guidelines and restricts facial recognition in Azure


Microsoft is setting tighter guidelines for how it builds artificial intelligence systems. Its Responsible AI Standard contains rules for voice and facial recognition, and Microsoft is partially discontinuing the latter because of the risks involved.

Microsoft says it has released the most recent version of its Responsible AI Standard; an earlier version was only available internally. The guideline states what Microsoft will and will not do with artificial intelligence and the tools it builds for it, and how the company deals with issues such as privacy, security, and reliability, as well as inclusiveness and transparency. The AI-powered tools Microsoft builds, such as those in Azure or Windows, must meet all of those standards.

In addition to the guideline, the company also shares examples of where it has fallen short in previous years. For example, it cites a study that found that the speech-to-text technology it built failed almost twice as often for Black users as for white users. Microsoft also describes how it built extra layers of protection into Azure’s Custom Neural Voice after it became clear the service could be used to impersonate other people.

Going forward, Microsoft wants to make it harder to use AI tools that can be abused in such ways. One of the most notable decisions the company has made in response to the guidelines is to restrict access to some facial recognition technologies in Azure. Users who want to use the Azure Face API, Computer Vision, or Video Indexer must specifically request access; existing users have one year to do so. Applicants must indicate what they use the service for, provide examples, and present plans to prevent misuse. Microsoft can revoke access in the event of abuse.

This does not apply to facial recognition tools in general, but specifically to tools that claim to infer emotions from facial images. Determining someone’s gender, age, smile, facial hair, or makeup from images is also restricted: new users will no longer be able to access these capabilities at all, and existing users have one year to phase out their use. Standard applications, such as recognizing glasses in photos or detecting noise and poor lighting in an image, will remain available.
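To make the distinction concrete, here is a minimal sketch of how a client might build an Azure Face API detect request under the new policy, filtering out the retired attributes before sending. The endpoint and key are hypothetical placeholders, and the filtering helper is our own illustration, not Microsoft tooling; the attribute names (`emotion`, `gender`, `age`, `smile`, `facialHair`, `makeup`, `glasses`, `noise`, `exposure`) follow the Face REST API's `returnFaceAttributes` parameter.

```python
# Attributes Microsoft is retiring for new customers, per the article.
RETIRED_ATTRIBUTES = {"emotion", "gender", "age", "smile", "facialHair", "makeup"}


def build_detect_request(endpoint: str, key: str, requested_attributes: list[str]) -> dict:
    """Assemble URL, headers, and params for a Face detect call,
    dropping attributes that are no longer generally available.
    (Illustrative helper; not part of any Microsoft SDK.)"""
    allowed = [a for a in requested_attributes if a not in RETIRED_ATTRIBUTES]
    return {
        "url": f"{endpoint}/face/v1.0/detect",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,  # placeholder credential
            "Content-Type": "application/json",
        },
        "params": {"returnFaceAttributes": ",".join(allowed)},
    }


# Request mixing restricted (emotion) and still-available attributes:
req = build_detect_request(
    "https://example.cognitiveservices.azure.com",  # hypothetical endpoint
    "<subscription-key>",
    ["glasses", "emotion", "noise", "exposure"],
)
print(req["params"]["returnFaceAttributes"])  # glasses,noise,exposure
```

The point of the sketch is simply that the same `detect` endpoint serves both categories: what changes under the policy is which attribute values a caller is allowed to request, not the API surface itself.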
