Research by Defcon visitors confirms biases in Twitter’s algorithm


Twitter's algorithm contains biases, researchers discovered during an algorithmic bias bounty competition at Defcon. For example, photos of elderly people and people with disabilities are cropped out by Twitter's crop tool.

A few weeks ago, Twitter announced that it would organize a contest during the Defcon hacker event in which visitors were asked to look for biases in the algorithms the platform uses. Among other things, Twitter was looking for evidence of embedded biases in its crop tool, which automatically crops photos in the timeline. During Defcon, Twitter announced five winners of the competition, who go home with a cash prize.

For example, Bogdan Kulynych, a student in Switzerland, discovered that beauty filters can skew the cropping algorithm's internal scoring system. Kulynych used StyleGAN2 to produce non-existent faces and made each variant slightly younger, lighter, and warmer in color. His results show that the algorithm has a clear preference for slim, young, light-skinned faces and faces with obviously feminine features. Kulynych won $3,500 for first place.
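The core of such a probe can be sketched as follows. This is a minimal toy illustration, not Kulynych's actual code: the `saliency_score` function below is a hypothetical stand-in for the crop model's internal score (the real experiment used Twitter's released saliency model and StyleGAN2-generated faces), and brightness is used as a crude proxy for the attribute being varied.

```python
# Sketch of the probe pattern: generate variants of one face that differ
# along a single attribute, then compare the crop model's saliency scores.
import numpy as np

def saliency_score(image: np.ndarray) -> float:
    """Hypothetical stand-in for the crop model's internal score.

    Toy scorer that favors brighter images, mimicking the reported bias;
    the real score would come from Twitter's saliency model.
    """
    return float(image.mean() / 255.0)

def brightness_variants(base: np.ndarray, steps: int = 5):
    """Yield progressively brighter copies of the base image."""
    for i in range(steps):
        yield np.clip(base.astype(float) + 30 * i, 0, 255).astype(np.uint8)

# Dummy "dark" 64x64 RGB image standing in for a generated face.
rng = np.random.default_rng(0)
base_face = rng.integers(40, 120, size=(64, 64, 3), dtype=np.uint8)

scores = [saliency_score(v) for v in brightness_variants(base_face)]

# If the scores rise monotonically as the face gets lighter, the model
# systematically prefers the lighter variants -- the signature of the
# bias the competition entries reported.
print(scores == sorted(scores))  # → True for this toy scorer
```

The same comparison loop works for any attribute axis (age, skin tone, perceived gender): hold everything else fixed, vary one property, and watch how the score moves.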

The second- and third-place entries point in the same direction. Canadian AI startup HALT AI discovered a bias against people with white or gray hair, which would indicate ageism. And researcher Roya Pakzad discovered that the cropping algorithm prefers English text over Arabic script.

In addition to the top three, Twitter also acknowledges research showing the algorithm’s preference for light-colored emoji in photos and research from an anonymous participant who was able to confuse the algorithm by adding a micropixel to images.

During the competition, participants had to adhere to a strict list of rules about what they could and could not do with the algorithm. They were then awarded points if they could demonstrate, for example, intentional or unintentional stereotyping, misrecognition, erasure, or other harms. The likelihood of a bias reaching a user and the potential for abusing it also factored into the final score. The submissions were judged by a panel of machine learning experts, including from OpenAI.

It is the first time that Twitter has offered a monetary reward for finding bias in an algorithm. It has long been known that Twitter's crop tool may be biased; Twitter confirmed this again in May this year after an internal investigation.

Generated faces, from darker to lighter, from Bogdan Kulynych's research
