Twitter will pay a reward for finding biases in its algorithm


Twitter will pay out an ‘algorithmic bias bounty’ during Defcon. The company is asking visitors to the hacker event to find biases in one of its algorithms that could disadvantage certain people, and is offering a reward in return.

It is the first time Twitter has offered money for finding flaws in an algorithm that disadvantage or favor certain groups of users. During the hacker event Defcon, Twitter’s Machine Learning Ethics, Transparency and Accountability team will release an algorithm that attendees can examine. Whoever discovers a bias can win a cash prize.

Twitter uses several different algorithms in its service. In addition to an algorithm that arranges the timeline, for example, there is an algorithm that automatically crops images so that they all appear at the same size in timeline previews. Twitter has been accused in the past of unfair algorithms that reinforce biases.

Last year, for example, users discovered that when a photo containing both a white man and a black man was shared, Twitter’s preview consistently showed the white man. Other users noticed that when a tall image was cropped, the preview showed white faces more often, at the expense of black faces. Twitter investigated the matter and confirmed the flaw in the algorithm in May.
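To give a rough sense of the kind of probe users ran, here is a minimal, hypothetical sketch of a paired-image bias test. It is not Twitter’s code: the `crop_preview` function stands in for whatever cropping model is under test, and the face bounding boxes are assumed to be supplied by the tester. The test simply counts which group’s face ends up inside the returned crop.

```python
# Hypothetical paired-image probe for crop bias. `crop_preview` is a stand-in
# for the cropping model under test; it is assumed to take a PIL image and
# return the crop rectangle as (left, top, right, bottom).

from PIL import Image

def face_in_crop(face_box, crop_box):
    """Return True if the centre of a face bounding box falls inside the crop."""
    cx = (face_box[0] + face_box[2]) / 2
    cy = (face_box[1] + face_box[3]) / 2
    return crop_box[0] <= cx <= crop_box[2] and crop_box[1] <= cy <= crop_box[3]

def run_probe(image_paths, face_boxes_a, face_boxes_b, crop_preview):
    """Count how often a group-A face vs. a group-B face survives the crop.

    image_paths  -- paired test images, each containing one face from each group
    face_boxes_a -- bounding box of the group-A face in each image
    face_boxes_b -- bounding box of the group-B face in each image
    crop_preview -- the cropping function under test (assumed interface)
    """
    kept_a = kept_b = 0
    for path, box_a, box_b in zip(image_paths, face_boxes_a, face_boxes_b):
        crop = crop_preview(Image.open(path))
        kept_a += face_in_crop(box_a, crop)
        kept_b += face_in_crop(box_b, crop)
    # A large, systematic gap between the two counts would suggest a bias.
    return kept_a, kept_b
```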

Twitter researcher Rumman Chowdhury says that users can hack the algorithm during Defcon. The findings will be assessed by a panel of machine learning experts, including experts from OpenAI. On Friday, Twitter will announce more details about the competition, including which rewards can be won.

Twitter isn’t the first to see bug bounty programs as a possible way to uncover biases. Mozilla Foundation researcher Deborah Raji, for instance, is investigating how bug bounty programs can contribute to finding biases in algorithms.
