Twitter will pay out an algorithmic bias bounty during Defcon. The company is asking visitors to the hacker event to uncover biases in one of its algorithms that could put certain people at a disadvantage, and will reward them for doing so.
It is the first time Twitter has offered a cash reward for finding flaws in an algorithm that disadvantage or favor certain groups of people. During the Defcon hacker event, Twitter’s Machine Learning Ethics, Transparency and Accountability team will make an algorithm available for visitors to probe. If they discover a bias, they can win a cash prize.
Twitter uses several different algorithms in its service. In addition to the algorithm that organizes the timeline, for example, there is one that automatically crops images so that they all appear at the same size in timeline previews. Twitter has been accused in the past of unfair algorithms that reinforce prejudice.
Last year, for example, users discovered that when a photo containing both a white man and a black man was shared, Twitter’s preview consistently showed the white man. Other users noticed that when a tall image was cropped for preview, white faces were shown more often than black faces. Twitter investigated the matter and confirmed the flaw in the algorithm in May.
Twitter researcher Rumman Chowdhury says users will be able to probe the algorithm during Defcon. Findings will be evaluated by a panel of machine learning experts, including experts from OpenAI. This Friday, Twitter will announce more details about the contest, including what rewards are up for grabs.
Twitter isn’t the first to consider bug bounty programs an appropriate way to spot bias. Mozilla Foundation researcher Deborah Raji, for instance, is investigating how bug bounty programs can help uncover biases in algorithms.