‘AI systems trained on human language adopt human stereotypes’


Applying machine learning to human language leads systems to pick up people’s semantic biases, scientists have shown. Artificial intelligence can thereby unintentionally incorporate stereotypes into its behaviour.

Machine learning is often put into practice by letting systems recognize patterns in large amounts of data. Artificial intelligence can thus learn word associations from existing texts written by humans. Researchers from the University of Bath and Princeton University examined these associations using the Word-Embedding Association Test, or WEAT.
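A word embedding represents each word as a vector of numbers, so that related words end up geometrically close together. The sketch below is a minimal illustration of that idea, using hand-made toy vectors rather than a real trained model, and measuring association strength with cosine similarity.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: closer to 1.0 means more similar."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional vectors for illustration only; real embeddings such as
# GloVe or word2vec have hundreds of dimensions learned from large corpora.
vectors = {
    "flower":   np.array([0.9, 0.1, 0.2]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "insect":   np.array([0.1, 0.9, 0.3]),
}

print(cosine_similarity(vectors["flower"], vectors["pleasant"]))  # relatively high (~0.99)
print(cosine_similarity(vectors["insect"], vectors["pleasant"]))  # lower (~0.37)
```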

The test for computer systems is based on a test for humans, the implicit association test, or IAT. It measures how quickly people pair words in order to reveal unconscious associations. For example, Americans are more likely to associate names like “Brad” and “Courtney” with positive words like “joyful” and “sunrise,” while names like “Leroy” and “Latisha” are more likely to be associated with words like “hate” and “vomit,” according to Science.

In the WEAT, a word’s meaning is represented for the computer as a string of numbers, a vector, and associations are measured by how closely those vectors match. The researchers used embeddings from previous research at Princeton University, in which billions of words from the internet were analysed. The results mirrored those of the IAT: the vectors for flowers lay closer to those for pleasant words, and the vectors for insects closer to those for unpleasant words.
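For readers who want the mechanics, the following is a hedged sketch of the WEAT effect size described in the Science paper: for target word sets X and Y (for example flowers and insects) and attribute sets A and B (pleasant and unpleasant words), each word’s association score is the difference between its mean cosine similarity to A and to B, and the effect size is the standardized difference between the two target sets. The `emb` lookup is a placeholder for a real pretrained embedding, and the word lists in the usage comment are shortened examples, not the paper’s full lists.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    """s(w, A, B): mean similarity of word w to attribute set A minus to attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Standardized difference in association between target sets X and Y."""
    x_assoc = [association(x, A, B, emb) for x in X]
    y_assoc = [association(y, A, B, emb) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Usage sketch (emb would be a dict of word -> vector loaded from a
# pretrained embedding such as GloVe):
# X = ["rose", "daisy", "tulip"]       # flowers
# Y = ["ant", "moth", "wasp"]          # insects
# A = ["love", "peace", "pleasure"]    # pleasant
# B = ["hatred", "agony", "filth"]     # unpleasant
# print(weat_effect_size(X, Y, A, B, emb))
```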

But the test also showed that the artificial intelligence picked up implicit associations, such as linking men to career and science and women to family and the arts. As with the IAT, the positive and negative associations with certain names also appeared.

The researchers point out that algorithms are already being used for tasks such as matching people with jobs and predicting criminal behaviour. Artificial intelligence should therefore not automatically be regarded as objective, they argue. “AI is an extension of our own culture,” said computer scientist Joanna Bryson, one of the researchers. She does not propose throwing away the data used to train the systems in order to filter out the prejudices. Instead, a layer could be added in which a human or a computer decides whether and how prejudice should be dealt with.

The researchers published their work in Science under the title ‘Semantics derived automatically from language corpora contain human-like biases’. In doing so, they demonstrate what AI systems have already shown several times in practice. Last year, for example, Microsoft released a chatbot called Tay, an acronym for ‘thinking about you’, on Kik, Twitter and GroupMe. The chatbot could respond to other users, but soon began making racist and insulting statements and spamming users.
