The EU wants tech companies to label generative AI disinformation


If the European Commission has its way, tech companies will have to recognize when their generative AI tools can be used to spread disinformation and label such content accordingly. The Commission is expanding a code of conduct for this purpose.

The European Commission wants these labels to ensure that generative AI tools cannot be used to spread disinformation, European Commission Vice President Věra Jourová said, according to Reuters. This concerns services such as Microsoft’s Bing Chat and Google’s Bard. The companies involved must report in July what they have done to combat disinformation on their platforms.

The label is part of the code of conduct that more than thirty tech companies previously signed. With this code, the European Commission wants to combat the spread of disinformation and fake news, including measures against deepfakes, bots and fake accounts. Companies that do not adhere to this code can be fined up to six percent of their annual turnover under the Digital Services Act.

Twitter withdrew from that voluntary code earlier this year. According to The Guardian, Jourová warned the platform when announcing the generative AI labels that it must still comply with the DSA. The vice president described the withdrawal as a ‘mistake’ by Twitter and said the company had chosen ‘the hard way’ and ‘confrontation’. By leaving the code, Twitter has “received significant attention and its actions and compliance with EU law will be vigorously and swiftly investigated.”
