Twitter will begin enforcing its new policy on deepfakes on March 5. From that date, "synthetic and manipulated media" can be labeled and, in some cases, removed.
Twitter says that it will flag posts containing "deceptively or misleadingly modified or fabricated content." For example, a "manipulated media" label with an exclamation mark is added to an edited video in someone's timeline. Users can also click through for more context about the video, where external sources explain why the video is not authentic. In addition, a warning can be displayed when a user tries to share such a video, and the video's visibility on the platform can be reduced. In most cases, all of these measures are applied to the tweets that receive the label.
To determine whether a video or photo has been tampered with, Twitter looks not only at audiovisual cues but also at the way the media is shared. This includes any accompanying textual explanation, for example whether the uploader claims that the image reflects reality. Furthermore, metadata, information on the Twitter user's profile, and any links in the message are taken into account.
If there is a reasonable chance that certain content will have a negative impact on public safety or cause serious harm, the media will likely be removed. This concerns threats to public safety in general or to a specific group or person. The risk of mass violence or civil unrest is also a factor, as are attempts to restrict someone's freedom of expression, for example voter suppression during elections or intimidation in general.
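The policy as reported combines three questions: is the media significantly altered, is it shared deceptively, and is it likely to cause harm? As a rough illustration only, the decision logic could be sketched like this — all field names, rules, and thresholds below are invented for the example; Twitter has not published its actual implementation.

```python
# Hypothetical sketch of the labeling/removal decision described above.
# This is NOT Twitter's real code; signals and rules are assumptions.
from dataclasses import dataclass

@dataclass
class MediaSignals:
    is_manipulated: bool        # audiovisual cues suggest alteration
    shared_deceptively: bool    # caption, metadata, or profile present it as real
    likely_to_cause_harm: bool  # e.g. threat to safety, voter suppression

def moderation_actions(s: MediaSignals) -> list:
    """Return the list of measures the sketch would apply."""
    actions = []
    if s.is_manipulated:
        actions.append("label")  # "manipulated media" tag with exclamation mark
        if s.shared_deceptively:
            # sharing warning plus reduced reach
            actions += ["warn_on_share", "reduce_visibility"]
            if s.likely_to_cause_harm:
                actions.append("remove")
    return actions
```

In this toy model, ordinary edited media only gets a label, deceptive sharing adds a warning and reduced visibility, and the combination with likely harm triggers removal, mirroring the escalation the article describes.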