Researchers create tool that can recognize deepfakes in most cases


American researchers have created a video-analysis tool that can recognize whether footage has been manipulated, for example to make people appear to say something they never actually said. The tool is meant to help curb the spread of deepfakes.

Developed by researchers at UC Berkeley and the University of Southern California, the tool detected manipulated video in at least 92 percent of cases, MIT Technology Review reports on the basis of a paper by the researchers. It uses artificial intelligence trained on a person's head movements, which the researchers call that person's 'soft biometric signature'. The model can then distinguish the natural movements a person makes from those generated by a computer.
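To give a rough sense of the approach, the sketch below trains a one-class classifier on motion features from authentic footage of a person, so that clips whose motion falls outside that profile are flagged. The feature extraction, classifier choice, and parameters are illustrative assumptions, not the researchers' actual implementation.

```python
# Minimal sketch of the "soft biometric signature" idea: fit a model to
# motion features from authentic footage of one person, then flag clips
# whose features fall outside that profile. Feature extraction is stubbed
# out with synthetic data; in practice the features would come from a
# head-pose / facial-landmark tracker. All names and parameters here are
# hypothetical, not the paper's exact method.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def motion_features(clip: np.ndarray) -> np.ndarray:
    """Stand-in for per-clip motion statistics (e.g. summaries of
    head-pose angles over the clip): mean and std per channel."""
    return np.concatenate([clip.mean(axis=0), clip.std(axis=0)])

# Pretend each clip is a (frames x 3) array of head-pose angles.
real_clips = [rng.normal(0.0, 1.0, size=(90, 3)) for _ in range(200)]
X_real = np.stack([motion_features(c) for c in real_clips])

# Fit the person's "signature": the envelope of their natural motion.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_real)

# A synthetic clip with exaggerated motion should fall outside the profile.
fake_clip = rng.normal(0.0, 3.0, size=(90, 3))
print(model.predict(motion_features(fake_clip).reshape(1, -1)))  # -1 = flagged
```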

To test the AI, the researchers created a number of deepfakes themselves, producing manipulated videos with several common tools. The AI then examined the footage for transitions between frames involving unnatural movements, which suggest manipulation. The tool was able to pick out the fake video in 92 percent of cases. It also kept working when artifacts were introduced into the video through recompression, a trick often used to deceive deepfake detection methods.
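The frame-transition idea can be illustrated in a few lines: track a pose estimate per frame and flag any consecutive frames that jump further than natural motion allows. Everything below, including the tracker stand-in, the threshold, and the injected discontinuity, is hypothetical; real detectors use far richer features.

```python
# Toy illustration of the frame-transition check: compare consecutive
# head-pose estimates and flag jumps too abrupt for natural motion.
import numpy as np

def unnatural_transitions(pose_per_frame: np.ndarray, max_step: float = 0.5) -> np.ndarray:
    """Return indices of frames whose pose jumps abruptly from the previous frame."""
    steps = np.linalg.norm(np.diff(pose_per_frame, axis=0), axis=1)
    return np.where(steps > max_step)[0] + 1

rng = np.random.default_rng(1)
# Smooth natural motion as a slow random walk of head-pose angles...
poses = np.cumsum(rng.normal(0.0, 0.05, size=(120, 3)), axis=0)
# ...with a splice-like discontinuity injected at frame 60.
poses[60:] += np.array([1.0, -0.8, 0.4])

print(unnatural_transitions(poses))  # -> [60]
```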

Several methods for detecting deepfakes have already been developed, but the California researchers' approach reportedly requires significantly less computational power and is harder to circumvent than tools that target a single specific property, such as eye movements.
