Deepfakes are often called one of the great dangers of the future: a fake video can put words into the mouth of someone who would never say them. Thanks to social media, such a video can spread quickly. And the problem is not limited to video. With text, we like to think we can tell whether it was written by a human or a machine, but in practice we often have no idea.
That became clear in the United States. Through Medicaid.gov, citizens could give feedback on a change to an Idaho state medical program. The comment page went live in October 2019 and received 1,000 responses. Half of them, however, did not come from citizens who wanted to voice their opinion: they were generated by artificial intelligence. The moderators who processed the responses did not notice at all.
Harvard student Max Weiss knew better, because he had built the AI system responsible. That is serious, because governments consult these comments when making policy and decisions. It is very easy for a machine to produce text that looks human, but it is very hard for a human to recognize that it came from a bot.
In response to the experiment, the US government has decided to redesign its citizen feedback pages: no longer just a button you press to submit something, but something more robust. Weiss was struck that this button was the only thing standing between him and influencing politics, which is why he started his AI experiment. He used the GPT-2 program to generate the fake responses. “I was shocked at how easy it was to fine-tune GPT-2 to make the comments real. It’s relatively concerning on a number of fronts,” says Weiss.
The danger is not limited to AI-generated responses on social media or web forms. Artificial intelligence also lets hackers all over the world write far better phishing emails. Today you can sometimes spot such an email by its poor English or a clumsy language error, but once artificial intelligence is used, those messages too will be almost impossible to distinguish from the real thing. A deepfake video may be the more impressive way to fool someone (it is also a lot more work), but text clearly has its dangers as well.