Thanks to machine learning, YouTube claims it can remove millions of videos that violate its terms before anyone sees them. In the last quarter of 2017, this happened with 6.7 million videos.
YouTube introduced machine learning to identify offensive videos in June 2017. More than half of the videos now removed for extreme violence have fewer than ten views. In early 2017, before the platform deployed machine learning, only eight percent of the violent videos YouTube removed had fewer than ten views; the rest had more. With this example, YouTube wants to show that it is getting faster at removing videos.
YouTube emphasizes that the use of machine learning actually means more people are involved in the review process, because flagged videos must still be assessed by human moderators against the platform's guidelines. Google aims to employ ten thousand people for reviewing infringing content by the end of 2018. Facebook recently announced that it wants to double the number of people working on safety and security, including the same task, by the end of this year.
In total, YouTube removed eight million videos in the last three months of 2017 for spam or offensive material. According to the service, that is a fraction of one percent of the total number of videos added.