Researcher reverse-engineers Apple’s NeuralHash algorithm; collisions prove possible


Apple’s controversial NeuralHash algorithm for detecting child abuse imagery can also recognize images after they have been converted to a different format or compressed. A researcher has examined the code and says the algorithm can also produce collisions.

Researcher Asuhariet Ygvar this week published a tool on GitHub that recreates NeuralHash. According to Ygvar, he extracted the code from iOS 14.3, which already contained the algorithm, and reconstructed it through reverse engineering. With Ygvar’s tool it is possible to try out the detection system before it is officially rolled out to all users. The tool is a Python script that can be used to determine an image’s NeuralHash on iOS or macOS. NeuralHash is the hashing algorithm that Apple will use in iOS 15. In a post on Reddit, Ygvar says he hopes his tool will help researchers “better understand” the algorithm and Apple’s technology.
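For illustration, here is a minimal sketch of how such a script might compute a hash, assuming the extracted network has been exported to ONNX and that a 96x128 projection matrix used to binarize the embedding is available as a NumPy file. The file names, the 360x360 input size, the normalization and the 128-dimensional embedding follow common descriptions of the reverse-engineered pipeline, but all specifics here should be treated as assumptions rather than Ygvar’s exact code.

```python
# Hedged sketch: computing a NeuralHash-style perceptual hash from an image,
# assuming the extracted network is available as "model.onnx" and the 96x128
# projection matrix as "seed.npy". File names, input size and normalization
# are assumptions for illustration only.
import numpy as np
import onnxruntime
from PIL import Image

def neural_hash(image_path: str,
                model_path: str = "model.onnx",
                seed_path: str = "seed.npy") -> str:
    # Load the reverse-engineered network and the projection matrix.
    session = onnxruntime.InferenceSession(model_path)
    seed = np.load(seed_path)                       # shape (96, 128), assumed

    # Preprocess: resize to the assumed input size and scale to [-1, 1].
    img = Image.open(image_path).convert("RGB").resize((360, 360))
    arr = np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0
    arr = arr.transpose(2, 0, 1)[np.newaxis].astype(np.float32)  # NCHW layout

    # Run the network to get a 128-dimensional embedding.
    input_name = session.get_inputs()[0].name
    embedding = session.run(None, {input_name: arr})[0].reshape(128, 1)

    # Project to 96 dimensions and binarize by sign to get a 96-bit hash.
    bits = "".join("1" if v >= 0 else "0" for v in (seed @ embedding).flatten())
    return "{:024x}".format(int(bits, 2))           # 96 bits -> 24 hex digits

print(neural_hash("photo.jpg"))                     # placeholder file name
```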

Ygvar says initial experiments with the tool show that NeuralHash tolerates scaling and compression: an image keeps the same hash after it is compressed or resized. The hash does change, however, when an image is cropped or rotated.
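To check that behavior, one could compare hashes before and after transforming a test image, building on the hypothetical neural_hash() helper sketched above. File names and sizes are placeholders, and the expected True/False outcomes reflect Ygvar’s reported results rather than a guarantee.

```python
# Sketch of a robustness check, reusing the hypothetical neural_hash() above.
from PIL import Image

original = "photo.jpg"                                    # placeholder image
Image.open(original).resize((200, 200)).save("resized.jpg")      # rescale
Image.open(original).crop((0, 0, 200, 200)).save("cropped.jpg")  # crop (assumes
                                                                 # image >= 200x200)

print(neural_hash(original) == neural_hash("resized.jpg"))  # expected: True
print(neural_hash(original) == neural_hash("cropped.jpg"))  # expected: False
```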

Other researchers have already started working with the tool. One of them discovered that it is possible to cause a collision, meaning that two different images produce the same hash. For many older hash algorithms, such as MD5 and SHA-1, generating collisions has become easy in recent years, which is why they are no longer considered secure. Ygvar confirms on GitHub that such a collision is indeed possible with NeuralHash. When it introduced the technology, Apple said there was “a one in a trillion chance” of a hash match between two unrelated images.
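Put concretely, a collision check boils down to comparing two hashes for equality; researchers also look at the Hamming distance between hashes to spot near misses. A sketch, again using the hypothetical neural_hash() helper and placeholder file names:

```python
# Sketch: a collision means two visually different images yield the same 96-bit hash.
def hamming_distance(hex_a: str, hex_b: str) -> int:
    # Number of differing bits between two 96-bit hashes given as hex strings.
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")

h1 = neural_hash("image_a.png")   # placeholder file names; hashes come from the
h2 = neural_hash("image_b.png")   # hypothetical neural_hash() sketched earlier
print(h1 == h2)                   # True would mean a full collision
print(hamming_distance(h1, h2))   # 0 for a collision; low values are near misses
```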

In a conversation with journalists, Apple says the company has taken such collisions into account. Safeguards are said to have been built in so that users are not falsely accused of uploading child abuse imagery. For example, Apple only receives a warning once at least thirty matches have been found. In addition, someone with malicious intent would first have to obtain the hash database that Apple uses, which is not publicly available.
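As a deliberately simplified illustration of that threshold safeguard: in the sketch below, nothing is reported until an account crosses thirty matches. Apple’s actual system enforces this limit cryptographically with threshold secret sharing rather than with a plain counter, so this conveys only the idea, not the mechanism.

```python
# Deliberately simplified sketch of the thirty-match threshold safeguard.
# Apple's real system enforces the limit cryptographically (threshold secret
# sharing); this plain counter only shows that isolated matches trigger nothing.
MATCH_THRESHOLD = 30   # the number of matches cited in Apple's briefing

def should_flag_account(match_count: int) -> bool:
    # Human review would only be triggered once the threshold is crossed.
    return match_count >= MATCH_THRESHOLD

print(should_flag_account(1))    # False: a single accidental match is ignored
print(should_flag_account(30))   # True: only now would the account be reviewed
```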

Apple announced its so-called CSAM scanning last week. The company will locally scan iPhones and iPads for child sexual abuse material by hashing photos that are uploaded to iCloud and comparing those hashes against a database from the National Center for Missing and Exploited Children. The plan has been heavily criticized by security researchers, privacy activists and legal experts. Among other things, critics fear that Apple will eventually apply the technology to other unwanted imagery, such as terrorist content, or that governments will pressure Apple into using it for censorship.
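Conceptually, the matching step described above is a set-membership test: the hash of each uploaded photo is looked up in the set of known hashes. In Apple’s actual design the database is blinded and the comparison runs through a private set intersection protocol, so neither the device nor Apple sees plain match results; the hash values in the sketch below are placeholders.

```python
# Deliberately simplified sketch of hash matching against a known-CSAM database.
# Real deployments use a blinded database and private set intersection; this
# plain set lookup only illustrates the concept. Hashes are placeholder strings
# of the kind a NeuralHash computation (see the first sketch) would produce.
known_hashes = {"0" * 24, "f" * 24}   # placeholder 96-bit hashes in hex

def matches_database(image_hash: str, database: set[str]) -> bool:
    # A match means the photo's perceptual hash appears in the database.
    return image_hash in database

upload_hashes = ["0" * 24, "abc123abc123abc123abc123"]  # hashes of queued photos
print(sum(matches_database(h, known_hashes) for h in upload_hashes))  # match count
```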
