Apple will only flag abuse images that appear in databases from multiple countries


Apple this week responded to criticism of its plans to scan iCloud photos for child abuse imagery. Among other things, the company says it will only flag images that appear in at least two different child safety databases.

Apple hopes this will reduce the number of false positives produced by the feature, which targets child sexual abuse material, or CSAM. The company says the databases it relies on must come from different countries. “The on-device encrypted CSAM database contains only data independently submitted by two or more child safety organizations, which are in separate jurisdictions, and therefore not under the control of the same government,” Apple writes in a report about the upcoming safety feature.
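
As a rough sketch of that policy, the snippet below keeps only hashes that appear in the submissions of at least two organizations. The organization names and hash values are placeholders, and Apple’s actual pipeline (a blinded, encrypted NeuralHash database) is not modeled here.

```python
from collections import Counter

# Hypothetical hash submissions from child safety organizations in
# different jurisdictions (all values are made-up placeholders).
submissions = {
    "org_us": {"a1b2", "c3d4", "e5f6"},
    "org_eu": {"c3d4", "e5f6", "0789"},
    "org_uk": {"e5f6", "0789"},
}

# Count in how many independent submissions each hash appears.
counts = Counter(h for hashes in submissions.values() for h in hashes)

# Only hashes submitted by two or more organizations end up in the
# on-device database, per Apple's stated policy.
on_device_database = {h for h, n in counts.items() if n >= 2}

print(sorted(on_device_database))  # ['0789', 'c3d4', 'e5f6']
```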

In theory, this should ensure that individual countries cannot force Apple’s system to flag images that do not depict child abuse. The company also says it will only flag an iCloud account if the system identifies 30 or more images as child abuse material. This threshold is said to have been chosen to provide a ‘drastic safety margin’ against false positives, and it may change over time as the real-world performance of the system is evaluated.
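
A back-of-the-envelope calculation shows why a 30-match threshold leaves such a wide margin: even with a deliberately pessimistic per-photo false-match rate, the chance of an account crossing the threshold by accident is vanishingly small. The photo count and match rate below are assumptions chosen for illustration, not Apple’s figures.

```python
from math import comb

# Assumed values for illustration only: 100,000 photos in a library and a
# one-in-a-million chance that any single photo falsely matches the database.
n_photos = 100_000
false_match_rate = 1e-6
threshold = 30

# Binomial tail: probability that 30 or more photos match by pure chance.
# Summing up to threshold + 50 is enough; later terms are negligible.
p_flagged_by_accident = sum(
    comb(n_photos, k) * false_match_rate**k * (1 - false_match_rate)**(n_photos - k)
    for k in range(threshold, threshold + 50)
)

print(f"{p_flagged_by_accident:.3g}")  # prints a vanishingly small probability
```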

Image via Apple

Before this threshold of 30 images is exceeded, Apple says it cannot decrypt any of the data, and it is not possible for the company to check the number of matches for a given account. Once the threshold of thirty images is crossed, Apple’s servers can decrypt only the safety vouchers that correspond to the positive matches; the servers learn nothing about other images. These vouchers give access to a visually derived version of a positively flagged image, such as a low-resolution copy. The images are then reviewed by a human before the tech giant disables the account and forwards a report to a child safety organization.
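
One way to get exactly this behaviour is threshold secret sharing, the general technique Apple has described for the feature: each matching voucher carries one share of a per-account secret, and fewer than 30 shares reveal nothing about it. The sketch below is a toy Shamir-style implementation with made-up parameters, not Apple’s actual construction.

```python
import random
from functools import reduce

PRIME = 2**127 - 1   # a large prime field for the demo secret
THRESHOLD = 30       # matches Apple's stated 30-image threshold

def make_shares(secret, n_shares, threshold=THRESHOLD, prime=PRIME):
    """Split `secret` into points on a random degree-(threshold-1) polynomial;
    any `threshold` points recover the secret, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]
    def f(x):
        return reduce(lambda acc, c: (acc * x + c) % prime, reversed(coeffs), 0)
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover_secret(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = (num * -x_j) % prime
                den = (den * (x_i - x_j)) % prime
        secret = (secret + y_i * num * pow(den, -1, prime)) % prime
    return secret

account_secret = random.randrange(PRIME)
shares = make_shares(account_secret, n_shares=40)

# With 29 vouchers the server learns nothing useful; with 30 it can reconstruct
# the secret and decrypt the matching vouchers' contents.
print(recover_secret(shares[:29]) == account_secret)  # False (overwhelmingly likely)
print(recover_secret(shares[:30]) == account_secret)  # True
```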

Apple will also make it possible for third parties to audit the CSAM database. “An auditor can confirm for any given root hash of the encrypted CSAM database in the knowledge base article or on a device that the database was generated only from an intersection of hashes from participating child safety organizations, with no additions, deletions, or changes.” Child safety organizations do not need to share sensitive material to facilitate such an audit, Apple reports.
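
A highly simplified picture of that audit, assuming the published root hash is just a hash over the sorted entries of the intersected database (Apple’s real construction operates on a blinded, encrypted database and is more involved):

```python
import hashlib

def root_hash(entries):
    """Hash over the sorted entries; stands in for the database's root hash."""
    digest = hashlib.sha256()
    for entry in sorted(entries):
        digest.update(entry.encode())
    return digest.hexdigest()

# Hash lists as submitted by participating organizations (placeholder values).
org_a = {"c3d4", "e5f6", "0789", "a1b2"}
org_b = {"c3d4", "e5f6", "0789", "ffff"}

# What Apple would publish alongside the database (computed here for the demo).
published_root = root_hash(org_a & org_b)

# The auditor independently recomputes the root from the organizations'
# submissions and checks that nothing was added, removed, or changed.
auditor_root = root_hash(org_a & org_b)
print(auditor_root == published_root)  # True
```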

Apple announced earlier this month that it would add a CSAM detection feature to iOS 15 and iPadOS 15. The feature scans photos that users upload to iCloud for child abuse imagery, using databases of hashes from child safety organizations such as the National Center for Missing and Exploited Children in the US. It will be built into iOS 15 and iPadOS 15 in all countries, but will only be enabled in North America for now. The tech giant writes that it will not scan for unknown images that do not appear in CSAM databases.
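
In outline, the matching step is a membership check of a photo’s fingerprint against that hash database. The sketch below substitutes an ordinary cryptographic hash for Apple’s perceptual NeuralHash and ignores the cryptographic blinding that keeps match results hidden from the device; the photo bytes and database contents are placeholders.

```python
import hashlib

def fingerprint(photo_bytes: bytes) -> str:
    """Placeholder fingerprint. Apple uses NeuralHash, a perceptual hash that is
    robust to resizing and re-encoding; a cryptographic hash stands in here only
    to keep the sketch self-contained."""
    return hashlib.sha256(photo_bytes).hexdigest()

# Stand-in for the on-device database of known CSAM hashes (placeholder content).
known_hashes = {fingerprint(b"known-image-bytes")}

def matches_known_csam(photo_bytes: bytes) -> bool:
    # Only photos whose fingerprint is already in the database can be flagged;
    # images that do not appear in the database are never reported.
    return fingerprint(photo_bytes) in known_hashes

print(matches_known_csam(b"known-image-bytes"))    # True
print(matches_known_csam(b"holiday-photo-bytes"))  # False
```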

When the feature was announced, several parties criticized Apple’s plans. According to the Center for Democracy & Technology, the company is building a surveillance and censorship infrastructure that is vulnerable to abuse worldwide. Apple has since promised to use the photo-scanning feature only to detect child abuse. Apple’s head of software, Craig Federighi, acknowledged that the company created confusion with the announcement, but the tech giant is sticking with the update and has not changed its rollout plans following the criticism.
