By implementing an on-device matching process, Apple wants to detect and flag child sexual abuse material. But can this technology be used by governments for other purposes?
Apple on Thursday announced three new measures coming to its operating systems this fall that aim to limit the spread of Child Sexual Abuse Material (CSAM) and protect children from predators:
- CSAM detection in iCloud Photos: Using advanced cryptographic methods, Apple will detect if a user’s iCloud Photos library contains high levels of CSAM content and pass this information on to law enforcement agencies.
- Safety measures in Messages: The Messages app will warn children about sensitive content and allow parents to receive alerts if such content is sent or received.
- Safety measures in Siri and Search: Siri and Search will intervene when users try to search for CSAM-related topics and will also provide parents and children with expanded information if they encounter unsafe situations.
Why it matters: Last year, it was reported that India leads in the online generation of CSAM. Against that backdrop, Apple’s latest measures appear laudable and harmless, but the underlying technology could evolve to serve other, privacy-invasive purposes. There will now be pressure on Android to follow suit, and the move opens the door to all kinds of surveillance tools and content-removal requests from governments. For example, the Indian government could ask platforms like WhatsApp to use this same technology to proactively remove photos that are critical of it.
How does CSAM detection in iCloud Photos work?
“Apple’s method of detecting known CSAM is designed with user privacy in mind,” the company said.
- On-device matching against a database of known images: Before an image is stored in iCloud Photos, Apple will convert the image to a unique number through a process called NeuralHash. Then, an on-device matching process checks this hash with a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organisations.
- Match result not revealed: Apple says its matching process will be powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result to either the user or Apple. The device will upload this match result along with the image to iCloud.
- Apple can view match results only if there is a high number of matches: Apple cannot view the match results unless a user’s iCloud Photos library crosses a threshold of known CSAM content.
- Will block the user’s account and report to NCMEC: Once an account crosses the threshold of known CSAM content, Apple can access the matched images. It will manually review and confirm each match report before disabling the user’s account and notifying NCMEC, which works as the reporting centre for CSAM in collaboration with law enforcement agencies in the US.
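The matching-and-threshold flow described above can be sketched conceptually. This is a simplified illustration, not Apple’s actual implementation: it substitutes an ordinary cryptographic hash for NeuralHash and plain set membership for private set intersection (in Apple’s real system, neither the device nor Apple learns individual match results until the threshold is crossed). The hash values, database contents, and threshold below are all hypothetical.

```python
import hashlib

# Stand-in for NeuralHash: an ordinary cryptographic hash of the image
# bytes. (The real NeuralHash is a perceptual hash that tolerates minor
# edits; this sketch only illustrates the matching-and-threshold flow.)
def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of known hashes (in reality supplied by NCMEC
# and stored on-device as an unreadable, blinded set).
KNOWN_HASHES = {image_hash(b"known-image-1"), image_hash(b"known-image-2")}

MATCH_THRESHOLD = 2  # illustrative; Apple has not published the real value

def scan_library(library: list) -> bool:
    """Return True if the library crosses the match threshold.

    In Apple's design, per-image results are hidden from both parties by
    private set intersection; only once the threshold is crossed can
    Apple decrypt the matched images for human review.
    """
    matches = sum(1 for img in library if image_hash(img) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD

# One match stays below the threshold; two matches trigger review.
print(scan_library([b"holiday-photo", b"known-image-1"]))          # False
print(scan_library([b"known-image-1", b"known-image-2", b"cat"]))  # True
```

The key design point this sketch captures is that flagging is account-level, not image-level: a single match reveals nothing and triggers nothing.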
Can this technology be used by governments for other purposes?
A narrowly-scoped backdoor is still a backdoor:
“Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.” – Electronic Frontier Foundation
Tech companies like Apple have faced considerable pressure from governments around the world to weaken, or provide a backdoor to, the encryption used on their devices and services so that law enforcement can investigate serious crimes such as terrorism and the spread of child sexual abuse material. But tech companies, Apple in particular, have refused to create such a backdoor, citing violations of free speech and privacy.
Technology can be adapted for any other target imagery or text even in end-to-end encrypted services:
“Although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple’s precedent could also increase pressure on other tech companies to use similar techniques.” – Financial Times
“All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change,” Electronic Frontier Foundation said.
“This sort of tool can be a boon for finding child pornography in people’s phones. But imagine what it could do in the hands of an authoritarian government?” Matthew Green, a cryptography professor at Johns Hopkins University, asked.
The ability to add scanning systems like this to E2E messaging systems has been a major “ask” by law enforcement the world over. Here’s an open letter signed by former AG William Barr and other western governments. https://t.co/mKdAlaDSts
— Matthew Green (@matthew_d_green) August 5, 2021
The Indian government can ask platforms to take down posts that are critical of it: For example, India’s new Information Technology Rules, 2021 require platforms to develop tools to proactively remove content that the government deems illegal, which could merely be content critical of the government. Platforms have maintained that this will harm privacy and free speech, but if Apple can implement proactive measures for CSAM, the Indian government can ask why the same technology cannot be modified to accommodate its requests. It could, for instance, ask WhatsApp why the platform cannot implement similar technology to find images and videos the government deems illegal or disruptive to public order, and flag the users who share such content.
If you think Apple won’t adhere to the government’s whims and fancies, read here about the compromises the company has made to the authorities in China.
These technologies are mass surveillance tools:
A number of people pointed out that these scanning technologies are effectively (somewhat limited) mass surveillance tools. Not dissimilar to the tools that repressive regimes have deployed — just turned to different purposes.
— Matthew Green (@matthew_d_green) August 5, 2021
No matter how well-intentioned, @Apple is rolling out mass surveillance to the entire world with this. Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow.
They turned a trillion dollars of devices into iNarcs—*without asking.* https://t.co/wIMWijIjJk
— Edward Snowden (@Snowden) August 6, 2021
All surveillance tech start with the bogey of child abuse or terrorism, and literally all gets repurposed for political targeting and consolidation of power!
I’m done with Apple. Not accepting iMessage vulnerabilities and now this… the whole privacy show is pure advertising BS. https://t.co/CYkQL1P78X
— Shivam Shankar Singh (@ShivamShankarS) August 6, 2021
What are some other questions and concerns around this technology?
- Will Apple be able to see the images we upload? Apple will not be able to see the photos during the matching process because this process happens on the device. After the photos are uploaded to iCloud Photos, Apple will be able to see them only if the user’s account crosses a certain threshold of CSAM content. “Even in these cases, Apple only learns about images that match known CSAM,” the company said.
- Aren’t iCloud Photos end-to-end encrypted? While iCloud Photos are encrypted in transit and on the servers, they are NOT end-to-end encrypted. Therefore, Apple has the key to access photos in iCloud if need be. As a matter of fact, iCloud Backups are not end-to-end encrypted either.
- Will Apple flag partially clothed or naked images of my children? This will not happen because the on-device matching process only matches with a database of known CSAM image hashes provided by NCMEC. It is not designed to flag all naked or sensitive photos that are uploaded to iCloud.
- Will Apple be able to find new CSAM content? Because the technology relies on matching with a known database of CSAM hashes, it doesn’t look like Apple will be able to catch any new child sexual abuse material.
- Can users see the database of CSAM image hashes provided by NCMEC? No. The database of known CSAM image hashes is stored as an unreadable set of hashes on users’ devices.
- Can hashes be wrongly matched? We don’t know much about the NeuralHash algorithm that Apple is using, but similar technologies have produced false matches in the past. “Imagine someone sends you a perfectly harmless political media file that you share with a friend. But that file shares a hash with some known child porn file?” Matthew Green asked. “Apple should commit to publishing its algorithms so that researchers can try to develop ‘adversarial’ images that trigger the matching function, and see how resilient the tech is,” he added.
- Which countries is this safety feature coming to? For now, the iCloud CSAM detection technology will only be used in the US, with no indication of if or when it will roll out internationally. But the technology might hit a roadblock in the EU, where Facebook was recently barred from running its child abuse detection tools.
- How robust and accurate is Apple’s CSAM detection technology? “The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account,” the company said. Furthermore, three independent auditors have done a technical assessment of the CSAM detection technology and found it to be mathematically robust, the company said. (Assessment 1, Assessment 2, Assessment 3).
- Will Apple flag my account if I share CSAM content through Messages? Apple is doing on-device checking of messages (more details below) sent and received by phones used by children for sexually explicit material, but it is not known if this includes specifically checking for CSAM and blocking accounts or reporting the same to NCMEC. As of now, it appears that the new CSAM detection technology is only being implemented for iCloud Photos. [Update: Aug 10, 3:40 pm] Apple confirmed that the communication safety feature in Messages is different from CSAM detection in iCloud Photos.
- Is there a way to opt out of this feature? There is no explicit opt-out, but the safety measure appears to be triggered only if the iCloud Photo Library feature is turned on. If an iPhone user merely stores photos on the device without syncing them to iCloud, those photos are not scanned for CSAM.
- Will I know which images of mine are flagged as CSAM? “Users can’t identify which images were flagged as CSAM by the system,” Apple said.
- What if my account is wrongly flagged? “If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated,” the company said.
- Will slightly edited CSAM images get matched? The NeuralHash hashing technology works in such a way that identical and visually similar images result in the same hash. “An image that has been slightly cropped, resized or converted from colour to black and white is treated identical to its original, and has the same hash,” Apple said.
- Do other companies and platforms check for CSAM content as well? Yes. Facebook, WhatsApp, Instagram, and most email providers including Apple Mail check and report CSAM content. Cloud storage providers like Dropbox, Google, and Microsoft OneDrive also check for CSAM content. In fact, “FB reported 15.8M cases of CSAM to NCMEC in 2019; Apple reported 205. Seems likely that there is massive under-reporting of very bad stuff on iOS,” tech reporter Casey Newton pointed out.
- Does this break end-to-end encryption in Messages? No. Apple’s new communication safety feature (more about this below) is different from its CSAM detection system for iCloud Photos. The Messages feature involves on-device scanning for sensitive content and alerting children, whose parents have turned on the feature, before they view such content. “None of the communications, image evaluation, interventions, or notifications are available to Apple,” the company said.
- Does this feature prevent children in abusive homes from seeking help? “The communication safety feature applies only to sexually explicit photos shared or received in Messages. Other communications that victims can use to seek help, including text in Messages, are unaffected,” the company said.
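Two of the questions above — why slightly edited images still match, and why wrong matches are possible — come down to how perceptual hashing works. The toy “average hash” below is not NeuralHash (whose details Apple has not published); it is a minimal sketch of the general idea that perceptual hashes, unlike cryptographic ones, are deliberately stable under small edits.

```python
# A minimal "average hash" over an 8x8 grayscale image: each pixel maps
# to one bit depending on whether it is brighter than the image's mean.
# This is NOT NeuralHash, just a toy perceptual hash illustrating why
# small edits (e.g. a uniform brightness change) leave the hash intact.
def average_hash(pixels: list) -> int:
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

original = list(range(64))              # a simple 8x8 gradient, values 0..63
brightened = [p + 5 for p in original]  # uniformly brighter copy
different = original[::-1]              # reversed gradient: a distinct image

print(average_hash(original) == average_hash(brightened))  # True
print(average_hash(original) == average_hash(different))   # False
```

Because many nearby images collapse to the same hash value, it is at least theoretically possible to craft an unrelated image that collides with a database entry — which is exactly why Green argues Apple should publish its algorithm so researchers can probe it with adversarial images.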
Wonder what 2019's Apple would think of today's Apple. pic.twitter.com/BbgY6zjxrV
— Vlad Savov (@vladsavov) August 5, 2021
Other safety measures that Apple announced
Safety measures in Messages:
- What happens when sexually explicit images are sent or received? The Messages app will warn children and their parents when receiving or sending sexually explicit photos. When such content is received, Apple will blur the photo, warn the child, and present them with resources to help navigate the situation. Parents will have the ability to get notified if the child chooses to ignore the warning and view the image. Similar warnings and notifications will appear if the child tries to send sexually explicit photos. These features are only available for accounts set up as families.
- Apple will not have access to the messages: Apple says it will use “on-device machine learning to analyze image attachments and determine if a photo is sexually explicit” and that the feature is “designed so that Apple does not get access to the messages.”
Safety measures in Siri and Search:
- Additional guidance will be provided: Siri and Search will help children and parents stay safe online by providing resources specific to addressing CSAM content. “For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report,” the company said.
- Will intervene when searching for CSAM: Siri and Search will intervene when users search for queries related to CSAM by explaining to users “that interest in this topic is harmful and problematic,” the company said. Siri and Search will also provide resources to get help with the issue.
What does India’s data protection bill say about children’s safety online?
The draft Personal Data Protection (PDP) Bill, 2019 has defined guardian data fiduciaries (GDF) as entities that
- Operate commercial websites or online services directed at children or
- Process large volumes of personal data of children.
What are the responsibilities of GDFs?
- GDFs are prohibited from “profiling, tracking or behavioural monitoring of, or targeted advertising directed at, children”. Essentially, they cannot process children’s data in ways that can cause “significant harm” to the child.
- GDFs are supposed to verify the age of their users, and obtain consent from their guardian or parents if the user is a “child” — anyone under 18.
- Failure to adhere to the provisions can attract a fine of ₹15 crore, or 4% of the company’s global turnover.
In a MediaNama discussion on this topic, we examined how these fiduciaries will comply with this complex mandate. In another discussion, we considered whether there should be a blanket age of consent for using online services.
Update (10 Aug, 4:00 pm): Added clarification to existing questions in What are some other questions and concerns around this technology? and added two more questions.
- How WhatsApp Deals With Child Sexual Abuse Material Without Breaking End To End Encryption
- Instagram Announces Three New Safety Measures For Young Users, Including Limiting Advertisers’ Reach
- India Leads In Generation Of Online Child Sexual Abuse Material