
Why are Apple’s plans to scan iCloud Photos for child sexual abuse material concerning?

By implementing an on-device matching process, Apple wants to detect and flag child sexual abuse material. But can this technology be used by governments for other purposes?

Apple on Thursday announced three new measures coming to its operating systems this fall that aim to limit the spread of Child Sexual Abuse Material (CSAM) and protect children from predators:

  1. CSAM detection in iCloud Photos: Using advanced cryptographic methods, Apple will detect if a user’s iCloud Photos library contains high levels of known CSAM content and pass on this information to law enforcement agencies.
  2. Safety measures in Messages: The Messages app will warn children about sensitive content and allow parents to receive alerts if such content is sent or received.
  3. Safety measures in Siri and Search: Siri and Search will intervene when users try to search for CSAM-related topics and will also provide parents and children with expanded information if they encounter unsafe situations.

Why it matters: Last year, it was reported that India leads the world in the online generation of CSAM. Against this backdrop, Apple’s latest measures appear commendable and harmless, but the technology used to implement them can evolve to serve other, privacy-invasive purposes. There will now be pressure on Android to do the same, and the move opens the door to all kinds of surveillance tools and content removal requests from governments. For example, the Indian government could ask platforms like WhatsApp to use this same technology to proactively remove photos that are critical of it.

How does CSAM detection in iCloud Photos work?

“Apple’s method of detecting known CSAM is designed with user privacy in mind,” the company said.

  1. On-device matching against a database of known images: Before an image is stored in iCloud Photos, Apple will convert the image into a numeric hash through a process called NeuralHash. An on-device matching process then checks this hash against a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organisations.
  2. Match result not revealed: Apple says its matching process will be powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result to either the user or Apple. The device will upload this encrypted match result along with the image to iCloud.
  3. Apple can view match results only if there is a high number of matches: Apple cannot view the match results unless a user’s iCloud Photos library crosses a threshold of known CSAM content (a rough sketch of this threshold logic follows after this list).
  4. Will block the user’s account and report to NCMEC: Once an account crosses the threshold of known CSAM content, Apple can access the matched images; it will manually review and confirm each match report before disabling the user’s account and notifying NCMEC, which works as the reporting centre for CSAM in collaboration with law enforcement agencies in the US.
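
To make the threshold step concrete, here is a minimal, hypothetical Python sketch of matching a photo library against a set of known image hashes and revealing only a yes/no answer once a threshold is crossed. It is not Apple’s implementation: the real system uses NeuralHash together with private set intersection and threshold secret sharing, and the hash function, threshold value, and function names below are assumptions made purely to illustrate the control flow.

```python
# Hypothetical sketch only: "match against known hashes, reveal nothing until a
# threshold is crossed". Apple's real system relies on NeuralHash, private set
# intersection, and threshold secret sharing, none of which are reproduced here.
import hashlib

# Placeholder database; on real devices, the known-CSAM hash list is stored in
# a blinded, unreadable form.
KNOWN_HASHES: set[str] = set()

# Apple has not published the actual threshold; 30 is an arbitrary placeholder.
THRESHOLD = 30

def image_hash(image_bytes: bytes) -> str:
    # Stand-in cryptographic hash. Apple's NeuralHash is perceptual, so it also
    # matches visually similar (for example, resized or cropped) copies.
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag_account(library: list[bytes]) -> bool:
    # Count matches across the whole library and expose only a yes/no answer
    # once the threshold is crossed, never per-image results.
    matches = sum(1 for img in library if image_hash(img) in KNOWN_HASHES)
    return matches >= THRESHOLD
```

In Apple’s published design, the device uploads encrypted “safety vouchers” and the server can only decrypt the matching ones after the threshold is exceeded; the sketch above collapses all of that into a single function for readability.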

[Image: How CSAM detection works. Source: Apple]

Apple has also published a technical summary of how CSAM detection and the other safety measures work.

Can this technology be used by governments for other purposes?

A narrowly-scoped backdoor is still a backdoor: 

“Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.” – Electronic Frontier Foundation

Tech companies like Apple have faced considerable pressure from governments around the world to weaken the encryption used on their devices and services, or provide a backdoor to it, so that law enforcement can investigate serious crimes such as terrorism and the spread of child sexual abuse material. But tech companies, and Apple in particular, have refused to create such a backdoor, arguing that it would violate users’ privacy and free speech.


The technology can be adapted to any other targeted imagery or text, even in end-to-end encrypted services:

“Although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple’s precedent could also increase pressure on other tech companies to use similar techniques.” – Financial Times

“All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change,” Electronic Frontier Foundation said.

“This sort of tool can be a boon for finding child pornography in people’s phones. But imagine what it could do in the hands of an authoritarian government?” Matthew Green, a cryptography professor at Johns Hopkins University, asked.

The Indian government can ask platforms to take down posts that are critical of it: For example, India’s new Information Technology Rules, 2021 require platforms to develop tools to proactively remove content that the government deems illegal, which could merely be content critical of the government. Platforms have maintained that this would harm privacy and free speech, but if Apple can implement proactive measures for CSAM, the Indian government can ask why the same technology cannot be modified to accommodate its requests. It could, for instance, ask WhatsApp why the platform cannot deploy similar technology to find images and videos the government deems illegal or disruptive to public order, and to flag the users who share such content.


If you think Apple won’t adhere to the government’s whims and fancies, read here about the compromises the company has made to accommodate authorities in China.

These technologies are mass surveillance tools, critics argue.

What are some other questions and concerns around this technology?

  1. Will Apple be able to see the images we upload? Apple will not be able to see the photos during the matching process because this process happens on the device. After the photos are uploaded to iCloud Photos, Apple will be able to see them only if the user’s account crosses a certain threshold of CSAM content. “Even in these cases, Apple only learns about images that match known CSAM,” the company said.
  2. Aren’t iCloud Photos end-to-end encrypted? While iCloud Photos are encrypted in transit and on the servers, they are NOT end-to-end encrypted. Therefore, Apple has the key to access photos in iCloud if need be. As a matter of fact, iCloud Backups are not end-to-end encrypted either.
  3. Will Apple flag partially clothed or naked images of my children? This will not happen because the on-device matching process only matches with a database of known CSAM image hashes provided by NCMEC. It is not designed to flag all naked or sensitive photos that are uploaded to iCloud.
  4. Will Apple be able to find new CSAM content? Because the technology relies on matching against a known database of CSAM hashes, it doesn’t look like Apple will be able to catch any new child sexual abuse material.
  5. Can users see the database of CSAM image hashes provided by NCMEC? The database of known CSAM image hashes is not visible to the user. It is stored as an unreadable set of hashes on users’ devices.
  6. Can hashes be wrongly matched? We don’t know much about the NeuralHash algorithm that Apple is using, but in the past, there have been issues with wrong matches in similar technology. “Imagine someone sends you a perfectly harmless political media file that you share with a friend. But that file shares a hash with some known child porn file?” Matthew Green asked. “Apple should commit to publishing its algorithms so that researchers can try to develop ‘adversarial’ images that trigger the matching function, and see how resilient the tech is,” he added.
  7. Which countries is this safety feature coming to? As of now, the iCloud CSAM detection technology will only be used in the US, with no indication of if or when it will roll out internationally. But this technology might hit a roadblock in the EU, where Facebook was recently barred from running its child abuse detection tools.
  8. How robust and accurate is Apple’s CSAM detection technology? “The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account,” the company said. Furthermore, three independent auditors have done a technical assessment of the CSAM detection technology and found it to be mathematically robust, the company said. (Assessment 1, Assessment 2, Assessment 3).
  9. Will Apple flag my account if I share CSAM content through Messages? Apple is doing on-device checking of messages (more details below) sent and received by phones used by children for sexually explicit material, but it is not known if this includes specifically checking for CSAM and blocking accounts or reporting the same to NCMEC. As of now, it appears that the new CSAM detection technology is only being implemented for iCloud Photos. [Update: Aug 10, 3:40 pm] Apple confirmed that the communication safety feature in Messages is different from CSAM detection in iCloud Photos.
  10. Is there a way to opt out of this feature? There is no way to opt out, but notably, this safety measure only appears to be triggered if the iCloud Photo Library feature is turned on. If an iPhone user merely stores photos on the device without syncing them to iCloud, there is no CSAM detection on those photos.
  11. Will I know which images of mine are flagged as CSAM? “Users can’t identify which images were flagged as CSAM by the system,” Apple said.
  12. What if my account is wrongly flagged? “If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated,” the company said.
  13. Will slightly edited CSAM images get matched? The NeuralHash hashing technology works in such a way that identical and visually similar images result in the same hash (an illustrative perceptual-hashing example follows after this list). “An image that has been slightly cropped, resized or converted from colour to black and white is treated identical to its original, and has the same hash,” Apple said.
  14. Do other companies and platforms check for CSAM content as well? Yes. Facebook, WhatsApp, Instagram, and most email providers including Apple Mail check and report CSAM content. Cloud storage providers like Dropbox, Google, and Microsoft OneDrive also check for CSAM content. In fact, “FB reported 15.8M cases of CSAM to NCMEC in 2019; Apple reported 205. Seems likely that there is massive under-reporting of very bad stuff on iOS,” tech reporter Casey Newton pointed out.
  15. Does this break end-to-end encryption in Messages? No. Apple’s new communication safety feature (more about this below) is different from its CSAM detection system for iCloud Photos. The feature in Messages involves on-device scanning for sensitive content and alerting children, whose parents have turned on this feature, before they view such content. “None of the communications, image evaluation, interventions, or notifications are available to Apple,” the company said.
  16. Does this feature prevent children in abusive homes from seeking help? “The communication safety feature applies only to sexually explicit photos shared or received in Messages. Other communications that victims can use to seek help, including text in Messages, are unaffected,” the company said.
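
On question 13: below is a small illustration of perceptual hashing using a classic “average hash” (aHash), on the assumption that it behaves enough like a perceptual hash to make the point. It is not Apple’s NeuralHash, and the file names are hypothetical. The idea is that a resized or black-and-white copy of an image produces the same (or a nearly identical) perceptual hash, even though a cryptographic hash of the raw file bytes would change completely.

```python
# Illustrative average hash (aHash), NOT Apple's NeuralHash: visually similar
# images (resized, converted to black and white) tend to produce identical or
# near-identical hashes, unlike a cryptographic hash of the raw file bytes.
from PIL import Image  # pip install pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    # Downscale to hash_size x hash_size grayscale, then emit one bit per pixel:
    # 1 if the pixel is brighter than the mean, 0 otherwise.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits; a small distance means visually similar images.
    return bin(a ^ b).count("1")

# Hypothetical usage:
# print(hamming_distance(average_hash("photo.jpg"), average_hash("photo_small.jpg")))
# A small value (often 0) means the two files are perceptually the same image.
```

This tolerance to small edits is also what question 6 is about: because the hash is deliberately fuzzy, researchers worry about accidental or adversarially crafted collisions, which is why Matthew Green wants the algorithm published for scrutiny.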


Other safety measures that Apple announced

Safety measures in Messages:  

  • What happens when sexually explicit images are sent or received? The Messages app will warn children and their parents when receiving or sending sexually explicit photos. When such content is received, Apple will blur the photo, warn the child, and present them with resources to help navigate the situation. Parents will have the ability to get notified if the child chooses to ignore the warning and view the image. Similar warnings and notifications will appear if the child tries to send sexually explicit photos. These features are only available for accounts set up as families (a rough sketch of this flow follows after this list).
  • Apple will not have access to the messages: Apple says it will use “on-device machine learning to analyze image attachments and determine if a photo is sexually explicit” and that the feature is “designed so that Apple does not get access to the messages.”
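
A rough sketch of that flow, with assumed type and function names (Apple has not published an API for this feature); the point is only that the classification and the warning decisions happen on the device, so nothing is sent to Apple.

```python
# Hypothetical sketch of the communication safety flow in Messages, not Apple's
# code. The classifier, account model, and choice/notification hooks below are
# all assumptions; only the overall flow is taken from Apple's description.
from dataclasses import dataclass

@dataclass
class ChildAccount:
    parental_alerts_enabled: bool  # opted into by the parent in a family account

def is_sexually_explicit(image_bytes: bytes) -> bool:
    # Stand-in for Apple's on-device machine learning classifier.
    return False

def child_chooses_to_view() -> bool:
    # Stand-in for the child tapping through the warning screen.
    return False

def handle_incoming_image(image_bytes: bytes, account: ChildAccount) -> str:
    if not is_sexually_explicit(image_bytes):
        return "display"
    # Otherwise the photo is blurred and the child is warned and shown resources.
    if child_chooses_to_view() and account.parental_alerts_enabled:
        # Parents are alerted only after the child decides to view the image.
        return "display-and-notify-parent"
    return "keep-blurred"
```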

Safety measures in Siri and Search:

  • Additional guidance will be provided: Siri and Search will help children and parents stay safe online by providing resources specific to addressing CSAM content. “For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report,” the company said.
  • Will intervene when searching for CSAM: Siri and Search will intervene when users search for queries related to CSAM by explaining to users “that interest in this topic is harmful and problematic,” the company said. Siri and Search will also provide resources to get help with the issue.

What does India’s data protection bill say about children’s safety online?

The draft Personal Data Protection (PDP) Bill, 2019 defines guardian data fiduciaries (GDFs) as entities that

  1. Operate commercial websites or online services directed at children or
  2. Process large volumes of personal data of children.

What are the responsibilities of GDFs?

  • GDFs are prohibited from “profiling, tracking or behaviourally monitoring of, or targeted advertising directed at, children”. Essentially, they cannot process children’s data in ways that can cause “significant harm” to the child.
  • GDFs are supposed to verify the age of their users, and obtain consent from their guardian or parents if the user is a “child” — anyone under 18.
  • Failure to adhere to the provisions can attract a fine of ₹15 crore, or 4% of the company’s global turnover.

In a MediaNama discussion on this topic, we examined how these fiduciaries will comply with this complex mandate. In another discussion, we considered whether there should be a blanket age of consent for using online services.

Update (10 Aug, 4:00 pm): Added clarification to existing questions in What are some other questions and concerns around this technology? and added two more questions.
