By implementing an on-device matching process, Apple wants to detect and flag child sexual abuse material. But can this technology be used by governments for other purposes?

Apple on Thursday announced three new measures coming to its operating systems this fall that aim to limit the spread of Child Sexual Abuse Material (CSAM) and protect children from predators:

- CSAM detection in iCloud Photos: Using advanced cryptographic methods, Apple will detect if a user's iCloud Photos library contains collections of known CSAM content and report this to the National Center for Missing and Exploited Children (NCMEC), which works with law enforcement agencies. (A rough sketch of this kind of on-device matching appears below.)
- Safety measures in Messages: The Messages app will warn children about sensitive content and allow parents to receive alerts if such content is sent or received.
- Safety measures in Siri and Search: Siri and Search will intervene when users try to search for CSAM-related topics, and will provide parents and children with expanded information if they encounter unsafe situations.

Why it matters: Last year, it was reported that India leads in the online generation of CSAM. Against that backdrop, Apple's latest measures appear commendable and harmless, but the technology used to implement them could evolve to serve other, privacy-invasive purposes. There will now be pressure on Android to do the same, and it opens the door to all kinds of surveillance tools and content-removal requests from governments. For example, the Indian government could use the same technology to ask platforms like WhatsApp to proactively remove photos that are critical of it.

How does CSAM detection in iCloud…
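At a high level, on-device matching of this kind computes a fingerprint (hash) of each photo, compares it against a database of hashes of known CSAM, and only flags an account once enough matches accumulate. Apple's published design uses a perceptual NeuralHash combined with private set intersection and threshold secret sharing; the Swift sketch below is only a simplified illustration of the hash-and-threshold idea, with SHA-256 standing in for a perceptual hash and an arbitrary, made-up threshold.

```swift
import Foundation
import CryptoKit

// Simplified illustration only: not Apple's actual protocol.
// A real system would use a perceptual hash so that resized or
// re-encoded copies of the same image still match.

/// Returns a hex fingerprint of raw image bytes (SHA-256 as a stand-in).
func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

/// Counts how many photos match the known-hash database and flags the
/// library only once a threshold is crossed, limiting the impact of any
/// single false positive.
func shouldFlagLibrary(photos: [Data],
                       knownHashes: Set<String>,
                       threshold: Int) -> Bool {
    let matches = photos.filter { knownHashes.contains(fingerprint(of: $0)) }.count
    return matches >= threshold
}

// Example usage with made-up data.
let knownHash = fingerprint(of: Data([0x01, 0x02, 0x03]))
let library: [Data] = [Data([0x01, 0x02, 0x03]), Data([0x09, 0x09])]
print(shouldFlagLibrary(photos: library, knownHashes: [knownHash], threshold: 1)) // true
```

The threshold is the key safeguard Apple points to: no individual photo match is reported on its own, and the account is only surfaced for human review once a number of matches has accumulated.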
