We missed this earlier: On September 5, top prosecutors from all 50 US states wrote a letter urging Congress to strengthen tools and regulations to fight AI-generated child sexual abuse images. The attorneys general asked the Republican and Democratic leaders of the House and Senate to “establish an expert commission to study the means and methods of AI that can be used to exploit children specifically.”
The letter, titled ‘Artificial Intelligence and the Exploitation of Children’, asks for existing regulations on child sexual abuse material to be extended to cover AI-generated images. It identifies the exploitation of children through AI technology as an underreported and understudied aspect of the AI problem, citing increasing incidents of fake kidnappings and AI-generated child sexual abuse material (CSAM) in the country to emphasize the gravity of the dangers AI poses to children.
Scenarios of harm
The letter outlines several scenarios in which AI poses dangers to children:
- Creation of deepfakes: These are instances where AI or machine learning is used to digitally fabricate or modify videos and images, potentially involving a child who has already experienced abuse. This can also include manipulating a real child’s likeness, such as a photo sourced from social media, to depict abusive content.
- Use of the image of a child who has never experienced assault or exploitation as if they had: The letter raises concerns about whether current US laws adequately address this virtual scenario, since no actual exploitation occurred even though the child’s image is being defamed and exploited.
- Complete digital fabrication of an imaginary child’s image for the production of pornography: The letter notes that many might rationalize that this cannot harm anyone since no real individual is depicted, but argues that it nonetheless creates demand for an industry that victimizes children.
Why it matters
The US is not the only country concerned about AI-generated child sexual abuse images. On September 8, Australia’s eSafety Commissioner approved a new search code under which search engines, including Google, Bing, Yahoo, and DuckDuckGo, will have to take measures to minimize the presence of child abuse content in search results and prevent AI-generated “synthetic” versions (deepfakes) of such material. Online platforms themselves (Meta, PornHub, OnlyFans) adopted a tool earlier this year to curb the sharing of sexually explicit images of children.
With the constantly expanding reach of AI, new concerns are emerging in every area. Apart from the European Union, no other jurisdiction has yet formulated specific regulations for the AI industry. Given the dynamic nature of the industry and its reach, it is understandable that nations are taking their time. But on issues such as the spread of CSAM, the US prosecutors put it rightly in their letter: it is a race against time to protect children from the dangers of AI. Calling on US federal lawmakers to act, they wrote, “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”
Note: You can read the letter here.