
Prosecutors across US are urging Congress to strengthen tools against AI-generated child sexual abuse images

The letter identifies the exploitation of children through AI technology as an underreported and understudied aspect of the AI problem and references increasing incidents of fake kidnappings and AI-generated child sexual abuse material.

We missed this earlier: On September 5, top prosecutors from all 50 US states wrote a letter urging Congress to strengthen tools and regulations to fight AI-generated child sexual abuse images. The attorneys general asked the Republican and Democratic leaders of the House and Senate to “establish an expert commission to study the means and methods of AI that can be used to exploit children specifically.”

The letter, titled ‘Artificial Intelligence and the Exploitation of Children’, asks for existing regulations on child sexual abuse material to be extended to explicitly cover AI-generated images. It identifies the exploitation of children through AI technology as an underreported and understudied aspect of the AI problem, and references increasing incidents of fake kidnappings and AI-generated child sexual abuse material (CSAM) in the country to emphasize the gravity of the dangers AI poses to children.

Scenarios of harm:

The letter outlines three scenarios in which AI endangers children:

  • Creation of deepfakes: These are instances where AI or machine learning is used to digitally fabricate or modify videos and images, potentially involving a child who has already experienced abuse. This can also include the manipulation of a real child’s likeness, such as a photo sourced from social media, to depict abusive content.
  • Instances where a child has never experienced assault or exploitation, but their image is used as if they had: The letter raises concerns about whether current US laws adequately address the virtual aspect of this situation, since no actual exploitation occurred even though the child’s image is being defamed and exploited.
  • Complete digital fabrication of an imaginary child’s image for the production of pornography: Explaining this phenomenon, the letter notes that many might rationalize that this cannot harm anyone since no real individual is involved, but it will nonetheless create demand for an industry that victimizes children.

Why it matters

The US is not the only country concerned about AI-generated child sexual abuse images. On September 8, Australia’s eSafety Commissioner approved a new search code under which search engines, including Google, Bing, Yahoo, and DuckDuckGo, must take measures to minimize child abuse content in search results and prevent AI-generated “synthetic” versions (deepfakes) of such material. Online platforms themselves (Meta, PornHub, OnlyFans) adopted a tool earlier this year to curb the sharing of sexually explicit images of children.

With the constantly expanding reach of AI, new concerns arise in every area. Apart from the European Union, no major jurisdiction has yet formulated specific regulations for the AI industry. Given the dynamic nature of the industry and its reach, it is understandable that governments are taking their time. But when it comes to issues such as the prevalence of Child Sexual Abuse Material (CSAM), the US prosecutors rightly put it in their letter that it is a race against time to protect children from the dangers of AI. Calling on US federal lawmakers, they wrote, “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

Note: You can read the letter here



MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
