AI-Generated Child Sexual Abuse Material (CSAM): A Minefield of Legal and Technical Challenges

--

Warning: The following article discusses child sexual abuse and may be disturbing for some readers.

The disturbing world of Child Sexual Abuse Material (CSAM) has entered a dangerous new dimension: the rapid growth of AI-generated images and videos. This alarming development introduces unprecedented challenges for law enforcement and raises difficult questions about prosecution and the adequacy of existing legal frameworks. The situation is further exacerbated by platforms like Meta adopting end-to-end encryption, which severely hinders investigations.

Register now at: NDAA Learning Center: Generative AI Child Sexual Abuse Images: What You Need to Know Now

Understanding the Technicalities

Unlike traditional CSAM, AI-generated material requires no actual victim in the physical world: generation tools let perpetrators create synthetic but disturbingly realistic content and tailor images to evade existing detection filters. Distinguishing a horrifyingly genuine video from one synthesized by AI can be extraordinarily difficult, posing an immense challenge to prosecuting CSAM-related offenses [1].
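For readers less familiar with how these detection filters operate: most platforms screen uploads by comparing a perceptual hash of each image against lists of hashes of known, previously identified material (systems in the vein of Microsoft’s PhotoDNA or Meta’s PDQ). The Python sketch below illustrates the principle using the open-source Pillow and imagehash packages; the hash value, distance threshold, and file path are hypothetical placeholders, not any real system’s data.

```python
# Minimal sketch of hash-list screening, the dominant class of CSAM
# detection filter. Assumes the open-source Pillow and imagehash
# packages; the hash value, threshold, and path are hypothetical.
from PIL import Image
import imagehash

# Real systems match against hash lists curated by clearinghouses such
# as NCMEC; a single made-up 64-bit perceptual hash stands in here.
KNOWN_HASHES = [imagehash.hex_to_hash("e1f0c3a596b4d287")]
MAX_DISTANCE = 8  # Hamming-distance threshold; tuning varies by system


def matches_known_material(path: str) -> bool:
    """True if the image is a near-duplicate of a known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

The limitation follows directly from the design: a freshly synthesized image has no counterpart on any hash list, so a check like this returns False regardless of how abusive the content is. Catching novel AI-generated material therefore requires classifiers rather than lookups, a far harder and more error-prone task.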

Legal Challenges

The primary difficulty in prosecuting AI-generated CSAM lies in the traditional legal frameworks that define child sexual exploitation material based on the involvement of real children. Many jurisdictions require the depiction of an actual minor for the material to be considered illegal. AI-generated content that does not directly involve or harm a specific child challenges these definitions, necessitating a reevaluation of legal statutes to encompass virtual representations that contribute to the demand for and normalization of child exploitation.

Additional legal challenges emerge from AI-generated CSAM, including:

Possession & Distribution: Prosecutors may struggle to prove that possessing or distributing synthetic images represents genuine intent to harm or a predisposition to act. Some states currently require direct links between possession and potential for real-world abuse, while others focus on content alone [2].

Production: As AI tools make manipulation simpler, prosecuting production becomes more complex. Does synthetically altering existing non-CSAM material to make it abusive meet the standard of “production”? Questions also linger around intent, and around whether the point-and-click ease of AI tools makes that element harder to establish than it was with older image-editing methods.

First Amendment Implications: Notably, in Ashcroft v. Free Speech Coalition (2002), the Supreme Court held that certain simulated depictions of child abuse constitute protected speech, provided they were not produced using real children [3]. Striking the balance between free speech protections and preventing child exploitation will be crucial for lawmakers addressing AI-generated CSAM.

The “Black Box” of Encryption

Social media platforms like Meta play a significant role in reporting CSAM. In the second quarter of 2023 alone, Meta’s platforms (Facebook, Instagram, and Messenger) generated over 3.7 million CyberTips to the National Center for Missing & Exploited Children (NCMEC). These included roughly 48,000 reports involving inappropriate interactions with children and 3.6 million involving photos and videos containing CSAM [4]. This reporting pipeline is crucial: Meta not only removes such content but also refers it to NCMEC, which liaises with law enforcement when necessary.

Alarmingly, Meta’s push for end-to-end encryption, whatever its privacy rationale, creates a massive barrier for CSAM investigations. End-to-end encryption ensures that only the communicating users can read the messages. It will prevent Meta itself from detecting real or AI-generated CSAM on its messaging platforms, eliminating a vital source of evidence for law enforcement. The UK’s National Crime Agency estimates that implementing end-to-end encryption could cost 92% of Facebook’s and 85% of Instagram’s referrals of detected child abuse [5].
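To make this “black box” concrete, the sketch below shows the core property of end-to-end encryption using the open-source PyNaCl library (the key names and message content are purely illustrative). Because private keys never leave the users’ devices, the platform relaying the message holds only ciphertext, and so has nothing it can scan, hash, or report.

```python
# Minimal sketch of end-to-end encryption with the PyNaCl library.
# Key names and message content are illustrative only.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; the platform
# only ever sees the public halves.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's
# public key; the platform relays the resulting ciphertext verbatim.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"hello")

# Only the recipient, holding the matching private key, can decrypt.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"hello"
```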

While Meta’s encryption initiative aims to enhance user privacy, the implications for child safety are profound. With the company no longer able to see the offending material, law enforcement loses access to this evidence from Meta; coupled with the rise of AI-generated CSAM, the proliferation of online child exploitation on these platforms is only expected to increase. Offenders will continue to use Meta’s platforms, unimpeded, to send illegal material and to select and groom future victims.

Policy Implications and Recommendations

To address these challenges, policymakers and prosecutors must consider several strategies:

1. Specialized Training: Invest in comprehensive training for prosecutors and law enforcement in the complexities of AI-generated CSAM. This should cover technology, forensic tools, and understanding the psychological harm and societal impact of these synthetic materials. The National District Attorneys Association offers extensive specialized training sessions to equip law enforcement with the knowledge and skills needed to navigate these complex topics.

2. Update Legal Definitions: Legislative bodies need to address shortcomings in existing statutes around CSAM by explicitly including AI-generated material. Laws could focus on the harm such content generates rather than purely on the method of creation.

3. Collaborative Approach: Prosecutors need open lines of communication with platforms implementing end-to-end encryption, discussing responsible avenues for reporting and detection that preserve privacy features without completely crippling law enforcement action in sensitive cases [6].

4. Investing in Technology: Support the development of advanced technological tools that can assist in the identification and investigation of AI-generated CSAM without infringing on privacy rights.

5. International Cooperation: Due to the borderless nature of the Internet, collaboration between states and countries is critical. Laws and investigative techniques need harmonization to tackle this challenge comprehensively on a global scale.

Facing the New Reality

AI-generated CSAM presents a multifaceted challenge that requires a concerted effort from legal professionals, policymakers, technologists, and the broader community. By updating legal frameworks, fostering international collaboration, and leveraging technology responsibly, it is possible to address the complexities of prosecuting AI-generated CSAM while respecting privacy and promoting a safer digital environment for all.

Don’t Miss This Urgent Panel Discussion

To delve deeper into these critical issues, we’re hosting a free online panel discussion on March 28, 2024, designed specifically for prosecutors: Generative AI Child Sexual Abuse Images: What You Need to Know Now.

A panel of experts will address recent developments and trends in the creation and distribution of AI-generated CSAM, including its impact on investigations and prosecutions. Discussions will cover legal challenges, legislative efforts, and the delicate balance between addressing CSAM and respecting First Amendment rights. A Q&A session will be offered at the end for all your pressing questions.

March 28, 2024: 3:00–4:30 pm ET.
Free for members and non-members.

Presenters:

· Angela Bruson, Deputy District Attorney, Riverside County District Attorney’s Office
· Ross Goldman, Senior Policy and Appellate Counsel, Child Exploitation and Obscenity Section
· Robert Leazenby, Raven, Associate Vice-President of NW3C

Don’t miss this opportunity to gain valuable insights and engage in crucial discussions. Register now to secure your spot!

REGISTER NOW: NDAA Learning Center: Generative AI Child Sexual Abuse Images: What You Need to Know Now

Citations

[1] TechPolicyPress: LAION and the Challenges of Preventing AI-Generated CSAM: https://techpolicy.press/laion-and-the-challenges-of-preventing-ai-generated-csam

[2] National Center for Missing & Exploited Children: https://www.missingkids.org/theissues/csam

[3] Ashcroft v. Free Speech Coalition (2002): https://www.casebriefs.com/blog/law/constitutional-law/constitutional-law-keyed-to-cohen/restrictions-on-time-place-or-matter-of-expression/ashcroft-v-the-free-speech-coalition/

[4] Meta Transparency Center: https://transparency.fb.com/en-gb/ncmec-q2-2023/

[5] National Crime Agency: NCA response to Meta’s rollout of end-to-end encryption

[6] Bipartisan Policy Center: Legal Challenges Against Generative AI: Key Takeaways: https://bipartisanpolicy.org/blog/legal-challenges-against-generative-ai-key-takeaways/

--


Written by National District Attorneys Association

The National District Attorneys Association (NDAA) is the oldest and largest national organization representing state and local prosecutors in the country.
