Safety by Design for Generative AI: Preventing Child Sexual Abuse

Issue Overview

Offline and online sexual harms against children have been accelerated by the internet. The child safety ecosystem is already overtaxed. In 2022, reports to the National Center for Missing & Exploited Children (NCMEC) contained over 88 million files of child sexual abuse material (CSAM) and other files related to child sexual exploitation; in 2023, reports contained over 100 million such files.

The internet has its roots in information sharing. Starting in the 1960s, government researchers explored the use of networked computing to share information, and the internet as it exists today grew out of efforts to standardize communication protocols between different networks. Cybersecurity concepts aimed at preventing adversarial misuse were embedded throughout the internet's ideation and development. However, similar concepts around preventing adversarial misuse to scale harms against children did not enter the broader conversation until the 1990s, with debates around the Communications Decency Act. The consequences are self-evident: the internet turned the problem of CSAM from one that was relatively contained (limited to small networks via the postal service) into one where platforms, hosting services, internet service providers and more all face the reality that CSAM can – and does – circulate on their services. Many organizations have pursued transparent and collaborative governance of their services by preventing, detecting, removing and reporting CSAM. Yet this work remains an uphill battle.

We are at another, similar crossroads with generative AI. With this technology, human-like text, photorealistic images and videos, music, art and other content can be generated automatically. A person can use these models to create content at scale regardless of their technical expertise, unlocking the ability for a single individual to easily create and distribute millions of pieces of content.

In this moment, generative AI holds the potential for numerous benefits to consumers across diverse applications. These benefits extend to improving child safety protections: for example, existing detection technologies can be updated to use new deep learning architectures, while automatic image and text summarization can accelerate prioritization and triage.

However, misuse of this same technology has profound implications for victim identification, victimization, prevention and abuse proliferation.

Looking at each of these separately, misuse of generative AI technologies:

  • Impedes victim identification
    • Bad actors use generative AI to create AI-generated child sexual abuse material (AIG-CSAM), misusing the models they have access to – broadly shared models trained on minimally curated datasets. Victim identification is already a needle-in-a-haystack problem for law enforcement: sifting through huge amounts of content to find the child in active harm’s way. The expanding prevalence of AIG-CSAM is growing that haystack even further, making victim identification more difficult.
  • Creates new ways to victimize and re-victimize children
    • This same technology is used to newly victimize children: bad actors can now easily generate new abuse material of a child or sexualize benign imagery of a child. Bad actors use this technology to perpetrate re-victimization, primarily by taking broadly shared models and fine-tuning them on existing child abuse imagery to generate additional explicit images of those children. They collaborate to make these images match the exact likeness of a particular child while producing new poses, acts and egregious content such as sexual violence. These images depict both identified and unidentified survivors of child sexual abuse. Bad actors also use this technology to scale their grooming and sexual extortion efforts, using generative AI to quickly produce the content needed to target a child. The technology is further used in bullying scenarios, where children use sexually explicit AI-generated imagery of other children to bully and harass them.
  • Reduces social and technical barriers to sexualizing minors
    • The ease of creating AIG-CSAM, and the ability to do so without the victim’s involvement or knowledge, may perpetuate the misconception that this content is “harmless”. Bad actors use this technology to produce AIG-CSAM and other sexualizing content of children, as well as to engage in fantasy sexual role-play with generative AI companions that mimic the voices of children. Research suggests that bad actors who view CSAM may have their fantasies reinforced by the abuse imagery and may be at heightened risk of committing hands-on abuse.
  • Enables information sharing for abuse proliferation
    • Bad actors use generative AI models (particularly text and image-editing models) in abuse proliferation. Models can support bad actors by providing instructions for hands-on sexual abuse of a child, information on coercive control, details on destroying evidence and manipulating artifacts of abuse, or advice on ensuring victims don’t disclose.

This misuse, and its associated downstream harm, is already occurring and warrants collective action today. The need is clear: we must mitigate the misuse of generative AI technologies to perpetrate, proliferate and further sexual harms against children. This moment requires a proactive response. The prevalence of AIG-CSAM is still small, but it is growing. Now is the time to act and put child safety at the center of this technology as it emerges. Now is the time for Safety by Design.
