Can Someone Be Prosecuted for Creating, Receiving, Possessing, or Publishing This Material?
The digital landscape is rapidly evolving, and with it, the challenges faced by law enforcement and child safety organizations. One of the most concerning developments is the escalating volume of sexually explicit images of children generated by artificial intelligence. Child safety experts are sounding the alarm, warning that this surge of AI-generated content is overwhelming existing resources and hindering efforts to identify and rescue real-life victims of child sexual abuse.
The Bleak Reality: AI’s Impact on Child Exploitation
Prosecutors and child safety groups dedicated to combating crimes against children are witnessing a disturbing trend. AI-generated images have achieved such realism that it’s becoming increasingly difficult to distinguish them from authentic images depicting real children subjected to actual harm. A single AI model can churn out tens of thousands of these images in a short period, leading to a massive influx of content on both the dark web and the mainstream internet.
Drowning in a Sea of Content
Even before the advent of AI, authorities were struggling to manage the sheer volume of child sexual abuse material (CSAM) created and shared online. Millions of reports are filed annually, stretching the resources of safety groups and law enforcement agencies thin.
A Department of Justice prosecutor, speaking anonymously, emphasized the gravity of the situation: “We’re just drowning in this stuff already. From a law enforcement perspective, crimes against children are one of the more resource-strapped areas, and there is going to be an explosion of content from AI.”
AI: A Predator’s Multitool
The National Center for Missing and Exploited Children (NCMEC) has already documented various ways predators are utilizing AI. These include:
- Generating Child Abuse Imagery: Entering text prompts to create explicit images of children.
- Altering Existing Content: Modifying previously uploaded files to make them sexually explicit and abusive.
- Creating New Content from Existing CSAM: Uploading known CSAM and using AI to generate new images based on those originals.
- Seeking Guidance from Chatbots: Using AI chatbots to obtain instructions on how to find and harm children.
Legal Loopholes and Challenges to Prosecution
The use of AI to alter images of child victims raises concerns about offenders attempting to evade detection. While possessing depictions of child sexual abuse is a federal crime in the US, proving the authenticity of images in court can be challenging.
The DoJ prosecutor explains: “When charging cases in the federal system, AI doesn’t change what we can prosecute, but there are many states where you have to be able to prove it’s a real child. Quibbling over the legitimacy of images will cause problems at trials. If I were a defense attorney, that’s exactly what I’d argue.”
Currently, many states lack laws specifically prohibiting the possession of AI-generated sexually explicit material depicting minors. Furthermore, the very act of creating such images is often not covered by existing legislation.
The Erosion of Hash Matching: A Critical Defense Under Threat
One of the primary tools used to identify known images of child sexual abuse is hash matching. This technique relies on hash values: unique digital fingerprints computed from an image's data. The NCMEC maintains a database of more than 5 million hash values that images can be compared against, providing a crucial resource for law enforcement.
Tech companies often run software that monitors online activity, intercepting and blocking known images of child sexual abuse based on their hash values and reporting the uploaders to law enforcement. However, material without a known hash value, such as newly created content, is invisible to this kind of scanning. Critically, any edit or alteration to an image, including changes made with AI, produces a different hash value, rendering the existing system less effective.
Marcoux warns of the potential consequences: “Hash matching is the front line of defense. With AI, every image that’s been generated is regarded as a brand-new image and has a different hash value. It erodes the efficiency of the existing front line of defense. It could collapse the system of hash matching.”
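To make the mechanism concrete, here is a minimal sketch of exact hash matching in Python. It assumes a small, hypothetical set of known hash values (`KNOWN_HASHES` and `is_known` are illustrative names, not part of any real system); production pipelines operate against databases of millions of entries and often combine cryptographic and perceptual hashes, but the core idea is the same: a fingerprint lookup against a catalogue of known material.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for a vetted database of known hash values.
# Real systems query maintained databases such as NCMEC's, not a hard-coded set.
KNOWN_HASHES: set[str] = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder entry
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest (the file's 'digital fingerprint')."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known(path: Path) -> bool:
    """Return True if the file's fingerprint matches a known hash value."""
    return sha256_of_file(path) in KNOWN_HASHES
```

Because a cryptographic hash changes completely if the input changes at all, this exact-match approach only catches files that are byte-for-byte identical to previously catalogued material, which is precisely the weakness Marcoux describes.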
The Tipping Point: The Rise of Generative AI
Child safety experts trace the surge in AI-generated CSAM back to late 2022, coinciding with the public release of OpenAI’s ChatGPT and the broader adoption of generative AI technologies. The earlier release of LAION-5B, an open dataset cataloguing more than 5 billion images used to train AI models, also played a significant role.
Researchers at Stanford University discovered that this dataset inadvertently included previously detected images of child sexual abuse. As a result, AI models trained on it were capable of producing CSAM. Experts emphasize that real children are likely harmed in the creation of almost all AI-generated CSAM.
Tech Companies Respond
OpenAI acknowledges the issue and says it reviews and reports known CSAM to the NCMEC when users upload it to its image tools. A spokesperson for the company said, “We have made significant efforts to minimize the potential for our models to generate content that harms children.”
A Call to Action
The proliferation of AI-generated CSAM presents a significant challenge to law enforcement, child safety organizations, and society as a whole. Addressing this issue requires a multi-faceted approach, including:
- Strengthening Legislation: Enacting laws that specifically criminalize the creation and possession of AI-generated CSAM.
- Developing AI Detection Tools: Investing in research and development to create AI-powered tools that can identify and flag AI-generated CSAM.
- Enhancing Collaboration: Fostering collaboration between law enforcement, tech companies, and child safety organizations to share information and develop effective strategies.
- Raising Awareness: Educating the public about the dangers of AI-generated CSAM and how to report it.
The fight against child sexual abuse in the digital age requires vigilance, innovation, and a commitment to protecting vulnerable children from harm.
The scale of the problem is already staggering. In 2023, the NCMEC received 36.2 million reports of child abuse online, a 12% increase over the previous year. While the majority of those reports involved real-life photos and videos of sexually abused children, a disturbing trend has emerged within them: the rise of AI-generated CSAM.
The Scope of the Problem: AI’s Role in Child Exploitation
The NCMEC received 4,700 reports of images or videos depicting the sexual exploitation of children created by generative AI. This figure, while seemingly small compared to the overall number of reports, represents a significant and growing threat. AI allows predators to create thousands of new CSAM images with minimal effort and time, overwhelming the already strained resources of child safety organizations and law enforcement.
The NCMEC’s Concerns: A Call for Accountability
The NCMEC has voiced strong concerns about the lack of proactive measures taken by AI companies to prevent or detect the production of CSAM. Shockingly, only five generative AI platforms submitted reports to the organization last year. A significant portion (over 70%) of the reported AI-generated CSAM originated from social media platforms, where the material was shared. This indicates a critical gap in the AI industry’s responsibility to monitor and prevent the misuse of its technology for child exploitation.
Fallon McNulty, director of the NCMEC’s CyberTipline, highlighted the issue, stating, “There are numerous sites and apps that can be accessed to create this type of content, including open-source models, which are not engaging with the CyberTipline and are not employing other safety measures, to our knowledge.”
Meta’s Encryption Plans: A Double-Edged Sword
Meta’s decision to encrypt Facebook Messenger and its plans to encrypt messages on Instagram have sparked controversy among child safety groups. While encryption can enhance privacy, it also raises concerns that many of the millions of CSAM cases occurring on these platforms each year will go undetected.
Adding to the complexity, Meta has integrated generative AI features into its social networks, with AI-generated pictures becoming increasingly popular. Although Meta has policies against child nudity, abuse, and exploitation, including CSAM created using GenAI, the potential for misuse remains a significant challenge. A Meta spokesperson stated, “We report all apparent instances of CSAM to NCMEC, in line with our legal obligations.”
The Federal Bureau of Investigation (FBI) issued a warning in March stating that producing child sexual abuse material using AI is illegal. But the problem runs deeper than the creation of these disturbing images: the Stanford Internet Observatory investigation noted above found hundreds of known images of child sexual abuse material in a dataset used to train AI text-to-image generation models.
The implications of this discovery are far-reaching and deeply concerning. Lisa Thompson, the vice president and director of the research institute at the National Center on Sexual Exploitation, described the issue as a “crisis.” She went on to say, “Now imagine that you’re a survivor of child sexual abuse exploitation. Has your abuse been used to train AI … that would be so haunting to even think about, to contemplate.”
The misuse of technology to create and distribute these disturbing images is not only causing harm to the victims but also making it more difficult for law enforcement officials to rescue real victims. Investigators are wasting valuable time and resources trying to identify and track down exploited children who do not even exist.
In response to this crisis, lawmakers are taking action. A flurry of legislation has been passed to ensure that local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse imagery, according to a review by the NCMEC.
The U.S. Supreme Court struck down a federal ban on virtual child sexual abuse material in 2002. However, a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” This law, which the Justice Department says has been used in the past to charge cartoon imagery of child sexual abuse, specifically notes that there’s no requirement “that the minor depicted actually exist.”