
Webinar Recap: How AI is being abused to create CSAM
15.03.2024

During our second webinar of the season, we were joined by ‘Alex’ from the Internet Watch Foundation (IWF) as well as 400+ attendees curious to learn about the evolving threat of AI misuse. Alex, an experienced analyst, walked us through how AI models are being exploited to create harmful content and shared insights from the IWF's work in handling these reports.

The IWF was one of the first INHOPE hotlines to receive and process reports containing AI-generated content. They have already assessed thousands of AI-generated images of child sexual abuse, over 90% of which included “realistic” imagery. “Because AI allows you to create anything you want, people assume that AI content will tend to be extreme,” Alex shared. “In most cases, however, AI content seems to mirror ‘real’ CSAM trends.” To better understand this evolving threat, the IWF deepened their research efforts, examining different AI models and analysing dark-web conversations among perpetrators to learn how and why they create or consume this material.

How are Perpetrators Using AI?

Alex guided us through different ways in which perpetrators are using AI to create CSAM:

  • Text-to-image Base Models: In most cases, perpetrators use readily available free tools to generate harmful material.
  • Open-source Models: Certain text-to-image AI tools offer open-source code that can be modified freely by users. These models can be used locally/offline, complicating the detection of harmful conduct.
  • Fine-tuned Models: To overcome certain AI model limitations, some perpetrators utilise fine-tuning tools, which are trained on a specific type of content to generate more 'accurate' results. Sometimes, this includes training on CSAM images.
  • Alternative Avenues: Perpetrators also abuse image-to-image generation systems, such as 'nudifying' tools, used to create a synthetic nude image based on a real picture.
  • Commercial AI abuse: On the open web, the IWF has discovered perpetrator networks selling subscriptions to AI-generated CSAM. In many cases, perpetrators have been found to offer services to generate specific (‘bespoke’) AI-generated CSAM on request.
  • Perpetrator Forums: Similar to paedophile manuals that provide instructions on abusing children and accessing ‘real’ CSAM, the IWF has reported instances of perpetrators sharing tips on how to write prompts and use text-to-image models to generate more ‘accurate’ results.


Why is it Harmful?

It is commonly argued that AI-generated CSAM might have positive implications by satisfying perpetrators' fantasies without harming real children. However, as Alex emphasised, these systems are often trained on real abusive content, perpetuating the cycle of abuse and even normalising hands-on abuse of children. "For the IWF this material, rather than mitigating risks from perpetrators to children, actually makes them more likely to act on their fantasies,” said Alex.

There's also concern about how AI-generated content will impact sextortion cases. The IWF is already seeing more self-reporters, with young people reporting images edited or 'nudified' with AI tools. This reflects the global rise in sextortion cases and raises questions about how AI can facilitate other forms of online abuse.

Identifying AI-generated Content

The realistic nature of AI-generated content makes it difficult to distinguish from 'real' material, posing challenges for analysts and law enforcement agents handling reports. “In many jurisdictions, AI CSAM will not have the same legal status,” Alex explained, “but in the UK, AI-generated CSAM is criminal and treated the same as 'real' images.” This means that, in the UK, all criminal AI-generated images are tagged as AI when they are passed on to national law enforcement and international partners.

But how can hotline analysts tell when content was produced by AI? Each text-to-image AI model has its own method of digital watermarking, Alex explained. By analysing an image's metadata, certain tools can identify AI-generated material. Unfortunately, these systems aren't always reliable and can be fooled by perpetrators removing or otherwise modifying the metadata. The IWF's AI CSAM Report outlines further considerations for detection and enforcement and proposes potential solutions to improve detection efforts in the future.
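As a rough illustration of the metadata-based checks Alex described, the sketch below inspects an image's embedded metadata for traces left behind by some text-to-image tools. It assumes the Python Pillow library; the marker keys listed are illustrative assumptions rather than an authoritative list, and, as noted above, such checks fail once metadata has been stripped or altered.

```python
# A minimal sketch of metadata-based detection, assuming the Pillow library
# (pip install Pillow). The marker keys below are illustrative examples of
# fields some text-to-image tools write, not an exhaustive or official list.
from PIL import Image

# Hypothetical watchlist: metadata keys sometimes left behind by generators.
AI_MARKER_KEYS = {"parameters", "prompt", "workflow"}

def find_ai_markers(path: str) -> list[str]:
    """Return metadata fields suggesting an image may be AI-generated."""
    hits = []
    with Image.open(path) as img:
        # Format-specific metadata (e.g. PNG text chunks) lands in img.info.
        for key in img.info:
            if key in AI_MARKER_KEYS:
                hits.append(f"info:{key}")
        # The EXIF 'Software' tag (0x0131) occasionally names the generator.
        software = img.getexif().get(0x0131)
        if software:
            hits.append(f"exif:Software={software}")
    return hits

if __name__ == "__main__":
    import sys
    markers = find_ai_markers(sys.argv[1])
    print(markers if markers else "no known AI markers found")
```

Crucially, the absence of such markers proves nothing: as Alex noted, metadata is trivially removed, which is why the IWF's report also considers more robust approaches to detection.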

The Future of AI

Over the past two years, we have witnessed the immense progress generative AI has made in creating entirely realistic imagery. While text-to-image systems currently pose the main threat in the online child protection landscape, we must be prepared for the fast development of text-to-video systems. "As we saw with the progression of image generator models, the growth of these tools is rapid," Alex explained. "It’s safe to say that text-to-video will experience the same rapid development."


Visit the IWF's website for more insights from their Report on AI-generated CSAM.
