Webinar Recap: How the misuse of generative AI mirrors established predatory behaviour

Last week, over 180 industry professionals joined our Expert Insights Webinar featuring Henry Adams and Danielle Williams from Resolver, a Kroll business. The session delved into the pressing issue of child safety in the era of generative AI (GenAI), highlighting the evolving threats and necessary safeguards.


Henry Adams kicked off the webinar with an overview of Resolver’s advanced capabilities, which enable tracing from content signals to individual users to entire bad-actor networks across both the surface and deep web. He then highlighted the disturbing ways predators are exploiting GenAI, sharing insights into how they exchange knowledge to misuse these technologies for malicious purposes.

Drawing parallels between traditional abuse methods and emerging GenAI threats, Henry noted both the opportunities and the challenges these technologies present. While the industry understands these risks and has strategies to mitigate them, he explained, GenAI amplifies the threats and demands a response at a far larger scale. Throughout his presentation, Henry stressed the importance of maintaining a victim-centric approach in tackling these issues.

Exploring Predatory Behaviour

Danielle Williams, Resolver's lead subject matter expert on online child endangerment, provided an in-depth exploration of Content of Interest to Predators (COITP). She explained that COITP refers to legal content that predators collect and share within their communities for their sexual gratification. Often, this includes innocent images of children, which minors may unknowingly produce due to social media trends or at the request of offenders during grooming.

Danielle highlighted how predators exploit social media platforms to gather and consume these images, often focusing on niche fetishes that are difficult to recognise and do not typically violate platform policies. While the content itself may not be overtly sexualised, the act of predators consuming it poses significant dangers.

How are bad actors utilising GenAI to create customised fetish content? Danielle explained that GenAI allows offenders to bypass direct interaction with minors, which they perceive as a safer alternative. This perception does not mitigate the risk, however: engagement with AI-generated imagery can lead to eventual contact with real children. Predators are also exchanging technical knowledge on topics such as model training, prompt optimisation, and the use of Low-Rank Adaptation (LoRA) fine-tuning to enhance the quality of generated content.

Combatting GenAI Misuse through Advanced Techniques

Both Henry and Danielle emphasised that detecting and mitigating these threats relies heavily on behaviour-based approaches. They recommended focusing on usage patterns across iterative prompts to spot predatory behaviour: by tracking these patterns, GenAI platforms can intervene in a timely manner and safeguard against exploitation more effectively. They also highlighted the need to understand the intersection of Child Sexual Abuse Material (CSAM) and GenAI. Although AI-generated CSAM does not depict real children directly, the models that produce it are often trained on real CSAM, perpetuating the victimisation of the children in that material. Preventing the creation and distribution of such synthetic content therefore remains crucial.

Danielle pointed out that both traditional and AI-generated CSAM are commercialised, fostering a market that predators exploit. Predators thrive in echo chambers where GenAI offers a false sense of safety. There is also a concerning rise in financial sextortion cases, as offenders leverage GenAI to produce blackmail material.

Policy Recommendations and Solutions

  • Behaviour-Based Detection: Focus on detecting patterns indicative of predatory behaviour across multiple prompts; a content-only approach is insufficient on its own.
  • Monitoring Financial Transactions: Track financial flows related to CSAM to disrupt predatory networks.
  • Prohibiting Harmful Collections: Update community guidelines to prohibit the creation and sharing of collections that could be exploited by predators.
  • Implementing Protections: If collections are essential to the user experience, establish safeguards to prevent their misuse.

Key Takeaways

  • Heightened Impact of GenAI: Generative AI aggravates the risks associated with child exploitation, requiring new and innovative mitigation strategies.
  • Reality of Synthetic Content: AI-generated CSAM, though synthetic, is still harmful and often relies on real CSAM for training.
  • Importance of Source Material: Understanding and controlling the source material used in AI training is essential to combatting abuse.

Henry wrapped up the session with a call to action, urging attendees to collaborate by sharing insights and intelligence and by mounting cross-platform operations. Resolver is committed to playing an active role in this collective effort.


Learn more and get in touch with Resolver here.
