
What is generative AI?

Generative Artificial Intelligence (AI) is a type of AI capable of producing various kinds of content, including text, imagery, audio, and other media.

Generative AI models use machine learning algorithms to learn the patterns and structure of their training data and then generate new data with similar characteristics. While AI has a wide variety of use cases, the main objective of generative AI is to create new, artificial content. Recently, INHOPE member hotlines have discovered bad actors using this technology to create child sexual abuse imagery that is often indistinguishable from real abuse material.

Generative AI and CSAM

Since the start of 2023, there have been increasingly frequent reports of child sexual abuse material (CSAM) generated by artificial intelligence. The Internet Watch Foundation (IWF), INHOPE's member hotline in the United Kingdom (UK), reported finding seven URLs on the open web containing suspected AI-generated CSAM. Now, after an in-depth investigation into a dark web CSAM forum, the hotline has uncovered almost 3,000 AI-generated images depicting content that is illegal under UK law.

Besides the increasing volume of this content, analysts have also discovered online 'manuals' dedicated to helping offenders refine their prompts and train AI to return more realistic results. Perpetrators can legally download everything they need to generate these images and can produce as many as they want offline, with no opportunity for detection. Various tools exist for improving and editing generated images until they look exactly like the perpetrator wants.

What are the implications of this content?

As generative AI technology is constantly evolving, we cannot foresee the full depth of its implications. What we can say for certain at this time is that this type of content has a severe impact both on victims of CSAM and on the workload of hotline analysts who process and remove this content to prevent further revictimisation. We will continue to investigate and track this content. For now, a few of the major implications include:

  • Increased volume of reports that analysts have to review.
  • Increased difficulty in identifying cases with real-life victims who need to be safeguarded.
  • Increased volume of CSAM, which can contribute to the normalisation and perpetuation of child sexual abuse.
  • Increased potential to victimise famous children and children known to perpetrators by using their pictures to create AI-generated CSAM.
  • Another route for perpetrators to profit from child sexual abuse.

Technology is continuously improving, and bad actors will continue using that to their advantage. To stay ahead of the curve, we must keep our eyes open for new developments and address emerging threats as quickly as possible.

To access the full IWF research report on how AI is being used to create CSAM, click here.

15.01.2024