
The importance of human content moderators

Trust and Safety (T&S) agents are the first and most important line of defence for any online service provider. They enforce community guidelines by detecting and removing harmful and illegal material.

Artificial Intelligence (AI) and Machine Learning (ML) based approaches exist to assist Trust and Safety agents in their work. However, while the development of AI and ML is crucial in the fight against Child Sexual Abuse Material (CSAM), these technologies are not error-free and often cannot detect the more nuanced signs of Child Sexual Abuse (CSA). Grooming, for instance, is such a gradual process that recognising recurring patterns can be incredibly challenging.

This is why we aim to provide human content moderators with a specific set of guidelines they can follow, unifying our efforts in the fight against CSAM. Quantitative analysis of previous chatroom conversations between groomers and their victims shows certain similarities that T&S agents can look out for to intervene in grooming attempts at an early stage.

Best practices to detect grooming activity

While perpetrators use a variety of approaches, such as flattery, bribery, or threats, there are certain behavioural patterns that groomers tend to exhibit when targeting children online.

One of the first stages of the grooming process is deceptive trust development, during which the perpetrator compels the child to share personal information, such as their age, name, place of residence, or relationship status. Analysts are advised to use keyword searches to screen for the disclosure of personal information between users. However, as sharing personal information is not inherently a sign of criminal intent, it can be very difficult to identify predatory behaviour at this stage.
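As an illustration of what such a keyword screen could look like, here is a minimal sketch in Python. The `Message` structure and the regular-expression patterns are hypothetical placeholders, not INHOPE tooling or vetted keyword lists; a real deployment would work from curated lists and operate within the platform's legal and privacy constraints.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns for requests for personal information; a real
# system would rely on vetted, regularly updated keyword lists.
PERSONAL_INFO_PATTERNS = [
    re.compile(r"\bhow old are you\b", re.IGNORECASE),
    re.compile(r"\bwhere do you live\b", re.IGNORECASE),
    re.compile(r"\bwhat(?:'s| is) your (?:real )?name\b", re.IGNORECASE),
    re.compile(r"\bdo you have a (?:boyfriend|girlfriend)\b", re.IGNORECASE),
]

@dataclass
class Message:
    sender: str
    text: str

def flag_personal_info_requests(conversation: list[Message]) -> list[Message]:
    """Return messages that appear to request personal information.

    A match alone does not prove criminal intent; flagged messages
    are surfaced for review by a human T&S agent, not acted on
    automatically.
    """
    return [
        msg for msg in conversation
        if any(pattern.search(msg.text) for pattern in PERSONAL_INFO_PATTERNS)
    ]
```
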
Perpetrators use different approaches, such as flattery or pressure, to initiate sexual conversation or to elicit intimate content from their victim. Even though the themes of persuasion differ, there is a common characteristic that most groomers share: recent research by the University of Gothenburg found that 98% of groomers reveal their intentions within the first two days of communication.

With this information, T&S agents can detect suspected grooming behaviour by using keyword searches to scan for sexual requests within the first two days of contact. Detecting inappropriate activity at this stage is crucial, as groomers tend to become more demanding with prolonged contact, convincing the child to either meet them in person or create intimate content of themselves, which is later used for extortion or distribution.
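A minimal sketch of how that two-day window might be applied in practice, again with a hypothetical message structure and illustrative patterns rather than a real detection list:

```python
import re
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative patterns only; real systems rely on curated, vetted lists.
SEXUAL_REQUEST_PATTERNS = [
    re.compile(r"\bsend (?:me )?a (?:pic|picture|photo)\b", re.IGNORECASE),
    re.compile(r"\bturn on your (?:cam|camera|webcam)\b", re.IGNORECASE),
]

# The two-day window reflects the research finding cited above.
EARLY_CONTACT_WINDOW = timedelta(days=2)

@dataclass
class TimedMessage:
    sender: str
    text: str
    sent_at: datetime

def flag_early_sexual_requests(
    conversation: list[TimedMessage],
) -> list[TimedMessage]:
    """Flag sexual requests made within the first two days of contact."""
    if not conversation:
        return []
    # Anchor the window at the first message exchanged between the users.
    first_contact = min(msg.sent_at for msg in conversation)
    cutoff = first_contact + EARLY_CONTACT_WINDOW
    return [
        msg for msg in conversation
        if msg.sent_at <= cutoff
        and any(pattern.search(msg.text) for pattern in SEXUAL_REQUEST_PATTERNS)
    ]
```
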

The key takeaways

While ML technology, as well as AI detection, is becoming increasingly effective, human behaviour, especially when it comes to criminal activity, does not always follow an identifiable pattern. This is why human T&S agents are crucial in keeping online spaces safe. To support them, we need to keep analysing CSA online, especially the more subtle forms of abuse that are harder to detect, so that we can provide content analysts with the best possible resources.

To achieve this, we need to encourage data and knowledge sharing, as well as deep-dive research, across different stakeholders. Working partnerships between Law Enforcement Agencies (LEA) and NGOs are crucial in creating a more efficient process.

If you suspect that your child or someone you know is a victim of grooming, reach out to your national helpline here.

Read more about how to help keep your child safe online.
