What is a Deepfake?
Deepfake technology is used to create images or videos of people doing or saying things they never did or said. The name combines “deep learning”, the type of Artificial Intelligence (AI) tool used to create the material, with “fake”.
To create a deepfake, the tool is fed images or videos of the activity the creator wants to portray, alongside images or videos of the person who will appear to perform it. The more data the AI is fed, the more realistic the deepfake will be.
How common are deepfakes?
Deepfake technology first came to public attention in 2017, when a Reddit user employed it to digitally create non-consensual pornography using female celebrities’ faces. Since then, the technology has spread rapidly: Deeptrace had detected 85,047 deepfake videos by December 2020, and the number of non-consensual and harmful deepfake videos produced by expert creators was doubling roughly every six months. In 2019, Deeptrace estimated that 96% of the deepfake videos circulating online contained pornographic content.
How do deepfakes affect the fight against Child Sexual Abuse Material (CSAM)?
As early as 2018, one in five surveyed police officers reported having found deepfakes in their investigations, and three quarters of those surveyed believed the prevalence of deepfakes would increase in the future.
Deepfakes present unique challenges for victim identification. The technology can be used to obscure the face of the child in material depicting genuine abuse, making identification much harder. In other cases, the face of a different child might be superimposed onto the original image, meaning law enforcement wastes time trying to identify the wrong child.
Victim identification is expected to become even harder as deepfake technology improves, meaning law enforcement agencies (LEAs) need reliable tools to determine which images and videos have been generated using deepfake technology.
Are children harmed by sexualizing images and videos?
Although a child may not be physically harmed, the creation of CSAM deepfakes is a form of sexualization of the child. Another example of this is the use of everyday images of children, such as innocently captured photos and videos of a child in the bathtub, for sexual purposes.
In a recent research report, Red Barnet, the operator of the Danish hotline, argues that placing everyday images of children in a sexualizing context violates a child’s right to be protected from being made a sexual object, a right set out in Article 34 of the United Nations Convention on the Rights of the Child. This holds regardless of whether the child ever finds out.
If a child does find out about the sexualized images, the harm can be compounded. Examples include feelings of powerlessness, anxiety about the number of people who have viewed the material, and feelings of shame and guilt, all of which can have a deep and long-lasting psychological impact.
If those in the child’s immediate circle are exposed to the material, there may also be further repercussions, such as bullying and social exclusion, or sextortion.
Deepfakes and the law
Deepfake technology illustrates why legislation must be continually updated as the tools people use to create and share CSAM change.
Many countries already include digitally generated depictions of child sexual abuse in their legal definition of CSAM, meaning it can be reported and removed from the internet like other forms of CSAM. Find out whether your hotline accepts reports of digitally generated CSAM here.
Learn how encryption affects the fight against CSAM here.
If you'd like to learn more about topics like this, click here to sign up for INHOPE Insights and Events.