Age verification not a one-stop-shop for protecting children online
We know certain activities, places, and products can be harmful to children in the offline world. That’s why the law requires people to show their ID before buying certain products, watching certain films, or entering certain venues.
The same is true in the online world, where the possibility to connect with others, engage with content, and share personal details and media can create a hazardous environment for children.
Although many social media and video-sharing platforms impose age restrictions on who can use their services, without a human physically present to check the age of prospective users, enforcement of these restrictions is often insufficient.
The most recent company to come under the spotlight for failing to prevent people under 18 from using its platform is the content subscription service OnlyFans, where cases of the sexual exploitation and trafficking of children, and of the proliferation of self-generated Child Sexual Abuse Material (CSAM), are being uncovered.
Methods and challenges of age verification
Electronic Service Providers such as OnlyFans use a range of techniques to try to prevent people who are underage (commonly those either under 13 or under 18) from accessing their services:
- Asking prospective users to provide their date of birth
- Verifying the date of birth against existing records, such as an ID or credit card
- Cross-referencing the ID with a selfie that users must upload to prove they are the owner of the ID
- Analysing a selfie of the user with AI tools to assess their age
- Using AI to flag behaviour patterns which indicate younger users
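The first technique above, checking a self-reported date of birth, can be sketched in a few lines. This is a hypothetical minimal example (the function name, threshold, and logic are assumptions for illustration, not any platform's actual implementation), and it makes the weakness obvious: the check is only as honest as the date the user types in.

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 18  # or 13, depending on the service's terms


def is_old_enough(date_of_birth: date, today: Optional[date] = None) -> bool:
    """Naive age gate based on a self-reported date of birth."""
    today = today or date.today()
    # Subtract one year if this year's birthday hasn't happened yet
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= MINIMUM_AGE


# Because the date of birth is self-reported, a user can simply lie,
# which is why this check offers no real assurance on its own.
```

A gate like this costs nothing to implement and nothing to bypass, which is why the stronger methods in the list cross-check the claimed date against external evidence.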
As more resources are invested in age verification processes, they are becoming increasingly resilient to duping by underage users. However, no technique is bulletproof: young people can still circumvent each method by misrepresenting their age, using fake or borrowed IDs, or even using fake or borrowed faces.
On the flip side, the robustness of each method must be weighed against its reliance on sharing personal data with private and third-party companies. Many users who are old enough to use the relevant platforms are alarmed by the thought of having the details of their online behaviour linked to their official ID. When children, who cannot give informed consent to the use of their data, are involved, we need to be even more cautious.
A multi-faceted approach to keeping children safe online
This difficult balance between robustness and data privacy does not mean we should give up on developing age verification techniques, but it does mean we cannot rely on them alone. Cases of online grooming and sextortion have been rising steadily over the last few years. At the same time, as discussed in a research report by the National Centre on Sexual Exploitation, professionals are increasingly encountering cases of child-on-child abuse, with many linking these incidents to early exposure to hardcore pornography.
Protecting children online requires being realistic about the role the internet plays in their lives and minimising the risks when they do explore spaces we might try to prevent them from accessing. It also requires everyone to get involved:
- Industry working to make online spaces safer for children through safety-by-design features
- Legislators protecting children's interests and providing clarity for actors working in this space
- Educators teaching children how to use the internet safely
- Parents monitoring children's internet use and implementing parental controls on devices and apps
At INHOPE, we know all too well the harms that both children and adults can experience online, which is why we work every day to remove illegal and harmful content and help make the internet safe for everyone.
If you'd like to read more articles like this, sign up for INHOPE Insights and Events.