October 26, 2023 - 4:00pm

A recent investigation by the Internet Watch Foundation has found that paedophiles are using AI to create images of celebrities as children, as well as to manipulate photographs of child actors and to generate scores of images of real victims of child sex abuse. The report details how 11,000 AI images were shared on a single darknet child abuse website, 3,000 of which would be illegal under UK law.

More than one in five of these images were classified as Category A, the most serious kind of imagery, and more than half depicted primary school-aged children. In another survey, 80% of respondents on a dark web paedophile forum admitted that they had used, or intended to use, AI tools to create child sexual abuse images.

Some have tried to argue that simulated images could offer a “less harmful” alternative because children are not being assaulted or exploited “in reality”. Yet these images normalise predatory behaviour, create demand for a fundamentally exploitative industry, complicate police investigations and waste precious resources, all while using the faces and likenesses of real children.

Predators use real videos and images of child sex abuse to “train” AI programs to create more content, or use social media images of children to create “deepfake” lookalikes. In one small town in Spain, more than 20 girls aged between 11 and 17 had AI-generated naked photographs of themselves circulated; the images weren’t “real”, but that hardly absolves those responsible.

Without regulation, where does this go next? Amazon, Instagram and Etsy have all been criticised for allowing sellers to advertise sex dolls that approximate the size and features of children. If AI can simulate child pornography, then it could also use recordings of real children to animate sex dolls or robots with their voices and vocabulary.

Yet despite these obvious dangers, “sharenting” (the practice of parents publicising content about their children on social media) is more popular than ever. Most babies now make their digital debut within an hour of birth; parents then go on to post an average of 1,500 images of their children on social media before they even start primary school.

Around one in four parents have a public profile, meaning anyone can see their posts, while 80% of parents admit either to not knowing all their social media friends or to having followers they have never met face to face. For years celebrities have blurred out their children’s faces in paparazzi shots because of safety fears, and yet now everyone from “mummy influencers” with millions of followers to regular people with a few dozen online friends shares the intimate ins and outs of their children’s lives with impunity.

Once again, the UK’s passivity on this issue is notable. Many US states have made huge strides in cracking down on minors’ access to pornography; France has introduced age verification for social media sites and stricter parental controls; and the EU has forced TikTok to make its “For You” algorithm optional and banned adverts targeted at 11-18-year-olds. The UK has done none of these things. France has even introduced a bill banning parents from sharing their children’s photographs on social media, citing the fact that half of all pictures exchanged on paedophile forums originate from photographs posted by families on these platforms. The UK is unlikely to follow suit anytime soon.

The French bill is a bold move, but we cannot rely on Big Tech to moderate its own content (we already know, for example, that Instagram fails to remove accounts that have been flagged for posting sexualised content of children). Until AI is regulated or real legislation is brought in, the only way to protect children from facial recognition, profiling, data mining, the loss of their anonymity, and potentially being turned into a pornographic avatar is simply to stop posting about them.


Kristina Murkett is a freelance writer and English teacher.