UK Tech Companies and Child Safety Officials to Test AI's Ability to Generate Abuse Images
Tech firms and child safety organizations will be granted authority to evaluate whether artificial intelligence tools can produce child abuse images under new UK laws.
Substantial Rise in AI-Generated Harmful Content
The declaration coincided with findings from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will allow approved AI developers and child safety groups to inspect AI systems – the foundational technology for chatbots and visual AI tools – and ensure they have sufficient protective measures to prevent them from creating images of child exploitation.
"This is fundamentally about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Under strict conditions, experts can now detect the risk in AI models promptly."
Tackling Legal Obstacles
The changes address a legal obstacle: because it is against the law to create or possess CSAM, AI developers and others could not generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM had been uploaded online before addressing it.
The new law aims to prevent that problem by enabling the creation of those images to be halted at their origin.
Legislative Framework
The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on owning, creating or sharing AI systems developed to create child sexual abuse material.
Practical Impact
Recently, the official visited the London base of Childline and listened to a mock-up call to counsellors involving a report of AI-based abuse. The call depicted a teenager requesting help after facing extortion using an explicit deepfake of themselves, constructed using AI.
"When I hear about young people experiencing blackmail online, it fills me with extreme anger, and parents feel justified anger too," he stated.
Alarming Statistics
A leading internet monitoring foundation reported that cases of AI-generated abuse content – such as online pages that may contain multiple images – had significantly increased so far this year.
- Instances of category A content – the most serious form of exploitation – rose from 2,621 visual files to 3,086
- Girls were overwhelmingly targeted, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a vital step to guarantee AI tools are secure before they are launched," commented the head of the online safety foundation.
"AI tools have made it so victims can be targeted all over again with just a few simple actions, giving criminals the capability to create potentially endless quantities of sophisticated, lifelike exploitative content," she continued. "Content which additionally exploits survivors' trauma, and renders young people, particularly girls, less safe both online and offline."
Counselling Session Details
Childline also published details of support sessions where AI has been mentioned. AI-related risks discussed in the conversations include:
- Employing AI to evaluate weight, physique and looks
- Chatbots discouraging children from talking to safe adults about abuse
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 counselling interactions in which AI, conversational AI and related terms were mentioned, significantly more than in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to psychological wellbeing, including the use of AI assistants for support and AI therapeutic applications.