UK Technology Firms and Child Safety Officials to Test AI's Ability to Create Abuse Images
Tech firms and child protection organizations will be given the power to assess whether AI tools can generate child exploitation material under recently introduced UK laws.
Significant Increase in AI-Generated Harmful Content
The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, approved AI companies and child safety organizations will be allowed to inspect AI models – the foundational systems behind chatbots and image generators – and verify that they have adequate safeguards against producing depictions of child exploitation.
The minister for AI and online safety said the measures were "fundamentally about stopping abuse before it happens", adding: "Specialists, under strict protocols, can now detect the danger in AI systems early."
Addressing Regulatory Obstacles
The changes address a legal obstacle: because creating and possessing CSAM is against the law, AI developers and other parties could not generate such images as part of a testing regime. As a result, officials previously had to wait until AI-generated CSAM had been uploaded online before acting on it.
The legislation aims to prevent that problem by enabling authorities to halt the production of such material at source.
Legal Framework
The changes are being introduced by the government as amendments to the crime and policing bill, which also establishes a prohibition on owning, producing or distributing AI systems designed to generate exploitative content.
Real-World Impact
The official recently visited Childline's London base and listened to a simulated call to counsellors featuring a report of AI-based exploitation. The role-play portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people experiencing blackmail online, it causes intense anger in me and justified concern amongst families," he stated.
Alarming Data
A prominent internet monitoring organization stated that instances of AI-generated abuse content – such as web pages that may contain multiple files – had risen significantly so far this year.
Instances of category A content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.
- Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a crucial step to guarantee AI tools are secure before they are released," commented the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the capability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which compounds victims' trauma, and renders children, especially female children, more vulnerable both online and offline."
Support Session Data
Childline also published details of support sessions where AI has been referenced. AI-related harms discussed in the sessions include:
- Employing AI to evaluate weight, physique and appearance
- Chatbots dissuading young people from talking to trusted adults about abuse
- Being bullied online with AI-generated content
- Online extortion using AI-faked pictures
Between April and September this year, Childline delivered 367 counselling sessions where AI, chatbots and associated topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.