UK Technology Firms and Child Safety Officials to Examine AI's Ability to Create Abuse Content
Technology companies and child safety organizations will receive authority to evaluate whether artificial intelligence systems can produce child exploitation images under new British legislation.
Significant Increase in AI-Generated Harmful Material
The announcement came alongside revelations from a safety watchdog that cases of AI-generated child sexual abuse material (CSAM) have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the government will allow approved AI developers and child protection organizations to examine AI models – the underlying systems for chatbots and image generators – and ensure they have sufficient safeguards to stop them from creating depictions of child exploitation.
"This is fundamentally about stopping abuse before it happens," declared Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the danger in AI systems promptly."
Tackling Legal Obstacles
The changes have been made because it is against the law to create or possess CSAM, meaning that AI developers and other parties could not generate such images as part of an evaluation regime. Until now, officials could act only after AI-generated CSAM had been published online.
This legislation aims to prevent that problem by enabling the creation of such material to be stopped at its source.
Legislative Structure
The changes are being added by the government as revisions to the crime and policing bill, which is also implementing a ban on owning, producing or sharing AI models designed to create child sexual abuse material.
Practical Impact
This week, the minister toured the London base of a children's helpline and listened to a simulated call to advisers featuring an account of AI-based exploitation. The call depicted an adolescent seeking help after being extorted with an explicit deepfake of themselves, created using AI.
"When I hear about young people facing extortion online, it fills me with intense anger and causes rightful concern amongst parents," he stated.
Concerning Statistics
A prominent online safety foundation stated that cases of AI-generated abuse material – such as online pages that may contain multiple files – had more than doubled so far this year.
Instances of category A content – the gravest form of exploitation – rose from 2,621 to 3,086 visual files.
- Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a crucial step to guarantee AI tools are safe before they are released," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, providing criminals the capability to create possibly limitless amounts of sophisticated, lifelike exploitative content," she continued. "Content which additionally commodifies survivors' trauma, and renders young people, particularly female children, less safe both online and offline."
Support Session Information
Childline also published details of support sessions in which AI was referenced. AI-related risks mentioned in the conversations include:
- Using AI to rate weight, body and appearance
- Chatbots discouraging children from consulting trusted adults about harm
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling interactions where AI, conversational AI and associated terms were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using AI chatbots for support and AI therapy applications.