British Tech Companies and Child Protection Officials to Test AI's Ability to Create Exploitation Content
Tech firms and child safety agencies will be granted authority to assess whether AI systems can produce child exploitation material under recently introduced British laws.
Substantial Increase in AI-Generated Illegal Material
The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the government will permit designated AI companies and child safety groups to inspect AI models – the foundational systems behind conversational AI and visual AI tools – to ensure they have sufficient safeguards to stop them from producing depictions of child exploitation.
The measures are "ultimately about preventing abuse before it happens," stated Kanishka Narayan, who added: "Specialists, under strict protocols, can now detect the risk in AI systems early."
Tackling Regulatory Challenges
The changes have been introduced because it is illegal to produce and own CSAM, meaning that AI developers and others cannot create such images as part of a testing process. Previously, officials could not act until AI-generated CSAM had already been published online.
This legislation is designed to avert that problem by helping to halt the creation of such material at source.
Legal Structure
The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, producing or sharing AI models designed to create child sexual abuse material.
Practical Impact
Recently, the minister toured Childline's London base and listened to a simulated call to counsellors featuring an account of AI-based exploitation. The call depicted a teenager seeking help after facing extortion using a sexualised deepfake of themselves, created with AI.
"When I learn about children facing extortion online, it is a source of extreme anger in me and rightful anger amongst families," he stated.
Concerning Statistics
A leading online safety organization stated that reports of AI-generated exploitation material – each of which can refer to a webpage containing multiple images – had more than doubled so far this year.
Instances of the most severe category of content – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to ensure AI tools are secure before they are released," commented the chief executive of the internet monitoring organization.
"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving offenders the capability to create potentially limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies survivors' suffering, and makes children, especially girls, less safe both online and offline."
Counseling Session Information
Childline also released details of support sessions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to rate body size, physique and appearance
- Chatbots dissuading children from speaking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and associated topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.