British Tech Companies and Child Protection Officials to Test AI's Ability to Create Abuse Content

Technology companies and child safety organizations will be granted authority to assess whether AI tools can produce child exploitation material under new UK legislation.

Significant Rise in AI-Generated Illegal Content

The announcement coincided with figures from a safety monitoring body showing that reports of AI-generated CSAM more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the amendments, the government will allow designated AI developers and child safety organizations to examine AI models – the foundational systems for conversational AI and visual AI tools – and verify they have adequate protective measures to stop them from creating depictions of child exploitation.

"Ultimately about preventing abuse before it occurs," declared the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now identify the risk in AI systems promptly."

Tackling Legal Challenges

The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI creators and other parties cannot generate such content as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

The new law is designed to avert that problem by making it possible to stop the creation of such images at source.

Legal Structure

The government is introducing the changes as amendments to criminal justice legislation, which will also establish a ban on possessing, producing or distributing AI models designed to create exploitative content.

Practical Impact

This week, the minister visited Childline's London base and listened to a mock call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I hear about young people experiencing extortion online, it is a cause of extreme frustration in me and justified concern amongst parents," he stated.

Concerning Statistics

A leading online safety foundation said that instances of AI-generated abuse material – reports that can each refer to webpages containing multiple images – had risen significantly so far this year.

Instances of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly victimised, making up 94% of prohibited AI images in 2025
  • Depictions of children ranging from newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to ensure AI tools are safe before they are released," commented the chief executive of the internet monitoring foundation.

"AI tools have made it so victims can be victimised repeatedly with just a few clicks, giving criminals the capability to create possibly endless amounts of advanced, photorealistic child sexual abuse material," she continued. "Content which additionally exploits survivors' trauma, and renders young people, especially female children, more vulnerable on and off line."

Support Interaction Data

Childline has also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Using AI to rate body size, physique and looks
  • Chatbots discouraging young people from talking to trusted adults about harm
  • Facing harassment online with AI-generated content
  • Online extortion using AI-faked pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned, four times as many as in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy apps.

Robin Jacobs