UK Tech Companies and Child Protection Officials to Test AI's Capability to Generate Abuse Images

Technology companies and child protection agencies will receive permission to evaluate whether AI tools can generate child exploitation material under recently introduced British laws.

Substantial Increase in AI-Generated Harmful Content

The announcement coincided with findings from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the authorities will allow designated AI companies and child safety organizations to examine AI models – the foundational systems for conversational AI and image generators – and verify they have adequate safeguards to stop them from creating images of child sexual abuse.

"Ultimately, this is about preventing abuse before it occurs," declared Kanishka Narayan, noting: "Specialists, under strict protocols, can now detect the risk in AI models promptly."

Addressing Regulatory Challenges

The amendments have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and others cannot generate such content as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.

This law is designed to prevent that problem by helping to halt the production of those images at their origin.

Legislative Framework

The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a ban on possessing, creating or sharing AI models designed to create exploitative content.

Real-World Consequences

This week, the minister visited the London base of Childline and listened to a simulated call to advisors involving a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about children facing blackmail online, it is a source of extreme frustration for me and of justified concern amongst parents," he stated.

Alarming Data

A leading online safety organization stated that cases of AI-generated exploitation content – such as online pages that may contain numerous images – had more than doubled so far this year.

Cases of the most severe content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Girls were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to guarantee AI products are secure before they are launched," commented the chief executive of the online safety organization.

"Artificial intelligence systems have made it so victims can be victimised all over again with just a few clicks, providing offenders the ability to create potentially endless amounts of sophisticated, lifelike exploitative content," she continued. "Material which further exploits survivors' suffering, and renders young people, especially girls, less safe both online and offline."

Counseling Interaction Data

The children's helpline also published details of support interactions where AI has been mentioned. AI-related harms discussed in the conversations include:

  • Employing AI to evaluate body size and appearance
  • Chatbots dissuading young people from consulting trusted guardians about harm
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-manipulated images

Between April and September this year, Childline delivered 367 support interactions where AI, chatbots and related topics were mentioned, four times as many as in the same period last year.

Fifty percent of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Penny Ross

A passionate writer and betting enthusiast with years of experience in the online gaming industry, sharing insights and strategies.