UK Technology Companies and Child Safety Officials to Test AI's Ability to Generate Abuse Content

Technology companies and child protection organizations will be granted authority to evaluate whether AI systems can generate child exploitation material under new UK laws.

Substantial Rise in AI-Generated Illegal Content

The declaration coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, authorities will permit designated AI companies and child protection organizations to examine AI models – the underlying systems behind chatbots and image-generation tools – to ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

The changes are "ultimately about stopping exploitation before it occurs," declared Kanishka Narayan, adding: "Specialists, under strict conditions, can now identify the risk in AI systems early."

Addressing Legal Obstacles

The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of a testing process. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is designed to prevent that problem by helping to stop the production of such material at source.

Legal Structure

The changes are being added by the government as amendments to the Crime and Policing Bill, which is also establishing a ban on possessing, creating or sharing AI systems designed to create exploitative content.

Real-World Impact

This week, the minister visited the London headquarters of Childline and listened to a simulated call to advisors featuring a report of AI-based exploitation. The call depicted an adolescent requesting help after facing extortion using a sexualised AI-generated image of themselves.

"When I learn about children facing blackmail online, it is a source of intense anger in me and justified anger amongst families," he stated.

Alarming Statistics

A leading internet monitoring foundation stated that cases of AI-generated abuse material – such as online pages that may include multiple files – had significantly increased so far this year.

Cases of the most severe material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to guarantee AI products are safe before they are released," stated the head of the internet monitoring organization.

"Artificial intelligence systems have made it possible for survivors to be targeted repeatedly with just a few clicks, giving criminals the capability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further exploits victims' suffering, and renders young people, particularly female children, less safe both online and offline."

Support Interaction Data

The children's helpline also published details of counselling sessions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Employing AI to evaluate weight, body and appearance
  • Chatbots dissuading young people from talking to safe guardians about abuse
  • Being bullied online with AI-generated material
  • Online extortion using AI-manipulated images

Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and associated terms were discussed, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Katherine Herring