UK Tech Companies and Child Protection Officials to Examine AI's Capability to Generate Exploitation Images

Tech firms and child safety organizations will be granted permission to evaluate whether artificial intelligence systems can generate child abuse images under recently introduced UK laws.

Significant Rise in AI-Generated Harmful Content

The announcement came alongside findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will allow designated AI companies and child protection organizations to inspect AI models – the underlying systems for chatbots and image generators – and verify they have adequate safeguards to prevent them from producing images of child exploitation.

The changes are "ultimately about stopping abuse before it happens," declared the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the danger in AI models promptly."

Addressing Legal Obstacles

The amendments were introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties could not create such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This law is designed to prevent that problem by helping to stop the creation of those images at their source.

Legislative Structure

The government is introducing the changes as amendments to criminal justice legislation, which will also prohibit possessing, producing or sharing AI systems designed to generate child sexual abuse material.

Practical Consequences

Recently, the minister toured the London headquarters of a children's helpline and listened to a simulated conversation with advisors featuring a report of AI-based exploitation. The call depicted an adolescent requesting help after being blackmailed using a sexualised deepfake of themselves, constructed using AI.

"When I hear about children experiencing blackmail online, it fills me with intense frustration, and it causes justified anger amongst parents," he stated.

Concerning Data

A leading internet monitoring organization reported that instances of AI-generated exploitation material – such as web pages that may contain multiple files – had significantly increased so far this year.

Cases of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, making up 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to guarantee AI tools are safe before they are launched," commented the head of the online safety foundation.

"AI tools have made it possible for survivors to be victimised repeatedly with just a few simple actions, giving offenders the capability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further exploits survivors' trauma, and renders young people, especially female children, more vulnerable both online and offline."

Counseling Interaction Information

The children's helpline also published details of support sessions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to evaluate body size, shape and appearance
  • AI assistants discouraging children from consulting trusted guardians about abuse
  • Being bullied online with AI-generated material
  • Online extortion using AI-manipulated images

Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and associated terms were discussed, significantly more than in the equivalent timeframe last year.

Half of the references to AI in the 2025 interactions related to psychological wellbeing and wellness, including using AI assistants for support and AI therapeutic applications.

Nicole Scott