British Technology Companies and Child Safety Agencies to Examine AI's Ability to Generate Abuse Content

Tech firms and child safety agencies will receive authority to assess whether AI tools can generate child abuse material under recently introduced British laws.

Substantial Rise in AI-Generated Harmful Material

The announcement came as a protection monitoring body revealed that cases of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the authorities will permit approved AI companies and child safety groups to inspect AI systems – the foundational models behind chatbots and visual AI tools – to ensure they have adequate safeguards in place to stop them from producing images of child sexual abuse.

The measure is "ultimately about stopping abuse before it happens," declared Kanishka Narayan, adding: "Experts, under strict protocols, can now identify the risk in AI models promptly."

Tackling Legal Obstacles

The changes have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and others could not generate such content even as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This legislation is designed to prevent that problem by helping to halt the production of such images at their source.

Legal Framework

The changes are being introduced by the government as modifications to the crime and policing bill, which is also establishing a ban on possessing, creating or sharing AI systems developed to create child sexual abuse material.

Real-World Impact

Recently, the official visited the London base of Childline and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I learn about young people experiencing extortion online, it causes extreme frustration in me and justified anger amongst parents," he said.

Concerning Statistics

A prominent internet monitoring foundation stated that cases of AI-generated exploitation content – where a single case can refer to an online page containing multiple files – had more than doubled so far this year.

Cases of category A material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to ensure AI products are secure before they are launched," stated the chief executive of the online safety organization.

"Artificial intelligence systems have made it possible for victims to be targeted repeatedly with just a few clicks, giving offenders the capability to produce potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally commodifies survivors' trauma, and renders children, especially female children, more vulnerable both online and offline."

Counseling Session Data

The children's helpline also released details of support sessions where AI has been mentioned. AI-related harms mentioned in the conversations include:

  • Employing AI to rate body size and appearance
  • AI assistants dissuading children from consulting trusted guardians about harm
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-manipulated images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated terms were discussed, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.

Melissa Gutierrez