British Technology Companies and Child Safety Agencies to Examine AI's Ability to Create Exploitation Images
Under new UK legislation, tech firms and child safety organizations will be granted permission to test whether AI systems can produce child abuse material.
Substantial Increase in AI-Generated Illegal Material
The announcement coincided with revelations from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the authorities will permit approved AI developers and child safety groups to examine AI models – the foundational systems for chatbots and image generators – and verify they have sufficient protective measures to prevent them from creating images of child sexual abuse.
"Fundamentally about stopping exploitation before it occurs," stated Kanishka Narayan, noting: "Experts, under strict conditions, can now detect the risk in AI models early."
Addressing Regulatory Obstacles
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing process. Previously, officials could not act until AI-generated CSAM had already been uploaded online.
This law is designed to avert that problem by allowing the production of such material to be stopped at its source.
Legislative Structure
The amendments are being introduced by the authorities as revisions to the criminal justice legislation, which also introduces a ban on possessing, creating or distributing AI systems designed to generate exploitative content.
Real-World Impact
This week, the minister toured Childline's London base and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I learn about children experiencing extortion online, it is a source of intense anger in me and justified concern amongst parents," he stated.
Alarming Statistics
A prominent internet monitoring organization stated that reports of AI-generated exploitation material – each of which can refer to a webpage containing multiple images – had risen significantly so far this year.
Instances of the most severe category of content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of children aged two and under increased from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a crucial step to guarantee AI products are safe before they are released," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it so victims can be victimised all over again with just a few clicks, providing criminals the capability to create possibly endless amounts of advanced, photorealistic exploitative content," she continued. "Content which further exploits survivors' suffering, and makes children, particularly female children, less safe both online and offline."
Counselling Session Information
The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in those conversations include:
- Using AI to rate body size and looks
- Chatbots dissuading young people from talking to trusted guardians about harm
- Facing harassment online with AI-generated content
- Digital extortion using AI-faked pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.