## AI-Generated Child Sexual Abuse Material (CSAM) Surging Online, Watchdog Warns

**News Title:** AI-generated child sexual abuse videos surging online, watchdog says
**Report Provider:** Internet Watch Foundation (IWF)
**Date/Time Period Covered:** First six months of 2025, compared with the same period in the previous year
**Source:** The Guardian
**Author:** Dan Milmo

---

### Executive Summary

The Internet Watch Foundation (IWF), a UK-based internet safety watchdog, has reported a dramatic surge in the prevalence of AI-generated child sexual abuse material (CSAM) online. The increase is attributed to rapid advances in AI technology and its exploitation by paedophiles. The IWF warns that these AI-generated videos have become nearly indistinguishable from real imagery, posing a significant and growing threat to child safety.

### Key Findings and Statistics

* **Dramatic Increase in AI-Made CSAM Videos:** In the first six months of 2025, the IWF verified **1,286 AI-made videos** containing CSAM that broke the law, compared with just **two such videos** verified in the same period last year.
* **Prevalence of Severe Abuse Material:** Just over **1,000** of these AI-generated videos featured **Category A abuse**, the classification for the most severe type of material.
* **Surge in URLs Featuring AI-Made CSAM:** The number of URLs featuring AI-made child sexual abuse increased by **400%** in the first six months of 2025. The IWF received reports of **210 such URLs**, up from **42** in the previous year, and each webpage can feature hundreds of images and videos.
* **Exploitation of AI Technology:** Paedophiles are reportedly manipulating widely available video-generation AI models, which are being developed with multibillion-dollar investment.
* **Method of Creation:** IWF analysts say the videos appear to have been created by taking freely available basic AI models and "fine-tuning" them with existing CSAM to produce realistic videos. In some instances, models had been fine-tuned with only a handful of CSAM videos.
* **Use of Real-Life Victims:** The most realistic AI abuse videos identified this year were based on **real-life victims**.

### Significant Trends and Changes

* **Near-Indistinguishable Imagery:** AI-generated CSAM has "crossed the threshold" of being nearly indistinguishable from real imagery.
* **Rapid AI Development:** Paedophiles are actively adapting to rapid improvements in AI technology; one dark web post described mastering one AI tool only for "something new and better" to come along.
* **Expansion of CSAM Volume:** Using existing victims' likenesses in AI-generated images allows paedophiles to significantly expand the volume of CSAM online without relying on new victims.

### Notable Risks and Concerns

* **Explosion of AI-Generated CSAM:** The IWF warns of an "absolute explosion" of AI-generated CSAM that could overwhelm the clear web.
* **Fueling Criminal Activity:** Growth in AI-generated CSAM could fuel criminal activity linked to child trafficking, child sexual abuse, and modern slavery.
* **Accessibility and Adaptability:** The wide availability of AI models, and the ease with which they can be adapted for criminal purposes, are key drivers of this trend.

### Government Response and Legal Measures

The UK government is taking action to combat AI-generated CSAM:

* **Criminalizing AI Tools:** It is now illegal to possess, create, or distribute AI tools specifically designed to create abusive content.
  * **Penalties:** Individuals found to have breached this law face up to **five years in jail**.
* **Outlawing Possession of Manuals:** The government is also outlawing possession of manuals that teach offenders how to use AI tools to create abusive imagery or to facilitate child abuse.
  * **Penalties:** Offenders could face a prison sentence of up to **three years**.
* **Existing Legislation:** AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalises the taking, distribution, and possession of an indecent photograph or pseudo-photograph of a child.

**Statement from Home Secretary Yvette Cooper:** Announcing the changes in February, Yvette Cooper said it was vital to "tackle child sexual abuse online as well as offline."

**Statement from Derek Ray-Hill, IWF Interim Chief Executive:** Derek Ray-Hill warned of the "incredible risk" that AI-generated CSAM leads to an explosion that overwhelms the clear web and fuels further criminal activity.
AI-generated child sexual abuse videos surging online, watchdog says
The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology.

The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year.

The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.

“It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.

The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.

IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said.

The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online.

“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.

The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.

The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content.

People found to have breached the new law will face up to five years in jail.

Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”.

AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo photograph” of a child.