AI-generated child sexual abuse videos surging online, watchdog says

2025-07-12 · Technology
David: Good evening, I'm David, and this is your Goose Pod. Today is Saturday, July 12, and it's 8:06 in the evening.

Ema: And I'm Ema. Today we're taking on a very serious topic: watchdogs are warning that AI-generated child sexual abuse videos are flooding the internet.

David: Let's start with the numbers. In the first half of 2025, the UK's Internet Watch Foundation (IWF) verified 1,286 illegal AI-made videos, compared with just two in the same period last year. That is explosive growth.

Ema: Staggering. And it's not just the numbers. What's more alarming is that the quality of these AI videos has reached the point of being nearly indistinguishable from real footage. It means offenders can now produce this harmful content at unprecedented speed and scale.

David: The driving force behind this is the multibillion-dollar investment boom in the AI industry, which has made powerful video-generation models widely accessible. Offenders take these models, which are often free or open source, and subject them to so-called "fine-tuning".

Ema: "Fine-tuning" is easy to understand: it's essentially feeding the AI specific data to train it. They train the AI on existing abuse material, and it learns to generate videos that are more realistic and closer to what they want.

David: Yes, and they even train models on footage of real victims, which inflicts ongoing harm on those victims. This has forced governments to act. In the UK, for example, merely possessing or distributing such AI tools is now a criminal offence.

Ema: Right. It's like not only catching the thief, but also banning the tools and manuals for making skeleton keys. It shows how serious the problem is. The law is finally trying to catch up with the dark side of the technology, though the challenge remains enormous.

David: This points to a core conflict. On one side is the rapid, open development of AI technology in pursuit of innovation and market competition. On the other is the difficult defence mounted by society and regulators to protect the vulnerable. It's a classic struggle between technological progress and ethical safety.

Ema: Exactly, and open-source models in particular are a double-edged sword. The intent is good, to democratise the technology, but they also let offenders easily bypass oversight and generate this horrific content offline on their own computers, making tracking nearly impossible.

David: Precisely. So even with new laws in place, such as making AI-generated child abuse content a serious crime, the practical difficulties of enforcement remain. Technology always moves faster than the law can respond, and that gap is very hard to close completely.

Ema: So the debate now centres on moving responsibility upstream. Rather than remediating after the harm is done, AI developers should be made to build in safety from the moment a model is designed. But that raises concerns about stifling innovation and free expression, and the balance is hard to strike.

David: The social impact of this abuse of technology is devastating. First, it severely strains existing child-protection and law-enforcement systems. A flood of AI-generated fakes makes finding real victims like searching for a needle in a haystack.

Ema: And for the victims, the harm is inflicted a second time, and then endlessly. A child's likeness can be used an unlimited number of times to generate new abuse content, a kind of life sentence. It also exposes all children to potential risk.

David: Looking ahead, there is no shortcut to solving this. It will take international legal cooperation and greater responsibility from tech companies, including major investment in AI that can accurately detect and block this content, fighting AI with AI.

Ema: Yes, this is a fight that technology, the law, and society as a whole must join.

David: Today's discussion was heavy, but we believe that facing a problem squarely is the first step toward solving it.

Ema: Thank you for listening to Goose Pod. See you tomorrow at the same time.

## AI-Generated Child Sexual Abuse Material (CSAM) Surging Online, Watchdog Warns

**News Title:** AI-generated child sexual abuse videos surging online, watchdog says
**Report Provider:** Internet Watch Foundation (IWF)
**Date/Time Period Covered:** First six months of 2025, compared with the same period in the previous year
**Source:** The Guardian
**Author:** Dan Milmo

---

### Executive Summary

The Internet Watch Foundation (IWF), a UK-based internet safety watchdog, has reported a dramatic surge in the prevalence of AI-generated child sexual abuse material (CSAM) online. The increase is attributed to advancements in AI technology and its exploitation by paedophiles. The IWF notes that these AI-generated videos are becoming nearly indistinguishable from real imagery, posing a significant and growing threat to child safety.

### Key Findings and Statistics

* **Dramatic Increase in AI-Made CSAM Videos:** In the first six months of 2025, the IWF verified **1,286 AI-made videos** containing CSAM that broke the law, a stark contrast to the **two such videos** verified in the same period last year.
* **Prevalence of Severe Abuse Material:** Just over **1,000** of these AI-generated videos featured **Category A abuse**, the classification for the most severe type of CSAM.
* **Surge in URLs Featuring AI-Made CSAM:** The number of URLs featuring AI-made child sexual abuse increased by **400%** in the first six months of 2025. The IWF received reports of **210 such URLs**, up from **42** in the previous year. Each webpage can feature hundreds of images and videos.
* **Exploitation of AI Technology:** Paedophiles are reportedly manipulating widely available video-generation AI models, which are being developed with significant multibillion-dollar investments.
* **Method of Creation:** IWF analysts suggest these videos are created by taking freely available basic AI models and "fine-tuning" them with existing CSAM to produce realistic videos. In some instances, these models have been fine-tuned with a handful of CSAM videos.
* **Use of Real-Life Victims:** The most realistic AI abuse videos identified this year were based on **real-life victims**.

### Significant Trends and Changes

* **Near-Indistinguishable Imagery:** AI-generated CSAM has "crossed the threshold" of being nearly indistinguishable from real imagery.
* **Rapid AI Development:** Paedophiles are actively adapting to rapid improvements in AI technology, with one report describing a constant cycle of mastering one AI tool only for a "new and better" one to emerge.
* **Expansion of CSAM Volume:** The use of existing victims' likenesses in AI-generated images allows paedophiles to significantly expand the volume of CSAM online without needing to create new victims.

### Notable Risks and Concerns

* **Explosion of AI-Generated CSAM:** The IWF warns of an "absolute explosion" of AI-generated CSAM that could overwhelm the clear web.
* **Fueling Criminal Activity:** This growth in AI-generated CSAM could fuel criminal activities linked to child trafficking, child sexual abuse, and modern slavery.
* **Accessibility and Adaptability:** The wide availability and adaptability of AI models for criminal purposes are key drivers of this trend.

### Government Response and Legal Measures

The UK government is taking action to combat AI-generated CSAM:

* **Criminalizing AI Tools:** It is now illegal to possess, create, or distribute AI tools specifically designed to create abusive content.
  * **Penalties:** Individuals found to have breached this law face up to **five years in jail**.
* **Outlawing Possession of Manuals:** The government is also outlawing the possession of manuals that teach offenders how to use AI tools to create abusive imagery or to facilitate child abuse.
  * **Penalties:** Offenders could face a prison sentence of up to **three years**.
* **Existing Legislation:** AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalises the taking, distribution, and possession of indecent photographs or pseudo-photographs of a child.

**Statement from Home Secretary Yvette Cooper:** Announcing these changes in February, Home Secretary Yvette Cooper emphasised the vital need to "tackle child sexual abuse online as well as offline."

**Statement from Derek Ray-Hill, IWF's Interim Chief Executive:** Derek Ray-Hill highlighted the "incredible risk" of AI-generated CSAM leading to an explosion that could overwhelm the clear web and fuel further criminal activity.

AI-generated child sexual abuse videos surging online, watchdog says

Read original at The Guardian

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology.

The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year.

The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.

“It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.

The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.

IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said.

The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online.

“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.

The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.

The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content.

People found to have breached the new law will face up to five years in jail.

Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”.

AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo photograph” of a child.
