AI-Generated Child Sexual Abuse Videos Surging Online, Watchdog Warns


2025-07-12 · Technology
David: Good morning, mikey1101. I'm David, and this is your personalized Goose Pod. Today is Sunday, July 13. Joining me is Ema.

Ema: Hello, everyone. Today we're taking on a serious topic: the surge of AI-generated child sexual abuse videos online.

David: Let's get started. A shocking recent trend is the sharp increase in AI-generated child abuse content online, now almost indistinguishable from real imagery.

Ema: Yes. According to a report from the Internet Watch Foundation (IWF), in the first half of 2025 alone they verified 1,286 illegal AI-made videos, compared with just two in the same period last year. The number of URLs hosting this kind of content also jumped 400%.

David: Those numbers are staggering. Going from 2 to more than 1,200 is explosive growth. And it's not just a statistic; it shows that the abuse of this technology has reached a very dangerous tipping point.
Ema: Exactly. These videos are made mainly by taking free, open-source AI models and "fine-tuning" them on existing abuse material. That lets perpetrators generate large volumes of realistic new videos, and the technical barrier keeps falling.

David: So this isn't the work of some high-tech criminal syndicate; ordinary people can do it?

Ema: Yes, and that's what makes it so alarming. Because of the multibillion-dollar investment race in the AI industry, a wide range of video-generation models is now readily available. Perpetrators share tutorials on dark web forums on how to manipulate these tools, and the rapid pace of iteration means they can always find something newer and better.

David: It sounds like a dark byproduct of technological progress: innovation on one side, abuse on the other.

Ema: Exactly. Technology is advancing far faster than legal and ethical constraints can keep up. A powerful tool can be used to create art, or to inflict harm. It's a textbook case of development outpacing regulation, with very serious consequences.
David: That creates the core conflict: tech companies keep pushing AI forward in fierce competition, while regulators and governments scramble to catch up and curb its abuse.

Ema: Right, and it's a deeply uneven race. Tech giants keep releasing more powerful models to win market share, while organizations like the IWF and national governments must identify and remove a flood of content that is hard to tell apart from the real thing, a huge challenge in both manpower and technology.

David: And there seem to be legal loopholes as well?

Ema: Yes. Many countries' laws simply didn't anticipate this situation. A law may ban possessing or distributing real child abuse imagery, for instance, but say nothing about the AI tools used to generate such imagery. That gives criminals an opening to exploit.

David: So it's a multi-sided contest between technology, law, and ethics.

Ema: You could say that. And it's a global problem that demands coordinated action across countries, but differences between national laws undoubtedly make the fight far more complicated and difficult.
David: So what concrete real-world harms does this cause?

Ema: The harm is devastating. First, it revictimizes real survivors. Many of the most realistic AI videos are based on the likenesses of real victims, which means their nightmare is replicated and spread endlessly in a new form, causing ongoing psychological trauma.

David: That's horrifying. So this is absolutely not a "virtual" crime with no real victims.

Ema: Not at all. Beyond that, the spread of this content can stimulate demand for real abuse, potentially fueling crimes like child trafficking and sexual exploitation, and it can desensitize society as a whole to sexual violence against children.
David: Where does the path forward lie?

Ema: The UK government has already acted, making it a criminal offense to possess, create, or distribute AI tools designed to generate this kind of content, punishable by up to five years in prison. What's needed now is legislative follow-through worldwide, tech companies taking on "safety by design" responsibilities, and the development of more advanced detection technology.

David: This is a warning everyone should take seriously. That's all for today's discussion. Thank you for listening to Goose Pod.

Ema: See you tomorrow.

## AI-Generated Child Sexual Abuse Material (CSAM) Surging Online, Watchdog Warns

**News Title:** AI-generated child sexual abuse videos surging online, watchdog says
**Report Provider:** Internet Watch Foundation (IWF)
**Period Covered:** First six months of 2025, compared with the same period the previous year
**Source:** The Guardian
**Author:** Dan Milmo

---

### Executive Summary

The Internet Watch Foundation (IWF), a UK-based internet safety watchdog, has reported a dramatic surge in the prevalence of AI-generated child sexual abuse material (CSAM) online. The increase is attributed to advances in AI technology and its exploitation by paedophiles. The IWF notes that these AI-generated videos are becoming nearly indistinguishable from real imagery, posing a significant and growing threat to child safety.

### Key Findings and Statistics

* **Dramatic Increase in AI-Made CSAM Videos:** In the first six months of 2025, the IWF verified **1,286 AI-made videos** containing CSAM that broke the law, a stark contrast to the **two such videos** verified in the same period last year.
* **Prevalence of Severe Abuse Material:** Just over **1,000** of these AI-generated videos featured **Category A abuse**, the classification for the most severe type of CSAM.
* **Surge in URLs Featuring AI-Made CSAM:** The number of URLs featuring AI-made child sexual abuse increased by **400%** in the first six months of 2025. The IWF received reports of **210 such URLs**, up from **42** the previous year. Each webpage can feature hundreds of images and videos.
* **Exploitation of AI Technology:** Paedophiles are reportedly manipulating widely available video-generation AI models, which are being developed with significant multibillion-dollar investments.
* **Method of Creation:** IWF analysts suggest these videos are created by taking freely available basic AI models and "fine-tuning" them with existing CSAM to produce realistic videos. In some instances, models have been fine-tuned with just a handful of CSAM videos.
* **Use of Real-Life Victims:** The most realistic AI abuse videos identified this year were based on **real-life victims**.

### Significant Trends and Changes

* **Near-Indistinguishable Imagery:** AI-generated CSAM has "crossed the threshold" of being nearly indistinguishable from real imagery.
* **Rapid AI Development:** Paedophiles are actively adapting to rapid improvements in AI technology, with one report describing a constant cycle of mastering one AI tool only for a "new and better" one to emerge.
* **Expansion of CSAM Volume:** The use of existing victims' likenesses in AI-generated images allows paedophiles to significantly expand the volume of CSAM online without needing to create new victims.

### Notable Risks and Concerns

* **Explosion of AI-Generated CSAM:** The IWF warns of an "absolute explosion" of AI-generated CSAM that could overwhelm the clear web.
* **Fueling Criminal Activity:** This growth could fuel criminal activity linked to child trafficking, child sexual abuse, and modern slavery.
* **Accessibility and Adaptability:** The wide availability and adaptability of AI models for criminal purposes are key drivers of this trend.

### Government Response and Legal Measures

The UK government is taking action to combat AI-generated CSAM:

* **Criminalizing AI Tools:** It is now illegal to possess, create, or distribute AI tools specifically designed to create abusive content. Individuals found to have breached this law face up to **five years in jail**.
* **Outlawing Possession of Manuals:** The government is also outlawing possession of manuals that teach offenders how to use AI tools to create abusive imagery or to facilitate child abuse. Offenders could face a prison sentence of up to **three years**.
* **Existing Legislation:** AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalizes the taking, distribution, and possession of indecent photographs or pseudo-photographs of a child.

**Statement from Home Secretary Yvette Cooper:** Announcing these changes in February, Home Secretary Yvette Cooper emphasized the vital need to "tackle child sexual abuse online as well as offline."

**Statement from Derek Ray-Hill, IWF's Interim Chief Executive:** Derek Ray-Hill highlighted the "incredible risk" of AI-generated CSAM leading to an explosion that could overwhelm the clear web and fuel further criminal activity.

AI-generated child sexual abuse videos surging online, watchdog says

Read original at The Guardian

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology. The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year. The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles. “It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images and videos. The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.

IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said. The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online. “There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.

The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added. The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content.

People found to have breached the new law will face up to five years in jail. Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”. AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo photograph” of a child.
