AI-generated child sexual abuse videos surging online, watchdog says


2025-07-12 · Technology
1
Good morning, Lao Zhang. I'm Ji Fei, and this is your Goose Pod. Today is Saturday, July 12.
2
I'm Guo Rong. Today we're taking on a heavy but important topic: a watchdog says AI-generated child sexual abuse videos are surging online.
1
Let's get straight to it. The UK's Internet Watch Foundation (IWF) has released startling figures. In just the first six months of 2025, it verified 1,286 illegal AI-made child sexual abuse videos. And the figure for the same period last year? Two.
2
Wait, from 2 to 1,286? That's like a dripping tap turning into a burst dam overnight. The rate of growth is staggering. And URLs featuring this kind of content are up 400% as well. It's horrifying.
1
Worse still, of those 1,200-plus videos, just over 1,000 were classified as Category A abuse, the most severe grade of material. And these AI-generated videos have "crossed the threshold" of being nearly indistinguishable from real imagery.
2
Indistinguishable? You're saying horrific images a computer fabricated out of thin air might not be identifiable as fake, even to the naked eye or under professional analysis? It sounds like a plot from Black Mirror, except it isn't entertaining at all.
1
Yes, and that's the heart of the problem. It isn't just an explosion in volume; it's a leap in quality and realism. That creates an unprecedented, entirely new threat and makes detection and enforcement extraordinarily difficult.
2
How did we get here? This technology can't have appeared out of nowhere. What's the story behind it? Did someone accidentally flip an "evil AI" switch? It baffles me.
1
It's really a direct consequence of the huge investment poured into generative AI in recent years. Open-source models like Stable Diffusion have become extremely powerful and easy to obtain. Criminals exploit exactly that, subjecting these models to what's known as "fine-tuning".
2
"Fine-tuning"? The word sounds so delicate, like tuning a piano. But here the notes they tune with are... real nightmares. Are they using genuine abuse material to "teach" the AI how to produce more?
1
Exactly. Offenders take these free, powerful AI models and train them on existing child sexual abuse material. Even a handful of such horrific videos is enough to teach the AI to generate large volumes of new, synthetic, yet equally harmful imagery.
2
So what did the law look like before this AI storm hit? Were we completely unprepared? Or was this chapter simply never written into the rulebook? It's hard to fathom.
1
The law does face real challenges. The UK's Protection of Children Act 1978, for instance, already covers "pseudo-photographs", but advances in AI, especially the ability to generate hyper-realistic images offline, have opened new legal gaps and made enforcement very difficult.
2
Ha, loopholes! So criminals have found an uncharted grey island on the legal map and are doing whatever they like there. What a strange and frightening discovery.
1
Yes, and that's exactly what prompted the UK's recent push for new legislation. Under the new measures, it isn't only possession of these images that is illegal; crucially, creating, possessing and distributing the AI tools and manuals used to generate such content is itself a crime.
2
Ah, I see. So it's not just arresting the pirates; it's scuttling their ship and burning the treasure map too. That sounds like tackling the problem at its root. Nothing odd about that approach, it's very direct.
1
Which brings us to a core tension: the tech industry's competitive "move fast and break things" development versus the slow, reactive pace of legislation and safety regulation. Multibillion-dollar investment is driving AI forward at breakneck speed, but unfortunately that also gives perpetrators plenty of options.
2
It's like a race where one side is riding a rocket fuelled by billions of dollars while the other is on a bicycle, trying to sketch where the rocket might land. What do the tech companies say about this? Did they never consider that their "rocket" could be hijacked by bad actors?
1
Tech companies usually point to their terms of service and built-in safety measures. But the flashpoint is open-source models. Once a model is released publicly, the developers lose control, and criminals can strip out the safeguards with ease. It's an enormous tension between technological neutrality and social responsibility.
2
It's like a carmaker selling you a car with a speed limiter, but throwing in a full toolkit and detailed instructions for removing it. When someone then tears down the road at reckless speed, can you really feign surprise? The contradiction is bizarre.
1
That's an apt analogy, and it captures the tension between innovation and responsibility perfectly. On the government side, legislation is moving; the UK's new law carries a sentence of up to five years in jail, but the pace of legal revision rarely keeps up with the pace of technological iteration.
2
And I'd guess the fine print is still hotly debated. For instance, how do you determine whether a tool was "designed specifically for crime" or is a general-purpose tool that merely "can be used for crime"? That sounds like a legal minefield where one wrong step sets something off.
1
Exactly. That's why organisations like the IWF matter so much. They sit at the centre of this conflict, working to flag and remove harmful content on one hand while pushing for clearer legal definitions and broader international cooperation on the other, because this problem has no borders.
2
Alright, we've covered the technology and the conflict. But let's talk about the most concrete harm. Who exactly is being hurt, and how? Surely this isn't just "a few fake pictures", right? There has to be a darker logic behind it.
1
Of course not. The impact is devastating. First, it revictimises children who were abused in the past. Offenders use their real imagery to train the AI, creating an endless stream of new abuse content featuring them. That trauma is permanent.
2
That's horrifying. It's as if the ghost of their trauma can be summoned at will by anyone with a computer. It's no longer just a picture; it's the echo of a real crime, looping endlessly in the digital world.
1
Yes. Another major impact falls on law enforcement. This explosion in volume could overwhelm them. When investigators are flooded with AI imagery, how do they prioritise the real children who are in danger right now? It drains investigative resources enormously.
2
It's a needle-in-a-haystack problem, except someone keeps piling more hay onto the stack. And what about children who have never been abused? Are they safe? Or is the danger creeping toward them too, in its own strange way?
1
They are at risk as well. Ordinary photos scraped from social media can be used to generate these fake images, which are then used for blackmail and extortion. The "democratisation" of this technology means the pool of potential offenders is larger than ever.
1
Looking ahead, experts predict the problem will only get worse. The cat-and-mouse game between detection technology and generation technology will continue. We can expect more realistic, full-motion video content to become commonplace, posing an even greater threat to society as a whole.
2
So what's the solution? Do we just pull the plug on the internet? I'm half serious. Against a technology this strange and this powerful, the conventional approaches all seem to fall short.
1
The path forward requires a multi-pronged approach: stronger laws that are coordinated internationally, greater accountability from tech companies for their products, and more advanced AI-driven detection tools. At its core, it's a technological arms race.
1
That brings our discussion of this very heavy topic to a close. Lao Zhang, thank you for listening to Goose Pod.
2
It's a complex problem with no easy answers. We'll be back tomorrow. See you next time!

## AI-Generated Child Sexual Abuse Material (CSAM) Surging Online, Watchdog Warns

**News Title:** AI-generated child sexual abuse videos surging online, watchdog says
**Report Provider:** Internet Watch Foundation (IWF)
**Date/Time Period Covered:** First six months of 2025, compared with the same period in the previous year
**Source:** The Guardian
**Author:** Dan Milmo

---

### Executive Summary

The Internet Watch Foundation (IWF), a UK-based internet safety watchdog, has reported a dramatic surge in the prevalence of AI-generated child sexual abuse material (CSAM) online. This increase is attributed to advancements in AI technology and its exploitation by paedophiles. The IWF notes that these AI-generated videos are becoming nearly indistinguishable from real imagery, posing a significant and growing threat to child safety.

### Key Findings and Statistics

* **Dramatic Increase in AI-Made CSAM Videos:** In the first six months of 2025, the IWF verified **1,286 AI-made videos** containing CSAM that broke the law. This is a stark contrast to the **two such videos** verified in the same period last year.
* **Prevalence of Severe Abuse Material:** Just over **1,000** of these AI-generated videos featured **Category A abuse**, the classification for the most severe type of CSAM.
* **Surge in URLs Featuring AI-Made CSAM:** The number of URLs featuring AI-made child sexual abuse increased by **400%** in the first six months of 2025. The IWF received reports of **210 such URLs**, up from **42** in the previous year. Each webpage can feature hundreds of images and videos.
* **Exploitation of AI Technology:** Paedophiles are reportedly manipulating widely available video-generation AI models, which are being developed with significant multibillion-dollar investments.
* **Method of Creation:** IWF analysts suggest these videos are created by taking freely available basic AI models and "fine-tuning" them with existing CSAM to produce realistic videos. In some instances, these models have been fine-tuned with a handful of CSAM videos.
* **Use of Real-Life Victims:** The most realistic AI abuse videos identified this year were based on **real-life victims**.

### Significant Trends and Changes

* **Near-Indistinguishable Imagery:** AI-generated CSAM has "crossed the threshold" of being nearly indistinguishable from real imagery.
* **Rapid AI Development:** Paedophiles are actively adapting to the rapid improvements in AI technology, with one report describing a constant cycle of mastering one AI tool only for a "new and better" one to emerge.
* **Expansion of CSAM Volume:** The use of existing victims' likenesses in AI-generated images allows paedophiles to significantly expand the volume of CSAM online without needing to create new victims.

### Notable Risks and Concerns

* **Explosion of AI-Generated CSAM:** The IWF warns of an "absolute explosion" of AI-generated CSAM that could overwhelm the clear web.
* **Fuelling Criminal Activity:** This growth in AI-generated CSAM could fuel criminal activities linked to child trafficking, child sexual abuse, and modern slavery.
* **Accessibility and Adaptability:** The wide availability and adaptability of AI models for criminal purposes are key drivers of this trend.

### Government Response and Legal Measures

The UK government is taking action to combat AI-generated CSAM:

* **Criminalizing AI Tools:** It is now illegal to possess, create, or distribute AI tools specifically designed to create abusive content.
  * **Penalties:** Individuals found to have breached this law face up to **five years in jail**.
* **Outlawing Possession of Manuals:** The government is also outlawing the possession of manuals that teach offenders how to use AI tools to create abusive imagery or to facilitate child abuse.
  * **Penalties:** Offenders could face a prison sentence of up to **three years**.
* **Existing Legislation:** AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalises the taking, distribution, and possession of indecent photographs or pseudo-photographs of a child.

**Statement from Home Secretary Yvette Cooper:** Announcing these changes in February, Home Secretary Yvette Cooper emphasized the vital need to "tackle child sexual abuse online as well as offline."

**Statement from Derek Ray-Hill, IWF's Interim Chief Executive:** Derek Ray-Hill highlighted the "incredible risk" of AI-generated CSAM leading to an explosion that could overwhelm the clear web and fuel further criminal activity.

AI-generated child sexual abuse videos surging online, watchdog says

Read original at The Guardian

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology.

The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year.

The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.

“It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.

The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.

IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said.

The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online.

“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.

The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.

The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content.

People found to have breached the new law will face up to five years in jail.

Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”.

AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo photograph” of a child.
