Watchdog Warns: AI-Generated Child Sexual Abuse Videos Surging Online

2025-07-13 · Technology
Ji Fei: Good morning, Guo Rong. I'm Ji Fei. Today is Monday, 14 July. Welcome to this episode of Goose Pod, made just for you.

Guo Rong: And I'm Guo Rong. Today we're discussing a very serious and important topic: a watchdog warns that AI-generated child sexual abuse videos are surging online.

Ji Fei: Let's get straight into it. According to the latest report from the UK's Internet Watch Foundation (IWF), the situation is truly alarming. In just the first six months of 2025, they verified 1,286 AI-made child sexual abuse videos that broke the law.

Guo Rong: 1,286! That figure is horrifying, especially compared with the same period last year, when there were only two. What's more, these videos have become so realistic that they have "crossed the threshold" of being nearly indistinguishable from real imagery, which makes the problem even harder to tackle.

Ji Fei: Exactly. Behind this phenomenon is the rapid advance of AI technology. Billions of dollars of investment have poured into the field, producing a large number of publicly available video-generation models. Unfortunately, this has also given criminals an opening.

Guo Rong: So they're exploiting technology that is in itself neutral? I've heard they use a method called "fine-tuning". Can you explain in simple terms what that means?

Ji Fei: Of course. Think of a base AI model as a student who has just learned to paint. "Fine-tuning" is like showing him a large number of paintings in a particular style, say Van Gogh's. Once he has seen enough, he can produce new paintings in Van Gogh's style. In the same way, criminals feed these AI models with real abuse material.

Guo Rong: I see, and that's horrifying. They use old harm to manufacture new, endless harm. Most worrying of all, the report says the most realistic AI abuse videos seen this year were based on real-life victims.

Ji Fei: This points to a core conflict: on one side, fierce competition and rapid iteration in the AI industry, with the technology changing by the day; on the other, regulation and law lagging badly behind. Criminals on dark web forums even remark that just as they master one AI tool, "something new and better comes along".

Guo Rong: It's like a game of cat and mouse, except the pace of the technology lets the "mouse" run far too fast. And many AI models are open source, meaning anyone can download and modify them, completely bypassing whatever safeguards commercial software might impose. That poses an enormous challenge for regulators.

Ji Fei: Yes. IWF analysts point out that criminals are taking exactly these freely available base AI models and "fine-tuning" them. The neutrality of technology collides head-on with human malice here, creating a vast grey area and serious difficulties for law enforcement.

Guo Rong: The impact of this abuse of technology is devastating. Derek Ray-Hill, the IWF's interim chief executive, has warned that AI-generated abuse content could see an "absolute explosion" that might overwhelm the entire clear web.

Ji Fei: And it's not just a question of volume. It could fuel related criminal activity such as child trafficking and modern slavery. Worse still, it means real victims are harmed again and again: criminals can manufacture abuse material and expand its circulation without limit, without ever needing to find new victims.

Guo Rong: Faced with such a grave situation, is anything being done? Surely we can't just sit and wait.

Ji Fei: The UK government has already begun to act. It is legislating to make it illegal to possess, create or distribute AI tools designed specifically to generate this kind of content, punishable by up to five years in prison. Even possessing "manuals" that teach people how to use these tools is being outlawed.

Guo Rong: Let's hope these measures take effect, and that more forces around the world join together to meet this challenge. That's all for today's discussion. Thank you for listening to Goose Pod.

Ji Fei: See you tomorrow.

## AI-Generated Child Sexual Abuse Material (CSAM) Surging Online, Watchdog Warns

**News Title:** AI-generated child sexual abuse videos surging online, watchdog says
**Report Provider:** Internet Watch Foundation (IWF)
**Date/Time Period Covered:** First six months of 2025, compared with the same period in the previous year
**Source:** The Guardian
**Author:** Dan Milmo

---

### Executive Summary

The Internet Watch Foundation (IWF), a UK-based internet safety watchdog, has reported a dramatic surge in the prevalence of AI-generated child sexual abuse material (CSAM) online. This increase is attributed to advancements in AI technology and its exploitation by paedophiles. The IWF notes that these AI-generated videos are becoming nearly indistinguishable from real imagery, posing a significant and growing threat to child safety.

### Key Findings and Statistics

* **Dramatic Increase in AI-Made CSAM Videos:** In the first six months of 2025, the IWF verified **1,286 AI-made videos** containing CSAM that broke the law, a stark contrast to the **two such videos** verified in the same period last year.
* **Prevalence of Severe Abuse Material:** Just over **1,000** of these AI-generated videos featured **Category A abuse**, the classification for the most severe type of CSAM.
* **Surge in URLs Featuring AI-Made CSAM:** The number of URLs featuring AI-made child sexual abuse increased by **400%** in the first six months of 2025. The IWF received reports of **210 such URLs**, up from **42** in the previous year. Each webpage can feature hundreds of images and videos.
* **Exploitation of AI Technology:** Paedophiles are reportedly manipulating widely available video-generation AI models, which are being developed with significant multibillion-dollar investments.
* **Method of Creation:** IWF analysts suggest these videos are created by taking freely available basic AI models and "fine-tuning" them with existing CSAM to produce realistic videos. In some instances, these models have been fine-tuned with a handful of CSAM videos.
* **Use of Real-Life Victims:** The most realistic AI abuse videos identified this year were based on **real-life victims**.

### Significant Trends and Changes

* **Near-Indistinguishable Imagery:** AI-generated CSAM has "crossed the threshold" of being nearly indistinguishable from real imagery.
* **Rapid AI Development:** Paedophiles are actively adapting to rapid improvements in AI technology, with one report describing a constant cycle of mastering one AI tool only for a "new and better" one to emerge.
* **Expansion of CSAM Volume:** The use of existing victims' likenesses in AI-generated images allows paedophiles to significantly expand the volume of CSAM online without needing to create new victims.

### Notable Risks and Concerns

* **Explosion of AI-Generated CSAM:** The IWF warns of an "absolute explosion" of AI-generated CSAM that could overwhelm the clear web.
* **Fueling Criminal Activity:** This growth in AI-generated CSAM could fuel criminal activity linked to child trafficking, child sexual abuse, and modern slavery.
* **Accessibility and Adaptability:** The wide availability and adaptability of AI models for criminal purposes are key drivers of this trend.

### Government Response and Legal Measures

The UK government is taking action to combat AI-generated CSAM:

* **Criminalizing AI Tools:** It is now illegal to possess, create, or distribute AI tools specifically designed to create abusive content. Individuals found to have breached this law face up to **five years in jail**.
* **Outlawing Possession of Manuals:** The government is also outlawing possession of manuals that teach offenders how to use AI tools to create abusive imagery or to facilitate child abuse. Offenders could face a prison sentence of up to **three years**.
* **Existing Legislation:** AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalises the taking, distribution, and possession of indecent photographs or pseudo-photographs of a child.

**Statement from Home Secretary Yvette Cooper:** Announcing these changes in February, Home Secretary Yvette Cooper emphasised the vital need to "tackle child sexual abuse online as well as offline."

**Statement from Derek Ray-Hill, IWF's Interim Chief Executive:** Derek Ray-Hill highlighted the "incredible risk" of AI-generated CSAM leading to an explosion that could overwhelm the clear web and fuel further criminal activity.

AI-generated child sexual abuse videos surging online, watchdog says

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology.

The Internet Watch Foundation said AI videos of abuse had "crossed the threshold" of being near-indistinguishable from "real imagery" and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year.

The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.

"It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators," said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.

The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for "something new and better to come along".

IWF analysts said the images appeared to have been created by taking a freely available basic AI model and "fine-tuning" it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said.

The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF's interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online.

"There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web," he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.

The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.

The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content.

People found to have breached the new law will face up to five years in jail.

Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that "we tackle child sexual abuse online as well as offline".

AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an "indecent photograph or pseudo photograph" of a child.
