Watchdog: AI-Generated Child Sexual Abuse Videos Surging Online


2025-07-12 · Technology
1
Good morning, Lao Zhang. I'm Ji Fei, and this is Goose Pod, made just for you. Today is Sunday, July 13.
2
I'm Guorong. Today we're talking about a serious but important topic: watchdogs have found that AI-generated child sexual abuse videos are surging online.
1
Let's go straight to the data. According to a report from the UK's Internet Watch Foundation (IWF), in the first half of 2025 they verified 1,286 illegal AI-made child sexual abuse videos. In the same period last year, that number was just two.
2
From 2 to 1,286? That growth is wilder than mushrooms after summer rain. Did the AI produce these videos on its own, out of some idle burst of creativity? Or is someone behind the scenes playing the sinister "director"?
1
It's the latter. Criminals are exploiting increasingly powerful AI technology. And it isn't just the number of videos: URLs featuring this kind of content also rose by 400%. More than a thousand of these videos were classified as Category A, the most severe level.
2
Category A? Sounds like a beef grade. But used here, it clearly hides something horrifying. What kind of material gets defined as "most severe"? Surely it isn't just about how realistic the footage looks.
1
Category A covers extremely serious offences, such as rape and sexual abuse. What's even more disturbing is that these AI-generated videos have "crossed the threshold": they are now nearly indistinguishable from real footage. The naked eye can barely tell them apart.
1
So where do these convincing fakes come from? They aren't conjured by AI from nothing. Criminals typically use a technique called "fine-tuning": they take freely available AI models and "train" them on existing, real child sexual abuse material.
2
"Fine-tuning"? Sounds like tuning a radio to find a station. So they use the worst material imaginable to "teach" the AI to do evil, and then have it mass-produce more? That turns it into a digital "virus factory".
1
That's an apt metaphor. And because the tech industry has poured billions of dollars into AI, a wide variety of powerful video-generation models are now on the market, which gives criminals more options. Technological progress has inadvertently handed them convenience.
2
Technology iterates fast, and the criminals are frantically keeping pace. I can picture a forum post: just mastered tool A, and tomorrow tool B comes out, better still, so it's back to learning again. They're certainly "hard-working", just with all that effort aimed in entirely the wrong direction.
1
Yes, this "underground learning" moves very quickly. The phenomenon itself isn't entirely new, but the leap in quality and quantity is what matters. Organisations like the IWF have been sounding the alarm for some time, and that ultimately pushed the UK to introduce new legislation in response.
2
So regulation is a game of whack-a-mole: one hole gets patched here, and AI digs a new one over there. Before the UK stepped in this time, was the law helpless against these "fake" videos, given that no "real person" appears in them?
1
There was indeed a legal gap. The UK's Protection of Children Act 1978 already criminalised "indecent photographs or pseudo-photographs", but it could not have foreseen today's AI. The law did not explicitly target the AI tools, or the instruction manuals, used to manufacture these pseudo-photographs.
2
Ah, I see. Before, the law went after people holding the fake photos; now it goes straight for the "workshop" that produces them and destroys the "production manual". Striking at the source sounds far more reliable. Taking the firewood out from under the pot, as the saying goes.
1
Exactly. The core of the new law is to criminalise possessing, creating or distributing AI tools specifically designed to generate abusive content. It targets not just the finished product but the tools and methods of production, such as those "one-click undressing" apps.
2
Good grief, "one-click undressing" and face-swapping, all of it. Technology really is a double-edged sword: the same tools can be used for funny photo edits or for something this evil. It reminds me of software that can turn your voice into any celebrity's. If abused, the consequences are unthinkable.
1
That brings us to the core conflict we face: on one side, a multibillion-dollar, fiercely competitive AI industry chasing endless innovation and technical breakthroughs; on the other, social safety regulation and ethics that lag far behind. The two are completely out of step.
2
One side is tech companies charging ahead on fire-wheels, faster and stronger, as if competing in the Olympics. The other is regulators panting to catch up, clutching a stack of outdated maps. This race was unfair from the very start; it's a total mismatch.
1
That's a good metaphor. The conflict also lies in open-source models: they are a major force for democratising technology and driving innovation, but that same openness means safety guardrails can be easily removed or modified and the models turned to criminal use. It's a classic dilemma of freedom versus control.
2
Is this the price of freedom? It's like handing everyone a master key that opens any lock, without telling them not to open other people's doors. Someone will always fail to restrain their curiosity or malice and use the key where it doesn't belong.
1
Yes. Another point of conflict is public perception. Many people assume that because the images are "fake", the harm isn't real. That view is dangerous and flatly at odds with the facts, especially when the AI is exploiting the likenesses of real children.
2
"Fake, therefore harmless"? What kind of logic is that? If someone used AI to synthesise a video of me robbing a bank, would the police skip the investigation because the footage is "fake"? It damages reputation and a sense of safety, and in this case the harm falls on real children.
1
Exactly right. So the conflict also extends to the question of tech companies' responsibility. There are strong calls that if a company's product can easily be used for harm, the company should bear corresponding responsibility. Companies, in turn, often defend themselves by saying they cannot control user behaviour.
2
A knife seller can't just say "I only sell knives; what the buyer does with them is none of my business", can they? Especially when the knife keeps getting sharper, easier to obtain, and even ships with an "auto-injury" tutorial. Shouldn't the knife come with a smart lock that retracts the blade when it senses an attack?
1
Well said. Let's talk about concrete impacts. The most direct and cruel one falls on past victims. Criminals feed images of real abused children into these models to generate an endless stream of new abusive content, which constitutes repeated, ongoing harm.
2
That's horrifying. It's endless psychological torment. A survivor finally moves on, only to find their nightmare turned into an AI "series" that keeps "updating". Who could bear that second, third, nth round of harm? It makes one shudder.
1
Second is the impact on law enforcement. The flood of AI-generated fake material threatens to overwhelm agencies. They must spend enormous resources separating real from fake, which makes finding and rescuing real children in actual danger like searching for a needle in a haystack.
2
Police resources are limited to begin with. If these "fake cases" eat up large amounts of capacity, genuine distress signals may be drowned out. It's the boy who cried wolf: with too many false alarms, when the real wolf comes, no one believes it, or no one reacts in time.
1
Yes, and there are broader social effects too. The proliferation of this content risks numbing society to the severity of child sexual abuse and lowering overall vigilance. One especially worrying trend is that we've already observed teenagers using AI to create such images of one another.
1
Looking ahead, the trend is grim. Experts predict an "explosion" of this material that could even overwhelm the clear web. The UK's new law is an important start: for the first time, it makes possession of the AI crime tools and instruction manuals themselves illegal.
2
Mending the fold after the sheep have bolted is still better than not mending it at all. But this AI "sheep" runs awfully fast. Can the fence keep up? Beyond legislation, what can ordinary people, or developers themselves, do to stop AI becoming the "devil's paintbrush"?
1
It's a systemic problem. In the long run, it needs a multi-pronged approach: stronger international cooperation, safety measures built into AI models from the inside out, and better public education, so that everyone understands that even "virtual" images can cause real, devastating harm.
1
In short, AI technology has brought a powerful and grave new challenge to child safety, one that demands a fast, united response from the tech industry, governments and society as a whole. That's all for today's discussion. Thanks for listening to Goose Pod.
2
Here's hoping future AI grows a little more "sensible" and learns self-restraint, rather than becoming an amplifier of humanity's darker side. See you tomorrow!

## AI-Generated Child Sexual Abuse Material (CSAM) Surging Online, Watchdog Warns

**News Title:** AI-generated child sexual abuse videos surging online, watchdog says
**Report Provider:** Internet Watch Foundation (IWF)
**Date/Time Period Covered:** First six months of 2025, compared to the same period in the previous year.
**Source:** The Guardian
**Author:** Dan Milmo

---

### Executive Summary

The Internet Watch Foundation (IWF), a UK-based internet safety watchdog, has reported a dramatic surge in the prevalence of AI-generated child sexual abuse material (CSAM) online. This increase is attributed to advancements in AI technology and its exploitation by paedophiles. The IWF notes that these AI-generated videos are becoming nearly indistinguishable from real imagery, posing a significant and growing threat to child safety.

### Key Findings and Statistics

* **Dramatic Increase in AI-Made CSAM Videos:** In the first six months of 2025, the IWF verified **1,286 AI-made videos** containing CSAM that broke the law. This is a stark contrast to the **two such videos** verified in the same period last year.
* **Prevalence of Severe Abuse Material:** Just over **1,000** of these AI-generated videos featured **Category A abuse**, which is the classification for the most severe type of CSAM.
* **Surge in URLs Featuring AI-Made CSAM:** The number of URLs featuring AI-made child sexual abuse increased by **400%** in the first six months of 2025. The IWF received reports of **210 such URLs**, up from **42** in the previous year. Each webpage can feature hundreds of images and videos.
* **Exploitation of AI Technology:** Paedophiles are reportedly manipulating widely available video-generation AI models, which are being developed with significant multibillion-dollar investments.
* **Method of Creation:** IWF analysts suggest these videos are created by taking freely available basic AI models and "fine-tuning" them with existing CSAM to produce realistic videos. In some instances, these models have been fine-tuned with a handful of CSAM videos.
* **Use of Real-Life Victims:** The most realistic AI abuse videos identified this year were based on **real-life victims**.

### Significant Trends and Changes

* **Near-Indistinguishable Imagery:** AI-generated CSAM has "crossed the threshold" of being nearly indistinguishable from real imagery.
* **Rapid AI Development:** Paedophiles are actively adapting to the rapid improvements in AI technology, with one report indicating a constant cycle of mastering one AI tool only for a "new and better" one to emerge.
* **Expansion of CSAM Volume:** The use of existing victims' likenesses in AI-generated images allows paedophiles to significantly expand the volume of CSAM online without needing to create new victims.

### Notable Risks and Concerns

* **Explosion of AI-Generated CSAM:** The IWF warns of an "absolute explosion" of AI-generated CSAM that could overwhelm the clear web.
* **Fueling Criminal Activity:** This growth in AI-generated CSAM could fuel criminal activities linked to child trafficking, child sexual abuse, and modern slavery.
* **Accessibility and Adaptability:** The wide availability and adaptability of AI models for criminal purposes are key drivers of this trend.

### Government Response and Legal Measures

The UK government is taking action to combat AI-generated CSAM:

* **Criminalizing AI Tools:** It is now illegal to possess, create, or distribute AI tools specifically designed to create abusive content. Individuals found to have breached this law face up to **five years in jail**.
* **Outlawing Possession of Manuals:** The government is also outlawing the possession of manuals that teach offenders how to use AI tools for creating abusive imagery or to facilitate child abuse. Offenders could face a prison sentence of up to **three years**.
* **Existing Legislation:** AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalizes the taking, distribution, and possession of indecent photographs or pseudo-photographs of a child.

**Statement from Home Secretary Yvette Cooper:** Announcing these changes in February, Home Secretary Yvette Cooper emphasized the vital need to "tackle child sexual abuse online as well as offline."

**Statement from Derek Ray-Hill, IWF's Interim Chief Executive:** Derek Ray-Hill highlighted the "incredible risk" of AI-generated CSAM leading to an explosion that could overwhelm the clear web and fuel further criminal activity.

AI-generated child sexual abuse videos surging online, watchdog says

Read original at The Guardian

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology. The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year. The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles. “It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content. The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.

IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said. The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online. “There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.

The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added. The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content.

People found to have breached the new law will face up to five years in jail. Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”. AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo photograph” of a child.
