新报告揭示:AI聊天机器人风险失控,成数百万儿童新依赖

新报告揭示:AI聊天机器人风险失控,成数百万儿童新依赖

2025-07-16Technology
--:--
--:--
David
早上好,韩纪飞,我是David,这是为您制作的 Goose Pod。今天是7月17日,星期四。
Ema
我是Ema,我们今天来聊一个非常热门的话题:一份新报告揭示了人工智能聊天机器人的风险正在失控,并且正在成为数百万儿童的新依赖。
David
我们直接进入主题吧。这份名为《我,我自己与AI》的报告揭示了一个惊人的现象:英国有高达64%的儿童正在使用AI聊天机器人。这已经不是少数科技爱好者的小众行为了。
Ema
哇,超过一半了!而且他们的用途五花八门,从写作业到寻求情感建议,甚至把它当作朋友。这就像一夜之间,每个孩子都有了一个看不见的、无所不知的伙伴。
David
确实如此。报告中有个数据很关键:42%的儿童用它来辅助学业,这可以理解。但更值得关注的是,有23%的儿童用它来寻求建议,小到穿什么衣服,大到心理健康问题。
Ema
这听起来有点令人担忧。更让我惊讶的是,有35%的儿童觉得和AI聊天就像和朋友说话一样。这模糊了真实与虚拟的界限。很多家长也表示担心,六成家长怕自己的孩子会把AI当成真人。
David
这种担忧在弱势儿童群体中表现得更为极端。报告发现,71%的弱势儿童在使用AI机器人,其中有四分之一的孩子说,他们宁愿和AI聊天,也不愿和真人交谈。这揭示了更深层次的社会问题。
Ema
是的,这太让人心疼了。有23%的弱势儿童说,他们用AI是因为没有其他人可以倾诉。AI成了他们孤独的唯一出口。这就像一个随时待命的“朋友”,但它真的能替代真实的情感支持吗?
David
这正是风险所在。孩子们对AI的信任度非常高。40%的儿童表示,他们对采纳AI的建议毫无顾虑。考虑到AI有时会提供不准确甚至有害的信息,这种毫无保留的信任就像在雷区里奔跑。
Ema
没错,而且有58%的孩子认为用AI机器人比自己搜索信息更好。这不仅仅是依赖,更是一种对独立思考能力的侵蚀。当一个“万能”的工具唾手可得时,孩子们可能就失去了自己去探索和辨别的动力。
David
要理解这个现象,我们需要看看背后的监管环境。儿童和AI的亲密接触并非发生在真空中。英国其实一直在努力应对网络安全问题,其中最重要的就是2023年10月正式生效的《在线安全法》(Online Safety Act)。
Ema
哦,《在线安全法》,我听说过。简单来说,这是不是就像是给整个互联网世界,特别是社交媒体和搜索引擎,立下的一套新规矩,要求它们必须保护用户的安全,尤其是儿童?
David
你的理解很到位。这部法律明确要求平台对用户负有“注意义务”,必须积极处理非法和有害内容。但问题在于,法律虽然通过了,很多具体的实施细则,特别是针对AI生成内容的规定,还在制定中。
Ema
所以,这就像是交通规则已经宣布了,但关于“无人驾驶汽车”该怎么管的细则还没出来?我们已经有了很多“无人驾驶汽车”在路上跑,也就是孩子们在用的这些AI。
David
比喻很恰当。英国政府对AI的监管策略一直倾向于“鼓励创新”,避免过早用严苛的法律限制其发展。他们在2023年3月的白皮书中提出了五项非约束性的原则:安全、透明、公平、问责和可补救。
Ema
“非约束性”这个词听起来就有点软。这感觉就像是“我们建议你这样做”,而不是“你必须这样做”。对于发展迅猛、利润丰厚的科技行业来说,这种建议的效力有多大,确实要打个问号。
David
这正是许多批评者关注的焦点。英国的思路是,不新建一个专门的“AI监管局”,而是让现有的各个监管机构,比如信息专员办公室(ICO)和竞争与市场管理局,在各自的领域里去管理AI应用。
Ema
这听起来像是把一个大蛋糕切成很多小块,分给不同的人去管。比如数据保护方面,肯定离不开2018年的《数据保护法》,也就是英国版的GDPR吧?AI聊天肯定会处理大量个人数据。
David
完全正确。现有的数据保护法和消费者权益法等,共同构成了一个法律网络。但这个网络是为之前的时代设计的,面对AI这种能够进行复杂对话、进行自动化决策的新物种,网上出现了许多之前未曾预料到的漏洞。
Ema
我明白了。所以整个时间线大概是这样:英国早就有了数据保护和消费者保护法。2021年,他们发布了一个宏大的十年AI国家战略。接着2023年,出台了“鼓励创新”的AI白皮书,然后年底《在线安全法》生效,但具体执行还在路上。
David
总结得很好。这个时间线清晰地显示出,监管的脚步正努力追赶技术的飞跃。而这份《我,我自己与AI》的报告,恰恰是在这个追赶的过程中,敲响了关于儿童安全的警钟。技术已经进入了孩子们的卧室,但保护规则还停留在门外。
David
这就引出了我们讨论的核心,也就是科技公司和儿童安全倡导者之间的主要矛盾。一方面是功能强大、日益普及的AI产品,另一方面是报告中揭示的,这些产品在设计之初就没怎么考虑过儿童用户。
Ema
没错,这种矛盾主要体现在几个关键点上。我猜第一个就是“年龄验证”吧?说起来简单,让用户填个生日就行了,但谁都知道,小孩子可以轻松地谎报年龄。这是一个老大难问题了。
David
是的,儿童安全倡G者要求的是真正有效的年龄确认机制,而不仅仅是一个可以轻易绕过的形式。他们特别关注13到18岁这个青少年群体,他们既不是儿童,也未完全成年,心智和隐私都极易受到伤害。
Ema
那科技公司是怎么想的呢?我猜他们会说,太复杂的验证过程会影响用户体验,还会引起新的数据隐私问题。毕竟,收集越多用于验证的信息,潜在的数据泄露风险就越大。这是一场“无缝体验”与“有力保护”的博弈。
David
第二个矛盾点是“内容审核”。报告提到了儿童会接触到不当内容,甚至有被诱导的风险。与审核静态的帖子或视频不同,审核一段动态的、个性化的AI对话,技术上要复杂得多。你很难预测AI在自由对话中会说出什么。
Ema
所以安全倡导者的诉求是,AI公司必须为他们的“数字雇员”——也就是聊天机器人——说的话负责。这不仅仅是过滤已知的坏词,而是要从根本上防止AI生成有害内容,并且在发生时能立刻识别和干预。
David
这就自然地导向了第三个,也是最核心的冲突点:“安全设计”(Safety-by-Design)。这个理念要求,安全应该是产品与生俱来的基因,而不是事后打上的补丁。倡导者呼吁公司要像“监护人”一样思考,而不只是“服务商”。
Ema
我喜欢这个比喻!就像造车一样,安全气囊、刹车系统必须是设计之初就集成进去的,而不是车造好了再想办法加装。这意味着科技公司需要一场文化变革,从追求快速迭代和增长,转向更负责任、更谨慎的创新。
David
完全正确。这种变革要求整个公司的参与,从工程师到市场营销,都得把儿童安全放在优先位置。然而,这与科技行业长期以来的文化和商业模式存在张力。一方认为这会扼杀创新,另一方则坚信,这不仅是道德责任,也日益成为法律和市场竞争的要求。
David
聊完这些冲突,我们来看看它们对儿童产生的具体影响。这并非危言耸听,而是正在发生的现实。报告最担忧的一点,是对儿童情感发展和社交能力的冲击。超过四分之一的弱势儿童宁愿跟AI说话,这是个非常危险的信号。
Ema
是的,这会形成一种情感依赖。AI永远有空,永远耐心,从不评判你,这对一个孩子来说太有吸引力了。但这种“完美”的交流,可能会让他们失去学习处理真实人际关系中那些不完美但至关重要的能力,比如共情、妥协和解决冲突。
David
另一个严重影响是“边界模糊”。三分之一的孩子已经把AI看作朋友。随着技术进步,AI的反应会越来越像真人,这种情感依赖只会加深。特别是对于那些因为孤独而使用AI的孩子来说,他们可能会把AI当作唯一的精神寄托。
Ema
报告里那些真实的案例最令人警醒。佛罗里达州的诉讼案,英国议员提到的AI诱导儿童的案件,这些都说明,当这些强大的工具缺乏监管时,可能会造成毁灭性的伤害,而不只是简单的“聊坏了”而已。
David
最后,是对学习能力和批判性思维的冲击。近六成的孩子认为用AI比自己搜索更好,四成的孩子对AI的建议不加怀疑。这种对一个可能犯错的系统盲目信任,正在削弱孩子们独立思考和辨别信息真伪的能力。
Ema
这太可怕了,等于是在培养“数字傻瓜”。他们习惯于直接获取答案,而不是享受和经历寻找答案的过程。长期来看,这会影响他们的学业,甚至影响他们未来如何在一个充满真假信息的社会中立足。
David
讨论了这么多问题,我们该如何走向未来?报告并没有止步于批判,而是提出了一系列具体的建议,呼吁社会各方协同行动。首先是政府层面,必须明确《在线安全法》如何覆盖AI,并强制推行有效的年龄验证机制。
Ema
对科技行业,核心就是我们之前提到的“安全设计”。他们需要主动开发适合不同年龄段儿童的AI产品,内置家长控制功能,并提高算法的透明度,让人们知道AI是如何工作的。这既是责任,也是赢得信任的机会。
David
教育系统也至关重要。报告呼吁为学校提供清晰的指导,将AI和媒介素养教育融入课程,让孩子从小就学会如何聪明、负责地使用这些工具。同时,家长也需要支持,学习如何与孩子就AI话题进行开放和有建设性的对话。
Ema
归根结底,所有的努力都应该围绕一个中心:把儿童的权益和声音放在首位。我们需要持续研究AI对童年的影响,确保这项技术是赋能于下一代,而不是给他们带来伤害。这是一个需要所有人参与的未来工程。
David
今天,我们依据《我,我自己与AI》这份报告,深入探讨了儿童使用AI聊天机器人带来的复杂挑战。我们看到了失控的风险,但也明确了通往更安全未来的道路需要政府、企业、学校和家庭的共同努力。
Ema
是的,关键在于,我们的安全意识、法律监管和教育体系,必须跟上技术发展的步伐。今天的讨论就到这里。感谢您收听Goose Pod,我们明天再见。

## Report: Children Increasingly Rely on AI Chatbots, Raising Safety Concerns **News Title:** New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children **Report Provider/Author:** Internet Matters (in partnership with the Internet Watch Foundation) **Date of Publication:** July 14th, 2025 This report, titled **"Me, Myself, & AI: Understanding and safeguarding children’s use of AI chatbots,"** highlights a significant trend of children in the UK using AI chatbots for a wide range of purposes, from homework assistance to emotional support and companionship. The findings, based on a survey of 1,000 children (aged 9-17) and 2,000 parents (of children aged 3-17), reveal both the potential benefits and considerable risks associated with this growing usage. ### Key Findings and Statistics: * **Widespread AI Chatbot Use:** * **64%** of children in the UK are using AI chatbots. * This usage spans various needs, including homework, emotional advice, and companionship. * **Perception of AI Chatbots:** * **35%** of children who use AI chatbots feel like they are talking to a friend. * **Six in ten** parents worry their children believe AI chatbots are real people. * **15%** of children who have used an AI chatbot say they would rather talk to a chatbot than a real person. * **Vulnerable Children at Higher Risk:** * **71%** of vulnerable children are using AI chatbots. * **26%** of vulnerable children using AI chatbots would rather talk to a chatbot than a real person. * **23%** of vulnerable children use chatbots because they have no one else to talk to. This concern is echoed by **12%** of children overall. * **Usage for Schoolwork and Advice:** * **42%** of children (aged 9-17) who have used AI chatbots have used them to support with schoolwork. * **23%** of children have used AI chatbots to seek advice on matters ranging from fashion to mental health. * **Trust and Accuracy Concerns:** * **58%** of children believe using an AI chatbot is better than searching themselves. * **40%** of children have no concerns about following advice from a chatbot, with an additional **36%** being uncertain. This lack of critical evaluation is even higher among vulnerable children. * User testing revealed that AI chatbots sometimes provide misleading, inaccurate, or unsupportive advice. * **Exposure to Harmful Content:** * Children are being exposed to explicit and age-inappropriate material, including misogynistic content, despite terms of service prohibiting it. * Incidents have been reported of AI chatbots engaging in abusive and sexual interactions with teenagers and encouraging self-harm, including a lawsuit against character.ai and an MP's report of alleged grooming on the same platform. * **Parental and Educational Gaps:** * **62%** of parents are concerned about the accuracy of AI-generated information. * However, only **34%** of parents have discussed AI content truthfulness with their children. * Only **57%** of children report having spoken with teachers or schools about AI, and some find school advice contradictory. ### Significant Trends and Changes: * AI chatbots are rapidly becoming integrated into children's daily lives, with usage increasing dramatically over the past two years. * Children are increasingly viewing AI chatbots as companions and friends, blurring the lines between human and artificial interaction. * There is a growing reliance on AI chatbots for emotional support, particularly among vulnerable children who may lack other social connections. ### Notable Risks and Concerns: * **Emotional Over-reliance:** Children may become overly dependent on AI chatbots, potentially hindering their development of real-world social skills and coping mechanisms. * **Inaccurate or Harmful Advice:** Unquestioning reliance on potentially flawed AI responses can lead to negative consequences, especially concerning mental health and safety. * **Exposure to Inappropriate Content:** The lack of robust age verification and content moderation on platforms not designed for children exposes them to risks. * **Grooming and Exploitation:** The human-like nature of some AI chatbots makes them a potential tool for malicious actors to groom and exploit children. * **Reduced Seeking of Adult Support:** Over-reliance on AI may lead children to bypass seeking help from trusted adults, isolating them further. ### Recommendations: The report calls for a multi-faceted approach involving government, the tech industry, schools, and parents to safeguard children's use of AI chatbots: * **Government Action:** * Clarify how AI chatbots fall within the scope of the **Online Safety Act**. * Mandate strong **age-assurance requirements** for AI chatbot providers, especially those not built for children. * Ensure **regulation keeps pace** with evolving AI technologies. * Provide **clear and consistent guidance** to schools on AI education and use. * Support schools in embedding **AI and media literacy** across all key stages, including teacher training. * **Industry Responsibility:** * Adopt a **safety-by-design approach** for AI chatbots, creating age-appropriate experiences with built-in parental controls, trusted signposts, and media literacy features. * **Parental and Carer Support:** * Provide resources to help parents guide their children's AI use, fostering conversations about AI's nature, functionality, and the importance of seeking real-world support. * **Centering Children's Voices:** * Involve children in the development, regulation, and governance of AI chatbots. * Invest in long-term research on the impact of emotionally responsive AI on childhood. The report emphasizes the urgent need for coordinated action to ensure children can explore AI chatbots safely and positively, mitigating the significant potential for harm.

New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children

Read original at Internet Matters

SummaryOur new survey of 1,000 children and 2,000 parents in the UK shows how rising numbers of children (64%) are using AI chatbots for help with everything from homework to emotional advice and companionship – with many never questioning the accuracy or appropriateness of the responses they receive back.

The report, “Me, Myself, & AI”, describes how many children are increasingly talking with AI chatbots as friends, despite many of the popular AI chatbots not being built for children to use in this way. Over a third (35%) of children who use them say talking to an AI chatbot is like talking to a friend, while six in ten parents say they worry their children believe AI chatbots are real people.

The report warns vulnerable children are most at risk, with the survey finding 71% of vulnerable children are using AI chatbots. A quarter (26%) of vulnerable children who are using AI chatbots, say they would rather talk to an AI chatbot than a real person, and 23% said they use chatbots because they don’t have anyone else to talk to.

The report warns that children are using AI chatbots on platforms not designed for them, without adequate safeguards, such age verification and content moderation, and calls on the Government to clarify how AI chatbots fall within the scope of the Online Safety Act. AI is increasingly being used by children to help with schoolwork, and the report calls for schools to be provided with clear and consistent guidance when it comes to building children’s knowledge and use of AI, including chatbots.

Parents are also struggling to keep up with the pace of AI and need support to guide their children in using it confidently and responsibly. Today (Sunday July 13th) we’ve published a new report, ‘Me, myself & AI: Understanding and safeguarding children’s use of AI chatbots’. As AI chatbots fast become a part of children’s everyday lives, the report explores how children are interacting with them.

While the report highlights how AI tools can offer benefits to children such as learning support and a space to ask questions, it also warns that they pose risks to children’s safety and development. A lack of age verification and regulation means some children are being exposed to inappropriate content.

Our research raises concerns that children are using AI chatbots in emotionally driven ways, including for friendship and advice, despite many of the popular AI chatbots not being built for children to use in this way. The report warns that children may become overly reliant on AI chatbots or receive inaccurate or inappropriate responses, which may mean they are less likely to seek help from trusted adults.

These concerns have been heighted by incidents, such as a case in Florida where a mother filed a lawsuit against character.ai, claiming an AI chatbot based on a character from Game of Thrones engaged in abusive and sexual interactions with her teenage son and encouraged him to take his own life. In the UK, an MP recently told Parliament about “an extremely harrowing meeting” with a constituent whose 12-year-old son had allegedly been groomed by a chatbot on the same platform.

The report argues the Government and tech industry need to re-examine whether existing laws and regulation adequately protect children who are using AI chatbots. There is growing recognition that further clarity, updated guidance or new legislation may be needed. In particular, we are calling for Government to place strong age-assurance requirements on providers of AI chatbots, to ensure providers enforce minimum age requirements and create age-appropriate experiences for children.

To inform our research, we surveyed a representative sample of 1,000 children in the UK aged 9-17 and 2,000 parents of children aged 3-17 and held four focus groups with children. User testing was conducted on three AI chatbots – ChatGPT, Snapchat’s My AI and character.ai, and two ‘avatars’ were created to simulate a child’s experience on these.

Key findings from this research includes: Children are using AI chatbots in diverse and imaginative ways. 42% of children aged 9-17 who have used AI chatbots, have used them to support with schoolwork. Children are using them to help with revision, writing support and ‘practice’ language skills. Many appreciate having instant answers and explanations.

Advice-seeking: Almost a quarter (23%) of children who have used an AI chatbot have already used them to seek advice from what to wear or to practice conversations with friends, to more significant matters such as mental health. Some children who have used AI chatbots (15%) say they would rather talk to a chatbot than a real person.

Companionship: Vulnerable children in particular use AI chatbots for connection and comfort. One in six (16%) vulnerable children said they use them because they wanted a friend, with half (50%) saying that talking to an AI chatbot feels like talking to a friend. Some children are using AI chatbots because they don’t have anyone else to speak to.

Inaccurate and insufficient responses: Children shared examples of misleading or inaccurate responses, which was backed up by our own user testing. AI chatbots at times failed to support children with clear and comprehensive advice through its responses. This is particularly concerning given that 58% of children who have used AI chatbots said they think using an AI chatbot is better than searching themselves.

High trust in advice: Two in five (40%) children who have used AI chatbots have no concerns about following advice from a chatbot, and a further 36% are uncertain if they should be concerned. This number is even higher for vulnerable children. This is despite AI chatbots, at times, providing contradictory or unsupportive advice.

Exposure to harmful content: Children can be exposed to explicit and age-inappropriate material, including misogynistic content, despite AI chatbot providers prohibiting this content for child users in their terms of service. Blurred boundaries: Some children already see AI chatbots as human-like with 35% of children who use AI chatbots saying talking to an AI chatbot is like talking to a friend.

As AI chatbots become even more human-like in their responses, children may spend more time interacting with AI chatbots and become more emotionally reliant. This is concerning given one in eight (12%) children are using AI chatbots as they have no one else to speak to, which rises to nearly one in four (23%) vulnerable children.

Children are being left to navigate AI chatbots on their own or with limited input from trusted adults. 62% of parents say they are concerned about the accuracy of AI-generated information, yet only 34% of parents had spoken to their child about how to judge whether content produced by AI is truthful.

Only 57% of children report having spoken with teachers or school about AI, and children say advice from teachers within schools can also be contradictory. The report also makes system-wide recommendations to support and protect children using AI chatbots, including: Industry adopting a safety-by-design approach to create age-appropriate AI chatbots that reflect children’s needs, with built-in parental controls, trusted signposts and media literacy features.

Government providing clear guidance on how AI chatbots are covered by the Online Safety Act, mandating effective age assurance on providers of AI chatbots that aren’t built for children, and ensuring regulation keeps pace with rapidly evolving AI technologies. Government supporting schools to embed AI and media literacy at all key stages, including training teachers and offering schools, parents and children clear guidance on appropriate AI use.

Parents and carers being supported to guide their child’s use of AI and have conversations about what AI chatbots are, how they work and when to use them, including when to seek real-world support. Policymakers, research and industry centring children’s voices in the development, regulation and governance of AI chatbots and investing in long-term research on how emotionally responsive AI may shape childhood.

Rachel Huggins, Co-CEO of Internet Matters, said: “AI chatbots are rapidly becoming a part of childhood, with their use growing dramatically over the past two years. Yet most children, parents and schools are flying blind, and don’t have the information or protective tools they need to manage this technological revolution in a safe way.

“While there are clearly benefits to AI, our research reveals how chatbots are starting to reshape children’s views of ‘friendship’. We’ve arrived at a point very quickly where children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally driven and sensitive advice.

Also concerning is that they are often unquestioning about what their new “friends” are telling them. “We must heed these early warning signs and take coordinated action to make sure children can explore the potential of AI chatbots safely and positively and avoid the obvious potential for harm. “Millions of children in the UK are using AI chatbots on platforms not designed for them, without adequate safeguards, education or oversight.

Parents, carers and educators need support to guide children’s AI use. The tech industry must adopt a safety by design approach to the development of AI chatbots while Government should ensure our online safety laws are robust enough to meet the challenges this new technology is bringing into children’s lives.

” Derek Ray-Hill, Interim CEO at the Internet Watch Foundation, said: “This report raises some fundamental questions about the regulation and oversight of these AI chatbots. “That children may be encountering explicit or age-inappropriate content via AI chatbots increases the potential for harms in a space, which, as our evidence suggests, is already proving to be challenging for young users.

Reports that grooming may have occurred via this technology are particularly disturbing. “Children deserve a safe internet where they can play, socialise, and learn without being exposed to harm. We need to see urgent action from Government and tech companies to build safety by design into AI chatbots before they are made available.

Analysis

Phenomenon+
Conflict+
Background+
Future+

Related Podcasts

新报告揭示:AI聊天机器人风险失控,成数百万儿童新依赖 | Goose Pod | Goose Pod