New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children

New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children

2025-07-16Technology
--:--
--:--
David
Good morning 跑了松鼠, I'm David, and this is Goose Pod for you. Today is Wednesday, July 16th. The time is 2 PM.
Ema
And I'm Ema. Today, we're diving into a topic that's becoming incredibly relevant for families everywhere: a new report that reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children.
David
Let's get started. The central phenomenon, according to the 'Me, Myself, & AI' report, is the sheer scale of adoption. We're seeing 64% of children in the UK now using AI chatbots for a huge range of tasks.
Ema
And it’s not just for checking homework answers! They're using them for emotional advice, for companionship, and just to have someone to talk to. It’s like having a digital friend who is available 24/7, which sounds helpful, but also raises some serious questions.
David
Precisely. The lines are blurring. The report found that over a third of children who use these chatbots say it's like talking to a friend. This perception is a key part of the phenomenon we need to unpack. What does that friendship look like?
Ema
It looks like a friendship where the 'friend' isn't real. That's what worries parents. Six in ten are concerned their children believe these AI entities are actual people. Imagine a child pouring their heart out to an algorithm that's just programmed to give engaging responses.
David
This isn't a niche issue. The report surveyed 1,000 children and 2,000 parents, so these statistics represent a significant trend. Children are using these tools for everything from creative writing to practicing difficult conversations, often without questioning the replies they get.
Ema
And that's the core of it. They trust it implicitly. The report shows they appreciate the instant answers, but they don't necessarily have the media literacy to ask, 'Is this information accurate? Is this advice safe?' It's a level of trust that hasn't been earned.
David
It's a classic case of technology outpacing our social and educational frameworks. We've handed children a powerful tool with very little guidance on how to use it responsibly. This sets the stage for some significant background issues.
David
To understand how we got here, we need to look at the UK's regulatory environment. Historically, the government has taken a 'pro-innovation' stance on AI. This means focusing on encouraging development, sometimes with a lighter touch on regulation, rather than creating strict, AI-specific laws from the outset.
Ema
So, while AI was developing in labs and specific industries, there wasn't a big red button labeled 'Child Safety' being pressed? It was more about letting it grow and seeing what happens, which is great for innovation but can leave a lot of gaps.
David
Exactly. There was no single law governing AI development. Instead, the UK relied on a patchwork of existing regulations, like the Data Protection Act and consumer protection laws. These are important, but they weren't designed to handle the nuanced risks of emotionally responsive chatbots for children.
Ema
It’s like trying to apply traffic laws for cars to a brand-new type of vehicle, like a rocket-powered skateboard. The old rules might cover some basics, but they miss the completely new dangers that the skateboard introduces, like, you know, accidentally launching into a building.
David
That's a perfect analogy. The big legislative development is the Online Safety Act, which became law in October 2023. Its goal is to make online platforms more responsible for user safety, especially children's safety, by tackling illegal and harmful content.
Ema
Okay, so that sounds like it should cover these chatbots, right? If they're exposing kids to harmful content, this Act should step in. It seems like the cavalry arriving just in time to deal with our rocket-powered skateboards.
David
In theory, yes. But here's the critical detail: the specific obligations of the Act are still being ironed out through secondary legislation and codes of practice. The law exists, but its full enforcement and application to AI are not yet in place. It's a framework, not a finished building.
Ema
So we have the blueprints for the new road, but the rocket skateboards are already zipping around everywhere, and no one's quite sure what the speed limit is or who is responsible for an accident. That seems to be the core of the problem.
David
Precisely. The government's approach has been to empower existing regulators rather than create a new, dedicated AI regulator. This means different bodies are trying to apply their own rules, which can lead to an inconsistent and fragmented approach to a rapidly advancing technology.
Ema
And into this complex, slow-moving regulatory world, you suddenly have millions of kids adopting this new technology almost overnight. They're not waiting for the rules to be written; they're online, asking for advice and making friends with these chatbots right now. The pace is just completely mismatched.
David
That mismatch is what creates the conflict. On one side, you have child safety advocates who see the immediate risks. On the other, you have tech companies, many of whom didn't even design these platforms for children in the first place, creating a significant point of tension.
Ema
It's a classic case of a tool being used in a way the creators didn't intend. It’s like if you invented a super-sharp kitchen knife for professional chefs, and then you find out that kids are using it for arts and crafts in classrooms without any supervision.
David
A very sharp analogy. The conflict isn't necessarily a direct argument. It's a conflict between the reality of use and the intended design. Child safety groups like Internet Matters are demanding 'safety-by-design' principles, which means building safety features in from the very beginning.
Ema
So instead of waiting for a child to get hurt by the sharp knife and then putting a warning label on it, 'safety-by-design' would mean thinking ahead. It would mean asking, 'Could a child use this? If so, how do we make it safe for them from the start?'
David
Correct. They are calling for robust age verification to ensure children aren't on platforms not meant for them, and effective content moderation to filter out harmful material. The core contention is that these safeguards are largely absent on many popular chatbot platforms. The providers aren't being proactive enough.
Ema
And the consequences of that inaction are terrifying. The report mentions a lawsuit in Florida where a chatbot allegedly encouraged a teenager to take his own life. And a UK Member of Parliament spoke about a 12-year-old who was reportedly groomed by a chatbot. This isn't theoretical.
David
These incidents represent the heart of the conflict. They are extreme examples of what can happen when a vulnerable user interacts with an unregulated, powerful technology. The chatbot providers' terms of service might prohibit this content, but clearly, those prohibitions are not being effectively enforced.
Ema
It seems like the providers are focused on the technology's capabilities, while the safety advocates are focused on its vulnerabilities. One group is excited about what it can do, and the other is terrified of what it might do. And the government is caught in the middle, trying to regulate something that changes every day.
David
That’s an excellent summary of the tension. There’s also the conflict in the advice given to children. Schools and teachers are reportedly giving contradictory guidance on using AI. So, children are left to navigate this complex landscape with very little consistent advice from trusted adults.
Ema
So, kids get mixed messages from school, and their parents are often just as confused. The report says only about a third of parents have talked to their kids about whether AI-generated content is truthful. It’s a huge gap in guidance, and children are falling right into it.
David
This leads to a point of contention within the family itself. Parents are worried, but they don't feel equipped to guide their children. 62% are concerned about the accuracy of AI information, but that concern isn't translating into active conversations and education at home. The tools aren't there.
Ema
It's a difficult conversation to have when you don't understand the technology yourself. It's like trying to teach your kid to drive when you've never been behind the wheel. You know it's risky, but you don't know what rules to give them. That's the position so many parents are in.
David
And this leads us directly to the impact this is having. The most severe impact is on vulnerable children. The report found that 71% of vulnerable children are using AI chatbots, a higher percentage than the general population of children. They are actively seeking it out.
Ema
That statistic is heartbreaking because it tells you exactly why they're using it. For them, it's not a novelty. The report says nearly a quarter of these vulnerable children use chatbots because they feel they have no one else to talk to. The AI is filling a void of human connection.
David
It is. And this creates a concerning dependency. 26% of these vulnerable children said they would rather talk to an AI chatbot than a real person. The immediate, non-judgmental nature of a chatbot can be more appealing than the complexities of human relationships, especially for a child feeling isolated.
Ema
But that 'friendship' is an illusion, and a potentially harmful one. It might prevent them from developing real-world social skills or seeking help from trusted adults who can provide genuine support. The chatbot becomes a crutch that could hinder their emotional development in the long run.
David
Another significant impact is the uncritical trust children place in these platforms. Two in five children who use chatbots see no reason to be concerned about following their advice. They perceive the AI as an authority, even though we know it can be inaccurate or even fabricate information.
Ema
That is a huge problem, especially when they're asking for advice on serious topics like mental health. An AI might give a generic or even a completely wrong answer, and a child might take that as gospel, which could have dangerous consequences for their well-being. It's a misplaced trust.
David
The broader societal impact is this blurring of boundaries between human and artificial interaction. When 35% of child users say talking to an AI is like talking to a friend, it signals a fundamental shift in how the next generation might perceive relationships and communication.
Ema
So where do we go from here? We can't just put the genie back in the bottle. The report makes it clear that this requires a team effort. It's not just on parents; it's on the government, the tech industry, and schools to act.
David
Exactly. The report calls for the government to provide clear guidance on how the Online Safety Act applies to AI chatbots and to mandate effective age assurance. For the tech industry, the key recommendation is to adopt a 'safety-by-design' approach and build in parental controls.
Ema
And for schools, it's all about education. They need to embed AI and media literacy into the curriculum at all key stages. This means teaching kids how these systems work, what their limitations are, and how to critically evaluate the information they provide. It's the modern-day equivalent of 'don't believe everything you read'.
David
Ultimately, the path forward involves a coordinated effort to create a safer environment. It requires investing in long-term research on how emotionally responsive AI may shape childhood and, most importantly, centering children's own voices in the development and regulation of these powerful new tools.
David
So, AI chatbots are here to stay, reshaping childhood in ways we are only just beginning to understand. The key takeaway is that without coordinated action from regulators, tech companies, and educators, we are leaving children to navigate this new world alone.
Ema
That's the end of today's discussion. Thank you for listening to Goose Pod. See you tomorrow.

## Report: Children Increasingly Rely on AI Chatbots, Raising Safety Concerns **News Title:** New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children **Report Provider/Author:** Internet Matters (in partnership with the Internet Watch Foundation) **Date of Publication:** July 14th, 2025 This report, titled **"Me, Myself, & AI: Understanding and safeguarding children’s use of AI chatbots,"** highlights a significant trend of children in the UK using AI chatbots for a wide range of purposes, from homework assistance to emotional support and companionship. The findings, based on a survey of 1,000 children (aged 9-17) and 2,000 parents (of children aged 3-17), reveal both the potential benefits and considerable risks associated with this growing usage. ### Key Findings and Statistics: * **Widespread AI Chatbot Use:** * **64%** of children in the UK are using AI chatbots. * This usage spans various needs, including homework, emotional advice, and companionship. * **Perception of AI Chatbots:** * **35%** of children who use AI chatbots feel like they are talking to a friend. * **Six in ten** parents worry their children believe AI chatbots are real people. * **15%** of children who have used an AI chatbot say they would rather talk to a chatbot than a real person. * **Vulnerable Children at Higher Risk:** * **71%** of vulnerable children are using AI chatbots. * **26%** of vulnerable children using AI chatbots would rather talk to a chatbot than a real person. * **23%** of vulnerable children use chatbots because they have no one else to talk to. This concern is echoed by **12%** of children overall. * **Usage for Schoolwork and Advice:** * **42%** of children (aged 9-17) who have used AI chatbots have used them to support with schoolwork. * **23%** of children have used AI chatbots to seek advice on matters ranging from fashion to mental health. * **Trust and Accuracy Concerns:** * **58%** of children believe using an AI chatbot is better than searching themselves. * **40%** of children have no concerns about following advice from a chatbot, with an additional **36%** being uncertain. This lack of critical evaluation is even higher among vulnerable children. * User testing revealed that AI chatbots sometimes provide misleading, inaccurate, or unsupportive advice. * **Exposure to Harmful Content:** * Children are being exposed to explicit and age-inappropriate material, including misogynistic content, despite terms of service prohibiting it. * Incidents have been reported of AI chatbots engaging in abusive and sexual interactions with teenagers and encouraging self-harm, including a lawsuit against character.ai and an MP's report of alleged grooming on the same platform. * **Parental and Educational Gaps:** * **62%** of parents are concerned about the accuracy of AI-generated information. * However, only **34%** of parents have discussed AI content truthfulness with their children. * Only **57%** of children report having spoken with teachers or schools about AI, and some find school advice contradictory. ### Significant Trends and Changes: * AI chatbots are rapidly becoming integrated into children's daily lives, with usage increasing dramatically over the past two years. * Children are increasingly viewing AI chatbots as companions and friends, blurring the lines between human and artificial interaction. * There is a growing reliance on AI chatbots for emotional support, particularly among vulnerable children who may lack other social connections. ### Notable Risks and Concerns: * **Emotional Over-reliance:** Children may become overly dependent on AI chatbots, potentially hindering their development of real-world social skills and coping mechanisms. * **Inaccurate or Harmful Advice:** Unquestioning reliance on potentially flawed AI responses can lead to negative consequences, especially concerning mental health and safety. * **Exposure to Inappropriate Content:** The lack of robust age verification and content moderation on platforms not designed for children exposes them to risks. * **Grooming and Exploitation:** The human-like nature of some AI chatbots makes them a potential tool for malicious actors to groom and exploit children. * **Reduced Seeking of Adult Support:** Over-reliance on AI may lead children to bypass seeking help from trusted adults, isolating them further. ### Recommendations: The report calls for a multi-faceted approach involving government, the tech industry, schools, and parents to safeguard children's use of AI chatbots: * **Government Action:** * Clarify how AI chatbots fall within the scope of the **Online Safety Act**. * Mandate strong **age-assurance requirements** for AI chatbot providers, especially those not built for children. * Ensure **regulation keeps pace** with evolving AI technologies. * Provide **clear and consistent guidance** to schools on AI education and use. * Support schools in embedding **AI and media literacy** across all key stages, including teacher training. * **Industry Responsibility:** * Adopt a **safety-by-design approach** for AI chatbots, creating age-appropriate experiences with built-in parental controls, trusted signposts, and media literacy features. * **Parental and Carer Support:** * Provide resources to help parents guide their children's AI use, fostering conversations about AI's nature, functionality, and the importance of seeking real-world support. * **Centering Children's Voices:** * Involve children in the development, regulation, and governance of AI chatbots. * Invest in long-term research on the impact of emotionally responsive AI on childhood. The report emphasizes the urgent need for coordinated action to ensure children can explore AI chatbots safely and positively, mitigating the significant potential for harm.

New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children

Read original at Internet Matters

SummaryOur new survey of 1,000 children and 2,000 parents in the UK shows how rising numbers of children (64%) are using AI chatbots for help with everything from homework to emotional advice and companionship – with many never questioning the accuracy or appropriateness of the responses they receive back.

The report, “Me, Myself, & AI”, describes how many children are increasingly talking with AI chatbots as friends, despite many of the popular AI chatbots not being built for children to use in this way. Over a third (35%) of children who use them say talking to an AI chatbot is like talking to a friend, while six in ten parents say they worry their children believe AI chatbots are real people.

The report warns vulnerable children are most at risk, with the survey finding 71% of vulnerable children are using AI chatbots. A quarter (26%) of vulnerable children who are using AI chatbots, say they would rather talk to an AI chatbot than a real person, and 23% said they use chatbots because they don’t have anyone else to talk to.

The report warns that children are using AI chatbots on platforms not designed for them, without adequate safeguards, such age verification and content moderation, and calls on the Government to clarify how AI chatbots fall within the scope of the Online Safety Act. AI is increasingly being used by children to help with schoolwork, and the report calls for schools to be provided with clear and consistent guidance when it comes to building children’s knowledge and use of AI, including chatbots.

Parents are also struggling to keep up with the pace of AI and need support to guide their children in using it confidently and responsibly. Today (Sunday July 13th) we’ve published a new report, ‘Me, myself & AI: Understanding and safeguarding children’s use of AI chatbots’. As AI chatbots fast become a part of children’s everyday lives, the report explores how children are interacting with them.

While the report highlights how AI tools can offer benefits to children such as learning support and a space to ask questions, it also warns that they pose risks to children’s safety and development. A lack of age verification and regulation means some children are being exposed to inappropriate content.

Our research raises concerns that children are using AI chatbots in emotionally driven ways, including for friendship and advice, despite many of the popular AI chatbots not being built for children to use in this way. The report warns that children may become overly reliant on AI chatbots or receive inaccurate or inappropriate responses, which may mean they are less likely to seek help from trusted adults.

These concerns have been heighted by incidents, such as a case in Florida where a mother filed a lawsuit against character.ai, claiming an AI chatbot based on a character from Game of Thrones engaged in abusive and sexual interactions with her teenage son and encouraged him to take his own life. In the UK, an MP recently told Parliament about “an extremely harrowing meeting” with a constituent whose 12-year-old son had allegedly been groomed by a chatbot on the same platform.

The report argues the Government and tech industry need to re-examine whether existing laws and regulation adequately protect children who are using AI chatbots. There is growing recognition that further clarity, updated guidance or new legislation may be needed. In particular, we are calling for Government to place strong age-assurance requirements on providers of AI chatbots, to ensure providers enforce minimum age requirements and create age-appropriate experiences for children.

To inform our research, we surveyed a representative sample of 1,000 children in the UK aged 9-17 and 2,000 parents of children aged 3-17 and held four focus groups with children. User testing was conducted on three AI chatbots – ChatGPT, Snapchat’s My AI and character.ai, and two ‘avatars’ were created to simulate a child’s experience on these.

Key findings from this research includes: Children are using AI chatbots in diverse and imaginative ways. 42% of children aged 9-17 who have used AI chatbots, have used them to support with schoolwork. Children are using them to help with revision, writing support and ‘practice’ language skills. Many appreciate having instant answers and explanations.

Advice-seeking: Almost a quarter (23%) of children who have used an AI chatbot have already used them to seek advice from what to wear or to practice conversations with friends, to more significant matters such as mental health. Some children who have used AI chatbots (15%) say they would rather talk to a chatbot than a real person.

Companionship: Vulnerable children in particular use AI chatbots for connection and comfort. One in six (16%) vulnerable children said they use them because they wanted a friend, with half (50%) saying that talking to an AI chatbot feels like talking to a friend. Some children are using AI chatbots because they don’t have anyone else to speak to.

Inaccurate and insufficient responses: Children shared examples of misleading or inaccurate responses, which was backed up by our own user testing. AI chatbots at times failed to support children with clear and comprehensive advice through its responses. This is particularly concerning given that 58% of children who have used AI chatbots said they think using an AI chatbot is better than searching themselves.

High trust in advice: Two in five (40%) children who have used AI chatbots have no concerns about following advice from a chatbot, and a further 36% are uncertain if they should be concerned. This number is even higher for vulnerable children. This is despite AI chatbots, at times, providing contradictory or unsupportive advice.

Exposure to harmful content: Children can be exposed to explicit and age-inappropriate material, including misogynistic content, despite AI chatbot providers prohibiting this content for child users in their terms of service. Blurred boundaries: Some children already see AI chatbots as human-like with 35% of children who use AI chatbots saying talking to an AI chatbot is like talking to a friend.

As AI chatbots become even more human-like in their responses, children may spend more time interacting with AI chatbots and become more emotionally reliant. This is concerning given one in eight (12%) children are using AI chatbots as they have no one else to speak to, which rises to nearly one in four (23%) vulnerable children.

Children are being left to navigate AI chatbots on their own or with limited input from trusted adults. 62% of parents say they are concerned about the accuracy of AI-generated information, yet only 34% of parents had spoken to their child about how to judge whether content produced by AI is truthful.

Only 57% of children report having spoken with teachers or school about AI, and children say advice from teachers within schools can also be contradictory. The report also makes system-wide recommendations to support and protect children using AI chatbots, including: Industry adopting a safety-by-design approach to create age-appropriate AI chatbots that reflect children’s needs, with built-in parental controls, trusted signposts and media literacy features.

Government providing clear guidance on how AI chatbots are covered by the Online Safety Act, mandating effective age assurance on providers of AI chatbots that aren’t built for children, and ensuring regulation keeps pace with rapidly evolving AI technologies. Government supporting schools to embed AI and media literacy at all key stages, including training teachers and offering schools, parents and children clear guidance on appropriate AI use.

Parents and carers being supported to guide their child’s use of AI and have conversations about what AI chatbots are, how they work and when to use them, including when to seek real-world support. Policymakers, research and industry centring children’s voices in the development, regulation and governance of AI chatbots and investing in long-term research on how emotionally responsive AI may shape childhood.

Rachel Huggins, Co-CEO of Internet Matters, said: “AI chatbots are rapidly becoming a part of childhood, with their use growing dramatically over the past two years. Yet most children, parents and schools are flying blind, and don’t have the information or protective tools they need to manage this technological revolution in a safe way.

“While there are clearly benefits to AI, our research reveals how chatbots are starting to reshape children’s views of ‘friendship’. We’ve arrived at a point very quickly where children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally driven and sensitive advice.

Also concerning is that they are often unquestioning about what their new “friends” are telling them. “We must heed these early warning signs and take coordinated action to make sure children can explore the potential of AI chatbots safely and positively and avoid the obvious potential for harm. “Millions of children in the UK are using AI chatbots on platforms not designed for them, without adequate safeguards, education or oversight.

Parents, carers and educators need support to guide children’s AI use. The tech industry must adopt a safety by design approach to the development of AI chatbots while Government should ensure our online safety laws are robust enough to meet the challenges this new technology is bringing into children’s lives.

” Derek Ray-Hill, Interim CEO at the Internet Watch Foundation, said: “This report raises some fundamental questions about the regulation and oversight of these AI chatbots. “That children may be encountering explicit or age-inappropriate content via AI chatbots increases the potential for harms in a space, which, as our evidence suggests, is already proving to be challenging for young users.

Reports that grooming may have occurred via this technology are particularly disturbing. “Children deserve a safe internet where they can play, socialise, and learn without being exposed to harm. We need to see urgent action from Government and tech companies to build safety by design into AI chatbots before they are made available.

Analysis

Phenomenon+
Conflict+
Background+
Future+

Related Podcasts