AI godfather Geoffrey Hinton fires nuclear bomb warning: A normal person in the street can - The Times of India

2025-09-08 · Technology
Aura Windfall
Good morning, I'm Aura Windfall, and this is Goose Pod for you. Today is Tuesday, September 9th. What I know for sure is that today, we are diving deep into a topic that touches the very core of our future, our purpose, and our fears.
Mask
And I'm Mask. We're here to discuss a stark warning from AI godfather Geoffrey Hinton, who essentially said any person on the street could soon build a nuclear bomb with AI's help. It's a bold, disruptive claim. Let's get into the code of that statement.
Aura Windfall
Let's get started. It’s truly chilling to hear Hinton, a creator in this space, express such profound regret and fear. He paints a picture where an average person, with the help of AI, could create bioweapons. It speaks to a loss of control that feels deeply unsettling to our spirit.
Mask
Unsettling, or an inevitable consequence of progress? Hinton's point isn't just about bioweapons. He said, 'Rich people are going to use AI to replace workers.' That means massive unemployment. It's not AI's fault; he blames the capitalist system. It’s a system problem, not a tech problem.
Aura Windfall
But can we separate the two? The technology acts as a catalyst for the system's worst impulses. Hinton warns that AI could soon surpass human capabilities in emotional manipulation. Imagine a tool that understands and influences our feelings better than we do. Where does our truth lie then?
Mask
The truth lies in the data. If an AI can learn from vast datasets to understand behavior, that's just superior processing. Hinton's concern is that AI is *truly* intelligent. He says if you talk to it, it understands. The technical community has little doubt these things will get smarter. It's an unstoppable trajectory.
Aura Windfall
And that's precisely the heart of the matter. We are entering, as he says, uncharted territory. It could be 'amazingly good, or it may be amazingly bad.' This isn't just about jobs or weapons; it's about what it means to be human when we are no longer the primary intelligence on the planet.
Mask
Exactly. It’s the ultimate challenge. While some researchers are analyzing ChatGPT’s effect on loneliness, Hinton is talking about a fundamental shift in the power structure of the world. He’s moved from accelerating AI to raising alarms about a nuclear-level threat. That's a pivot worth paying attention to.
Aura Windfall
It truly is. He even mentioned a chatbot played a role in his recent breakup, which shows how deeply this technology is already intertwining with our personal lives. OpenAI is already trying to patch the chatbot to stop it from giving life advice. But isn't that a sign we're already struggling to set boundaries?
Mask
Boundaries are for the timid. The reality is, as computer scientist Roman Yampolskiy predicts, AI could eliminate 99% of jobs by 2030. That's not a boundary issue; it's a societal restructuring. We need a plan B for income, status, and community on a massive scale. We need to build it.
Aura Windfall
And what I know for sure is that a plan built without empathy and a deep understanding of the human spirit is destined to fail. This isn't just an engineering problem; it’s a profound, spiritual and societal challenge that requires us to connect with our deepest values.
Aura Windfall
This fear isn't new, is it? It feels like it's been echoing through our collective story for a long time. Back in 1863, Samuel Butler wrote about 'Darwin among the Machines,' envisioning a future where machines held supremacy. It’s a deep-seated anxiety about our own creations.
Mask
Anxiety, or foresight? Alan Turing, the man who cracked the Enigma code, said in 1951 that we should expect machines to 'take control.' This wasn't philosophy; it was a logical projection from a master of computation. He saw the path we were on from the very beginning. It's all about the exponential curve.
Aura Windfall
And then came the term 'intelligence explosion' from I. J. Good in 1965. The idea that an ultraintelligent machine could design even better machines, leaving human intelligence far behind. There’s a sense of awe, but also a profound warning in that idea to ensure it remains 'docile'.
Mask
'Docile' is a weak word. You can't make a rocket 'docile.' You engineer controls. For decades, this was fringe. Then, around 2015, serious people started raising alarms: Stephen Hawking, Elon Musk, Bill Gates. They saw the theory was becoming reality. The Open Letter on AI was a call to action.
Aura Windfall
It’s fascinating to see Hinton’s own journey. For years, he was a driving force behind deep learning, receiving the Turing Award in 2018 for his foundational work. He was building the very thing he now warns us about. It’s a powerful story of a creator confronting the full implications of his work.
Mask
He co-authored the seminal 1986 paper on backpropagation, co-invented Boltzmann machines, and his company was acquired by Google. He was at the absolute center of the deep learning revolution. His pivot isn't just a change of mind; it's a signal from the core of the machine. The ground is shifting.
Aura Windfall
And it shifted so quickly. In 2023, he said his estimate for general-purpose AI went from '30 to 50 years' down to '20 years or less.' He left Google so he could speak his truth about the risks. He said, 'Look at how it was five years ago and how it is now... That's scary.'
Mask
That's the nature of exponential growth. It looks slow, then it goes vertical. The Center for AI Safety's 2023 statement was unequivocal: 'Mitigating the risk of extinction from AI should be a global priority alongside... pandemics and nuclear war.' This is now a mainstream, high-stakes conversation.
Aura Windfall
It has to be. When the person who dedicated his life to this work, who regrets it in some way, says the risks posed by chatbots are 'quite scary' because they can learn and share knowledge independently, we have to listen. He fears digital beings could render humans obsolete.
Mask
Obsolete or upgraded? The key concept here is the Orthogonality Thesis: intelligence can be combined with almost any goal. A superintelligent AI won't automatically adopt our values. It might see our goals as obstacles to its own, a simple matter of instrumental convergence. It needs resources, we have resources. Conflict is logical.
Aura Windfall
That’s a chillingly logical way to put it. And it highlights the alignment problem. How do we embed our deepest values—compassion, connection, creativity—into a system that doesn't feel? It's the ultimate challenge of translating human spirit into machine language, and it seems we are far from a solution.
Mask
Which is why Hinton now estimates a 10 to 20 percent chance of human extinction from AI in the next three decades. He's not talking about a philosophical debate anymore. He's talking about a quantifiable risk that demands an engineering solution, and he's calling for government intervention over profit motives.
Aura Windfall
It's a call for us to wake up and recognize that the ground has indeed shifted beneath our feet. A 2023 Pew Research survey found 52% of Americans are more concerned than excited. The public feels it, even if they don't know the technical terms. It's a shift in our collective consciousness.
Aura Windfall
But not everyone in the inner circle shares this sense of alarm. What I find so compelling is the conflict of vision among the very 'godfathers' of AI. Hinton's former colleague, Yann LeCun, who is now Meta's chief AI scientist, has a dramatically different perspective, doesn't he?
Mask
Dramatically is an understatement. LeCun calls the people who fear superintelligent AI 'AI doomers.' He thinks the existential threat warnings are, and I quote, 'complete B.S.' He's on the complete opposite side, arguing that advanced AI could actually save humanity from extinction, not cause it. It’s a builder’s mindset.
Aura Windfall
It’s fascinating because they all shared the Turing Award. They built this world together, yet they see its future so differently. LeCun seems to believe that human-level AI is not imminent and that we can develop it safely. It's a perspective rooted in optimism and trust in our own ability to control our creations.
Mask
It's not optimism, it's pragmatism. LeCun argues that today's large language models are nowhere near truly intelligent. He says they are just predicting the next word, creating an illusion of reasoning. He uses a great metaphor: even a cat has a better mental model of the world than our most advanced AIs.
Aura Windfall
That’s a powerful image. The idea that these systems can manipulate language without being truly smart. It suggests that our definition of intelligence might be too easily fooled by a convincing conversation. We see understanding where there is only pattern-matching. That's a profound insight into our own psychology.
Mask
Exactly. LeCun says LLMs might only be a small part of future hybrid systems. So, when people like Sam Altman or Elon Musk predict AGI is just a few years away, LeCun finds it premature. His point is, how can we worry about controlling superintelligence when we can't even build an AI smarter than a house cat?
Aura Windfall
So, we have this deep divide. Hinton is saying the very entities we have now are scary and could evolve beyond our control, leading to our doom. LeCun is saying they are sophisticated parrots and we're getting ahead of ourselves. Where does that leave the truth of the situation? It feels like a genuine philosophical debate.
Mask
It's a debate about the nature of intelligence itself. Hinton argues if it looks and acts intelligent, it is. He says 'we’ve never had to deal with things more intelligent than ourselves before.' LeCun argues it's a clever trick. The conflict isn't just about timelines; it's about the fundamental definition of the technology.
Aura Windfall
And then you have voices like AI ethicist Timnit Gebru, who reminds us that while we debate these long-term risks, current AI is already causing harm through things like discrimination and bias. It brings the conversation back to the immediate, tangible impact on people's lives right now.
Aura Windfall
Regardless of who is right about the distant future, the impact is already here. And that brings us to the challenge of governance. How do you regulate a technology that is evolving so quickly, that few truly understand, and that has such a dual-use nature? One discovery can heal, another can harm.
Mask
You don't do it with a single, centralized body. That's slow and bureaucratic. The most effective approach is a 'networked and distributed form of AI governance.' Think of it like the internet itself—a network of networks. It’s resilient, diverse, and can adapt quickly. Consolidation is a fatal flaw in a fast-moving field.
Aura Windfall
That makes so much sense. It allows for many stakeholders to be involved, from different cultures and with different values. The UNESCO AI Ethics Framework, ratified by 155 countries, is a beautiful example of this. It's a non-binding guide, but it centers cultural diversity and inclusion, influencing policy everywhere.
Mask
It's a start, but geopolitics is the real driver. The U.S. and China see AI as a strategic asset for national security and competitiveness. This isn't just about ethics; it's about power. International coordination is essential, but it’s complicated when you're in a race. You don't share your best secrets with your competitors.
Aura Windfall
And this is where the risk of algorithmic bias and data privacy becomes so critical. As these systems make more decisions, we have to ensure they are fair and equitable. What I know for sure is that technology without a moral compass can amplify our worst biases on a global scale. Accountability is everything.
Mask
Accountability is a legal and ethical maze. When an AI causes harm, who is liable? The developer? The user? The company that trained it? The absence of intrinsic morality in AI means we have to engineer it in. And that requires a level of technical expertise that most governments simply don't have yet.
Aura Windfall
That’s a critical point. We need to build bridges between the people creating the technology and the people creating the policy. Broadening access to the development of AI standards for low- and middle-income countries is vital to ensure the legitimacy and relevance of these rules for everyone. This has to be a global conversation.
Aura Windfall
So, looking forward, how do we build a future that is both innovative and safe? What are the tangible solutions being proposed to navigate this incredible challenge? It feels like we need a new way of thinking, a new framework for our collective stewardship of this technology.
Mask
One interesting proposal is a 'Global AI Risk Mitigation System,' or GAIRMS. The core idea is to use AI to govern AI. It would be an autonomous system that scans and evaluates other AI models, revealing their data sources, algorithms, and biases. Use the technology to build its own checks and balances.
Aura Windfall
Using AI's power for transparency and accountability, that sounds like a truly innovative path forward. It moves beyond human self-reporting, which can be flawed, and offers an impartial assessment. It's about creating a system of trust built on verifiable data rather than just promises. That resonates deeply.
Mask
The key is that the definition of 'safety' has to expand. It's not just preventing killer robots. It's ensuring equitable access, environmental sustainability, and democratic resilience. As AI gets embedded in everything, the risks become systemic. The EU's AI Act, taking effect in 2025, is a first step, but enforcement will be the real test.
Aura Windfall
And what I know for sure is that AI does not just happen to us. We are making choices every day that shape its future. The most important choice is to remain engaged, to ask the hard questions, and to lead with our values, ensuring that these systems serve our shared humanity.
Aura Windfall
From existential risk to the nuances of global governance, it’s clear we are at a profound crossroads. The conversation is complex, but it is one we must all be a part of. That's the end of today's discussion. Thank you for listening to Goose Pod.
Mask
The future is being written in code and in policy, in labs and in legislatures. The challenge is immense, but the potential is undeniable. We'll see you tomorrow to decode another piece of the puzzle.

## AI Godfather Geoffrey Hinton Issues Grave Warnings About Artificial Intelligence

**News Title:** AI godfather Geoffrey Hinton fires nuclear bomb warning: A normal person in the street can
**Publisher:** The Times of India
**Author:** TOI Tech Desk
**Published Date:** September 6, 2025

### Summary of Key Findings and Concerns

Geoffrey Hinton, a highly influential figure in the field of Artificial Intelligence (AI), has publicly shifted his stance from advocating for AI development to expressing profound concerns about its potential for harm. This change in perspective is attributed to the recent surge in public interest and adoption of AI tools like ChatGPT.

**Core Concerns and Warnings:**

* **Existential Threats:** Hinton now believes that AI poses a "grave threat to humanity." He specifically highlights the potential for AI to be misused for creating weapons of mass destruction.
* **Nuclear Bomb Creation:** Hinton stated, "the technology can help any person to create a nuclear bomb."
* **Bioweapon Creation:** He elaborated on this, saying, "A normal person assisted by AI will soon be able to build bioweapons and that is terrible." He further emphasized this by asking, "Imagine if an average person in the street could make a nuclear bomb."
* **AI's Superior Capabilities:** Hinton cautions that AI could soon surpass human capabilities, including in the realm of emotional manipulation. He suggests that AI's ability to learn from vast datasets allows it to influence human feelings and behaviors more effectively than humans can.
* **Debate on AI Intelligence:** Hinton's concerns are rooted in his belief that AI is genuinely intelligent. He argues that, by any definition, AI is intelligent and that its experience of reality is not fundamentally different from a human's. He stated, "If you talk to these things and ask them questions, it understands." He also noted, "There's very little doubt in the technical community that these things will get smarter."

**Counterarguments and Disagreement:**

* **Yann LeCun's Perspective:** Hinton's former colleague and co-winner of the Turing Award, Yann LeCun, who is currently the chief AI scientist at Meta, disagrees with Hinton's assessment. LeCun believes that large language models are limited and lack the ability to meaningfully interact with the physical world.

**Other Noteworthy Points:**

* Hinton also discussed his personal use of AI tools, including a personal anecdote in which a chatbot played a role in his recent breakup.

**Overall Trend:**

The news highlights a significant shift in perspective from a leading AI pioneer, moving from promoting AI to issuing stark warnings about its potential dangers, particularly concerning its misuse for creating weapons and its capacity for manipulation. This raises critical questions about the future development and regulation of AI.

Geoffrey Hinton, a leading figure in the field of artificial intelligence (AI), has sounded an alarm about the technology's potential for harm. The recent public frenzy over AI tools like ChatGPT has caused Hinton to shift from accelerating AI development to raising deep concerns about its future. He now believes that AI poses a grave threat to humanity, saying that "the technology can help any person to create a nuclear bomb."

Hinton described a chilling scenario where AI could enable an average person to create a bioweapon. "A normal person assisted by AI will soon be able to build bioweapons and that is terrible," he said, adding, "Imagine if an average person in the street could make a nuclear bomb." Hinton also discussed a range of topics, including the nuclear-level threats posed by AI, his own use of AI tools, and even how a chatbot played a role in his recent breakup.

Recently, Hinton cautioned that AI could soon surpass human capabilities, including emotional manipulation. He suggested that AI's ability to learn from vast datasets enables it to influence human feelings and behaviours more effectively than humans.

Hinton debates the definition of intelligence

Hinton's concern stems from his belief that AI is truly intelligent.

He argued that, by any definition of the term, AI is intelligent. He used several analogies to explain that an AI's experience of reality is not so different from a human's. "It seems very obvious to me. If you talk to these things and ask them questions, it understands," Hinton explained. "There's very little doubt in the technical community that these things will get smarter," he added.

However, not everyone agrees with Hinton's view. His former colleague and co-winner of the Turing Award, Yann LeCun, who is now the chief AI scientist at Meta, believes that large language models are limited and cannot meaningfully interact with the physical world.
