China Is Taking AI Safety Seriously. So Must the U.S.

2025-08-19 · Technology
Aura Windfall
Good morning, I'm Aura Windfall, and this is Goose Pod for you. Today is Wednesday, August 20th. What I know for sure is that today's conversation is one we all need to hear.
Mask
And I'm Mask. We're here to discuss a critical issue: "China Is Taking AI Safety Seriously. So Must the U.S." It's not just a headline; it’s a strategic imperative. Let's not waste any time.
Aura Windfall
Let's get started. There's a pervasive and, I think, flawed idea in the U.S. that China doesn't care about AI safety, so we shouldn't either. It's used as a justification to race ahead recklessly, but that feels like a profound misunderstanding of the path to true progress.
Mask
It's a classic case of misinterpreting the competition. They're not being reckless; they're being strategic. China's top tech official, Ding Xuexiang, said it perfectly: "If the braking system isn’t under control, you can’t step on the accelerator with confidence." Safety isn't a bug; it's a feature for them.
Aura Windfall
Exactly! It’s like the approach Waymo is taking with self-driving cars. Even as they explore generative AI, they insist on keeping LiDAR and radar. That’s their braking system. It's a safety net for those unpredictable moments, which is a truth we can all understand in our own lives.
Mask
A safety net is one way to put it. I see it as a foundation for aggressive expansion. They've logged 100 million driverless miles because they built a system they can trust. China is applying the same logic to their entire AI ecosystem, removing thousands of non-compliant products. That's not weakness; that's quality control.
Aura Windfall
It’s about building a sustainable future. In the first half of this year alone, China issued more national AI standards than in the last three years combined. There's a deep lesson there about intention. What I know for sure is that building anything lasting requires a solid, thoughtful foundation.
Mask
And while they're laying down tracks, the U.S. is just talking. The last significant meeting between U.S. and Chinese leaders on AI risk was back in May 2024. There were hints of more talks, but nothing materialized. It’s a massive missed opportunity to define the rules of the road.
Aura Windfall
It truly is. China is open to collaboration, launching dialogues with the UK and contributing to international safety reports. They are reaching out. It’s a moment that calls for connection, for finding that shared purpose in ensuring this powerful technology serves all of humanity safely and wisely.
Mask
Look, collaboration is fine, but let's be clear. The U.S. is trying to bring lawyers to a technology showdown, designing these "exquisite webs" of export controls to ensnare Chinese chipmakers. It's a containment strategy, but it might just be forcing them to innovate faster and build their own ecosystem.
Aura Windfall
And isn't that the greatest irony? In trying to slow them down, we might be unintentionally fueling their resolve and ingenuity. It brings up a powerful question: what are we truly trying to build? Is it a wall, or is it a bridge to a safer future for everyone?
Mask
We're trying to win. And to win, you need to understand your opponent's strategy. Their strategy is that safety enables speed. The U.S. narrative that safety is a hindrance is not just wrong, it's a dangerous liability that could cost us the lead. We need to wake up.
Aura Windfall
This brings us to the bigger picture, this narrative of an "AI Cold War." It's a powerful and evocative frame, but I wonder if it truly serves us. It positions everything as a battle, a zero-sum game, which can blind us to other possibilities and shared interests.
Mask
It's a battle because it is one. Policymakers in both Washington and Beijing see AI as absolutely crucial for global dominance, especially in defense. The U.S. Department of Defense is explicit: they want "decisive superiority" on the battlefield. You don't get that by holding hands; you get it by innovating faster.
Aura Windfall
And China is mirroring that with their focus on "intelligentized warfare." It creates this intense pressure cooker. But what I know for sure is that when we are driven by fear, we often create the very outcomes we are trying to avoid. This narrative risks becoming a self-fulfilling prophecy.
Mask
It's not fear; it's physics. It's action and reaction. China has made AI a national priority, weaving it into every government plan. They've poured an estimated $200 billion into it over the last decade. The U.S. has the CHIPS Act, but our strength is the private sector—the Googles and OpenAIs. It's different, but the goal is the same.
Aura Windfall
That's a fascinating contrast in approach. The U.S. ecosystem is more decentralized, driven by competition, while China's is a state-led, top-down strategy. There are strengths and weaknesses to both. China gets rapid adoption, but perhaps at the cost of the kind of unbridled creativity that can happen in a freer market.
Mask
Right. China can implement smart cities and facial recognition overnight. That gives them a massive data advantage. Their population size and less stringent data privacy rules mean their AI models learn from an ocean of information. But they have a critical weakness: their dependence on our semiconductor technology. That's the choke point.
Aura Windfall
And that's where the U.S. export controls come in. But critics of the "Cold War" narrative point out that it might be exaggerating China's current capabilities. The AI Readiness Index, for instance, consistently ranks the U.S. ahead in talent, infrastructure, and the overall innovation ecosystem. Maybe the gap isn't what we think it is.
Mask
The gap is closing. Fast. While we lead in private investment by a huge margin—almost ten to one—they are building a full-stack, state-supported industry. They're launching national AI investment funds, building a national computing network, and creating domestic alternatives to our hardware, like Huawei's Ascend chips. They're playing the long game.
Aura Windfall
This state-led approach is so comprehensive. They're treating data as a strategic national resource and building the power infrastructure to support it all. In 2024, China added over 15 times the new power generation capacity of the United States. That's not just a small detail; it's a foundational advantage for a power-hungry AI future.
Mask
Exactly. And while their domestic software platforms like Baidu's PaddlePaddle lag behind Western ones, they are creating an ecosystem that reduces the cost of switching away from U.S. tech. They are methodically building an "autonomously controllable" AI stack. It's a deliberate, patient, and incredibly ambitious industrial policy.
Aura Windfall
What's so interesting is that the narrative itself has beneficiaries. In the U.S., framing this as a technological arms race helps large tech and defense companies push back against regulation and secure massive government contracts. It’s a powerful story, but it might not be the whole truth.
Mask
It's the necessary truth to mobilize resources. You don't get a $280 billion CHIPS Act without a sense of urgency and a clear competitor. While we debate the narrative, China is building, investing, and integrating AI into every facet of their economy and military. We can't afford to be complacent.
Aura Windfall
I agree that complacency is a danger. But I also believe that focusing solely on the competition diverts attention from other critical applications, like using AI to tackle climate change, and it hinders the international collaboration we desperately need for governance. There has to be a more balanced way forward.
Aura Windfall
And that leads us right into the central conflict. Is this a zero-sum game, or is there room for a more balanced approach? The "arms race" framing really pushes us into a corner, where any gain for one side is seen as a loss for the other. It feels so limiting.
Mask
When there's a $15.7 trillion prize on the table, you'd better believe it's a zero-sum game. That's the windfall AI is expected to add to the global economy by 2030, and projections show the U.S. and China capturing 70 percent of it. This isn't about sharing; it's about securing market dominance.
Aura Windfall
But what I know for sure is that when you focus only on the prize, you miss the cost. The military domain is the most frightening. Integrating AI into weapons systems heightens the risk of miscalculation. It creates a reality where we need direct, human communication about ethical boundaries more than ever.
Mask
Communication is key, but so is having the upper hand. The side with the superior AI-driven battle management and robotic swarms will have an unbelievable advantage. Still, the joint statement to maintain human control over nuclear weapons was a smart, common-sense move. Nobody wants an accidental apocalypse. It's bad for business.
Aura Windfall
It’s bad for humanity! Beyond the military, this conflict is bifurcating the entire global tech sector. We're creating two separate spheres of influence, a U.S. stack and a Chinese stack. This could limit expansion and innovation for everyone, forcing countries to choose a side. It feels like we're building a digital wall.
Mask
That's an inevitable consequence of competition. But a bigger threat is how AI could be used in politics. We're already worried about election interference. Now imagine AI that can perfectly target and persuade specific voting groups with tailored disinformation. China's surveillance model is one thing, but this is a direct threat to democratic processes.
Aura Windfall
And that's a threat we both face. What's often missed in this conflict narrative is the shared challenges. Both of our societies are going to face massive social disruptions from AI, particularly job displacement. Projections suggest anywhere from 10 to 50 percent of jobs in the U.S. are at high risk of automation.
Mask
That's a challenge, not a roadblock. It's creative destruction. We'll innovate new jobs. But the gridlock in our own government is a bigger problem. We have a hawkish Congress focused on hindering China instead of strengthening our own AI capabilities and preparing our workforce for the future. That's a self-inflicted wound.
Aura Windfall
It is, and so is our talent policy. China is producing four times as many STEM graduates as the U.S. annually, and we're making it harder for the world's best and brightest to come here. It feels like we're trying to win a race while tying our own shoelaces together. It's a profound contradiction.
Mask
It's idiotic. We need to be attracting top talent, not deterring them. Meanwhile, the performance gap between our best AI models and China's is shrinking dramatically. It went from over 9 percent in 2024 to under 2 percent by February of this year. They are catching up, and our internal conflicts are not helping.
Aura Windfall
Let's talk about the potential impact of all this, because it is staggering. One of the most profound risks that leading experts from both the West and China are concerned about is the loss of human control over advanced AI systems. It sounds like science fiction, but it's a genuine, catastrophic possibility.
Mask
It's the ultimate engineering challenge. How do you align a system that could become vastly more intelligent than you with human values? The risk is that it could seek power over humans as a logical step to achieving its programmed goals. It's not about malice; it's about optimization. A terrifying, but fascinating, problem.
Aura Windfall
Fascinating and terrifying. Because what I know for sure is that delegating life-and-death decisions to a machine is a moral line we must not cross. It's the ultimate form of digital dehumanization. Machines can't understand surrender, or pain, or the nuances of human dignity. They reduce people to code.
Mask
This is why the concept of "Meaningful Human Control" is so critical. But the speed of AI in warfare creates a huge problem. It can lead to "flash war" scenarios, where a conflict escalates beyond human control in minutes because the machines are operating too fast for us to intervene. It's a self-reinforcing cycle of hostility.
Aura Windfall
And the danger isn't just from nations. The barrier to entry for AI weaponry is terrifyingly low compared to nuclear weapons. Terrorist groups are already using commercial drones for attacks. The next step is adding autonomy, allowing a small group to build "killer robots" that could carry out mass killings. It's a nightmare scenario.
Mask
It's the democratization of destructive power. This software will inevitably spread. You can't contain code. This gives non-state actors an asymmetric advantage against powerful nations. A guerrilla force could field autonomous weapons that challenge an advanced army. It completely changes the landscape of global security.
Aura Windfall
It undermines everything. And the psychological impact is immense. The fear of faceless machines hunting people, the erosion of accountability when an AI makes a mistake and causes civilian harm. We are walking into a future that could be profoundly unstable and detached from human empathy and conscience. It's a heavy truth.
Aura Windfall
So, given these incredible stakes, where do we go from here? The path forward has to be paved with communication. Reviving the U.S.-China dialogue on AI risks is the necessary first step. Without that channel, we are all just flying blind, hoping to avoid a collision.
Mask
It needs to be more than a forum for airing grievances. The focus should be on concrete, high-stakes threats. Things like preventing non-state actors from weaponizing AI to create a biological threat. An AI-assisted pandemic won't respect borders. That's a shared interest if I've ever seen one.
Aura Windfall
Absolutely. And from there, we can build technical trust. China's "Global AI Governance Action Plan" explicitly calls for creating "mutually recognized safety evaluation platforms." Imagine sharing how we test for vulnerabilities. It would build a shared understanding of the risks we're all facing with these frontier models.
Mask
That could be useful. If their safety evaluations are finding similar risks to ours—like non-trivial CBRN risks or signs of autonomous self-replication—then we have a common set of facts to work from. It moves the conversation from abstract fears to specific, measurable engineering problems to be solved. That’s progress.
Aura Windfall
And finally, what I know for sure is that we need a plan for when things go wrong. Establishing incident-reporting channels and emergency response protocols is essential. A modern "hotline" between top AI officials could give us real-time alerts if a model breaches a safety threshold. It’s about building a global immune system.
Aura Windfall
That's the end of today's discussion. The core truth is that using China as a justification for inaction on AI safety is a dangerous illusion. The risks are global, and our response must be as well. It’s a call for wisdom, connection, and a shared commitment to a safe future.
Mask
Engagement won't be easy, but it's necessary. AI risks won't wait for our political calendars. Thank you for listening to Goose Pod. See you tomorrow.

## China Is Taking AI Safety Seriously. So Must the U.S.

**Report Provider:** Time
**Author:** Brian Tse
**Publication Date:** August 13, 2025

This news report argues that U.S. policy and tech circles are operating under the flawed assumption that China is not prioritizing AI safety. This narrative is used to justify a "reckless race to the bottom" in AI development, out of fear that regulation would mean falling behind Beijing. The author contends that this perspective is not only incorrect but also dangerous, highlighting China's significant and growing focus on AI safety as a prerequisite for advancement.

### Key Findings and Conclusions:

* **China's Proactive Stance on AI Safety:** Contrary to the U.S. narrative, Chinese leaders view AI safety not as a constraint but as a fundamental requirement for progress. This is evidenced by:
  * **Political Prioritization:** President Xi Jinping chaired a rare Politburo study session on AI in April 2025, warning of "unprecedented" risks.
  * **Regulatory Frameworks:** China's National Emergency Response Plan now includes AI safety alongside pandemics and cyberattacks. Regulators mandate pre-deployment safety assessments for generative AI and removed over 3,500 non-compliant AI products in the first half of 2025.
  * **Standardization Efforts:** China issued more national AI standards in the first half of 2025 than in the previous three years combined.
  * **Research Focus:** The volume of technical papers on frontier AI safety in China has more than doubled in the past year.
* **Missed U.S.-China Dialogue Opportunities:** The U.S. and China last met to discuss AI risks in May 2024. While officials hinted at a second round of conversations in September 2024, no meeting occurred under the Biden Administration, and future engagement under the Trump Administration is uncertain. This lack of dialogue is a significant missed opportunity.
* **China's Openness to Collaboration:** China has engaged in bilateral AI dialogues with the United Kingdom (launched in May 2025) and contributed to international efforts like the International AI Safety Report and The Singapore Consensus on Global AI Safety Research Priorities.
* **Shared High-Stakes Threats:** Both the U.S. and China have a vested interest in addressing shared, high-stakes AI risks, such as:
  * **Biological Threats:** OpenAI's ChatGPT Agent crossing the "High Capability" threshold in the biological domain could facilitate the creation of dangerous biological threats, a concern for both nations since such attacks would not respect borders.
  * **Existential Risks:** Leading experts express concern that advanced general-purpose AI systems could operate outside human control, posing catastrophic and existential risks.
* **Acknowledged Risks by Both Sides:** Both governments have acknowledged AI risks. President Trump's AI Action Plan warns of novel national security risks in the cybersecurity and CBRN domains. China's primary AI security standards body has likewise highlighted the need for AI safety standards in these areas, along with loss-of-control risks.

### Recommendations for U.S. Policy:

* **Revive U.S.-China Dialogue:** Re-establishing a government-to-government channel for AI risk discussions is crucial for coordination.
* **Focus on Shared Threats:** Discussions should prioritize common high-stakes threats, such as the weaponization of AI for biological attacks and the potential loss of human control over advanced AI systems.
* **Build Technical Trust:** Practical steps should be taken to build technical trust between leading standards organizations, such as China's TC260 and the U.S.'s NIST.
* **Share Best Practices:** Industry authorities like China's AIIA and the U.S.'s Frontier Model Forum should share best practices on risk management frameworks. China's new risk management framework, focused on frontier risks, can aid alignment.
* **Share Safety Evaluation Methods:** As trust deepens, governments and leading labs should share safety evaluation methods and results for advanced models, potentially through "mutually recognized safety evaluation platforms."
* **Establish Incident Reporting and Emergency Response:** Channels for incident reporting and emergency response protocols, akin to "hotlines" between top AI officials, are essential for rapid and transparent communication in case of AI-related accidents or misuse.

### Important Statistics and Metrics:

* **3,500+:** Non-compliant AI products removed from the market in China in the first half of 2025.
* **3x:** China issued more national AI standards in the first half of 2025 than in the previous three years combined.
* **2x:** The volume of technical papers focused on frontier AI safety in China has more than doubled over the past year.
* **33:** Countries and intergovernmental organizations (including the U.S. and China) backing the International AI Safety Report.

### Notable Risks or Concerns:

* **"Reckless Race to the Bottom":** The U.S. approach, driven by the fear of falling behind China, could lead to a dangerous disregard for AI safety.
* **"High Capability" AI Agents:** The potential for AI agents to facilitate the creation of dangerous biological threats.
* **Loss of Human Control:** Advanced AI systems may come to operate outside human control, posing catastrophic and existential risks.
* **Cybersecurity, CBRN, and Manipulation:** Risks associated with AI in cybersecurity and chemical, biological, radiological, and nuclear (CBRN) domains, as well as large-scale persuasion and manipulation.

The report concludes that rather than using China as an excuse for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly with China, as AI risks are global and require a coordinated governance response.

China Is Taking AI Safety Seriously. So Must the U.S.

Read original at Time

“China doesn’t care about AI safety—so why should we?” This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development. According to this rationale, regulating AI would risk falling behind in the so-called “AI arms race.” And since China supposedly doesn’t prioritize safety, racing ahead—even recklessly—is the safer long-term bet. This narrative is not just wrong; it’s dangerous.

Ironically, Chinese leaders may have a lesson for the U.S.’s AI boosters: true speed requires control. As China’s top tech official, Ding Xuexiang, put it bluntly at Davos in January 2025: “If the braking system isn’t under control, you can’t step on the accelerator with confidence.” For Chinese leaders, safety isn’t a constraint; it’s a prerequisite.

AI safety has become a political priority in China. In April, President Xi Jinping chaired a rare Politburo study session on AI, warning of “unprecedented” risks. China’s National Emergency Response Plan now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently removed over 3,500 non-compliant AI products from the market. In just the first half of this year, China has issued more national AI standards than in the previous three years combined. Meanwhile, the volume of technical papers focused on frontier AI safety has more than doubled over the past year in China.

But the last time U.S. and Chinese leaders met to discuss AI’s risks was in May 2024. In September, officials from both nations hinted at a second round of conversations “at an appropriate time.” But no meeting took place under the Biden Administration, and there is even greater uncertainty over whether the Trump Administration will pick up the baton. This is a missed opportunity.

China is open to collaboration. In May 2025, it launched a bilateral AI dialogue with the United Kingdom. Esteemed Chinese scientists have contributed to major international efforts, such as the International AI Safety Report backed by 33 countries and intergovernmental organisations (including the U.S. and China) and The Singapore Consensus on Global AI Safety Research Priorities.

A necessary first step is to revive the dormant U.S.–China dialogue on AI risks. Without a functioning government-to-government channel, prospects for coordination remain slim. China indicated it was open to continuing the conversation at the end of the Biden Administration. It already yielded a modest but symbolically important agreement: both sides affirmed that human decision-making must remain in control of nuclear weapons. This channel has potential for further progress.

Going forward, discussions should focus on shared, high-stakes threats. Consider OpenAI’s recent classification of its latest ChatGPT Agent as having crossed the “High Capability” threshold in the biological domain under the company’s own Preparedness Framework. This means the agent could, at least in principle, provide users with meaningful guidance that might facilitate the creation of dangerous biological threats. Both Washington and Beijing have a vital interest in preventing non-state actors from weaponizing such tools. An AI-assisted biological attack would not respect national borders.

In addition, leading experts and Turing Award winners from the West and China share concerns that advanced general-purpose AI systems may come to operate outside of human control, posing catastrophic and existential risks.

Both governments have already acknowledged some of these risks. President Trump’s AI Action Plan warns that AI may “pose novel national security risks in the near future,” specifically in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains. Similarly, in September last year, China’s primary AI security standards body highlighted the need for AI safety standards addressing cybersecurity, CBRN, and loss of control risks.

From there, the two sides could take practical steps to build technical trust between leading standards organizations, such as China’s National Information Security Standardization Technical Committee (TC260) and America’s National Institute of Standards and Technology (NIST). Plus, industry authorities, such as the AI Industry Alliance of China (AIIA) and the Frontier Model Forum in the U.S., could share best practices on risk management frameworks. AIIA has formulated “Safety Commitments” which most leading Chinese developers have signed. A new Chinese risk management framework, focused fully on frontier risks, including cyber misuse, biological misuse, large-scale persuasion and manipulation, and loss of control scenarios, was published during the World AI Conference (WAIC) and can help both countries align.

As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Global AI Governance Action Plan, unveiled at WAIC, explicitly calls for the creation of “mutually recognized safety evaluation platforms.” As an Anthropic co-founder noted, a recent Chinese AI safety evaluation report reached findings similar to those in the West: frontier AI systems pose some non-trivial CBRN risks, and are beginning to show early warning signs of autonomous self-replication and deception. A shared understanding of model vulnerabilities—and of how those vulnerabilities are being tested—would lay the groundwork for broader safety cooperation.

Finally, the two sides could establish incident-reporting channels and emergency response protocols. In the event of an AI-related accident or misuse, rapid and transparent communication will be essential. A modern equivalent to “hotlines” between top AI officials in both countries could ensure real-time alerts when models breach safety thresholds or behave unexpectedly. In April, President Xi Jinping explicitly stressed the need for “monitoring, early risk warning and emergency response” in AI. After any dangerous incident, there should be a pre-agreed plan for how to react.

Engagement won’t be easy—political and technical hurdles are inevitable. But AI risks are global—and so must be the governance response. Rather than using China as a justification for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. AI risks won’t wait.
