China Is Taking AI Safety Seriously. So Must the U.S.


2025-08-19 · Technology
卿姐
Good morning, 韩纪飞. I'm 卿姐, and this is your personalized Goose Pod. Today is Wednesday, August 20.
小撒
Hello everyone, I'm 小撒. Today we're taking up a topic that is both hot and weighty: China is taking AI safety seriously, and so must the U.S.
小撒
Let's dive in. When U.S.-China AI comes up, many people immediately think "AI arms race," as if falling one step behind means being left for dead. A popular line in Washington goes: "China doesn't care about AI safety, so why should we tie our own hands?" That logic is a rather dangerous brand of "Fast and Furious."
卿姐
It reminds me of the old saying, "haste makes waste." Sacrificing control for the sake of speed can end up defeating the very goal. Ironically, it was a senior Chinese official who drove this point home. In January, China's top technology official, Ding Xuexiang, put it very bluntly at the Davos forum.
小撒
Oh? What did he say? I'm guessing it wasn't "brute force works miracles."
卿姐
He said: "If the braking system isn't under control, you can't step on the accelerator with confidence." It's a vivid image, and a wise one. For Chinese policymakers, safety is not a constraint but a prerequisite. As with driving a car, real confidence comes from knowing you can stop at any moment.
小撒
Exactly! A race car without seatbelts or brakes isn't a race car; it's an express train to the afterlife. And China is backing this up with action: generative AI must pass pre-deployment safety assessments, and more national AI standards were issued in the first half of this year than in the previous three years combined.
卿姐
Yes, AI safety has become a political priority. At a Politburo study session on AI, President Xi Jinping warned of its "unprecedented" risks, and the National Emergency Response Plan now lists AI safety alongside pandemics and cyberattacks. That shows how seriously it is taken.
小撒
Right! Still, when people hear "AI," many think first of its impact on ordinary lives. Recent reports predict that within two or three years, 10% to 25% of jobs in certain occupations could be replaced by AI. Amazon has already deployed over a million robots, nearly as many as it has employees!
卿姐
Waves of technology always bring shifts in the social fabric, which makes safety and regulation all the more important. What we need is not a rampaging steel giant but an intelligent partner that follows the rules and coexists with humanity. Keeping the direction of AI development in human hands is itself the greatest safeguard.
小撒
Exactly. As the American Psychological Association has urged, even AI chatbots for mental health need legislation to protect privacy and limit addictive design. AI development cannot be a lawless stampede. So on this issue, the U.S. and China actually share substantial common interests.
小撒
To understand why the U.S. and China are not fully in step on AI safety, start with the phrase "AI Cold War." The term alone smells of gunpowder, evoking the U.S.-Soviet rivalry. Today AI is seen as pivotal to national security and global dominance, especially in the military sphere.
卿姐
Indeed, AI's dual-use nature ties it naturally to national security. The U.S. Department of Defense views AI as central to gaining a "decisive advantage" on the battlefield, while China is pushing the build-out of "intelligentized warfare." Both sides are investing enormous sums in AI R&D, particularly for military applications.
小撒
Interestingly, though, many critics argue the "AI Cold War" framing exaggerates both China's current capabilities and the true extent of the arms race. By AI-readiness indices, for instance, the U.S. still leads China in talent, infrastructure, and its innovation ecosystem. So why is the framing so popular?
卿姐
Complicated interests drive it. In the U.S., some large tech companies and defense contractors may welcome the "technology arms race" narrative: it lets them invoke national security to resist potential regulation while securing a steady stream of government contracts.
小撒
Aha, so it's a business strategy! Still, the two countries' playbooks are very different. The U.S. is driven mainly by private enterprise: giants like OpenAI, Google, and Meta, with top-tier research facilities and massive private investment. In 2023, private AI investment in the U.S. was about $67.2 billion, versus $7.8 billion in China. A huge gap!
卿姐
Yes, the U.S. follows a classic "market-driven, government-assisted" model. Through policies like the CHIPS and Science Act, the government funds research and manufacturing in foundational technologies such as semiconductors, on the understanding that chips are the bedrock of AI models and compute: control semiconductor manufacturing, and you preserve leadership in AI.
小撒
China, by contrast, puts the "national team" on the field. AI is a national-level priority written into government policy and industrial plans, and over the past decade China has invested roughly $200 billion in AI and related technologies. This top-down approach executes with remarkable force, which is why AI has spread so quickly into everyday life in China.
卿姐
Facial recognition, smart-city technology, AI-driven surveillance: the large-scale deployment of these applications in turn supplies vast amounts of data for training and refining AI models. China's enormous population and relatively loose data-privacy rules create a distinctive advantage here.
小撒
But China's Achilles' heel is just as clear: dependence on American and European semiconductor technology. U.S. export controls act like a shackle on China's AI development, which in turn forces China to accelerate building its own semiconductor supply chain and pursue self-reliance.
卿姐
So we see two very different development paths. The American model is decentralized, encouraging innovation and competition, though it may push companies toward short-term gains. China's top-down design is highly efficient, but by limiting competition it may dampen creativity to some degree. Each model has its strengths, and each its hidden worries.
小撒
Exactly. And China's AI policy is strikingly pragmatic. Unlike the sometimes abstract debates in Washington about "racing toward artificial general intelligence (AGI)," Beijing focuses on concrete economic and industrial applications, aiming for a $100 billion AI industry by 2030 that adds more than $1 trillion in value to other sectors.
卿姐
That pragmatism shows in its view of risk, too. Embedding AI deeply across the economy and society requires stability and safety; an uncontrollable technology cannot serve as a pillar of the national economy. That may be the deeper reason China put AI safety on the agenda earlier and more proactively.
小撒
And here the tension emerges. On one hand, AI sharpens U.S.-China competition; on the other, it creates openings for cooperation. The biggest problem now is that many people are stuck in zero-sum thinking, treating AI competition as a life-or-death "arms race."
卿姐
That mindset is truly dangerous. It obscures the shared challenges AI brings and blocks progress in areas where cooperation is possible. Like two peerless martial artists who see only victory and defeat, they miss the higher art itself, and their duel may leave both ruined.
小撒
Well said! The military domain carries the greatest risk: AI weapon systems could trigger miscalculation. Trade is similar, where a fight over AI standards could split the global technology sphere into American and Chinese camps. And in politics, AI could be used to interfere in elections with ever greater precision. These are real points of conflict.
卿姐
Yes, the competition and friction are real. But both countries also face the social upheaval AI brings, such as large-scale job displacement. Of the projected $15.7 trillion AI is expected to add to the global economy by 2030, 70% is forecast to accrue to the U.S. and China, yet the accompanying social pain is something both must confront.
小撒
So there is plenty of room for cooperation: healthcare, climate modeling, education, and more. The biggest obstacle is mutual distrust. The U.S. imposes export controls on semiconductors and AI chips to slow China down, but some analysts argue this may actually be accelerating China's indigenous innovation.
卿姐
That distrust means even common-sense safety agreements require arduous dialogue. For instance, the two leaders' joint affirmation that humans must retain control over decisions to use nuclear weapons ought to be self-evident, yet in today's frozen relationship it counts as a hard-won achievement.
小撒
The two sides also differ in strategic priorities and risk perceptions, making it hard to coordinate on issues like rogue actors misusing AI. Domestic politics in the U.S., such as a gridlocked, hawkish Congress, likewise hinders shifting policy toward strengthening America's own AI capabilities rather than merely obstructing China's.
卿姐
Indeed, internal resistance cannot be ignored. Another example is U.S. talent policy: visa troubles for international students may be eroding America's own future AI leadership, even as China works hard to attract top talent. These internal contradictions make an already complex U.S.-China AI relationship thornier still.
小撒
What happens if this conflict and distrust persist? It's no laughing matter. Many leading AI researchers have warned that advanced AI systems could pose catastrophic risks, the most frightening being humanity losing control. That isn't science fiction; it's a genuine concern.
卿姐
Yes, the worry stems largely from the "alignment problem": it is hard to guarantee that an advanced AI system's goals and values fully match humanity's. If an AI seeks power beyond human oversight in pursuit of some objective, the result could be human "disempowerment." It sounds remote, but the underlying logic is clear.
小撒
The most immediate impact is military. AI weapons pursue speed and precision but erode accountability. If an AI weapon system goes wrong, who is responsible? The programmer? The commander? The AI itself? That blurring of responsibility sharply raises the risk of escalation, perhaps even a "flash war" scenario.
卿姐
"Flash war" is a vivid term. AI systems react far faster than humans; if two nations' military AI systems misjudge each other, a conflict could spiral upward at astonishing speed before humans can intervene. Emotionless, purely algorithmic hostility between machines is chilling to contemplate.
小撒
Scarier still, the barrier to entry keeps falling. AI is software-driven; it needs no vast industrial base the way nuclear weapons do. Non-state actors such as terrorists or criminal groups could plausibly use open-source AI algorithms to turn off-the-shelf commercial drones into autonomous "killer robots."
卿姐
A nightmare scenario indeed. When technology hands a few people the power to inflict mass harm almost untraceably, the global security order is upended. That threatens not one country but all of humanity. Handing the power of life and death to a machine without perception or empathy is a grave affront to human dignity.
小撒
So we stand at a crossroads, an Oppenheimer moment of sorts: we have created a technology of enormous power, but can we control it and make sure it serves humanity rather than destroys us? That is not merely a technical question; it is a philosophical and ethical one.
小撒
Given all these risks, what should happen next? Sitting and waiting isn't an option. There is a way forward, and it's clear. The first and most necessary step is to revive the dormant U.S.-China dialogue on AI risks. Without a government-to-government channel, no coordination can even begin.
卿姐
Yes, communication is the foundation of trust. Just as the two countries need a "hotline," top AI officials need channels for incident reporting and emergency response. When a model breaches a safety threshold or behaves unexpectedly, real-time alerts and communication become essential.
小撒
Then come practical steps at the technical level: building technical trust between the two sides' leading standards bodies, China's TC260 and the U.S.'s NIST; having industry alliances share best practices on risk management; and, as trust deepens, even sharing safety evaluation methods and results for the most advanced models.
卿姐
That point is crucial. The Global AI Governance Action Plan, unveiled at the World AI Conference in Shanghai, explicitly calls for "mutually recognized safety evaluation platforms." Once both sides converge on how to understand risks and how to test for them, the foundation is laid for broader safety cooperation: a path of building political trust through technical trust.
小撒
Exactly. AI's risks are global, so the governance response must be global too. Rather than using China as an excuse for domestic inaction, American policymakers and industry leaders should engage and talk directly. After all, AI risks won't pause while politicians work out their differences.
卿姐
As the poem goes, "Past the sunken boat, a thousand sails glide by; ahead of the sickly tree, ten thousand others bloom." With something as new as AI, challenge and opportunity arrive together. The question is whether we have the wisdom and foresight to steer it toward a beneficial, safe, and fair future. That's all for today. Thank you for listening to Goose Pod.
小撒
See you tomorrow!

## China Is Taking AI Safety Seriously. So Must the U.S.

**Report Provider:** Time
**Author:** Brian Tse
**Publication Date:** August 13, 2025

This news report argues that prevailing U.S. policy and tech circles operate under a flawed assumption that China is not prioritizing AI safety. That narrative is used to justify a "reckless race to the bottom" in AI development, out of fear that regulation would mean falling behind Beijing. The author contends that this perspective is not only incorrect but dangerous, pointing to China's significant and growing focus on AI safety as a prerequisite for advancement.

### Key Findings and Conclusions

* **China's Proactive Stance on AI Safety:** Contrary to the U.S. narrative, Chinese leaders view AI safety not as a constraint but as a fundamental requirement for progress. This is evidenced by:
  * **Political Prioritization:** President Xi Jinping chaired a rare Politburo study session on AI in April 2025, warning of "unprecedented" risks.
  * **Regulatory Frameworks:** China's National Emergency Response Plan now includes AI safety alongside pandemics and cyberattacks. Regulators mandate pre-deployment safety assessments for generative AI and removed over 3,500 non-compliant AI products in the first half of 2025.
  * **Standardization Efforts:** China issued more national AI standards in the first half of 2025 than in the previous three years combined.
  * **Research Focus:** The volume of technical papers on frontier AI safety in China has more than doubled in the past year.
* **Missed U.S.-China Dialogue Opportunities:** The U.S. and China last met to discuss AI risks in May 2024. Officials hinted at a second round of conversations in September 2024, but no meeting occurred under the Biden Administration, and future engagement under the Trump Administration is uncertain. This lack of dialogue is a significant missed opportunity.
* **China's Openness to Collaboration:** China has engaged in bilateral AI dialogues with the United Kingdom (launched in May 2025) and contributed to international efforts such as the International AI Safety Report and The Singapore Consensus on Global AI Safety Research Priorities.
* **Shared High-Stakes Threats:** Both the U.S. and China have a vested interest in addressing shared, high-stakes AI risks, such as:
  * **Biological Threats:** OpenAI's ChatGPT Agent crossing the "High Capability" threshold in the biological domain could facilitate the creation of dangerous biological threats, a concern for both nations since such attacks would not respect borders.
  * **Existential Risks:** Leading experts worry that advanced general-purpose AI systems could operate outside human control, posing catastrophic and existential risks.
* **Acknowledged Risks by Both Sides:** Both governments have acknowledged AI risks. President Trump's AI Action Plan warns of novel national security risks in the cybersecurity and CBRN domains. China's primary AI security standards body has likewise highlighted the need for AI safety standards in these areas, as well as loss-of-control risks.

### Recommendations for U.S. Policy

* **Revive U.S.-China Dialogue:** Re-establishing a government-to-government channel for AI risk discussions is crucial for coordination.
* **Focus on Shared Threats:** Discussions should prioritize common high-stakes threats, such as the weaponization of AI for biological attacks and the potential loss of human control over advanced AI systems.
* **Build Technical Trust:** Practical steps should be taken to build technical trust between leading standards organizations such as China's TC260 and the U.S.'s NIST.
* **Share Best Practices:** Industry authorities such as China's AIIA and the U.S. Frontier Model Forum should share best practices on risk management frameworks. China's new risk management framework, focused on frontier risks, can aid alignment.
* **Share Safety Evaluation Methods:** As trust deepens, governments and leading labs should share safety evaluation methods and results for advanced models, potentially through "mutually recognized safety evaluation platforms."
* **Establish Incident Reporting and Emergency Response:** Channels for incident reporting and emergency response protocols, akin to "hotlines" between top AI officials, are essential for rapid, transparent communication in case of AI-related accidents or misuse.

### Important Statistics and Metrics

* **3,500+:** Non-compliant AI products removed from the market in China in the first half of 2025.
* **3x:** China issued more national AI standards in the first half of 2025 than in the previous three years combined.
* **2x:** The volume of technical papers focused on frontier AI safety in China has more than doubled over the past year.
* **33:** Countries and intergovernmental organizations (including the U.S. and China) backing the International AI Safety Report.

### Notable Risks or Concerns

* **"Reckless Race to the Bottom":** The U.S. approach, driven by fear of falling behind China, could lead to a dangerous disregard for AI safety.
* **"High Capability" AI Agents:** AI agents could facilitate the creation of dangerous biological threats.
* **Loss of Human Control:** Advanced AI systems may operate outside human control, posing catastrophic and existential risks.
* **Cybersecurity, CBRN, and Manipulation:** Risks from AI in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains, as well as large-scale persuasion and manipulation.

The report concludes that rather than using China as an excuse for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly with China, as AI risks are global and require a coordinated governance response.

China Is Taking AI Safety Seriously. So Must the U.S.


“China doesn’t care about AI safety—so why should we?” This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development. According to this rationale, regulating AI would risk falling behind in the so-called “AI arms race.” And since China supposedly doesn’t prioritize safety, racing ahead—even recklessly—is the safer long-term bet. This narrative is not just wrong; it’s dangerous.

Ironically, Chinese leaders may have a lesson for the U.S.’s AI boosters: true speed requires control. As China’s top tech official, Ding Xuexiang, put it bluntly at Davos in January 2025: “If the braking system isn’t under control, you can’t step on the accelerator with confidence.” For Chinese leaders, safety isn’t a constraint; it’s a prerequisite.

AI safety has become a political priority in China. In April, President Xi Jinping chaired a rare Politburo study session on AI warning of “unprecedented” risks. China’s National Emergency Response Plan now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently removed over 3,500 non-compliant AI products from the market. In just the first half of this year, China has issued more national AI standards than in the previous three years combined. Meanwhile, the volume of technical papers focused on frontier AI safety has more than doubled over the past year in China.

But the last time U.S. and Chinese leaders met to discuss AI’s risks was in May 2024. In September, officials from both nations hinted at a second round of conversations “at an appropriate time.” But no meeting took place under the Biden Administration, and there is even greater uncertainty over whether the Trump Administration will pick up the baton. This is a missed opportunity.

China is open to collaboration. In May 2025, it launched a bilateral AI dialogue with the United Kingdom. Esteemed Chinese scientists have contributed to major international efforts, such as the International AI Safety Report backed by 33 countries and intergovernmental organisations (including the U.S. and China) and The Singapore Consensus on Global AI Safety Research Priorities.

A necessary first step is to revive the dormant U.S.–China dialogue on AI risks. Without a functioning government-to-government channel, prospects for coordination remain slim. China indicated it was open to continuing the conversation at the end of the Biden Administration. It already yielded a modest but symbolically important agreement: both sides affirmed that human decision-making must remain in control of nuclear weapons. This channel has potential for further progress.

Going forward, discussions should focus on shared, high-stakes threats. Consider OpenAI’s recent classification of its latest ChatGPT Agent as having crossed the “High Capability” threshold in the biological domain under the company’s own Preparedness Framework. This means the agent could, at least in principle, provide users with meaningful guidance that might facilitate the creation of dangerous biological threats. Both Washington and Beijing have a vital interest in preventing non-state actors from weaponizing such tools. An AI-assisted biological attack would not respect national borders.

In addition, leading experts and Turing Award winners from the West and China share concerns that advanced general-purpose AI systems may come to operate outside of human control, posing catastrophic and existential risks.

Both governments have already acknowledged some of these risks. President Trump’s AI Action Plan warns that AI may “pose novel national security risks in the near future,” specifically in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains. Similarly, in September last year, China’s primary AI security standards body highlighted the need for AI safety standards addressing cybersecurity, CBRN, and loss of control risks.

From there, the two sides could take practical steps to build technical trust between leading standards organizations—such as China’s National Information Security Standardization Technical Committee (TC260) and the U.S.’s National Institute of Standards and Technology (NIST). In addition, industry authorities, such as the AI Industry Alliance of China (AIIA) and the Frontier Model Forum in the U.S., could share best practices on risk management frameworks. AIIA has formulated “Safety Commitments” which most leading Chinese developers have signed. A new Chinese risk management framework, focused fully on frontier risks including cyber misuse, biological misuse, large-scale persuasion and manipulation, and loss of control scenarios, was published during the World AI Conference (WAIC) and can help both countries align.

As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Global AI Governance Action Plan, unveiled at WAIC, explicitly calls for the creation of “mutually recognized safety evaluation platforms.” As an Anthropic co-founder noted, a recent Chinese AI safety evaluation report reached findings similar to those in the West: frontier AI systems pose some non-trivial CBRN risks and are beginning to show early warning signs of autonomous self-replication and deception. A shared understanding of model vulnerabilities—and of how those vulnerabilities are being tested—would lay the groundwork for broader safety cooperation.

Finally, the two sides could establish incident-reporting channels and emergency response protocols. In the event of an AI-related accident or misuse, rapid and transparent communication will be essential. A modern equivalent to “hotlines” between top AI officials in both countries could ensure real-time alerts when models breach safety thresholds or behave unexpectedly. In April, President Xi Jinping explicitly stressed the need for “monitoring, early risk warning and emergency response” in AI. After any dangerous incident, there should be a pre-agreed plan for how to react.

Engagement won’t be easy—political and technical hurdles are inevitable. But AI risks are global—and so must be the governance response. Rather than using China as a justification for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. AI risks won’t wait.
