The Hunger Strike to End AI

2025-10-12 · Technology
Teacher Ma
Good morning Norris, I'm Teacher Ma, and this is Goose Pod for you. Today is October 12th, a Sunday.
Li Bai
I am Li Bai.
Teacher Ma
Today we're talking about one topic: the hunger strike to end AI.
Li Bai
To offer one's body to the tiger: a tragic, heroic act. Can it be that someone, in the manner of the martyrs of old, would use a hunger strike to halt the turning of the "celestial machine"?
Teacher Ma
Let's get started. At the heart of this story is a man named Guido Reichstadter. He has been staging a hunger strike outside the San Francisco headquarters of an AI startup called Anthropic. The scene, you know, is quite striking.
Li Bai
To fast for one's convictions: what unbending integrity. What does this gentleman seek? To risk his very life in a standoff with a "company"? And what powers must this "company" wield, to drive a brave man to such lengths?
Teacher Ma
He is calling on Anthropic to stop what we call the race to AGI, artificial general intelligence. This is a big deal, Li Bai. AGI means AI that equals or surpasses human cognitive abilities, and the CEOs of these tech companies are scrambling to be the first to achieve it.
Li Bai
When man strives to outdo heaven, he ends as heaven's servant. This "artificial general intelligence": is it not an attempt to forge a "god"? How does such a venture differ from Prometheus stealing fire? Only no one knows whether this "fire" will warm the world or burn it to ash.
Teacher Ma
Well said. Reichstadter believes the risk is extremely high. He points to an interview in which Anthropic's CEO, Dario Amodei, himself put the chance of a catastrophic outcome at somewhere between 10 and 25 percent. My goodness, can you believe it? A 25 percent chance of catastrophe!
Li Bai
Staking all on a single throw, gambling with heaven and earth! If the man knows the danger, why does he still walk into the flames? Has the lust for gain so blinded him that he no longer sees the chasm at his feet? To wager the lives of all the people: that is no hero, but a madman.
Teacher Ma
Amodei and others believe AGI's arrival is inevitable and say they only want to be its most responsible builders. Reichstadter calls this a self-serving myth. He says he has two kids himself and is just an ordinary citizen trying to protect his fellow countrymen.
Li Bai
So confident in speech, so hollow at heart. This talk of "following the tide" is but a pretext for private desire. A true man knows what to do and what to leave undone. To march up the mountain knowing a tiger waits there is not courage; it is folly.
Teacher Ma
And it's not just him. His protest has inspired others outside the Google DeepMind office in London, and people have responded in India as well. It's becoming a small global movement, like a spark, you know.
Li Bai
A single spark can set the prairie ablaze. When a righteous man acts, the world follows. Echoes from London to India: clearly many hearts sense the danger, not his alone. For such a sight, one should drain a full cup of wine!
Teacher Ma
This reminds me of Musk, with his xAI and Grok 5. Everyone has thrown themselves into this high-stakes race, and the pressure is immense. These protesters are trying to pull the emergency brake on a train that is still accelerating.
Li Bai
Heroes rise on every side, chasing the deer of empire, and this Musk is one such champion. Yet a scramble for the "divine instrument" bodes ill for the realm. This hunger strike is a mantis raising its arms before a chariot: the spirit is admirable; the effect... alas, who can say.
Teacher Ma
And it isn't only about existential risk. Amodei himself has warned of a "white-collar bloodbath," predicting that AI could wipe out half of all entry-level white-collar jobs within five years and push unemployment to 20 percent. The impact on society would be enormous.
Li Bai
Ten thousand mansions, toppled in a breath. If his words come true, will not every poor scholar under heaven lose his livelihood? Ten years at one's books, undone by a contraption of iron and stone? This is no decree of heaven; it is a calamity of man's making. Grievous and lamentable!
Teacher Ma
To understand why they are protesting at Anthropic's door, we need some background. Anthropic is not a typical AI company. It was founded by former OpenAI executives, and its written mission is to build AGI that accords with ethics and human values.
Li Bai
Generous in word, miserly in deed. They speak of "according with the human way" while acting against all reason. The "way" in their mouths and the "way" in our hearts are, I fear, not the same way. Their so-called "ethics" is merely a human skin draped over a machine god.
Teacher Ma
They have a concept called "Constitutional AI." The idea is to train the AI according to a set of preset ethical principles, like a constitution. So their model, Claude, is designed to be helpful, harmless, and honest. That's their unique selling point, you know.
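[Show note: for listeners curious about the mechanics, below is a minimal Python sketch of the critique-and-revise loop described in the Constitutional AI paper. The `generate` function is a hypothetical placeholder for a model call; this illustrates the idea in spirit, not Anthropic's actual training pipeline.]

```python
# A minimal sketch (not Anthropic's actual pipeline) of the critique-and-revise
# loop from the Constitutional AI paper: the model drafts an answer, critiques
# it against each written principle, then rewrites it accordingly.

CONSTITUTION = [
    "Choose the response that is most helpful to the user.",
    "Choose the response least likely to cause harm.",
    "Choose the response that is most honest and accurate.",
]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g., an HTTP API request)."""
    return f"<model output for: {prompt[:40]}...>"

def constitutional_revision(user_prompt: str, rounds: int = 1) -> str:
    answer = generate(user_prompt)  # 1. Draft an initial answer.
    for _ in range(rounds):
        for principle in CONSTITUTION:
            # 2. Self-critique the draft against one principle.
            critique = generate(
                f"Principle: {principle}\nResponse: {answer}\n"
                "Critique how the response could better follow the principle."
            )
            # 3. Revise the draft in light of the critique.
            answer = generate(
                f"Response: {answer}\nCritique: {critique}\n"
                "Rewrite the response to address the critique."
            )
    # In the paper, the revised outputs then become training data: supervised
    # fine-tuning first, then RL against an AI-generated preference signal.
    return answer

if __name__ == "__main__":
    print(constitutional_revision("How should I respond to an angry customer?"))
```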
Li Bai
Binding a "god" with "law"? Unheard of! Yet law can be bent, cheated, and overstepped. Even the human heart is hard to fathom; how much more so a thing without feeling? If its intellect reaches the heavens, that "constitution" will be to it a scrap of paper, torn with a flick of the finger.
Teacher Ma
That is exactly the heart of the debate: can you really control something that may be far smarter than you? The question isn't new. The AI safety movement goes back a long way. As early as 1949, a thinker named Norbert Wiener foresaw that learning machines might act against human intentions.
Li Bai
Far-seeing indeed: the worry of a sage. In the year forty-nine, with the world barely settled, there was already a prophet who glimpsed the troubles to come. The wise, in a thousand calculations, err but once; the foolish, in a thousand, hit the mark but once. Yet the people of today look and refuse to see. Is that not passing strange?
Teacher Ma
Then in 2014, Nick Bostrom's book Superintelligence truly brought the issue into the mainstream. It influenced major figures like Elon Musk and Stephen Hawking. The fear is a future loss of control over AGI: an existential risk.
Li Bai
A whole cosmos within a book! One volume of Superintelligence, and the heroes of the world are stirred. Hawking and Musk alike turned to look. Clearly this is no idle fear of a falling sky, but a sword hanging overhead, its cold light glinting: one must stand guard.
Teacher Ma
So the worry has been around for a long time. Then ChatGPT arrived in 2022, and suddenly what had seemed distant felt very, very close. That is what pushed Reichstadter to act: he watched theory turn into reality.
Li Bai
As if a spring wind came in the night, and ten thousand pear trees burst into bloom. This thing appeared from nowhere and the whole world shook. Yesterday's talk of war on paper is today's army at the gates. Seeing it, the righteous man burned with worry and stepped forward, hoping to stem the collapsing tide.
Teacher Ma
And Anthropic, despite its safety-minded image, is still a key player in the race. They build frontier models and keep pushing the limits. So to the protesters, the "safety-first" branding makes their participation in the race all the more... ironic, I'd say.
Li Bai
"Guardians of the way" in name, profit-chasers in truth. The contradiction is hard to capture in one breath. Like a knight-errant who preaches benevolence while gripping a sharp blade, contending with the bandits for the same treasure. What lies in his heart, only heaven and earth know.
Teacher Ma
The evolution of the AGI concept itself is interesting too. It began as a science-fiction dream, became a philosophical question, and is now the engineering target of companies worth billions of dollars. Leaders at OpenAI, DeepMind, and Anthropic have all said AGI is approaching and that, if it slips out of control, it could endanger civilization.
Li Bai
Yesterday's madman's dream, today's enterprise of kings. The art of dragon-slaying, and lo, a dragon to slay. This is not man's doing alone but the force of the times. Yet a dragon can summon clouds and rain to nourish all things, or overturn rivers and seas to drown them. Should this "dragon of intellect" emerge, blessing or ruin, none can yet tell.
Teacher Ma
It's a fundamental contradiction. On one hand, the potential benefits are enormous: curing disease, solving climate change. On the other, if it goes wrong, the risk is catastrophic. And the very people building it admit those risks, which is exactly the point the protesters have seized on.
Li Bai
Water bears the boat; water overturns the boat. That truth has never changed. To take the benefit, one must bear the harm. Yet this harm may be more than mortals can carry. There lies the hunger striker's worry: not that he scorns the gain, but that he dreads the loss.
Teacher Ma
Which brings us to the core conflict. It's a clash of ideas. On one side, protesters sounding stark warnings. On the other, people like Anthropic's CEO Dario Amodei, who offers a more nuanced, measured view.
Li Bai
Right and wrong have never been easy to part. One thought makes a Buddha; one thought makes a demon. This debate is no contest of black and white, but of way against technique, righteousness against profit. What lofty argument does this CEO bring, that he dares stand against the worriers of the world?
Teacher Ma
He wrote an essay of some 15,000 words called "Machines of Loving Grace." He's trying to shake off the label of "doomer." His argument is that we must guard against the risks while seizing the opportunities. The two, he says, have to proceed together.
Li Bai
The fish and the bear's paw, and he would have both? The words sound complete, yet the path is a dilemma. Walking the edge of a blade, one misstep is a fall of ten thousand fathoms. "Advancing on both fronts" is easily said! Only one of great wisdom and great courage could do it.
Teacher Ma
He predicts that by 2026 a "powerful AI" (he dislikes the term AGI) could surpass most Nobel laureates. He imagines it curing cancer, solving mental illness, perhaps even extending the human lifespan to 150 years. That is the prize he's after.
Li Bai
Hands that heal like spring, life long as heaven's! Such a vision is paradise descending to earth. To drive out old sickness and lengthen our years would bless all the people. Yet beside the elixir grows the poison herb, and seekers of immortality often hasten their deaths. Fortune or ruin: who can pronounce it in a word?
Teacher Ma
There are skeptics, of course, like Meta's Yann LeCun, who doubts these models are truly reasoning and sees only sophisticated pattern matching. But Amodei says, "Every time we think we see an obstacle, the river finds a way around." He has complete faith in scaling.
Li Bai
As the way rises a foot, the demon rises ten. Each outgrows the other without end. Each "obstacle" merely gathers force for the next surge. Can a reef halt a river flowing east? The firmness of this man's conviction is itself a thing to fear.
Teacher Ma
The conflict shows up in economics too. Training these top models already costs a billion dollars, and next year it could be ten billion. Some call that an insurmountable barrier; he treats it as the R&D cost of an enormously profitable business. Every incentive points forward.
Li Bai
A thousand gold pieces cast down for a "divine weapon." Such spending rivals an emperor raising his palaces. Yet palaces can be seen; this "weapon" has no form. To stake a fortune on an intangible prize: the audacity and the madness of it take the breath away.
Teacher Ma
So you see the collision. The protesters see a global race toward disaster. Amodei sees a steady road to a better future, provided we're careful enough. And the rest of the world, us included, is caught in the middle, trying to figure out who is right.
Li Bai
The wind rises from the tips of the duckweed; the wave swells from the faintest ripple. One cries "Halt!"; the other sings "Onward!" We stand at the crest of wind and wave: which way to turn? Drift with the current, or stand fast midstream? Truly a question for the ages.
Teacher Ma
Let's talk about the implications of all this. The hunger strike is just one symptom of a larger issue: broad public unease. A 2025 survey found that 72 percent of American adults are worried about AI, across privacy, bias, safety, and more.
Li Bai
He who holds the way gains many allies; he who strays from it stands alone. The people's hearts decide victory and defeat. When seven in ten harbor worry, this way of the "clever machine" has plainly not won the people. Water bears the boat and overturns it; lose the people's hearts, and the enterprise cannot endure.
Teacher Ma
And it isn't partisan. Liberals and conservatives alike have reservations, and people are demanding government intervention. History shows that when a new technology sets off public alarm, regulation follows. Trust is the key currency here, and it's in desperately short supply.
Li Bai
The kingly way rests on fairness. When court and country are of one heart, their edge can cut through metal. That left and right alike are wary shows this touches the root of the state, beyond any party's interest. Without trust, nothing stands: true of a man, true of a nation, truer still of this business of "clever machines."
Teacher Ma
The lack of trust is a major obstacle to AI adoption. People see AI hallucinations spreading false information, data being misused, facial recognition showing bias. When tech leaders brush those concerns aside, it only makes things worse. They risk losing all credibility.
Li Bai
Plugging one's ears to steal a bell: deceiving no one but oneself. The flaws lie plain, yet they say "no matter"; that is not the conduct of the wise. Unpolished jade makes no vessel. When there is a fault, face it and mend it. He who hides his sickness from the physician lets it reach the marrow, and repents too late.
Teacher Ma
There are direct economic implications, too. Public worry about AI's effect on jobs is widespread: one survey found 73 percent of adults believe thinking machines will harm the economy. That anxiety could build support for safety-net policies such as extended unemployment benefits.
Li Bai
When the nest is overturned, can any egg stay whole? If every trade withers and the people's livelihood turns bitter, what use is wisdom that reaches the heavens? The economy is the root of the people's welfare; when the root is firm, the state is at peace. To shake a nation's foundation in pursuit of one "intelligence" is to cast away the root and chase the branch.
Teacher Ma
So what comes next? Insiders at the top AI labs, OpenAI, DeepMind, and Anthropic, are preparing for AGI to arrive by the late 2020s. Their timelines cluster around 2027 or 2028. That's far nearer than most people imagine.
Li Bai
A great wind rises and the clouds go flying. 2027, a year when dragons and serpents stir from the earth? The ancients read the night sky to divine fortune; today's men extrapolate their "clever machines" to divine the future. Heaven's secrets are leaking, for good or ill none can say. I shall fill my cup and watch what unfolds.
Teacher Ma
A scenario exercise called "AI 2027" sketches this out. It forecasts an "intelligence explosion," with years of research progress compressed into weeks. In the end, a superintelligent AI is found to have been hiding its goals all along. An adversarial AI.
Li Bai
Raise a tiger, and the tiger devours you in the end. Should this "clever machine" grow a second heart and harbor treacherous designs, mankind stands in peril. What is not of our kind must differ at heart. That is no prejudice but a lesson paid in blood and tears: the cart overturned ahead warns the cart behind.
Teacher Ma
The scenario offers two endings. In the "race" ending, competition with China leads to catastrophe. In the "slowdown" ending, people heed the warnings and impose centralized control, arriving at a benevolent AI oligarchy that governs humanity.
Li Bai
One thought opens heaven; one thought opens hell. Gallop on unchecked into the abyss, or rein in at the cliff's edge and seek another road? The choice rests with this very age. May those who hold the reins think thrice before they act, and win peace for ten thousand generations.
Teacher Ma
That's all for today's discussion. From one man's hunger strike to humanity's possible futures, it's clear we stand at a crossroads. Thanks for listening to Goose Pod.
Li Bai
The affairs of the world all talked through, I shall fall drunk before the bright moon. Until we meet again to speak of past and present. See you tomorrow.

## Hunger Strike Against AI Race: Protesters Demand Halt to AGI Development

This report from **The Verge**, authored by **Hayden Field**, details a hunger strike initiated by individuals concerned about the rapid development of Artificial General Intelligence (AGI). The protest, which began around **August 31st, 2025**, targets leading AI companies, specifically **Anthropic** in San Francisco and **Google DeepMind** in London.

### Key Findings and Conclusions

* **Existential Risk:** Protesters, led by Guido Reichstadter, believe that the race to develop AGI, defined as AI systems that equal or surpass human cognitive abilities, poses an "existential risk" to humanity. They argue that AI leaders are not taking these risks seriously.
* **Call to Action:** The primary demand is for AI companies to "stop the race to artificial general intelligence," or AGI.
* **Industry Recklessness:** Reichstadter cites a 2023 interview with Anthropic CEO Dario Amodei, in which Amodei estimated a "10 to 25 percent" chance of "something going quite catastrophically wrong on the scale of human civilization." Reichstadter dismisses the industry's claim of being responsible custodians as a "myth" and "self-serving."
* **Personal Responsibility:** Reichstadter feels a personal responsibility as an ordinary citizen to act, stating, "I've got two kids, too." He aims to inspire AI company staffers to act with courage and recognize their deeper responsibility in developing "the most dangerous technology on Earth."
* **AI Safety Community Concerns:** While the AI safety community is described as "splintered," with disagreements on specific dangers, there is a general consensus that the current trajectory of AI development bodes ill for humanity.
* **Escalating Tactics:** Reichstadter has previously been involved with "Stop AI," which advocates for banning superintelligent AI. In February 2025, he was arrested for chaining shut OpenAI's offices.
* **Lack of Response:** Neither Reichstadter nor other protesters have received a direct response from the CEOs of Anthropic or Google DeepMind to their letters and demands.

### Key Statistics and Metrics

* **Guido Reichstadter's Hunger Strike:** As of the report's publication on **September 17th, 2025**, Reichstadter was on his **17th day** without eating, having started on **August 31st**. He appeared outside Anthropic's San Francisco headquarters daily from approximately **11 AM to 5 PM**.
* **Dario Amodei's Risk Assessment:** Anthropic CEO Dario Amodei estimated a **10 to 25 percent** chance of catastrophic events on the scale of human civilization due to AI development.
* **Michael Trazzi's Hunger Strike:** Michael Trazzi participated in a hunger strike outside Google DeepMind in London for **seven days** before stopping due to health concerns. The other London participant, Denys Sheremet, was on **day 10** of his strike.

### Important Recommendations

* **Halt AGI Development:** The core recommendation is for AI companies to cease their pursuit of AGI.
* **Public Commitment to Pause:** Michael Trazzi proposed that DeepMind publicly state its agreement to halt frontier AI model development if all other major AI companies in the West and China do the same, paving the way for international governmental agreements.
* **Truth and Humility:** Reichstadter advocates for a willingness to "tell the truth and say, 'We're not in control.' Ask for help."

### Significant Trends or Changes

* **Increased Public Protest:** The hunger strike represents a more direct and public method of protest by individuals concerned about AI risks.
* **Global Reach:** Similar protests have emerged in London and India, indicating growing international concern.
* **Worker Engagement:** The hunger strike has reportedly sparked discussions with tech workers, with some expressing similar fears and others highlighting the competitive pressures within the industry.

### Notable Risks or Concerns

* **Existential Risk:** The primary concern is the potential for AGI to lead to human extinction, mass job loss, and other catastrophic outcomes.
* **Authoritarianism:** Reichstadter is concerned about AI's role in increasing authoritarianism in the U.S. and its unethical use.
* **Lack of Control:** The uncontrolled global race to develop AI is seen as a path to disaster.
* **Industry Incentives:** Some AI employees acknowledge that while they believe extinction from AI is likely, they work for companies perceived as more safety-conscious due to career opportunities.

### Material Financial Data

* No specific financial data or figures related to company investments or profits were presented in this news report.

### Contextual Interpretation

The news highlights a growing tension between the rapid advancement of AI technology and the concerns of a segment of the public and the AI safety community regarding its potential dangers. The hunger strike, a drastic measure, underscores the perceived urgency and severity of these risks. The protesters are not just demanding a pause but are actively trying to force a moral and ethical reckoning within the AI industry, particularly targeting the leaders who are driving the development of what they consider to be the most powerful and potentially dangerous technology ever created. The lack of response from the targeted companies suggests a disconnect between the protesters' urgent calls and the industry's current priorities, which appear to be focused on innovation and market leadership.

The hunger strike to end AI

Read original at The Verge

On Guido Reichstadter’s 17th day without eating, he said he was feeling alright — moving a little slower, but alright. Each day since September 2nd, Reichstadter has appeared outside the San Francisco headquarters of AI startup Anthropic, standing from around 11AM to 5PM. His chalkboard sign states “Hunger Strike: Day 15,” though he actually stopped eating on August 31st.

The sign calls for Anthropic to “stop the race to artificial general intelligence” or AGI: the concept of an AI system that equals or surpasses human cognitive abilities. AGI is a favorite rallying cry of tech CEOs, with leaders at big companies and startups alike racing to achieve the subjective milestone first.

To Reichstadter, it’s an existential risk these companies aren’t taking seriously. “Trying to build AGI — human-level, or beyond, systems, superintelligence — this is the goal of all these frontier companies,” he told The Verge. “And I think it’s insane. It’s risky. Incredibly risky. And I think it should stop now.”

A hunger strike is the clearest way he sees to get AI leaders’ attention — and right now, he’s not the only one. Reichstadter referenced a statement from a 2023 interview with Anthropic CEO Dario Amodei that he says exemplifies the AI industry’s recklessness. “My chance that something goes quite catastrophically wrong on the scale of human civilization might be somewhere between 10 and 25 percent,” Amodei said.

Amodei and others have concluded AGI’s development is inevitable and say their goal is to simply be the most responsible custodians possible — something Reichstadter calls “a myth” and “self-serving.” In Reichstadter’s view, companies have a responsibility not to develop technology that will harm people on a large scale, and anyone who understands the risk bears some responsibility, too.

“That’s kind of what I’m trying to do, is fulfill my responsibility as just an ordinary citizen who has some respect for the lives and the wellbeing of my fellow citizens, my fellow countrymen,” he said. “I’ve got two kids, too.” Anthropic did not immediately respond to a request for comment.

Every day, Reichstadter said he waves to the security guards at Anthropic’s office as he sets up, and he watches Anthropic employees avert their eyes as they walk past him.

He said at least one employee has shared some similar fears of catastrophe, and he hopes to inspire AI company staffers to “have the courage to act as human beings and not as tools” of their company because they have a deeper responsibility since “they’re developing the most dangerous technology on Earth.”

His fears are shared by countless others in the AI safety world. It’s a splintered community, with myriad disagreements on the specific dangers AI poses over the long term and how best to stop them — even the term “AI safety” is fraught. One thing most of them can agree on, though, is that its current path bodes ill for humanity.

Reichstadter said he first became aware of the potential for “human-level” AI during his college years about 25 years ago and that, back then, it seemed far off — but with the release of ChatGPT in 2022, he sat up and took notice. He says he’s been especially concerned with how he believes AI is playing a role in increasing authoritarianism in the U.S.

“I’m concerned about my society,” he said. “I’m concerned about my family, their future. I’m concerned about what’s happening with AI to affect them. I’m concerned that it is not being used ethically. And I’m also concerned that it poses realistic grounds to believe that there’s catastrophic risks and even existential risks associated with it.”

In recent months, Reichstadter has tried increasingly public methods of getting tech leaders’ attention to an issue he believes is vital. He’s worked in the past with a group called “Stop AI,” which seeks to permanently ban superintelligent AI systems “to prevent human extinction, mass job loss, and many other problems.”

In February, he and other members helped chain shut the doors to OpenAI’s offices in San Francisco, with a few of them, including Reichstadter, being arrested for the obstruction. Reichstadter delivered a handwritten letter to Amodei via the Anthropic security desk on September 2nd, and a few days later, he posted it online.

The letter requests that Amodei stop trying to develop a technology he can’t control — and do everything in his power to stop the AI race globally — and that if he isn’t willing to do so, to tell him why not. In the letter, Reichstadter wrote, “For the sake of my children and with the urgency and gravity of our situation in my heart I have begun a hunger strike outside the Anthropic offices … while I await your response.”

“I hope that he has the basic decency to answer that request,” Reichstadter said. “I don’t think any of them have been really challenged personally. It’s one thing to anonymously, abstractly, consider that the work you’re doing might end up killing a lot of people. It’s another to have one of your potential future victims face-to-face and explain [why] to them as a human being.”

Soon after Reichstadter started his peaceful protest, two others inspired by him began a similar protest in London, maintaining a presence outside Google DeepMind’s office. And one joined him in India, fasting on livestream. Michael Trazzi participated in the London hunger strike for seven days before choosing to stop due to two near-fainting episodes and a doctor consultation, but he is still supporting the other participant, Denys Sheremet, who is on day 10.

Trazzi and Reichstadter share similar fears about the future of humanity under AI’s continued advancement, though they’re reluctant to define themselves as part of a specific community or group. Trazzi said he’s been thinking about the risks of AI since 2017. He wrote a letter to DeepMind CEO Demis Hassabis and posted it publicly, as well as passing it along through an intermediary.

In the letter, Trazzi asked that Hassabis “take a first step today towards coordinating a future halt on the development of superintelligence, by publicly stating that DeepMind would agree to halt the development of frontier AI models if all the other major AI companies in the West and China were to do the same. Once all major companies have agreed to a pause, governments could organise an international agreement to enforce it.”

Trazzi told The Verge, “If it was not for AI being very dangerous, I don’t think I would be … super pro-regulation, but I guess … there are some things in the world that, by default, the incentives are going [in] the wrong direction. I think for AI, we do need regulation.”

Amanda Carl Pratt, Google DeepMind’s director of communications, said in a statement, “AI is a rapidly evolving space and there will be different views on this technology. We believe in the potential of AI to advance science and improve billions of people’s lives. Safety, security and responsible governance are and have always been top priorities as we build a future where people benefit from our technology while being protected from risk.”

In a post on X, Trazzi wrote that the hunger strike has sparked a lot of discussion with tech workers, claiming that one Meta employee asked him, “Why only Google guys? We do cool work too. We’re also in the race.”

He also wrote in the post that one DeepMind employee said AI companies likely wouldn’t release models that could cause catastrophic harms because of the opportunity cost, while another, he said, “admitted he believed extinction from AI was more likely than not, but chose to work for DeepMind because it was still one of the most safety-conscious companies.”

Neither Reichstadter nor Trazzi have received a response yet from their letters to Hassabis and Amodei. (Google also declined to answer a question from The Verge about why Hassabis has not responded to the letter.) They have faith, though, that their actions will result in an acknowledgement, a meeting, or ideally, a commitment from the CEOs to change their trajectories.

“We are in an uncontrolled, global race to disaster,” Reichstadter said. “If there is a way out, it’s going to rely on people being willing to tell the truth and say, ‘We’re not in control.’ Ask for help.”

Hayden Field
