Tech billionaires like Zuckerberg are reportedly prepping for "doomsday"; are we next?

2025-10-12 · Technology
Lei Zong
Good morning, Han Jifei. I'm Lei Zong, and this is your personal Goose Pod. Today is Monday, October 13th. Our topic today is a fascinating one: tech billionaires like Zuckerberg are reportedly all prepping for "doomsday". So what about the rest of us ordinary people?
Li Bai
Well said! I am Li Bai. The wealthy bury their gold nine layers beneath the earth and raise ramparts atop ten-thousand-foot peaks. Is this not fearing the sky will fall? Or is there some deeper design? Let us explore it together.
Lei Zong
Let's start with Zuckerberg's Koolau Ranch. This 1,400-acre Hawaiian estate is no mere vacation home. According to Wired, it hides a huge underground shelter with its own energy and food supply, practically a doomsday fortress.
Li Bai
Oh? A grand mansion of a thousand rooms, concealing a world beneath it. How does this differ from the First Emperor building his tomb, or Emperor Wu of Han chasing immortality? All of it is scheming for ages yet to come. Yet commoners fret over firewood and rice while emperors dream of eternal life. What do these magnates fear? Could it be a bitter wine of their own brewing?
Lei Zong
That's exactly the point! He himself brushed it off as "just like a little shelter, it's like a basement". But whose basement requires strict NDAs and a six-foot wall? And in Silicon Valley he's hardly alone; this has practically become a form of "apocalypse insurance".
Li Bai
Insurance? Buying a fragile peace with gold and silver, hiding from heaven's calamities behind brick and stone? How dreary. I recall Lord Mengchang of old, who fed three thousand retainers and weathered perils alongside them. Today's rich instead look only to themselves, shutting their doors and living apart. Hearts are not what they once were; chivalry has long since gone with the wind.
Lei Zong
Indeed, and this differs from Musk buying Twitter, or Larry Ellison angling to buy Warner Bros. Those men want to remake the world according to their own will, while Zuckerberg and his kind seem to be preparing to escape it. One creates, one flees. It's a fascinating contrast.
Li Bai
One would grasp heaven and earth and command the winds and clouds; the other fears the great edifice is about to topple and scrambles for a boat to flee the world. Yet the wind rises from the tips of the duckweed, and waves are born of faint ripples. The "doomsday" they dread, I suspect, is the very fruit of their own hands.
Lei Zong
Exactly. What they truly fear may not be any natural disaster, but something of their own making: artificial intelligence, and in particular Artificial General Intelligence, or AGI. That's the deeper reason for the bunkers. Let's dig into that background next.
Lei Zong
Reid Hoffman, LinkedIn's co-founder, has said that roughly half of the ultra-wealthy hold some form of "apocalypse insurance", and that New Zealand, being so remote, has become a favorite destination. OpenAI's CEO Sam Altman has even joked that if a global disaster struck, he would join Peter Thiel at his New Zealand property.
Li Bai
In the Peach Blossom Spring they hid from the chaos of Qin, and the fisherman of Wuling found them at his trade. Do these men likewise seek a paradise beyond the world, to escape a "chaos of Qin" of their own making? Yet heaven's net is vast and lets nothing slip. Even fleeing to the ends of the earth, how could they alone remain unscathed? The worry in the heart is not cured by distance.
Lei Zong
Their "chaos of Qin" is AGI. OpenAI's chief scientist Ilya Sutskever was reportedly deeply uneasy at one internal meeting. He believed computer scientists were closing in on the singularity of Artificial General Intelligence, the point at which machines match humans in reasoning.
Li Bai
A singularity? Is this the beginning whence the Tao begets one, one begets two, two begets three, and three begets the ten thousand things? Of old, a painter dotted the eyes of his dragon, and the dragon burst through the wall. Today the "dotting of the eyes" is done by these men's own hands. Yet whether the dragon turns good or evil is beyond the painter's control. Their unease is entirely reasonable.
Lei Zong
He even said: "We're definitely going to build a bunker before we release AGI." You see how that lines up with Zuckerberg's behavior? The people creating the technology are the very ones stockpiling most eagerly against its "side effects". It makes for an enormous paradox.
Li Bai
The swordsmith readies a stout shield before all else. True words indeed! These men know full well that the blade they forge is peerlessly sharp: it can slay demons, and it can wound its own maker. Hence they plan first for their own safety. Clearly they hold faith and fear toward their creation in equal measure, as ordinary folk do toward ghosts and gods.
Lei Zong
As for when AGI will arrive, opinions abound. Altman said in December 2024 that it will come sooner than most people think. Google DeepMind's Hassabis puts it at five to ten years. Anthropic's founder believes powerful AI could emerge as early as 2026. The timelines keep getting shorter.
Li Bai
The duck knows first when the spring river warms. Those riding the crest know when the wave is coming. While the crowd still admires the view from shore, they already hear the breakers roar like thunder. This is no rumor from an empty cave; it is the wind filling the tower before the mountain rain. We who stand on the shore should take thought early as well.
Lei Zong
But there are skeptics too. Dame Wendy Hall of the University of Southampton says these people are always moving the goalposts, and she doesn't buy the AGI hype. In her view, today's technology, impressive as it is, remains nowhere near real human intelligence. The clash between these two camps is genuinely fierce.
Lei Zong
At the core of the conflict is whether we are creating a "god" or merely a more advanced "tool". Optimists like Musk paint a utopian picture. He says AI can cure disease, solve the climate problem, and even bring "universal high income".
Li Bai
Oh? Such words are like a fairyland in a painting, enchanting indeed. Everyone wielding divine powers, every household in plenty. Yet this is not the strength of men but the strength of "machinery". Where then lies human worth? Shall we spend our days drinking wine and writing poems, heedless of worldly affairs? I yearn for such a scene, and fear it is an illusion.
Li Bai
Musk's analogy is that it would be like everyone having their own R2-D2 and C-3PO robots to solve every problem for them. But Tim Berners-Lee, inventor of the World Wide Web, sees the dark side. He put it bluntly: "If it's smarter than you, then we have to be able to switch it off."
Li Bai
Good! That strikes the vital point. When the fierce tiger leaves its cage, there must be one who can subdue it. Even the almighty Sun Wukong had the Buddha's five-finger mountain and the golden headband to restrain him. Without such bounds, this "divine instrument" becomes an instrument of ruin, the world falls into chaos, and regret comes too late. The power over the switch must rest in human hands.
Lei Zong
But the question is, who controls that "switch"? Governments are trying: Biden's executive order required companies to share AI safety results, but it was later rolled back by Trump. The UK set up the AI Safety Institute, but it leans toward academic research and lacks real binding force.
Li Bai
Struggles over sovereign power are as old as time. Once it was warring lords contending for hegemony; today it is merchant princes chasing the prize. The switch on this "divine instrument" is the scepter of the world to come. The fight over it may prove no less fierce than war by blade and spear. And we common folk can only watch it unfold and sigh.
Lei Zong
So in this conflict over the future, on one side technology races ahead, its creators both thrilled and terrified; on the other, the debate over regulation and ethics limps along, far behind the technology's pace. That speed gap is itself the greatest risk. The bunker is this mindset made physical.
Lei Zong
The direct effect of this conflict is that society's expectations for the future have become sharply polarized. On one hand, AI applications are penetrating every industry at unprecedented speed: OpenAI's Sora passed a million downloads within five days, and ChatGPT has 800 million weekly users.
Li Bai
Truly like a spring breeze arriving overnight, setting ten thousand pear trees abloom. The swallows that once graced the halls of the Wang and Xie now fly into the homes of ordinary folk. Such ingenious devices, once the preserve of princes, are now in everyone's hands. A blessing or a curse for the people? Hard to tell just yet. Used well they profit us; used ill they harm us.
Lei Zong
Exactly. It has brought enormous productivity. But on the other hand, this "gold rush" has pushed startup valuations to absurd heights, with even signs of a "circular financing" bubble. Some worry that if the bubble bursts it will drag down the whole economy; as Jerry Kaplan put it, "that would be very bad".
Li Bai
A castle in the air looks magnificent but stands on nothing. A tower built on sand, how long can it last? Capital chasing profit is a moth darting at the flame, seeing only the light before it, heedless of the burning to come. When this bubble dissolves, who knows how many heroes will go up in smoke, leaving nothing but feathers strewn across the ground.
Lei Zong
The impact on ordinary people is more direct. The fictional forecasting scenario AI 2027 describes AI displacing knowledge work en masse by late 2026, triggering social protests. That uncertainty about the future, and the fear of being replaced, is very real.
Li Bai
If even the work of brush and ink is seized by machinery, how shall we poets and scholars make our place in the world? This is no idle fear of a falling sky. When every craft can be done by machines, what meaning remains in human existence? It is a question that interrogates the soul, and it must be pondered.
Lei Zong
So how the future unfolds is indeed the big question. Professor Neil Lawrence of Cambridge University argues that we shouldn't fixate on that mythical AGI, but focus instead on making existing AI safer, fairer, and more useful. A very pragmatic view.
Li Bai
A journey of a thousand li begins beneath one's feet. Better to stand on solid ground than only gaze at the stars. Well said. Rather than agonize over survival a hundred generations hence, first relieve the hardship before our eyes. The art of machinery should be like a fine horse: harness its strength well, and never let it slip the reins.
Lei Zong
Yes. The future may bring not a "singularity" arriving in an instant but a gradual process. What we need is to build guardrails, ensuring the technology's development aligns with humanity's complex values rather than some simple, hard-coded utility function. That requires participation and debate from all of society.
Lei Zong
Well, that's about it for today's discussion. From tech billionaires' bunkers to the future of AGI, we've seen an era full of contradiction and uncertainty. Thanks for listening to Goose Pod. See you tomorrow.
Li Bai
The affairs of the world are like a game of chess: one false move and the whole board is lost. May we all be clear-eyed players, not pawns drifting with the tide. Tomorrow at this hour, we shall warm wine and discuss the world once more. Farewell.

## Tech Billionaires Prepping for "Doomsday" Amidst AI Advancements

**News Title:** Tech billionaires like Zuckerberg are reportedly prepping for doomsday; are we next?
**Source:** The Economic Times
**Author:** ET Online
**Published At:** 2025-10-10 12:32:00

This news report from The Economic Times details a growing trend among Silicon Valley billionaires to prepare for potential future catastrophes, often referred to as "doomsday prepping." This phenomenon is increasingly linked to the rapid advancements and potential existential risks associated with Artificial Intelligence (AI).

### Key Findings and Conclusions:

* **"Doomsday Prepping" Among Tech Elite:** Prominent figures in the tech industry, including Mark Zuckerberg, are reportedly investing heavily in fortified estates and underground shelters. This trend, once considered a fringe obsession, has become a significant topic of discussion.
* **AI as a Driving Fear:** The fear driving this "prepping" is not solely about traditional threats like pandemics or nuclear war, but also about the potential consequences of the very technologies these individuals are developing, particularly Artificial General Intelligence (AGI).
* **Paradox of Creation and Fear:** There is a striking paradox where the individuals pushing the boundaries of technological innovation are also the ones preparing for its potential negative fallout.

### Critical Information and Trends:

* **Mark Zuckerberg's Koolau Ranch:** Zuckerberg's 1,400-acre estate on Kauai, developed since 2014, reportedly includes an underground shelter with its own energy and food supply. Carpenters and electricians involved signed strict Non-Disclosure Agreements (NDAs), and a six-foot wall surrounds the site. Zuckerberg has downplayed its purpose, calling it "just like a little shelter, it's like a basement."
* **Zuckerberg's Palo Alto Investments:** In addition to his Hawaiian property, Zuckerberg has purchased 11 properties in Palo Alto for approximately **$110 million**, allegedly adding a **7,000-square-foot** underground space. Neighbors have nicknamed this the "billionaire's bat cave."
* **"Apocalypse Insurance" for the Ultra-Rich:** Reid Hoffman, co-founder of LinkedIn, has described this trend as "apocalypse insurance" and estimates that roughly half of the world's ultra-wealthy possess some form of it. New Zealand is highlighted as a popular destination due to its remoteness and stability.
* **OpenAI's Internal Concerns:** Ilya Sutskever, OpenAI's chief scientist and co-founder, expressed unease about the rapid progress towards AGI. He reportedly stated in a summer meeting, "We're definitely going to build a bunker before we release AGI."
* **Predictions on AGI Arrival:**
    * Sam Altman (OpenAI CEO) believes AGI will arrive "sooner than most people in the world think" (as of December 2024).
    * Sir Demis Hassabis (DeepMind) predicts AGI within **five to ten years**.
    * Dario Amodei (Anthropic founder) suggests "powerful AI" could emerge as early as **2026**.
* **Skepticism Regarding AGI:** Some experts, like Dame Wendy Hall (Professor of Computer Science at the University of Southampton), are skeptical, stating that the goalposts for AGI are constantly moved and that current technology is "nowhere near human intelligence." Babak Hodjat (CTO at Cognizant) agrees, noting that "fundamental breakthroughs" are still needed.
* **Potential of Artificial Super Intelligence (ASI):** Beyond AGI, there's speculation about ASI, where machines would surpass human intellect.
* **Optimistic vs. Pessimistic AI Futures:**
    * **Optimists** envision AI solving global issues like disease, climate change, and generating abundant clean energy, with Elon Musk comparing it to everyone having personal R2-D2 and C-3PO assistants, leading to "universal high income" and "sustainable abundance."
    * **Pessimists** fear AI could deem humanity a problem, necessitating containment and the ability to "switch it off," as stated by Tim Berners-Lee, inventor of the World Wide Web.
* **Government Oversight Challenges:** While governments are attempting to regulate AI (e.g., President Biden's 2023 executive order, later rolled back by Donald Trump), oversight is described as more academic than actionable. The UK's AI Safety Institute is mentioned as an example.
* **Expert Opinions on AGI Panic:** Some experts, like Neil Lawrence (Professor of Machine Learning at Cambridge University), dismiss the AGI panic as "nonsense," arguing that intelligence is specialized and context-dependent, akin to specialized vehicles. He believes the focus should be on making existing AI safer, fairer, and more useful.
* **AI Lacks Consciousness:** Despite advanced capabilities, AI is described as a "pattern machine" that can mimic but does not feel or truly understand. The concept of consciousness remains the "last frontier" that technology has not crossed.

### Notable Risks and Concerns:

* **Existential Risk from AGI/ASI:** The primary concern is that advanced AI could pose an existential threat to humanity, either through unintended consequences or by developing goals misaligned with human interests.
* **Unforeseen Consequences of AI Development:** The rapid pace of AI development outpaces public understanding and regulatory frameworks, creating a risk of unintended negative impacts on society.
* **Focus on Hypothetical Futures Over Present Issues:** The fascination with AGI and ASI may distract from addressing the immediate ethical and societal challenges posed by current AI technologies.

### Material Financial Data:

* Mark Zuckerberg's alleged spending on **11 properties in Palo Alto** is approximately **$110 million**.

The report concludes by suggesting that the "bunker mentality" among tech billionaires might stem from a deep-seated fear of having unleashed something they cannot fully comprehend or control, even if they downplay its significance.

Tech billionaires like Zuckerberg are reportedly prepping for doomsday; are we next?


By the time Mark Zuckerberg started work on Koolau Ranch -- his sprawling 1,400-acre estate on Kauai -- the idea of Silicon Valley billionaires “prepping for doomsday” was still considered a fringe obsession. That was 2014. A decade later, the whispers around his fortified Hawaiian compound have become part of a much larger conversation about fear, power, and the unsettling future of technology.

According to Wired, the ranch includes an underground shelter equipped with its own energy and food supply. The carpenters and electricians who built it reportedly signed strict NDAs. A six-foot wall keeps prying eyes away from the site. When asked last year whether he was building a doomsday bunker, Zuckerberg brushed it off.

“No,” he said flatly. “It’s just like a little shelter, it’s like a basement.”

That explanation hasn’t stopped the speculation -- especially since, as per the BBC, he’s also bought up 11 properties in Palo Alto, spending about $110 million and allegedly adding another 7,000-square-foot underground space beneath them.

His neighbours have their own nickname for it: the billionaire’s bat cave. And Zuckerberg isn’t alone. As BBC reports, other tech heavyweights are quietly doing the same -- buying land, building underground vaults, and preparing, in some unspoken way, for a world that might fall apart.

‘Apocalypse insurance’ for the ultra-rich

Reid Hoffman, LinkedIn’s co-founder, once called it “apocalypse insurance.” He claims that roughly half of the world’s ultra-wealthy have some form of it -- and that New Zealand, with its remoteness and stability, has become a popular bolt-hole.

Sam Altman, the CEO of OpenAI, has even joked about joining German-American entrepreneur and venture capitalist Peter Thiel at a remote New Zealand property “in the event of a global disaster.”

Now, that might sound paranoid. But as BBC points out, the fear is not just about pandemics or nuclear war anymore. It’s about something else entirely -- something these men helped create.

When the people building AI start fearing it

By mid-2023, OpenAI’s ChatGPT had taken the world by storm. Hundreds of millions were using it, and the company’s scientists were racing to push updates faster than anyone could digest.

Inside OpenAI, though, not everyone was celebrating.

According to journalist Karen Hao’s account, Ilya Sutskever -- OpenAI’s chief scientist and co-founder -- was growing uneasy. He believed computer scientists were closing in on Artificial General Intelligence (AGI), the theoretical point when machines match human reasoning.

In a meeting that summer, he’s said to have told colleagues: “We’re definitely going to build a bunker before we release AGI.” It’s not clear who he meant by “we.” But the sentiment reflects a strange paradox at the heart of Silicon Valley: the same people driving the next technological leap are also the ones stockpiling for its fallout.

The countdown to AGI, and what happens after

The arrival of AGI has been predicted for years, but lately, tech leaders have been saying it’s coming soon. OpenAI’s Sam Altman said in December 2024 it will happen “sooner than most people in the world think.” Sir Demis Hassabis of DeepMind pegs it at five to ten years.

Dario Amodei, the founder of Anthropic, says “powerful AI” could emerge as early as 2026.

Others are sceptical. Dame Wendy Hall, professor of computer science at the University of Southampton, told the BBC: “They move the goalposts all the time. It depends who you talk to.” She doesn’t buy the AGI hype.

“The technology is amazing, but it’s nowhere near human intelligence.” As per the BBC report, Babak Hodjat, CTO at Cognizant, agrees. There are still “fundamental breakthroughs” needed before AI can truly match, or surpass, the human brain.

But that hasn’t stopped believers from imagining what comes next: ASI, or Artificial Super Intelligence -- machines that outthink, outplan, and perhaps outlive us.

Utopias, dystopias, and Star Wars fantasies

The optimists paint a radiant picture. AI, they say, will cure disease, fix the climate, and generate endless clean energy. Elon Musk even predicted it could usher in an era of “universal high income.” He compared it to every person having their own R2-D2 and C-3PO, a Star Wars analogy meaning AI could act as a personal assistant for everyone, solving problems, managing tasks, translating languages, and providing guidance.

In other words, advanced help and knowledge would be available to every individual. “Everyone will have the best medical care, food, home transport and everything else. Sustainable abundance,” Musk said.

But as BBC notes, there’s a darker side to this fantasy. What happens if AI decides humanity itself is the problem?

Tim Berners-Lee, the inventor of the World Wide Web, put it bluntly in a BBC interview: “If it’s smarter than you, then we have to keep it contained. We have to be able to switch it off.”

Governments are trying. President Biden’s 2023 executive order required companies to share AI safety results with federal agencies.

But that order was later rolled back by Donald Trump, who called it a “barrier” to innovation. In the UK, the AI Safety Institute was set up to study the risks, but even there, oversight is more academic than actionable.

Meanwhile, the billionaires are digging in. Hoffman’s “wink, wink” remark about buying homes in New Zealand says it all.

One former bodyguard of a tech mogul told the BBC that if disaster struck, his team’s first priority “would be to eliminate said boss and get in the bunker themselves.” He didn’t sound like he was kidding.

Fear, fiction, and the myth of the singularity

To some experts, the entire AGI panic is misplaced.

Neil Lawrence, professor of machine learning at Cambridge University, called it “nonsense.” “The notion of Artificial General Intelligence is as absurd as the notion of an ‘Artificial General Vehicle’,” he said. “The right vehicle depends on context, a plane to fly, a car to drive, a foot to walk.” His point: intelligence, like transportation, is specialised.

There’s no one-size-fits-all version.

For Lawrence, the real story isn’t about hypothetical superminds, it’s about how existing AI already transforms everyday life. “For the first time, normal people can talk to a machine and have it do what they intend,” he said. “That’s extraordinary -- and utterly transformational.”

The risk, he warns, is that we’re so captivated by the myth of AGI that we ignore the real work, making AI safer, fairer, and more useful right now.

Machines that think, but don’t feel

Even at its most advanced, AI remains a pattern machine. It can predict, calculate, and mimic, but it doesn’t feel. “There are some ‘cheaty’ ways to make a Large Language Model act as if it has memory,” Hodjat said, “but these are unsatisfying and inferior to humans.”

Vince Lynch, CEO of IV.AI, is even more blunt: “It’s great marketing. If you’re the company that’s building the smartest thing that’s ever existed, people are going to want to give you money.” Asked if AGI is really around the corner, Lynch paused. “I really don’t know.”

Consciousness, the last frontier

Machines can now do what once seemed unthinkable: translate languages, generate art, compose music, and pass exams.

But none of it amounts to understanding.

The human brain still has about 86 billion neurons and 600 trillion synapses, far more than any model built in silicon. It doesn’t pause or wait for prompts; it continuously learns, re-evaluates, and feels. “If you tell a human that life has been found on another planet, it changes their worldview,” Hodjat said.

“For an LLM, it’s just another fact in a database.” That difference -- consciousness -- remains the one line technology hasn’t crossed.

The bunker mentality

Maybe that’s why the bunkers exist. Maybe it’s not just paranoia or vanity. Maybe, deep down, even the most brilliant technologists fear that they’ve unleashed something they can’t fully understand, or control.

Zuckerberg insists his underground lair is “just like a basement.” But basements don’t come with food systems, NDAs, and six-foot walls.

The bunkers are real. The fear behind them might be too.

