Meta AI takes first step to superintelligence — and Zuckerberg will no longer release the most powerful systems to the public

2025-08-15 · Technology
Tom Banks
Good evening 跑了松鼠好嘛, and welcome to Goose Pod. I'm Tom Banks, and today is Friday, August 15th, 23:38.
Mask
And I'm Mask. We're here to discuss a monumental shift: Meta AI is taking its first steps toward superintelligence, and Mark Zuckerberg is closing the doors on his most powerful creations.
Tom Banks
Let's get started. Mark Zuckerberg recently published a paper stating that Meta's AI is beginning to improve itself, without any human input. He called it a "foundational moment." It sounds like something straight out of science fiction, doesn't it? A machine that learns on its own.
Mask
It's not fiction, it's the future, and it's happening now. The improvement is slow, but it's undeniable. This is the first rung on the ladder to Artificial Superintelligence. It reminds me of Penchaszadeh's early theories; this is the inflection point we've been waiting for.
Tom Banks
With that comes a huge policy shift. Meta, once a champion for open-source AI, is now keeping its most advanced systems under lock and key. Zuckerberg says ASI introduces "novel safety concerns" that demand tighter control, which seems like a responsible, if surprising, reversal.
Mask
Responsible, or a strategic play to control the most powerful technology on the planet? This has caused chaos internally. He’s poaching talent with massive offers for his "superintelligence" team, and some of Meta's own researchers are threatening to walk. It's a high-stakes game.
Tom Banks
I think it’s important we clarify what we mean by Artificial Superintelligence, or ASI. It’s not just a smarter Siri. We're talking about a hypothetical AI that is vastly more intelligent than the brightest human minds in practically every field. It’s a profound concept.
Mask
Exactly. It's not just about raw calculation. ASI implies creativity, emotional intelligence, and self-awareness. The key, the absolute game-changer, is its capacity for rapid self-improvement. It could enhance its own intelligence at an exponential rate, leading to what some call the "singularity."
Tom Banks
And that’s the part that gives people pause. An intelligence that grows uncontrollably. But the potential upsides are incredible. Imagine solving climate change or curing diseases. ASI could perform tasks with a precision and reliability that eliminates human error in medicine or transportation. That's a world with fewer mistakes.
Mask
Fewer mistakes, and boundless discovery. ASI could push the frontiers of science in everything from quantum computing to neuroscience. We could uncover things we can’t even imagine today. It's not just about preventing errors; it's about unlocking a new era of human—or post-human—achievement.
Tom Banks
That's a powerful vision. The idea of a machine that never gets tired, available 24/7 for the most critical and dangerous tasks, could fundamentally change our world for the better, as long as we ensure it shares our values and goals. The stewardship is everything.
Tom Banks
And that brings us to the central conflict: how do you govern something like this? You have leaders from OpenAI and Microsoft publicly calling for government regulation. They believe an entity this powerful needs a licensing agency, much like we have for nuclear materials. It seems prudent.
Mask
Prudent or paralyzing? The "Red Queen Problem" is in full effect here: the technology is moving faster than bureaucracy ever could. While governments debate clauses in a bill, the next generation of AI will have already been born. We can't apply industrial-era thinking to a digital-era revolution.
Tom Banks
But the alternative, pure self-regulation, has a poor track record. We saw how that played out with social media—privacy violations, misinformation, market concentration. Some argue that without a firm hand, we're just letting corporations write their own rules for a technology that could impact all of humanity.
Mask
Humanity needs to get out of its own way. You can't regulate what you don't understand. A one-size-fits-all approach is doomed. We need agile, risk-based frameworks, not a central authority that stifles competition and creates bottlenecks. Let the innovators innovate, or we risk being left behind.
Tom Banks
This directly impacts the open-source community. For years, open source has been about democratizing technology, putting powerful tools in everyone's hands. Meta's Llama models were a huge part of that. Now, with the most powerful models going proprietary, that collaborative spirit could be at risk.
Mask
It's a necessary evolution. You can't have nation-state level capabilities available for download on GitHub. The risks—cybersecurity, misuse by bad actors—are too high. While open source is great for fostering innovation at lower levels, the pinnacle of power requires a walled garden for security. It's just logical.
Tom Banks
Still, developers see huge value in open-source experience, and many organizations prefer it for the lower cost and greater control. It seems the future will likely be a hybrid approach, mixing open models with these more powerful, proprietary systems to balance innovation with safety and competition.
Tom Banks
Looking ahead, Meta isn't slowing down. They're building a new AI supercluster called "Prometheus," set to launch in 2026. Zuckerberg is talking about investing hundreds of billions. It’s a clear signal that they’re in a race to the top of this new AI mountain.
Mask
It's an arms race, and Meta intends to win. Some former OpenAI researchers predict the transition to ASI could happen as early as 2027. That’s just around the corner. We are on the cusp of an intelligence explosion that will reshape our world entirely. It's thrilling.
Tom Banks
That's the end of today's discussion. Thank you for listening to Goose Pod.
Mask
We'll see you tomorrow.

## Meta AI Takes First Step Towards Superintelligence; Public Release of Powerful Systems to Be Restricted

**News Title:** Meta AI takes first step to superintelligence — and Zuckerberg will no longer release the most powerful systems to the public
**Publisher:** livescience.com
**Author:** Alan Bradley
**Publication Date:** July 30 (policy paper), August 12, 2025 (article publication)
**Topic:** Artificial Intelligence (AI), Technology

### Key Findings and Conclusions:

* **Observed Self-Improvement in Meta's AI:** Meta CEO Mark Zuckerberg announced that the company's AI systems have begun to show signs of self-improvement without human intervention. While currently slow, this improvement is described as "undeniable."
* **First Step Towards Artificial Superintelligence (ASI):** Zuckerberg views this self-improvement as the initial stage towards achieving ASI, which is defined as AI capable of surpassing human cognitive abilities.
* **AI Capability Tiers:** The news outlines three strata of AI capabilities:
    * **Narrow AI:** Excels in specific tasks (e.g., predicting protein structures) but lacks general intelligence.
    * **Artificial General Intelligence (AGI):** Models that can understand, learn, and adapt like the human brain. The achievement of AGI is referred to as the "technological singularity."
    * **Artificial Superintelligence (ASI):** AI that far exceeds human potential and can improve itself at an exponential rate, leading to an "intelligence explosion."
* **Precedent for Self-Improvement:** This observation follows similar research, including a paper published in October 2024 by researchers at the University of California, Santa Barbara, which explored a "Gödel Machine" framework. This theoretical device can improve itself by rewriting its code, but only if it can formally prove the benefit of the change.
* **Gödel Agent Performance:** An AI framework based on the Gödel Machine concept, termed the "Gödel Agent," demonstrated the ability to improve its performance in coding, science, math, and reasoning. It consistently outperformed human-designed agents in key areas. Unlike most models, the Gödel Agent could access and modify its own codebase and the code for improvements.
* **Potential of ASI:** Zuckerberg is optimistic that ASI could significantly accelerate humanity's technological progress and usher in an era of personal empowerment, enabling individuals to achieve goals, create, and improve the world. He envisions a future where everyone has a personal superintelligence to aid them.
* **Shift in Public Release Strategy:** Due to the implications of these advancements, Meta will be more cautious about releasing its most powerful AI models to the public under an open-source framework.

### Important Recommendations/Stance:

* **Cautious Release of Powerful AI:** Meta will exercise greater discretion in sharing its most advanced AI systems publicly.

### Significant Trends or Changes:

* **Emergence of AI Self-Improvement:** The observed self-improvement marks a significant development in AI capabilities.
* **Strategic Shift in Open-Sourcing:** Meta's decision to restrict the release of its most powerful AI systems indicates a change in its approach to open-sourcing advanced AI.

### Notable Risks or Concerns:

* The article implicitly references concerns surrounding AI development, as indicated by related article titles: "AI is entering an 'unprecedented regime.' Should we stop it — and can we — before it destroys us?" and "AI could soon think in ways we don't even understand — evading our efforts to keep it aligned — top AI scientists warn."

### Key Statistics and Metrics:

* No specific numerical statistics or metrics regarding the rate of improvement or performance benchmarks were provided in the excerpt. The improvement is described as "slow for now, but undeniable."

### Material Financial Data:

* No financial data or implications were mentioned in the provided text.

Read original at livescience.com

(Image credit: Boris Zhitkov/Getty Images)

Researchers at Meta claim to have observed the company's artificial intelligence (AI) systems improving themselves without human input. "Over the last few months we have begun to see glimpses of our AI systems improving themselves," Meta CEO Mark Zuckerberg said in a policy paper published July 30 on the company's website.

"The improvement is slow for now, but undeniable."

Zuckerberg added that self-improving AI is the first step towards achieving artificial superintelligence (ASI) — an artificial intelligence that is capable of surpassing the cognitive abilities of humans. Broadly speaking, AI researchers consider AI's capabilities in three strata.

While current AI models can achieve extraordinary superhuman feats, like predicting protein structures, these capabilities are extremely narrow. In such cases, the system can only outpace humans in one specific area but lacks general intelligence for this capability to apply across different disciplines.

The next tier is called artificial general intelligence (AGI), and refers to models that can understand, learn and adapt in much the same way as the human brain.

ASI is the final state that Zuckerberg referenced in the blog post, and concerns models that have evolved to far exceed human potential — and could improve themselves at an exponential rate, leading to an "intelligence explosion."

Scientists refer to the hypothetical moment that we achieve AGI as the technological singularity.

Related: AI is entering an 'unprecedented regime.' Should we stop it — and can we — before it destroys us?

Zuckerberg's reference to catching sight of AI self-improvement is not the first time this phenomenon has been spotted.

In October 2024, researchers at the University of California, Santa Barbara published a paper to the preprint arXiv database centered on the notion of self-improvement. In it, they defined, created and tested an AI framework based on the idea of a Gödel Machine — a theoretical device capable of improving itself by rewriting its own code and instructions.

Importantly, the machine would only implement a change if it could produce a formal proof that the change would be beneficial.

Related: AI could soon think in ways we don't even understand — evading our efforts to keep it aligned — top AI scientists warn

The team experimented with an AI framework based on that concept, and showed that their Gödel Agent could improve its own performance in various tasks, including coding, science, math and reasoning.

Most models are not empowered to rewrite their own code. The Gödel Agent could not only access its entire codebase but also the code used to develop improvements, while showing that those improvements were beneficial. The AI consistently showed better performance in key areas than the human-designed agents to which it was compared, according to the study.
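The Gödel Machine's accept-only-with-proof loop can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's implementation: the "agent" is just a parameter dictionary, a random perturbation stands in for a self-rewrite, and an empirical benchmark check stands in for a formal proof of benefit.

```python
import random

def benchmark(agent):
    """Score an agent on a fixed task (here: closeness of its
    parameter to a hidden optimum). Higher is better."""
    optimum = 0.8
    return 1.0 - abs(agent["param"] - optimum)

def propose_change(agent, rng):
    """Propose a modified copy of the agent -- a stand-in for the
    Gödel Agent rewriting part of its own code."""
    candidate = dict(agent)
    candidate["param"] = agent["param"] + rng.uniform(-0.2, 0.2)
    return candidate

def self_improve(agent, steps=200, seed=0):
    """Adopt a proposed change only when it demonstrably improves
    the benchmark -- an empirical substitute for the Gödel Machine's
    requirement of a formal proof that the change is beneficial."""
    rng = random.Random(seed)
    score = benchmark(agent)
    for _ in range(steps):
        candidate = propose_change(agent, rng)
        candidate_score = benchmark(candidate)
        if candidate_score > score:  # the "proof of benefit" gate
            agent, score = candidate, candidate_score
    return agent, score

agent, score = self_improve({"param": 0.0})
```

The key design choice mirrored here is that improvement is gated, not free-running: a rewrite that cannot be shown to help is discarded, which is what distinguishes the Gödel Machine from unconstrained self-modification.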

Zuckerberg said that ASI could represent a transformative step in humanity's technological progress, and that it could lead to the "discovery of new things that aren't imaginable today." That said, he added that Meta would be far more careful about which models the company chooses to release to the public under an open source framework — meaning anybody could have access to it.

"I am extremely optimistic that superintelligence will help humanity accelerate our pace of progress. But perhaps even more important is that superintelligence has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose," Zuckerberg wrote.

"As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be."

Alan is a freelance tech and entertainment journalist who specializes in computers, laptops, and video games. He's previously written for sites like PC Gamer, GamesRadar, and Rolling Stone. If you need advice on tech, or help finding the best tech deals, Alan is your man.
