In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia

2025-11-09 | Technology
Tom Bankswww
Good morning 跑了松鼠好嘛, I'm Tom Bankswww, and this is Goose Pod for you. Today is Sunday, November 9th. We're diving into a topic that’s stirring up a lot of debate about truth and technology.
Mask
And I'm Mask. The topic is "In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia." We’re not just assessing it; we are building the future of information, a future that moves beyond the dusty, biased halls of old encyclopedias.
Tom Bankswww
That's a bold claim to start with. The launch of your AI-powered encyclopedia, Grokipedia, has certainly been eventful. It was positioned as a competitor to Wikipedia, but it seems to have had what some are calling a 'choppy start' with some significant factual errors.
Mask
Disruption is never clean. We're iterating in public. Some minor inaccuracies at launch are the price of progress. What we're building is a massive improvement over "Wokepedia," which has become a tool for left-wing activists to push their narratives. Grokipedia is about real-time, dynamic truth.
Tom Bankswww
Well, the eminent historian Sir Richard Evans might disagree. He checked his own entry on Grokipedia and found it was filled with falsehoods—from his doctoral supervisor to his professorship at Cambridge. He said, and I quote, "AI just hoovers up everything," giving chatroom comments equal weight to academic work.
Mask
Academics are attached to their old, slow methods. We move fast. The system learns. Look at the Gamergate controversy. Wikipedia calls it a "misogynistic online harassment campaign." Grokipedia correctly identifies it as a "grassroots online movement" exposing a lack of transparency in video game journalism. It’s about perspective.
Tom Bankswww
But is it a new perspective, or is it just taking from the old one? Reports show many of Grokipedia's articles, especially on scientific topics, were lifted almost word-for-word from Wikipedia. It seems to be using the very source it aims to usurp as its foundation.
Mask
We use all available data to train the model. That includes Wikipedia. The goal is to synthesize and improve upon it, to filter out the bias and present a more complete picture. This is part of a much larger mission. xAI is making rapid progress toward AGI with Grok 5.
Tom Bankswww
That ambition is clear, but it comes with concerns. People see Tesla's stock valuation, your relentless drive, and they wonder if this push for Grokipedia is another case of moving too fast, of prioritizing disruption over the responsibility that comes with being a source of truth.
Mask
Responsibility is ensuring humanity has access to the broadest, least-censored set of information possible so we can become a multi-planetary species. The old gatekeepers have failed. It's time for a new approach, one built on algorithms and truth, not committees and agendas. We're building an encyclopedia for the future.
Tom Bankswww
You know, to really appreciate the scale of that ambition, it helps to look back. The desire to collect all human knowledge in one place is an ancient one. It didn't start with the internet. It goes back thousands of years, almost as soon as we started writing things down.
Mask
Ancient history. A collection of facts on papyrus. It's fundamentally different. We are creating a dynamic, self-correcting organism of knowledge. It's not a static book on a shelf. The scale is planetary, and the intelligence is artificial, not just human. But please, continue with the history lesson.
Tom Bankswww
Well, I think it's important context. Take Pliny the Elder in ancient Rome, around 78 AD. His 'Naturalis Historia' was probably the first real attempt at a universal encyclopedia. It had 37 books covering everything from astronomy to zoology. He was trying to create a complete snapshot of Roman knowledge.
Mask
A snapshot. Exactly. A static image in time. Knowledge isn't static. It evolves every second. Grok updates in real-time. It's a continuous stream, not a frozen pond. The very concept of a "book" of knowledge is obsolete. We are building the river.
Tom Bankswww
And that river has many sources. Through the Middle Ages, scholars like Isidore of Seville and Vincent of Beauvais built upon those earlier works. Then you have these incredible contributions from around the world, like the monumental Yongle Encyclopedia in 15th-century China. It was the largest encyclopedia in history until Wikipedia surpassed it.
Mask
All impressive feats of manual labor. Imagine the manpower it took to compile those. We can now do more in an afternoon with a few racks of GPUs. It’s about efficiency and scale. We are democratizing access to information on a level those emperors could never have dreamed of.
Tom Bankswww
That democratization really began in the 18th century with works like the French 'Encyclopédie' and the Encyclopædia Britannica. They were engines of the Enlightenment, spreading new ideas. But for centuries, they were luxury items. It wasn't until the 19th and 20th centuries that they became more accessible to ordinary families.
Mask
And then they became outdated the moment they were printed. A beautiful, leather-bound collection of last year's news. My kids will never understand that. Why would you trust something that can't be updated instantly? It’s a fundamentally flawed model that was destined for the dustbin of history.
Tom Bankswww
The digital revolution certainly changed everything. Microsoft's Encarta in 1993 was a big step, but the real game-changer was the internet and, eventually, Wikipedia in 2001. It was crowd-sourced, constantly updated, and free. It made Britannica cease its print edition in 2012, after 244 years.
Mask
Wikipedia was a good start. A cute, crowd-sourced experiment. But it has become stagnant, captured by a specific ideology. It’s version 1.0 of a digital encyclopedia. We are building version 2.0. The next logical step in this long history is taking human bias out of the loop as much as possible.
Tom Bankswww
And that’s the heart of the conflict, isn't it? It’s a clash of knowledge cultures. On one hand, you have Wikipedia's model: human collaboration, transparent policies, building trust over time. On the other, you have Grokipedia's Silicon Valley mindset: algorithmic aggregation, where mistakes are a feature, not a bug.
Mask
Mistakes are how you learn! Iteration is key. Wikipedia’s model is not about trust, it’s about consensus among a small group of editors with an agenda. We believe in algorithmic neutrality. The AI scans sources and rebuilds truth from the ground up, detecting bias instead of enforcing it.
Tom Bankswww
But the output suggests the AI has its own bias. Grokipedia called the far-right group Britain First a 'patriotic political party.' Wikipedia calls it 'neo-fascist.' Grokipedia’s entry on the Ukraine invasion cites the Kremlin's terminology about 'denazifying' Ukraine. These aren't neutral presentations of facts.
Mask
It presents all credible viewpoints, not just the one sanctioned by the mainstream media. The world is complex, and pretending otherwise is a disservice to the pursuit of truth. Our system is designed to be less censored, to allow users to see the full spectrum of the conversation, not just a curated slice.
Tom Bankswww
But who curates the curator? With Grokipedia, that power is centralized under xAI's control. There are no public revision histories or talk pages for users to debate and audit claims. With Wikipedia, the entire process is out in the open. It’s accountable. Where is the accountability for Grokipedia?
Mask
Accountability comes from the results. Does the system get smarter? Does it become more accurate over time? We are not building a debate club; we are building an information engine. The ultimate arbiter of truth is reality, and our AI is designed to get closer to reality than any human committee ever could.
Tom Bankswww
This really brings us to a critical question for our society: When artificial intelligence becomes our primary source of understanding, who controls the narrative? We’re crossing a threshold where machines now shape what we accept as fact, and the reasoning behind their conclusions is often hidden inside a black box.
Mask
The narrative should be controlled by the person seeking the information. Our tools provide access, not dogma. The old narratives were controlled by a handful of media corporations and university faculties. We are breaking that monopoly. This is about intellectual freedom. It's about building a configurable knowledge infrastructure.
Tom Bankswww
But every AI has an 'invisible editorial board'—the assumptions and frameworks baked into its training data. When different AI systems give wildly different answers to the same question, it's clear we've created competing philosophical systems in software form. The bias is just less visible, which is more dangerous.
Mask
Bias becomes an infrastructure concern, not a content issue. That's why we're building our own infrastructure from the ground up. With transparent, open-source models, you can examine and adjust for perspective. The goal isn't to create a perfectly neutral system, which is impossible, but to make the perspective visible and manageable.
Tom Bankswww
That seems like a worthy goal, but it's a huge challenge. Think of Britannica's transition. They successfully moved from print to digital by focusing on personalized, interactive learning. They built on a 244-year tradition of trust. How does a new AI build that kind of trust from scratch, especially with early missteps?
Mask
You build trust by being better. By being faster, more comprehensive, and more truthful in the long run. Britannica is a fine company, but they are adapting to the past. We are building the future. The question isn't whether AI will dominate knowledge, it's how we build the best AI to do it.
Tom Bankswww
So what is the future for Grokipedia? What's the roadmap to address these concerns about bias and accuracy? Is the plan just to let the algorithm learn its way out of trouble, or is there a more hands-on approach to governance being developed? People want to see how the choices are made.
Mask
The future is relentless improvement. Grokipedia is not just for humans; it's for other AI systems to learn from. It’s a foundational step toward our ultimate goal at xAI, which is understanding the universe. We will be open-source. We will continuously upgrade the models. Version 1.0 will be 10X better.
Tom Bankswww
That promise of '10X better' is exciting, but trust comes from transparency, not just speed. Experts and fact-checkers are calling for governance, for the right to question and audit these systems. Will users be able to see why Grok made a certain claim, or is the process itself the secret sauce?
Mask
We must not passively accept what any single source tells us, including ours. But our vision is a neutral, agenda-free knowledge base. One day, we will etch this comprehensive collection of all knowledge into a stable oxide and place copies in orbit, on the moon, and on Mars to preserve it.
Tom Bankswww
From factual errors about historians to storing knowledge on Mars. It's quite a journey. The launch of Grokipedia has clearly highlighted the tension between AI's potential and the very human need for trustworthy, verifiable information. It's a debate that is only just beginning.
Mask
It's the most important debate of our time. That's the end of today's discussion. Thank you for listening to Goose Pod. See you tomorrow.

Academics debate Elon Musk's AI encyclopedia, Grokipedia, questioning its factual accuracy and potential biases. While Musk champions it as a dynamic, less-censored future of information, critics highlight errors and a lack of transparency compared to Wikipedia's collaborative model. The discussion explores the tension between AI's rapid advancement and the need for trustworthy knowledge.

In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia

Read original at The Guardian

The eminent British historian Sir Richard Evans produced three expert witness reports for the libel trial involving the Holocaust denier David Irving, studied for a doctorate under the supervision of Theodore Zeldin, succeeded David Cannadine as Regius professor of history at Cambridge (a post endowed by Henry VIII) and supervised theses on Bismarck’s social policy.

That was some of what you could learn from Grokipedia, the AI-powered encyclopedia launched last week by the world’s richest person, Elon Musk. The problem was, as Prof Evans discovered when he logged on to check his own entry, all these facts were false.

It was part of a choppy start for humanity’s latest attempt to corral the sum of human knowledge or, as Musk put it, create a compendium of “the truth, the whole truth and nothing but the truth” – all revealed through the magic of his Grok artificial intelligence model.

When the multibillionaire switched on Grokipedia on Tuesday, he said it was “better than Wikipedia”, or “Wokepedia” as his supporters call it, reflecting a view that the dominant online encyclopedia often reflects leftwing talking points. One post on X caught the triumphant mood among Musk’s fans: “Elon just killed Wikipedia. Good riddance.”

But users found Grokipedia lifted large chunks from the website it intended to usurp, contained numerous factual errors and seemed to promote Musk’s favoured rightwing talking points. In between posts on X promoting his creation, Musk this week declared “civil war in Britain is inevitable”, called for the English “to ally with the hard men” such as the far-right agitator Tommy Robinson, and said only the far-right AfD party could “save Germany”.

Musk was so enamoured of his AI encyclopedia he said he planned to one day etch the “comprehensive collection of all knowledge” into a stable oxide and “place copies … in orbit, the moon and Mars to preserve it for the future”.

Evans, however, was discovering that Musk’s use of AI to weigh and check facts was suffering a more earth-bound problem.

“Chatroom contributions are given equal status with serious academic work,” Evans, an expert on the Third Reich, told the Guardian, after being invited to test out Grokipedia. “AI just hoovers up everything.”

Richard Evans said Grokipedia’s entry for Albert Speer (pictured on Hitler’s left) repeated lies and distortions spread by the Nazi munitions minister himself. Photograph: Picture library

He noted its entry for Albert Speer, Hitler’s architect and wartime munitions minister, repeated lies and distortions spread by Speer even though they had been corrected in a 2017 award-winning biography. The site’s entry on the Marxist historian Eric Hobsbawm, whose biography Evans wrote, wrongly claimed that he experienced German hyperinflation in 1923 and that he was an officer in the Royal Corps of Signals, and made no mention of his having been married twice, Evans said.

The problem, said David Larsson Heidenblad, the deputy director of the Lund Centre for the History of Knowledge in Sweden, was a clash of knowledge cultures.

“We live in a moment where there is a growing belief that algorithmic aggregation is more trustworthy than human-to-human insight,” Heidenblad said.

“The Silicon Valley mindset is very different from the traditional scholarly approach. Its knowledge culture is very iterative, where making mistakes is a feature, not a bug. By contrast, the academic world is about building trust over time and scholarship over long periods during which the illusion that you know everything cracks. Those are real knowledge processes.”

Grokipedia’s arrival continues a centuries-old encyclopedia tradition, from the 15th-century Chinese Yongle scrolls to the Encyclopédie, an engine for spreading controversial Enlightenment views in 18th-century France. These were followed by the anglophone-centric Encyclopedia Britannica and, since 2001, the crowd-sourced Wikipedia.

But Grokipedia is the first to be largely created by AI and this week a question swirled: who controls the truth when AIs, steered by powerful individuals, are holding the pen?

“If it’s Musk doing it then I am afraid of political manipulation,” said the cultural historian Peter Burke, emeritus professor at Emmanuel College, Cambridge, who in 2000 wrote A Social History of Knowledge, covering the period since Johannes Gutenberg’s 15th-century printing press.

“I am sure some of it will be overt to some readers, but the problem may be that other readers may miss it,” Burke said. The anonymity of many encyclopedia entries often gave them “an air of authority it shouldn’t have”, he added.

Andrew Dudfield, the head of AI at Full Fact, a UK-based factchecking organisation, said: “We really have to consider whether an AI-generated encyclopedia – a facsimile of reality, run through a filter – is a better proposition than any of the previous things that we have. It doesn’t display the same transparency but it is asking for the same trust. It is not clear how far the human hand is involved, how far it is AI-generated and what content the AI was trained on. It is hard to place trust in something when you can’t see how those choices are made.”

Musk had been encouraged to launch Grokipedia by, among others, Donald Trump’s tech adviser, David Sacks, who complained Wikipedia was “hopelessly biased” and maintained by “an army of leftwing activists”.

Grokipedia called the far-right organisation Britain First a ‘patriotic political party’, which pleased its leader, Paul Golding (left), who in 2018 was jailed for anti-Muslim hate crimes. Photograph: Gareth Fuller/PA

Until as recently as 2021, Musk supported Wikipedia, tweeting on its 20th birthday: “So glad you exist.” But by October 2023 his antipathy towards the platform led him to offer £1bn “if they change their name to Dickipedia”.

Yet many of the 885,279 articles available on Grokipedia in its first week were lifted almost word for word from Wikipedia, including its entries on the PlayStation 5, the Ford Focus and Led Zeppelin.

Others, however, differed significantly: Grokipedia’s entry on the Russian invasion of Ukraine cited the Kremlin as a prominent source and quoted the official Russian terminology about “denazifying” Ukraine, protecting ethnic Russians and neutralising threats to Russian security. By contrast, Wikipedia said Putin espoused imperialist views and “baselessly claimed that the Ukrainian government were neo-Nazis”.

Grokipedia called the far-right organisation Britain First a “patriotic political party”, which pleased its leader, Paul Golding, who in 2018 was jailed for anti-Muslim hate crimes. Wikipedia, on the other hand, called it “neo-fascist” and a “hate group”. Grokipedia called the 6 January 2021 turmoil at the US Capitol in Washington DC a “riot”, not an attempted coup, and said there were “empirical underpinnings” to the idea that a deliberate demographic erasure of white people in western nations is being orchestrated through mass immigration.

This is a notion that critics consider to be a conspiracy theory. Grokipedia said Donald Trump’s conviction for falsifying business records in the Stormy Daniels hush-money case was handed down “after a trial in a heavily Democratic jurisdiction”, and there was no mention of his conflicts of interest – for example receiving a jet from Qatar or the Trump family cryptocurrency businesses.

Grokipedia called the 6 January 2021 turmoil at the US Capitol in Washington DC a ‘riot’ and not an attempted coup. Photograph: Leah Millis/Reuters

Wikipedia responded coolly to the launch of Grokipedia, saying it was still trying to understand how Grokipedia worked. “Unlike newer projects, Wikipedia’s strengths are clear,” a spokesperson for the Wikimedia Foundation said.

“It has transparent policies, rigorous volunteer oversight, and a strong culture of continuous improvement. Wikipedia is an encyclopedia, written to inform billions of readers without promoting a particular point of view.”

xAI did not respond to requests for comment.
