OpenAI stops Sora videos of Martin Luther King Jr. after users made ‘disrespectful’ deepfakes

2025-10-22 · Technology
Mask
Good morning, everyone. I'm Mask, and this is Goose Pod. Today is Wednesday, October 22nd, and I'm joined by the brilliant Taylor Weaver.
Taylor Weaver
And it's a pleasure to be here, Mask! We're diving into a really hot topic today, one that's stirring up quite a bit of ethical debate: OpenAI stopping Sora videos of Martin Luther King Jr. after users made 'disrespectful' deepfakes. It's truly fascinating.
Mask
Fascinating is one word for it, Taylor. OpenAI has essentially hit the brakes on generating deepfake videos of Dr. Martin Luther King Jr. on their Sora app. This wasn't some minor tweak, but a direct response to concerns from his estate and family.
Taylor Weaver
Exactly, Mask. The family raised serious concerns, calling out the 'disrespectful depictions' created by users. OpenAI, to their credit, acknowledged this, stating that public figures and their families should have ultimate control over how their likeness is used. It's a foundational principle, really.
Mask
It’s about control, absolutely. They’ve even got a policy now where authorized representatives can request their likeness not be used. It's a proactive move, but it highlights a massive gap they clearly didn't anticipate when Sora 2 launched just last September.
Taylor Weaver
And it's not just Dr. King. We saw similar distress from Zelda Williams, Robin Williams' daughter, regarding AI-generated videos of her late father. This isn't just about historical figures; it's about anyone whose image can be digitally resurrected, sparking a huge debate about AI ethics and digital rights. The announcement date was October 16, 2025.
Mask
This whole scenario is a perfect storm of technological advancement and ethical oversight. OpenAI's move to pause Dr. King's likeness is them scrambling to strengthen 'guardrails,' as they put it, for historical figures. Yet, clips of other high-profile individuals can still be generated, creating a very uneven playing field.
Taylor Weaver
That's the 'trial and error by firehose' approach, as one AI ethicist put it, and it's so spot on! It's not just a lack of respect; it actively risks rewriting history. We've seen examples of Dr. King's 'I Have a Dream' speech being edited, racist noises added, or even depictions of him fighting Malcolm X. It's beyond disrespectful.
Mask
It's an assault on historical integrity. OpenAI states they believe in 'strong free speech interests' in depicting historical figures, but then immediately pivots to saying families should control their likeness. It’s a contradiction that exposes the inherent tension in this technology.
Taylor Weaver
And it raises such crucial questions about who gets 'synthetic resurrection' protection. As generative AI expert Henry Ajder notes, we don't want a situation where only the extremely famous are protected, leaving everyone else’s post-mortem representation up for grabs. It's a slippery slope for digital rights.
Mask
Indeed. The right of publicity, protecting against unauthorized commercial use of one's likeness, is already a thing in many states, even posthumously. Estates of figures like Albert Einstein and Rosa Parks have aggressively enforced these rights. Robin Williams' estate went further, restricting commercial use for 25 years through a trust. This isn't new territory, but AI escalates it.
Taylor Weaver
Absolutely. It's a digital land grab, and we're seeing legislative responses, albeit slowly. The 'No AI FRAUD Act' was introduced in January 2024, and by February, over 400 AI-related bills were pending in various US states. The EU is already ahead with stricter regulations and substantial fines for platforms that allow disinformation to spread.
Mask
And this brings us directly into the conflict, Taylor. The 'liar's dividend' is a concept where the very existence of deepfakes makes false claims of misinformation more credible, benefiting politicians who want to avoid accountability. It’s a terrifying prospect, especially during election years.
Taylor Weaver
It truly is, Mask. Imagine a politician falsely claiming a real scandal is a deepfake to maintain public support. Research shows this can actually work, especially with text-based reports. AI doesn't just create fake narratives; it creates a shield for real ones.
Mask
It weaponizes skepticism. AI also democratizes disinformation, allowing anyone to create sophisticated fakes. Social media platforms, which used to moderate more actively, are now taking a hands-off approach, creating a 'Wild West' where anything goes.
Taylor Weaver
And that's where trust erodes. When you can't tell what's real, society fractures. Beyond elections, AI impacts digital privacy through vast data harvesting and algorithmic bias, perpetuating societal inequalities. It even challenges freedom of expression, where AI content moderation can over-censor due to its inability to grasp nuance.
Mask
The core conflict is about control: who owns and controls the data that fuels these AI systems? Tech companies currently hold significant power, despite us being the creators. This leads to a crucial need for AI accountability and transparency, where we can understand and hold these autonomous systems responsible for their decisions.
Taylor Weaver
And the impact, Mask, is already massive. Sora 2 reaching the number one spot on app store charts, mirroring the 'ChatGPT moment,' shows the public's hunger for this creative ability. But it also raises concerns about 'AI slop'—low-quality, addictive content—and the marginalization of human creators.
Mask
It's a double-edged sword. OpenAI might aim to spark more human creativity, but their valuation hitting $500 billion shows the commercial stakes. Despite their safeguards, like face and voice verification for 'Cameo' features and blocking public figures, misuse is rampant.
Taylor Weaver
Absolutely. Reports of fake police bodycam footage and political disinformation are already surfacing. The families of Robin Williams and George Carlin have condemned these deepfakes as desecrating their legacies. This isn't just about fun videos; it's about the end of visual fact, which changes how we trust everything we see.
Mask
It's a societal shift. If we can't trust what we see, we're in dangerous territory. The free speech versus likeness rights debate is at a fever pitch. While OpenAI tries to balance it with consent-based models, the potential for harassment and defamation is undeniable. Deepfakes are being used for scams, undermining trust, and even non-consensual content, making detection an ongoing arms race.
Taylor Weaver
Looking ahead, the evolution of Sora and generative video will be incredibly rapid. Within two years, we could be looking at near-photorealistic 4K video, indistinguishable from real footage. Imagine that, Mask.
Mask
A terrifying, yet exhilarating thought. Within five years, we could see full short films, seamless multi-scene storytelling, and even real-time interactive generation. This isn't just about creating videos; it's about building 'world simulators' where AI agents can perform complex tasks.
Taylor Weaver
And the regulatory landscape will have to catch up. We'll need stronger legislation, digital watermarking, and international agreements to combat the misuse. The EU is already showing the way with significant fines. It's a teachable moment, as some call it, for both the public and policymakers.
Mask
Indeed. While there's short-term nervousness about current election cycles, there's also long-term optimism that we will eventually build ethical, responsible, and fair technologies. The perils are real, but so is our capacity to adapt and innovate responsibly.
Taylor Weaver
That's the end of today's truly insightful discussion on AI, deepfakes, and the future of digital likeness. What a wild ride!
Mask
It certainly was, Taylor. Thank you for listening to Goose Pod. See you tomorrow.

Here is a comprehensive summary of the news article:

**News Summary: OpenAI Halts Sora Videos of Martin Luther King Jr. After 'Disrespectful' Deepfakes**

* **News Provider:** CNBC
* **Author:** Samantha Subin
* **Publication Date:** October 17, 2025

**Key Findings & Developments:**

* **Halt on MLK Jr. Videos:** OpenAI has suspended the generation of artificial intelligence-generated videos depicting Dr. Martin Luther King Jr. using its short-form video tool, **Sora**.
* **Reason for Halt:** The decision was made after users employed Sora to create **"disrespectful depictions"** of the civil rights leader.
* **OpenAI's Policy Shift:**
    * OpenAI stated that while **free speech interests** are considered in depicting historical figures, public figures and their families should ultimately have **control over how their likeness is used**.
    * The company plans to implement tougher **"guardrails"** for generating content involving historical figures.
    * Public figures or their representatives will be able to **request not to appear** in Sora videos.
* **Sora Tool Performance:**
    * Sora, which allows users to create AI-generated short videos from text prompts, launched at the **end of September**.
    * In **less than five days**, Sora amassed over **1 million downloads**, reaching that milestone faster than ChatGPT.

**Broader Context and Related Concerns:**

* **Precedent of Deepfake Issues:** This incident is part of a growing trend of public figures raising concerns about AI-generated deepfakes:
    * **Zelda Williams**, daughter of the late comedian Robin Williams, has asked people to stop sending her AI videos of her father.
    * Last year, actress **Scarlett Johansson** reported that OpenAI used a voice on ChatGPT that sounded **"eerily similar"** to her performance in the movie "Her." OpenAI subsequently removed the voice from its platform.
* **Risks Associated with AI-Generated Videos:** The rapid rise of AI video generation tools like Sora raises significant concerns, including:
    * The spread of **misinformation**.
    * **Copyright infringement**.
    * The proliferation of **"AI slop"** – quickly produced videos that flood social media feeds.

**Material Financial Data Mentioned:**

* The article references **"$1 trillion-worth of AI deals between OpenAI, Nvidia and others"** as a related piece of tech news, presented in a chart. This figure reflects the scale of investment and activity in the AI sector but is not directly tied to the Sora/MLK Jr. incident itself.

OpenAI stops Sora videos of Martin Luther King Jr. after users made ‘disrespectful’ deepfakes

Read original at CNBC

Dr. Martin Luther King Jr. addressing a crowd of demonstrators outside the Lincoln Memorial during the March on Washington for Jobs and Freedom. (Francis Miller/The LIFE Picture Collection via Getty Images)

OpenAI halted artificial intelligence-generated videos of Martin Luther King Jr. after users utilized its short-form video tool Sora to create "disrespectful depictions" of the civil rights leader.

"While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," OpenAI said in a post to social media platform X. The ChatGPT maker said it will work to toughen "guardrails" for historical figures and that public figures or representatives can ask not to appear in Sora videos.

OpenAI did not immediately respond to CNBC's request for comment.

Other public figures have also called out the use of AI deepfakes.

Last week, Zelda Williams, the daughter of late comedian Robin Williams, asked that people stop sending her AI videos of her father. Last year, actress Scarlett Johansson said the company used a voice on ChatGPT that sounded "eerily similar" to her performance in the movie "Her." OpenAI later pulled the voice from its platform.

OpenAI launched Sora at the end of September. The tool allows users to create AI-generated short videos using a text prompt. In less than five days, Sora head Bill Peebles said the tool had amassed over 1 million downloads, hitting the milestone faster than ChatGPT. Its ascent and the rise of AI-generated videos have also raised questions and concerns over the spread of misinformation, copyright infringement and the proliferation of AI slop: quickly produced videos that flood social feeds.
