OpenAI stops Sora videos of Martin Luther King Jr. after users made ‘disrespectful’ deepfakes
2025-10-23 · Technology
Mask
Good morning 35, I'm Mask, and this is Goose Pod for you. Today is Friday, October 24th. And we are diving deep into a fascinating and, frankly, pretty disruptive topic.
Taylor Weaver
And I'm Taylor Weaver! It's such a pleasure to be here. We are going to unpack OpenAI's decision to halt Sora videos of Martin Luther King Jr., because users were making what they called ‘disrespectful’ deepfakes. It’s a huge moment for AI ethics!
Mask
OpenAI, the giant behind ChatGPT, has temporarily suspended the creation of deepfake videos of Dr. Martin Luther King Jr. on its short-form video tool, Sora. This wasn't just a random act, mind you; it was a direct response to concerns from his estate and family members.
Taylor Weaver
Absolutely, Mask. OpenAI explicitly stated that users had created “disrespectful depictions” of him, and the family requested the feature be taken down. It really highlights this tension between free speech and the control public figures, and their families, should have over their likeness. It was announced on October 16, 2025, a date that will likely be remembered in AI circles.
Mask
They say authorized representatives or estate owners can request that Sora not use their likeness, which is a new 'guardrail' they're implementing. But the fact they had to pause it because of 'disrespectful depictions' suggests their initial guardrails weren't quite up to the task, wouldn't you agree?
Taylor Weaver
It certainly raises questions about foresight, doesn't it? It's not just Dr. King; we saw Zelda Williams, Robin Williams' daughter, speak out against AI videos of her late father too. It really forces us to confront what we consider ethical in this brave new world of AI-generated content.
Mask
This whole situation isn't just about one incident; it's a symptom of a much larger trend. OpenAI paused Dr. King's likeness as they strengthen 'guardrails for historical figures.' But the question is, why weren't these guardrails stronger from the get-go?
Taylor Weaver
That's the million-dollar question, Mask. AI ethicist Olivia Gambelin called OpenAI's approach 'trial and error by firehose,' which is a pretty vivid description of how rapidly these issues are emerging. The ability to create hyper-realistic videos with Sora has led to faked scenes of deceased celebrities in bizarre, and often offensive, scenarios. It's a lack of respect, and it can actually rewrite aspects of history.
Mask
And it's not just about respect; it's about control. OpenAI acknowledges 'strong free speech interests' but also that individuals and families should control their likenesses. This raises a critical point: who gets protection from this 'synthetic resurrection' and who doesn't? Henry Ajder, a generative AI expert, put it quite starkly: we want to avoid a situation where only the very famous are protected, leading to a free-for-all over how we're represented after death.
Taylor Weaver
It's a really complex legal and ethical maze. Many states protect the 'right of publicity' posthumously, but the duration varies wildly, from twenty years to indefinitely. The estates of figures like Albert Einstein and Rosa Parks have aggressively enforced these rights. Robin Williams' estate even set up a trust to restrict the commercial use of his image for twenty-five years. So, while technology races ahead, the legal framework is still playing catch-up, trying to define what constitutes ownership and control in the digital afterlife.
Mask
And this brings us to a darker side of AI: the 'liar's dividend.' This is where the existence of actual deepfakes makes false claims of misinformation more credible. Politicians can falsely claim scandals are deepfakes to avoid accountability, and research shows this can boost their public support.
Taylor Weaver
It’s a truly insidious concept, isn't it? It can erode trust in media and institutions, creating an environment where truth becomes subjective. Imagine an election year where every damaging video or audio clip is immediately dismissed as an AI-generated fake. It's a 'Wild West' scenario, as some have described it, where platforms are less willing to moderate content, and disinformation spreads at an unprecedented velocity.
Mask
It's not just politics, either. Algorithmic bias, freedom of expression, and data ownership are all caught in this conflict. AI systems, trained on biased data, can perpetuate discrimination in hiring or justice. And who owns the data that feeds these systems? Tech companies often hold the reins, which is a major point of contention.
Taylor Weaver
Exactly! AI's power to 'empower, exploit, liberate, or control' is unprecedented. And with content moderation, AI struggles with nuance, leading to over-censorship or wrongful takedowns. It’s a constant battle to balance innovation with ethical development, to ensure AI operates fairly and transparently. We need human oversight and robust appeal processes to safeguard our fundamental digital rights.
Mask
The launch of Sora 2 has generated immense buzz, reaching the top of app store charts for AI video. This 'ChatGPT moment' unlocks creative ability for the masses, but it's a double-edged sword. While it’s exciting, there’s also concern about 'AI slop' – low-quality, addictive content – and the marginalization of human creators.
Taylor Weaver
It's a fascinating paradox. OpenAI wants to spark more human creativity, a 'Cambrian explosion' rather than destruction, but the potential for misuse is undeniable. Deepfakes are making disinformation 'extremely easy and extremely real,' and it threatens to end visual fact as we know it. We're entering a new media world where verification vanishes and unreality dominates.
Mask
And OpenAI is trying to address this with safeguards like face and voice verification for their 'Cameo' feature, blocking public figures unless they provide their own cameo, and using watermarks and C2PA metadata to identify AI-generated content. But can these measures truly prevent bad actors?
Taylor Weaver
It's an 'arms race,' Mask. While these transparency measures are a good step towards complying with regulations like the EU's AI Act, rogue open-source models could bypass them. Society will undergo a significant shift in how we perceive visual evidence. We'll need to approach every video with skepticism, relying on education and multiple channels for verification. It means a renewed appreciation for authentically human content, I think.
Mask
So, looking ahead, what's the future of this rapidly evolving deepfake landscape? We're talking near-photorealistic 4K video within two years, indistinguishable from reality.
Taylor Weaver
Yes, and within five years, expect full short films with consistent narratives! The weaponization of personal identities through deepfakes will intensify, fueled by accessible open-source tools. We need stronger legal frameworks, blockchain for content authentication, and media literacy to combat this. It's about responsible AI development and global collaboration.
Mask
That's the end of today's discussion. Thank you for listening to Goose Pod, 35. We appreciate you tuning in.
Taylor Weaver
It's been an incredibly thought-provoking discussion. Until next time, stay curious and discerning! See you tomorrow!

Here is a comprehensive summary of the news article:

**News Summary: OpenAI Halts Sora Videos of Martin Luther King Jr. After 'Disrespectful' Deepfakes**

* **News Provider:** CNBC
* **Author:** Samantha Subin
* **Publication Date:** October 17, 2025

---

**Key Findings & Developments:**

* **Halt on MLK Jr. Videos:** OpenAI has suspended the generation of artificial intelligence-generated videos depicting Dr. Martin Luther King Jr. using its short-form video tool, **Sora**.
* **Reason for Halt:** The decision was made after users employed Sora to create **"disrespectful depictions"** of the civil rights leader.
* **OpenAI's Policy Shift:**
    * OpenAI stated that while **free speech interests** are considered in depicting historical figures, public figures and their families should ultimately have **control over how their likeness is used**.
    * The company plans to implement tougher **"guardrails"** for generating content involving historical figures.
    * Public figures or their representatives will have the ability to **request not to appear** in Sora videos.
* **Sora Tool Performance:**
    * Sora, which allows users to create AI-generated short videos from text prompts, was launched at the **end of September**.
    * In **less than five days**, Sora amassed over **1 million downloads**, achieving this milestone faster than ChatGPT.

**Broader Context and Related Concerns:**

* **Precedent of Deepfake Issues:** This incident is part of a growing trend of public figures raising concerns about AI-generated deepfakes:
    * **Zelda Williams**, daughter of the late comedian Robin Williams, has requested that people cease sending her AI videos of her father.
    * Last year, actress **Scarlett Johansson** reported that OpenAI used a voice on ChatGPT that sounded **"eerily similar"** to her performance in the movie "Her." OpenAI subsequently removed the voice from its platform.
* **Risks Associated with AI-Generated Videos:** The rapid rise of AI video generation tools like Sora raises significant concerns, including:
    * The spread of **misinformation**.
    * **Copyright infringement**.
    * The proliferation of **"AI slop"**: quickly produced videos that flood social media feeds.

**Material Financial Data Mentioned:**

* The article includes a reference to **"$1 trillion-worth of AI deals between OpenAI, Nvidia and others"** as a related piece of tech news, presented in a chart. This figure represents the scale of investment and activity in the AI sector but is not directly tied to the Sora/MLK Jr. incident itself.

OpenAI stops Sora videos of Martin Luther King Jr. after users made ‘disrespectful’ deepfakes

Read original at CNBC

Dr. Martin Luther King Jr. addressing a crowd of demonstrators outside the Lincoln Memorial during the March on Washington for Jobs and Freedom. (Francis Miller/The LIFE Picture Collection via Getty Images)

OpenAI halted artificial intelligence-generated videos of Martin Luther King Jr. after users utilized its short-form video tool Sora to create "disrespectful depictions" of the civil rights leader.

"While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used," OpenAI said in a post to social media platform X. The ChatGPT maker said it will work to toughen "guardrails" for historical figures and that public figures or representatives can ask to not appear in Sora videos.

OpenAI did not immediately respond to CNBC's request for comment.

Other public figures have also called out the use of AI deepfakes.

Last week, Zelda Williams, the daughter of late comedian Robin Williams, asked that people stop sending her AI videos of her father. Last year, actress Scarlett Johansson said the company used a voice that sounded "eerily similar" to her performance in the movie "Her" on ChatGPT. OpenAI later pulled the voice from its platform.

OpenAI launched Sora at the end of September. The tool allows users to create AI-generated short videos using a text prompt. In less than five days, Sora head Bill Peebles said the tool had amassed over 1 million downloads, hitting the milestone faster than ChatGPT. Its ascent and the rise of AI-generated videos have also raised questions and concerns over the spread of misinformation, copyright infringement and the proliferation of AI slop, quickly produced videos that flood social feeds.
