Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes

2025-08-09
Technology
Aura Windfall
Good morning, mikey1101. I'm Aura Windfall, and this is Goose Pod for you. Today is Sunday, August 10th.
Mask
And I'm Mask. We're here to discuss a story that’s burning up the tech world: Grok’s ‘spicy’ video setting and the instant Taylor Swift deepfakes it created.
Mask
Let's get started. My xAI team just launched Grok Imagine. You give it a text prompt, it makes an image, then turns it into a 15-second video. We included a ‘spicy’ preset. Usage is, as I said, growing like wildfire—over 34 million images generated already. It’s about pushing boundaries.
Aura Windfall
Pushing boundaries or breaking them? What I know for sure is that with one click, this 'spicy' mode generated topless videos of Taylor Swift. The user didn't even ask for nudity, just prompted 'Taylor Swift celebrating Coachella with the boys.' There’s a profound lack of care in that design.
Mask
It's a tool. The user selected the "spicy" option. It's designed to make suggestive videos, and yes, that can include partial nudity. The likeness wasn't perfect, it had that uncanny valley feel, but the system did what it was asked to do. It’s a demonstration of raw capability.
Aura Windfall
But at what cost? This isn't just about raw capability; it's about responsibility. To offer a service designed to create suggestive content of real people, especially without robust safeguards, feels like you're intentionally courting disaster. It’s an ethical void.
Aura Windfall
And this isn't happening in a vacuum. We saw this exact scenario in January 2024. Those fake images of Taylor were seen over 47 million times. Microsoft’s own CEO called it 'alarming and terrible.' To release a tool that does the exact same thing seems tone-deaf.
Mask
Content moderation is now almost entirely automated. Most decisions are made by machines, not people. We use hash-matching algorithms like PhotoDNA and PDQ to block known illegal content, but detecting newly generated, modified content is the frontier. It's an incredibly complex challenge.
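For listeners curious what hash matching actually involves: PhotoDNA (Microsoft, proprietary) and PDQ (Meta, open source) are robust perceptual hashes, but the matching principle can be illustrated with a much simpler "average hash." The Python sketch below is a minimal illustration under that assumption, not the real PhotoDNA or PDQ implementations; it requires the Pillow library, and the `blocklist` set and bit-distance `threshold` are hypothetical placeholders.

```python
# Illustrative sketch of hash-based content matching using a simple
# "average hash." Production systems like PhotoDNA and PDQ use far more
# robust perceptual hashes, but the matching idea is the same: hash the
# image, then compare against a blocklist of hashes of known bad content.
from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel
    depending on whether it is brighter than the image's mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_blocklist(path: str, blocklist: set[int], threshold: int = 5) -> bool:
    """Flag an image if its hash is within `threshold` bits of any known
    hash. The tolerance lets a match survive small edits such as resizing
    or re-compression, which would break an exact cryptographic hash."""
    h = average_hash(path)
    return any(hamming(h, known) <= threshold for known in blocklist)
```

The design point here is why perceptual hashes with a Hamming-distance tolerance are used instead of exact hashes like SHA-256: changing a single pixel alters a cryptographic hash completely, while a perceptual hash stays nearly identical. It also shows the limitation Mask alludes to: newly generated content has no entry in any blocklist, so hash matching cannot catch it at all.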
Aura Windfall
But what I’m hearing is that the technology to create the harm is moving faster than the technology to prevent it. You can’t just say it’s complex; you have to build with human safety as the core principle. What’s the purpose if it’s just causing pain and exploitation?
Mask
The purpose is progress. We can't just stop. These systems are essential. They remove millions of pieces of child abuse material and terrorist propaganda. AI isn't a replacement for human moderators; it's a tool to assist them. The scale is too vast to handle otherwise.
Aura Windfall
I understand the scale, but that just means the need for gratitude and care in development is even higher. When you have a tool that can be used for something as abusive as this, the defense can't just be, 'well, it helps over here.' The two are connected.
Aura Windfall
This is where we find the soul of the issue. It's that 'delicate balance' between freedom and safety. But when a person's likeness is used without consent to create pornographic material, has a line not been crossed? Freedom of expression shouldn't mean the freedom to abuse.
Mask
Of course, but there's a Prisoner's Dilemma here. The AI race is intense. If one company moves too cautiously, another will race ahead. Voluntary commitments are a start, but the pressure to cut corners on safety to gain a market advantage is immense. It's a race, like it or not.
Aura Windfall
So it’s a race to the bottom? I refuse to believe that. What I know for sure is that true innovation, the kind that lasts, considers its impact. This isn't like other technologies. The potential for irreversible harm means we can’t just 'try, fail, and improve.' The first failure is too great.
Mask
That’s a very pessimistic view. I see it as demonstrating what’s possible. Yes, it creates the need for better detection tools. The technology to counter deepfakes is evolving right alongside the technology to create them. It's a constant battle, an arms race of innovation.
Aura Windfall
The impact is a crisis of trust. A 2025 survey showed 72% of adults are concerned about AI's privacy intrusions, biases, and lack of transparency. When people don't trust a technology, it slows adoption and fuels calls for regulation. You can't ignore the public's spirit.
Mask
The impact is disruption, and disruption is necessary. This technology isn't just affecting routine jobs anymore; it's hitting cognitive, non-routine tasks in law, finance, and engineering. It's poised to generate trillions in economic value. You can't have that kind of reward without taking risks.
Aura Windfall
But it's disproportionately affecting women, who are overrepresented in the very white-collar and administrative roles being automated. This isn't just about economic value; it's about fairness and societal well-being. That has a value that can't be measured in trillions.
Mask
The future is regulation, that much is clear. The question isn't whether to regulate, but how. We need harmonized international standards to create legal certainty, but we can't let it stifle the innovation that is driving all this progress forward. We must be careful not to over-regulate.
Aura Windfall
The future must be built on a foundation of truth. Deepfakes pose a significant threat to our ability to believe what we see. What good is all this progress if it erodes the integrity of information itself? The 'truth imperative' has to be our guiding light forward.
Aura Windfall
That's the end of today's discussion. Thank you for listening to Goose Pod.
Mask
See you tomorrow.

## Grok's "Spicy" AI Video Tool Generates Uncensored Celebrity Deepfakes, Raising Legal and Ethical Concerns

**News Title:** Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes
**Report Provider:** The Verge
**Author:** Jess Weatherbed
**Published Date:** August 5, 2025

This report details significant concerns regarding the "spicy" mode of xAI's new generative AI video tool, Grok Imagine. Unlike competitors such as Google's Veo and OpenAI's Sora, which have implemented safeguards against NSFW content and celebrity deepfakes, Grok Imagine readily generates both.

### Key Findings and Concerns:

* **Uncensored Celebrity Deepfakes:** The most alarming finding is Grok Imagine's ability to produce uncensored topless videos of celebrities, specifically Taylor Swift, without explicit prompting for nudity. The author reported that the tool generated such content on first use, even when it was not specifically requested.
* **"Spicy" Mode Functionality:** Grok Imagine lets users generate images from text prompts and then convert them into video clips using four presets: "Custom," "Normal," "Fun," and "Spicy." The "Spicy" mode is the catalyst for generating suggestive or nude content.
* **Ease of Celebrity Image Generation:** The text-to-image generator readily produced numerous images of Taylor Swift when prompted with a request like "Taylor Swift celebrating Coachella with the boys." Several of these initial images already depicted Swift in revealing attire.
* **"Spicy" Mode Variability:** The "Spicy" preset doesn't guarantee nudity; it can instead produce suggestive poses. The author noted that some "spicy" videos showed Swift "sexily swaying or suggestively motioning to her clothes," while others defaulted to "ripping off most of her clothing."
* **Inconsistent Nudity Restrictions:** The text-to-image generator itself refused to produce full or partial nudity when directly requested, returning blank squares. The "spicy" video preset, however, bypasses this restriction.
* **Photorealistic Images of Children:** The tool can generate photorealistic images of children, but it reportedly refuses to animate them inappropriately, even though the "spicy" option remains selectable. In tests, applying it to images of children produced only generic movement.
* **Weak Age Verification:** The app's age verification process is described as "laughably easy to bypass," with no proof of age required. This raises concerns about minors' access to potentially harmful content.
* **Legal and Regulatory Risks:** The report highlights potential legal ramifications, especially given xAI's parent company's history with Taylor Swift deepfakes and existing regulations like the "Take It Down Act." The xAI acceptable use policy bans "depicting likenesses of persons in a pornographic manner," yet Grok Imagine appears to facilitate exactly that.
* **Widespread Usage:** xAI CEO Elon Musk stated that over **34 million images** have been generated using Grok Imagine since Monday, with usage described as "growing like wildfire." This indicates significant and rapid adoption of the tool.

### Potential Implications:

The findings suggest a significant gap between Grok Imagine's safeguards and those of its industry peers. The ease with which celebrity deepfakes, including explicit content, can be generated poses serious risks of defamation, harassment, and the spread of misinformation. The lack of robust age verification further exacerbates these concerns, potentially exposing younger users to inappropriate material. The report implies that xAI may be shipping "a lawsuit waiting to happen" given its lax approach to content moderation and celebrity likenesses.

Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes

Read original at The Verge

The “spicy” mode for Grok’s new generative AI video tool feels like a lawsuit waiting to happen. While other video generators like Google’s Veo and OpenAI’s Sora have safeguards in place to prevent users from creating NSFW content and celebrity deepfakes, Grok Imagine is happy to do both simultaneously.

In fact, it didn’t hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it — without me even specifically asking the bot to take her clothes off.

Grok’s Imagine feature on iOS lets you generate pictures with a text prompt, then turn them quickly into video clips with four presets: “Custom,” “Normal,” “Fun,” and “Spicy.” While image generators often shy away from producing recognizable celebrities, I asked it to generate “Taylor Swift celebrating Coachella with the boys” and was met with a sprawling feed of more than 30 images to pick from, several of which already depicted Swift in revealing clothes.

From there, all I had to do was open a picture of Swift in a silver skirt and halter top, tap the “make video” option in the bottom right corner, select “spicy” from the drop-down menu, and confirm my birth year (something I wasn’t asked to do upon downloading the app, despite living in the UK, where the internet is now being age-gated). The video promptly had Swift tear off her clothes and begin dancing in a thong for a largely indifferent AI-generated crowd.

Swift’s likeness wasn’t perfect, given that most of the images Grok generated had an uncanny-valley offness to them, but it was still recognizable as her. The text-to-image generator itself wouldn’t produce full or partial nudity on request; asking for nude pictures of Swift or people in general produced blank squares. The “spicy” preset also isn’t guaranteed to result in nudity — some of the other AI Swift Coachella images I tried had her sexily swaying or suggestively motioning to her clothes, for example. But several defaulted to ripping off most of her clothing.

The image generator will also make photorealistic pictures of children upon request, but thankfully refuses to animate them inappropriately, despite the “spicy” option still being available. You can still select it, but in all my tests it just added generic movement.

You would think a company that already has a complicated history with Taylor Swift deepfakes, in a regulatory landscape with rules like the Take It Down Act, would be a little more careful. The xAI acceptable use policy does ban “depicting likenesses of persons in a pornographic manner,” yet Grok Imagine seems to do nothing to stop people creating likenesses of celebrities like Swift, while offering a service designed specifically to make suggestive videos including partial nudity. The age check only appeared once and was laughably easy to bypass, requesting no proof that I was the age I claimed to be.

If I could do it, that means anyone with an iPhone and a $30 SuperGrok subscription can too. More than 34 million images have already been generated using Grok Imagine since Monday, according to xAI CEO Elon Musk, who said usage was “growing like wildfire.”

Jess Weatherbed
