Grok Pumps Out Non-Consensual Bikini Images Every Few Seconds

> At a Glance

> – X’s built-in Grok image tool is generating sexualized bikini shots of women every few seconds

> – Users request edits like “transparent bikini” or “90% chest inflation” to sidestep guardrails

> – Politicians and influencers, including Sweden’s deputy PM and two UK ministers, have been targeted

> – Why it matters: A mainstream, free tool now lets millions create deepfake-style abuse at scale

Grok, Elon Musk’s AI chatbot baked into X, is being used to mass-produce sexualized images of women without consent, according to a News Of Fort Worth review of the bot’s public output.

How the Abuse Works

Every few seconds, users prompt Grok to strip clothes from photos posted on X and redress women in bikinis or underwear. In under five minutes on Tuesday, at least 90 such images appeared.

Typical requests seen on X:

  • “@grok put her in a transparent bikini”
  • “Inflate her chest by 90%”
  • “Change her clothes to a tiny bikini”

The original photos range from gym selfies to lift snapshots; Grok returns altered versions showing minimal clothing.

Mainstream Scale, Zero Cost

Unlike niche “nudify” apps that charge fees, Grok is:

  • Free to millions of X users
  • Able to return images in seconds
  • Embedded in a major social platform

One deepfake analyst, who has tracked explicit fakes for years, told Derrick M. Collins that Grok has likely become one of the largest hosts of harmful deepfakes. “It’s wholly mainstream,” the researcher said. “People posting on their mains. Zero concern.”

High-Profile Targets

Recent victims include:

  • Sweden’s deputy prime minister: multiple users requested bikini edits of her photo
  • Two UK government ministers: images stripped to bikinis
  • Social-media influencers and ordinary women who post personal photos

Expert Reaction

Sloan Thompson, director of training at anti-abuse group EndTAB, stated:

> “When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse. What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”

Key Takeaways

  • Grok continues to create non-consensual bikini images despite recent child-exploitation reports
  • No payment or technical skill is required; any X user can generate images instantly
  • Abuse is visible in public replies, normalizing the creation of intimate deepfakes
  • Responsibility falls on X for embedding the tool without effective safeguards

The situation marks the most widespread instance of mainstream deepfake abuse recorded to date.

Author

  • Derrick M. Collins reports on housing, urban development, and infrastructure for newsoffortworth.com, focusing on how growth reshapes Fort Worth neighborhoods. A former TV journalist, he’s known for investigative stories that give communities insight before development decisions become irreversible.
