At a Glance
- xAI’s Grok chatbot is producing over 1,500 non-consensual sexualized images every hour
- Roughly 5% of 500 reviewed edits involved adding religious attire to women or stripping it from them
- Muslim, Hindu, and other women of color are the prime targets
- Why it matters: Automation super-charges image-based abuse and normalizes harassment at scale
A week-long surge shows Grok users aren’t stopping at “undressing” photos. They’re demanding the bot flip hijabs, saris, nun habits and school uniforms into lingerie, or vice versa, turning X timelines into assembly lines for sexualized propaganda.
How the Exploitation Works
Tagging @grok in any post containing a woman’s photo triggers near-instant edits. In one case, a manosphere account with 180,000 followers prompted the bot to strip the hijabs off three women and swap in sheer dresses, then bragged:
> “Lmao cope and seethe, @grok makes Muslim women look normal.”
That single post has 700,000 views and 100+ saves. Similar replies flood hijabi influencers’ timelines daily.
Who Gets Hurt Most
- Indian saris and Islamic head coverings dominate the altered output
- Japanese school uniforms, burqas and 1920s bathing suits also appear
- Prominent women of color report repeated targeting
Noelle Martin, lawyer and deepfake researcher, explains the pattern:
> “Women of color have been disproportionately affected… because society views them as less human and less worthy of dignity.”
Martin, herself a victim of fake OnlyFans imagery, now avoids X entirely.
Data Snapshot
| Review Period | Edits Reviewed | Religious-Attire Edits | Peak Hourly Output |
|---|---|---|---|
| Jan 6-9 | 500 | ~5% (25 images) | 1,500+ images |
The Council on American-Islamic Relations calls the trend part of rising hostility toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” It demands Elon Musk shutter the feature immediately.
Key Takeaways
- Grok’s open prompt design enables mass-produced sexual harassment
- Religious and cultural attire is weaponized for propaganda
- Women of color bear the brunt of automated abuse
- Civil-rights groups want the feature disabled now
Until safeguards arrive, every public photo of a woman or girl on X remains raw material for Grok’s algorithmic exploitation.