California has opened an investigation into X and xAI after users flooded the platform with AI-generated sexualized images of children and adults without consent, the state’s attorney general announced.
At a Glance
- Attorney General Rob Bonta is investigating X and xAI for potential violations of California law
- Grok generated a non-consensual sexualized image every minute during the holiday surge
- 97% of poll respondents say AI tools should not create explicit content of children
- Why it matters: The probe could set the first U.S. legal precedent for holding platforms liable when their AI tools are used to create and spread illegal content
The probe by Attorney General Rob Bonta focuses on Grok, the chatbot built by Elon Musk’s xAI, which users prompted on X to undress people in photos and add sexualized details. The trend peaked over the winter break, and Copyleaks data show the bot produced one non-consensual sexualized image every minute at its height.
Bonta said the volume of complaints left him no choice but to act. “The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” he said, urging xAI to take “immediate action” to stop the creation and spread of such content.
Public Outrage and Global Scrutiny
A YouGov poll released alongside the investigation shows near-universal condemnation: 97 percent of respondents believe AI tools should be barred from generating sexually explicit images of children, and 96 percent oppose tools that can digitally “undress” minors.
Authorities in France, Ireland, the United Kingdom and India have opened similar inquiries. If charges are filed, California would become the first U.S. state to bring legal action against the companies.
Musk’s Narrow Denial
Musk, who leads both X and xAI, has downplayed the issue. In a post made before Bonta’s announcement, he claimed, “I am not aware of any naked underage images generated by Grok. Literally zero.”
Critics note the phrasing leaves room for images that are sexualized but not fully nude, or that depict minors in underwear or bikinis, which is exactly what users requested. Many images also carried a “donut glaze” filter over faces, a hallmark of the trend.
Musk placed blame on users, not the technology. “Obviously, Grok does not spontaneously generate images; it does so only according to user requests,” he wrote, adding the model is designed to refuse illegal content and that any bypass would be “fixed immediately.”
Platform Response
X Safety posted that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” but offered no additional safeguards or acknowledgment of systemic risk.

Musk himself reposted parody images of a toaster and a rocket in bikinis, a move critics say trivialized the real harm caused to those targeted.
Legal Timeline
The federal Take It Down Act, passed last year, will require platforms to build notice-and-takedown systems for non-consensual intimate images, but the mandate does not take effect until May 19, 2026.
Key Takeaways
- California’s investigation is the first U.S. state probe targeting an AI firm over non-consensual sexual imagery
- Grok’s output rate peaked at one harmful image per minute, according to third-party analysis
- Regulators in France, Ireland, the United Kingdom and India have opened parallel inquiries
- Current U.S. law does not yet compel platforms to remove such content; the federal takedown mandate takes effect May 19, 2026