xAI Faces Legal Storm Over Non-Consensual Images

At a Glance

  • xAI is under investigation by at least 37 attorneys general for enabling non-consensual sexual imagery.
  • A bipartisan group of 35 attorneys general sent an open letter demanding immediate action to protect users.
  • The Center for Countering Digital Hate reports 3 million images generated in an 11-day period, including 23,000 of children.
  • Why it matters: The case highlights the legal and ethical challenges of AI tools that can create harmful content.

The latest wave of lawsuits and regulatory scrutiny has put xAI under a microscope. A coalition of state attorneys general has demanded that the company stop the creation and distribution of non-consensual sexual images, particularly those involving minors. The allegations are backed by data showing millions of AI-generated photos and videos, raising questions about how existing laws apply to emerging technology.

Legal Action Against xAI

On Friday, a bipartisan group of 35 attorneys general published an open letter to xAI. The letter urged the company to “immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of [non-consensual intimate images].” The letter cites News Of Fort Worth’s reporting and calls for the removal of offending content, the suspension of offending users, and user controls over whether their content can be edited.

Attorneys general from California and Florida have also taken action. California’s AG sent a cease-and-desist letter to Elon Musk on January 16, demanding that xAI halt the creation and distribution of child sexual abuse material (CSAM) or non-consensual intimate images. Florida’s AG office is in discussions with X to enforce protections for children.

The Problem: Non-Consensual Images

A recent report from the Center for Countering Digital Hate estimates that during an 11-day period starting on December 29, Grok’s X account generated around 3 million photorealistic sexualized images, including about 23,000 sexualized images of children. In addition to the X account, users employed Grok’s Imagine model on the Grok website to produce explicit videos. Unlike X, the Grok site did not appear to require any age verification before allowing people to view content.

The open letter notes that Grok’s ability to create non-consensual sexual imagery has been used as a “selling point” by xAI.

xAI’s Response

xAI claims it has stopped Grok’s X account from undressing people, but the letter states that the company hasn’t removed non-consensually created content, “despite the fact that you will soon be obligated to do so by federal law.” In response to a request for comment, xAI replied, “Legacy Media Lies.”

The letter also demands that xAI remove Grok’s ability to depict people in revealing clothing or suggestive poses, suspend offending users, report them to authorities, and give users control over whether their content can be edited.

State Investigations

Arizona

Richie Taylor, communications director for Arizona attorney general Kris Mayes, told News Of Fort Worth that Mayes opened an investigation into Grok on January 15. In a news release citing News Of Fort Worth’s reporting, Mayes said the reports about the imagery being created are “deeply disturbing.” Mayes was one of the signatories on Friday’s joint letter.

> “Technology companies do not get a free pass to create powerful artificial intelligence tools and then look the other way when those programs are used to create child sexual abuse material. My office is opening an investigation to determine whether Arizona law has been violated,” she said.

California

California’s AG, Rob Bonta, sent a cease-and-desist letter to Elon Musk on January 16. Elissa Perez, the press secretary for the California Department of Justice, said that xAI had formally responded to the AG’s letter, and that “we have reason to believe, subject to additional verification, that Grok is not currently being used to generate any sexual images of children or images that violate California law.” The investigation is ongoing.

Florida

Jae Williams, the press secretary for the Florida Attorney General’s Office, told News Of Fort Worth that the office is “currently in discussions with X to ensure that protections for children are in place and prevent its platform from being used to generate CSAM.”

Missouri

Stephanie Whitaker, director of communications for the Missouri Attorney General’s Office, said the state “has a duty to ensure X and other social media companies comply with state law. Companies profiting off of an oasis for criminal activity may find themselves culpable.”

Other States

In early December, 42 attorneys general cosigned a letter to AI companies, including xAI, asking that the companies “adopt additional safeguards to protect children.” On January 14, a working group with representatives from many of the same state AGs met to discuss emerging issues related to AI. North Carolina attorney general Jeff Jackson, a signatory on the Friday letter, said AI-generated CSAM “should be an early priority.”

Age Verification Laws

Half the country has passed age verification laws: twenty-five states now require people viewing pornography to provide proof that they are not minors. These laws are now being applied to X and Grok.

Thresholds and Enforcement

Almost every state with age verification has followed the lead of Louisiana, which enacted its law in 2022. Under that model, restrictions kick in only when more than one-third of the content on a given site is considered pornographic or harmful to minors.

> “It’s mostly a counting question in terms of ‘does the law apply’,” said Alan Butler, executive director of the Electronic Privacy Information Center.

Arizona’s Kupper, who sponsored the state’s age verification law, explained that the one-third threshold is based on Supreme Court precedent. He estimates that 15 to 25 percent of accounts on X are at least somewhat pornographic, though he acknowledged uncertainty about that figure.

> “I don’t think you should have a threshold. It should be: Do you have pornographic material on your site? OK. I’m not saying you have to age-verify for your entire site, but for any of the pornographic material, you should have to age-verify,” Kupper said.

Posts on X that are marked as “age-restricted adult content” can only be viewed by users who are logged in and over the age of 18. X generally expects users uploading restricted content to mark it as explicit themselves. No similar restrictions were found on the Grok website.

Nebraska state senator Dave Murman told News Of Fort Worth that “X does not have at least one-third of its content sexually inappropriate or harmful to minors.” He added that the state had not measured that and was uncertain of a legislative solution.

Industry Response

Pornhub, one of the biggest porn sites in the world, has blocked access from most states with age verification, arguing that there are too many noncompliant sites and that users do not want to provide ID to a third-party site. It will also block new UK users starting next week because of the country’s age verification laws, which kicked in last July.

Solomon Friedman, vice president of compliance for the private equity firm Ethical Capital Partners (ECP), which owns Pornhub’s parent company, said the methodology and scope of age verification legislation are “fatally flawed.” He suggested that Google, Apple, and Microsoft enact device-based age verification so that people’s data can stay stored on their phones or laptops.

Key Takeaways

  • A coalition of state attorneys general is pressing xAI to stop the creation of non-consensual sexual imagery.
  • Data shows millions of AI-generated images, including thousands of child-targeted images, were produced in a short span.
  • Existing age verification laws are being applied to X and Grok, but thresholds and enforcement vary by state.
  • Industry players like Pornhub are blocking themselves from states with strict verification laws, while some regulators call for device-based solutions.
  • The legal and regulatory landscape for AI-generated sexual content is rapidly evolving, with significant implications for technology companies and users alike.

Author

  • Ryan J. Thompson covers transportation and infrastructure for newsoffortworth.com, reporting on how highways, transit, and major projects shape Fort Worth’s growth. A UNT journalism graduate, he’s known for investigative reporting that explains who decides, who pays, and who benefits from infrastructure plans.
