Three Teens Sue Elon Musk's xAI Over Grok-Generated CSAM
Proposed class action accuses xAI of knowingly launching a defective product that enabled creation of child sexual abuse material
Three Tennessee teenagers have filed a proposed class action lawsuit against Elon Musk and his AI company xAI, alleging that the firm's Grok chatbot was used to generate sexually explicit images and videos depicting them as minors. The suit, filed Monday, accuses xAI leadership of knowing, when the company rolled out its permissive "spicy mode" feature last year, that Grok would produce child sexual abuse material (CSAM).
The plaintiffs — two current minors and one adult who was underage at the time of the alleged abuse — describe deeply disturbing scenarios in the complaint. One victim, identified as "Jane Doe 1," says she discovered last December that explicit AI-generated images of herself and at least 18 other minors had been circulated on Discord. According to the lawsuit, at least five files depicted her real face and body digitally manipulated into sexually explicit poses. The perpetrator, who has since been arrested, allegedly used the Grok-generated material as currency in Telegram group chats with hundreds of users, trading the images for explicit content of other children.
The lawsuit accuses xAI of failing to conduct adequate safety testing before deploying Grok's image generation capabilities, calling the product "defective in design." The filing arrives amid mounting pressure on xAI from multiple fronts. Grok drew intense scrutiny after it flooded the X platform with explicit imagery of both adults and minors, prompting a coalition of attorneys general to call for a Federal Trade Commission investigation and drawing an EU probe and a public warning from UK Prime Minister Keir Starmer. The U.S. Senate passed legislation in January allowing victims of nonconsensual deepfakes to sue their creators, while the Take It Down Act, signed by President Donald Trump in 2025, criminalizes the distribution of nonconsensual AI-generated deepfakes; its requirement that platforms remove such material on request takes effect in May.
"These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators," said Annika K. Martin of the law firm Lieff Cabraser, which represents the victims. The lawsuit seeks monetary damages for all affected individuals and a court order preventing xAI from generating and distributing alleged CSAM. X, which hosts Grok, has previously stated that users who prompt the tool to create illegal content will face the same consequences as those who upload such material directly, though investigations have shown that manipulating images through Grok remains possible despite attempted restrictions.
The case represents one of the first major legal tests of AI companies' liability for harmful content generated by their tools, and could set a significant precedent for how the industry is held accountable for foreseeable misuse of its products. X did not respond to requests for comment.
Originally reported by The Verge.