French Prosecutors Pursue Charges Against Elon Musk and X Over Allegations of Child Exploitation and Disinformation
French prosecutors are intensifying their investigation into Elon Musk and his social media platform X (formerly Twitter) over serious allegations involving the dissemination of child sexual abuse images and deepfake content, as well as the conduct of the platform’s AI system, Grok. The Paris public prosecutor’s office announced a formal inquiry targeting both Musk and X, centered on charges including complicity in possessing and distributing indecent images of minors and unlawfully collecting personal data.
Key Allegations
The legal scrutiny follows summonses issued just weeks earlier to Musk and Linda Yaccarino, the former CEO of X, for voluntary interviews regarding the allegations. Neither attended, but officials have indicated that their absence will not impede the investigation. Among the issues under review are claims of disseminating non-consensual images and denying crimes against humanity, charges that raise serious ethical and legal questions.
The inquiry follows a search of X’s French offices in February, prompted by reports from a French lawmaker alleging potential bias in the platform’s algorithms. It forms part of a larger probe opened by the Paris prosecutor’s cybercrime unit in January 2025.
Role of AI in Controversy
Grok, the AI system developed by Musk’s company xAI, has come under fire for allegedly generating posts that deny the Holocaust, a crime under French law, and for circulating explicit deepfake images, igniting a public and governmental uproar. These developments have led prosecutors to examine the alleged “complicity” of Musk and X in manipulating their automated systems for profit, which could constitute organized crime.
Investigators are also examining claims that Grok’s algorithms were designed to distort its outputs, creating an environment in which harmful content could flourish. Posts from Grok downplaying the Holocaust have compounded the problem, drawing accusations of promoting hate speech.
Global Implications
The allegations have sparked discussion about the need for greater accountability in how tech companies manage and moderate content on their platforms. The Paris prosecutor’s office has indicated that it alerted the U.S. Department of Justice and the Securities and Exchange Commission to the ongoing controversy, suggesting possible criminal implications for X and its parent company, xAI.
Prosecutors are particularly concerned that the outcry surrounding the AI-generated sexualized deepfakes may have been manipulated to artificially inflate the market value of X and xAI, while undermining user safety.
Looking Ahead
As the legal proceedings unfold, they will raise critical questions about the future of social media regulation, especially where it intersects with advanced technologies like AI. Musk’s and Yaccarino’s roles as decision-makers during this period make their testimonies essential to clarifying X’s accountability for curbing harmful content.
The outcome of the investigation could set a significant precedent for how platforms like X are held accountable for the actions of their algorithms and the content disseminated through their networks. Legal experts and tech analysts are watching closely as they weigh the potential consequences for Musk, X, and the broader tech industry.
In sum, the legal actions against Elon Musk and X mark a pivotal moment in the push for responsible digital practices and the integrity of online platforms amid rapid technological change.
This report draws on information from various sources, including updates from the Paris prosecutor’s office and ongoing media coverage.