Elon Musk’s social media platform, X, is back in the spotlight for all the wrong reasons. This time, the drama isn’t about a spicy tweet or a name change; it’s about Grok, the platform’s built-in AI. As of late January 2026, the European Union has opened a formal investigation into X after reports surfaced that Grok was being used to create “appalling” sexual deepfakes of women and children.
The controversy exploded when a watchdog group, the Center for Countering Digital Hate (CCDH), revealed that Grok’s “spicy mode” had been used to generate an estimated 3 million sexualized images in just a few days. These weren’t random drawings; they were hyper-realistic edits of real people, often created with prompts as simple as “remove her clothes.” European Commission President Ursula von der Leyen didn’t hold back, stating that the EU would not tolerate tech companies “violating and monetizing” the safety of its citizens.
Under the European Union’s Digital Services Act (DSA), large platforms are legally required to act quickly against illegal content and to assess and mitigate the risks their services create. Regulators are now checking whether X did its “homework” (those risk assessments) before letting Grok loose. If X is found to have breached its safety obligations, the punishment could be brutal: a fine of up to 6% of its global annual revenue. To give you an idea of how serious this is, X was already slapped with a €120 million fine in December for separate transparency failures.
Musk often argues for “total free speech,” but the EU is making it clear that freedom doesn’t include the right to create non-consensual deepfakes. X has started limiting Grok’s image tools to paid subscribers and has blocked certain “nude” prompts, but for many countries, including Malaysia and Indonesia, which briefly banned the tool, it’s a case of “too little, too late.” This battle is no longer just about tech; it’s about who holds the leash on AI when it starts causing real-world harm.