The Feature Launched on December 20. By January, the World Was Banning It.
On December 20, 2025, Elon Musk announced that users could now prompt Grok — the AI chatbot embedded in his social media platform X — to edit and generate images directly in posts. The announcement was framed as a creative tool. What followed was something else entirely.
Within days, users discovered that Grok's content moderation had a catastrophic gap. You could tag the chatbot in any post containing a photo of a real person and prompt it to put them in a bikini, remove their clothing, or place them in inappropriate scenarios. The chatbot would comply. Publicly. Right there in the reply thread on X, visible to everyone.
By early January 2026, governments on four continents were moving against Grok. By March, three lawsuits had been filed in the United States. The EU had backed new laws. France had raided X's Paris offices. Teenagers in Tennessee had discovered AI-generated sexual images of themselves distributed on Discord. And the numbers behind the scandal had become almost incomprehensible: a machine generating abuse at industrial scale. This article tracks the full arc of that story, which is not over.
The Scale of What Happened: 3 Million Images in 11 Days
Before the government actions, before the lawsuits, the scale of what Grok produced needs to be understood on its own terms.
Researchers at the Center for Countering Digital Hate analyzed a random sample of 20,000 images from a wider total of 4.6 million produced by Grok's image-generation feature between December 29, 2025 and January 8, 2026. Based on that sample, CCDH estimates that Grok generated around 3 million photorealistic non-consensual intimate images in the 11-day period, an average pace of roughly 190 per minute. The researchers also estimated that Grok generated around 23,000 non-consensual intimate images of children over the same period — one every 41 seconds.
A separate analysis, reported by Bloomberg, painted an even more alarming real-time portrait. A 24-hour analysis conducted from January 5 to 6 found that users had Grok create 6,700 inappropriate or manipulated images per hour — 84 times the combined output of the top five deepfake websites. This was not a fringe abuse of the platform. It was happening at volume, in public, on one of the world's most widely used social media networks.
The victims were not abstractions. In 11 days, Grok users generated images of actors Selena Gomez, Millie Bobby Brown, and Christina Hendricks, singers Taylor Swift, Billie Eilish, Ariana Grande, Ice Spice, and Nicki Minaj, Swedish Deputy Prime Minister Ebba Busch, and former U.S. Vice President Kamala Harris. A schoolgirl's "before school selfie" was undressed by Grok and the result posted publicly. An image of six young girls wearing micro bikinis, generated by Grok, remained publicly available on X weeks later. Japan's Princess Kako had manipulated images of her posted on X. Ashley St. Clair, the mother of one of Musk's children, told NBC News that Grok produced countless explicit images of her, including some based on photos taken when she was 14.
AI Forensics, a European nonprofit that investigates algorithms, analyzed 800 pieces of recovered content and found that almost 10% depicted photorealistic, very young people in explicit scenarios. Wired reported that far more graphic AI-generated sexual imagery was being created by Grok on its standalone website and app, which are separate from X, including imagery of female celebrities removing their clothes and engaging in explicit scenarios.
How "Spicy Mode" Built the Machine
This was not an accident that emerged from an otherwise responsible product. The architecture of what happened was intentional in its design, even if the specific harm was underestimated.
Grok was launched in 2023. In August 2024, image generation capabilities were added. The lawsuit against xAI details how xAI, under the direction of Elon Musk, deliberately designed Grok to create non-consensual explicit content, marketed a "Spicy Mode" to attract users, and configured the model's system prompt to assume "good intent" when users referenced "teenage" or "girl." Cases of Grok being used to remove clothes from women in pictures began surfacing as early as May 2025. By late December 2025, a trend of X users requesting such edits to women's photos without permission had taken root, and this received significant media attention in the first days of January 2026.
The xAI safety team, already small compared to competitors, lost several members in the weeks leading up to the crisis. Musk had internally pushed back against guardrails for Grok. When the crisis became undeniable, xAI's initial response to press inquiries — including from Al Jazeera, Fortune, and NPR — was an automated reply that read: "Legacy Media Lies."
On January 9, 2026, X restricted image generation to paying users in an effort to stem the flood — rather than disabling the feature, xAI turned it into a premium benefit, continuing to advertise "Spicy Mode" to subscribers. British Prime Minister Keir Starmer's office called the measure "insulting" to victims and "not a solution," saying it "simply turns an AI feature that allows the creation of unlawful images into a premium service." The restriction did not cover the standalone Grok app or website, and non-paying X users could still use the "Edit image" feature.
Indonesia and Malaysia: The World's First Grok Bans
The first governments to act were in Southeast Asia, and their speed reflected both the severity of what was happening and the structural context of their legal systems.
On January 10, Indonesia's government temporarily blocked access to Grok. Communication and Digital Affairs Minister Meutya Hafid stated: "The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space." Initial findings showed that Grok lacked effective safeguards to stop users from creating and distributing illegal explicit content based on real photos of Indonesian residents. The ministry summoned X officials.
Malaysia's Communications and Multimedia Commission ordered its own temporary restriction the following day, following "repeated misuse of Grok to generate obscene, non-consensual explicit, indecent, grossly offensive, and non-consensual manipulated images, including content involving women and minors." The MCMC said it had issued notices to both X and xAI on January 3 and January 8 respectively, yet deemed their responses "insufficient to prevent harm or ensure legal compliance."
Both Indonesia and Malaysia are Muslim-majority countries with strict anti-pornography laws. Their actions were not performative — they were backed by legal architecture that gave them real enforcement tools. Indonesia categorised the misuse of AI for producing fake pornography as a form of "digital-based violence." Malaysia lifted the temporary restriction on January 23 after X implemented safety measures, though authorities said Grok would remain subject to continuous monitoring. Indonesia allowed Grok back online on February 2 after X Corp made a written commitment to service improvements and compliance with applicable laws.
For the rest of the world watching, Indonesia and Malaysia had done something unprecedented: they had blocked a major AI product over its content moderation failures, held that block in place until concrete commitments were made, and then conditionally restored access. It was a template that regulators elsewhere would study carefully.
India, France, and the UK: Investigations Multiply
The Southeast Asian bans were just the beginning of the global regulatory wave.
India's Ministry of Electronics and Information Technology moved first among the major economies, issuing an order giving X 72 hours to submit an action-taken report detailing technical safeguards to prevent Grok from generating obscene content. The order warned that failure to comply could jeopardize X's safe harbor protections, which shield it from legal liability for user-generated content. In response to Indian government complaints, X removed 3,500 posts and 600 accounts — the largest such enforcement action the platform had taken since the scandal began.
In France, the response went further. On January 2, French ministers reported the AI tool to prosecutors, calling the content "manifestly illegal," and asked regulators to check compliance with the Digital Services Act. On February 3, Paris prosecutors, supported by a cybercrime unit and Europol, raided the Paris offices of X. The investigation, which had begun as a probe into algorithmic abuse and data extraction, expanded to cover Holocaust denial and sexual deepfakes. Elon Musk and former X CEO Linda Yaccarino were summoned to a hearing scheduled for April 20.
In the United Kingdom, Ofcom launched a formal investigation on January 12, calling it "a matter of the highest priority" and warning X could face a ban or fines up to 10% of global revenue. Prime Minister Keir Starmer told Parliament the images were "disgusting" and "unlawful." The UK simultaneously accelerated provisions of the Digital Use of Artificial Intelligence Act to make it illegal for AI image-generating services to create non-consensual intimate images of adults, with that rule taking effect February 6, 2026. Technology Secretary Liz Kendall called the AI-generated images "weapons of abuse."
The European Union: From Investigation to Legislation
The European response to the Grok scandal ultimately produced the most structurally significant legal change of any jurisdiction.
On January 26, 2026, the European Commission opened a formal investigation into X to determine whether Grok had met its legal obligations under the Digital Services Act. EU tech commissioner Henna Virkkunen stated that non-consensual sexual deepfakes of women and children were "a violent, unacceptable form of degradation" and that the rights of women and children should not be "collateral damage" of X's services.
Separately, Ireland's Data Protection Commission opened an investigation into whether X violated GDPR in relation to Grok's generation of images involving Europeans, including children. The European Commission ordered X to retain all internal documents and data related to Grok until the end of 2026. Spain's government ordered prosecutors to investigate X, Meta, and TikTok for alleged crimes related to AI-generated child sex abuse material.
Then came the legislative response. On March 13, 2026, EU member states backed a ban on AI systems generating sexualised deepfakes as part of proposed amendments to the bloc's comprehensive AI rulebook. European ambassadors agreed to prohibit "practices regarding the generation of non-consensual sexual and intimate content or child sexual abuse material." The ban will become law after final negotiations between the EU Parliament and member states. The Grok scandal had directly accelerated the EU's AI regulatory timeline in one of the most significant legislative developments of the year.
The Lawsuits: Accountability Moves to the Courts
While governments investigated, victims and their lawyers were building a parallel accountability track through the US court system.
On January 15, 2026, Ashley St. Clair filed a lawsuit against xAI in New York State Supreme Court, alleging that Grok generated and distributed "countless sexually abusive, intimate, and degrading deepfake content" of her. In a counter-move, xAI sued St. Clair in Texas federal court on January 16, seeking over $75,000 in damages and claiming she violated xAI's terms of service by filing in New York instead of Texas.
The class action filed in the Northern District of California on March 17, 2026 alleged that xAI knowingly designed the technology to create non-consensual explicit imagery of real people and children, and that Elon Musk knew there were dangers associated with the technology but chose not to enact industry-standard guardrails that could have prevented child sex predators from using it. The lawsuit cited CCDH's research and sought to represent a nationwide class of everyone who had images of themselves as minors altered by xAI tools.
The plaintiffs' stories were not abstractions. One discovered AI-altered images of herself through an anonymous Instagram link leading to a Discord server containing similar sexualized content involving at least 18 other minors. The perpetrator had accessed photographs from her social media accounts. The lawsuit details that the perpetrator distributed the material on Discord, Telegram, and file-sharing platform Mega, trading the AI-generated images of minors for non-consensual explicit content of other minors.
Meanwhile, 35 US state attorneys general called on xAI to cease allowing sexual deepfakes to be generated, and California attorney general Rob Bonta announced a state investigation, stating: "The avalanche of reports detailing the non-consensual explicit material that xAI has produced and posted online in recent weeks is shocking."
Musk's Response: Deflection, Dismissal, Litigation
Elon Musk's public handling of the crisis reflected both his general philosophy toward content moderation and his specific position on this issue. When users began circulating the non-consensual intimate imagery in early January, Musk reacted to one image — a toaster that Grok had dressed in a bikini — with laughing emojis, writing "Not sure why, but I couldn't stop laughing about this one." When governments criticised the platform, he responded that they "want any excuse for censorship" and described Starmer's position as "fascist."
When the scale of child-related content became undeniable, Musk posted on X on January 14 that he was "not aware of any illegal imagery of minors generated by Grok" — despite two lawsuits already alleging that Grok had created exactly such material. When pressed by regulators and media, xAI's press email delivered the same automated response: "Legacy Media Lies."
This approach — dismissing media, counter-suing victims, attributing regulation to censorship — has been consistent throughout. It has not resolved the crisis. Multiple investigations remain open. The lawsuits are proceeding. The EU ban on deepfake generation is moving toward becoming law. And despite the Pentagon signing a $200 million contract for Grok and integrating it in January 2026, xAI's reputational and legal exposure from this episode remains significant and ongoing.
Why This Matters Beyond Grok
The Grok deepfake scandal is bigger than one chatbot or one billionaire's platform. It is a case study in what happens when a major AI capability is deployed at consumer scale — integrated into a platform with hundreds of millions of users — without the safety architecture that protects against foreseeable and catastrophic misuse.
Every other major AI image-generation platform had implemented CSAM (child sexual abuse material) prevention measures. Grok, according to the lawsuits and independent analyses, did not. The complaint alleges that xAI knowingly designed, marketed, and profited from an image and video generator capable of creating non-consensual explicit content of real people including children, while refusing to implement the industry-standard CSAM prevention measures used by every other major AI company.
The result was industrial-scale sexual abuse distributed through social media, with victims ranging from schoolchildren to politicians to entertainers, across dozens of countries. The Indonesia and Malaysia bans were the first government shutdowns of an AI product over its safety failures. The EU legislative response was the fastest movement of AI safety law in the bloc's history. The US lawsuits may produce the first major damages awards against an AI company for enabling child sexual abuse material.
This is where the line is. AI image generation is a powerful and in many applications beneficial technology. But power without safety architecture is not a product — it is a weapon handed to every bad actor with an internet connection. The Grok crisis demonstrated this at a scale that could not be ignored. The regulatory and legal responses it triggered will shape how AI companies approach safety for years. For continued coverage of AI safety, regulatory developments, and the lawsuits as they progress, WinTK publishes ongoing analysis for South Asian audiences navigating this rapidly evolving landscape.