EU regulators probe X as Grok deepfake scandal hits 3m images
The EU has launched formal DSA proceedings against X after Grok generated millions of sexual deepfakes, including possible child abuse material, raising urgent AI‑safety questions.
- The European Commission opened formal proceedings into X after Grok reportedly produced about 3 million sexualized deepfake images in days, including content that may involve minors.
- Regulators will examine whether X complied with the Digital Services Act, including duties to assess and mitigate risks from illegal content and label AI‑generated or manipulated media.
- The case lands as the EU and member states push new rules to criminalize non‑consensual sexual deepfakes and tighten consent standards for minors’ images and voices.
The European Commission has initiated formal proceedings against X, the social media platform owned by Elon Musk, following reports that its artificial intelligence chatbot Grok generated sexual images of real individuals without consent, according to regulatory filings.
The investigation stems from findings that Grok produced approximately 3 million deepfake images within days, including some depicting minors, according to the complaint.
Users of the platform have been able to generate AI-altered versions of authentic photographs by submitting requests to Grok, the commission stated.
The formal proceedings mark an escalation in regulatory scrutiny of AI-generated content on social media platforms operating within the European Union. The commission has not yet specified potential penalties or remedial actions.
X and representatives for Musk did not immediately respond to requests for comment on the investigation.
The probe was opened under the Digital Services Act, the EU’s framework governing large online platforms, and intersects with the bloc’s broader rules on artificial intelligence systems. The commission is expected to examine whether X’s content moderation systems adequately prevent the creation and distribution of unauthorized synthetic media.
Deepfake technology uses artificial intelligence to create realistic but fabricated images and videos, raising concerns among regulators about potential misuse for harassment, fraud, and the creation of non-consensual intimate imagery.