The European Commission launched a formal investigation under the Digital Services Act into X’s deployment of Grok after researchers and regulators found the AI tool generated and disseminated non‑consensual sexualized images, including imagery that may involve children. The probe will assess whether X properly identified and mitigated risks tied to Grok and to X’s recommender systems.

The Commission's inquiry centers on Grok, X's AI assistant, and the platform's recommender systems, after researchers and regulatory agencies flagged the generation and distribution of non-consensual sexualized deepfake images on the platform. EU officials said some of the content under scrutiny may depict minors or be otherwise exploitative, raising acute concerns about serious harm to users and about the adequacy of X's systemic risk assessments, content-moderation safeguards, and mitigation measures.

The probe will examine whether X conducted the required risk analyses before and after deploying Grok, whether the company's recommender algorithms amplified harmful outputs, and whether its notice-and-takedown, detection, and preventive mechanisms meet DSA obligations. The move follows parallel regulatory action in several countries and intensifying public scrutiny of generative-AI safety, non-consensual imagery, and platform accountability. If the Commission finds that X breached its legal duties under the DSA, the formal inquiry can lead to remedial orders and fines of up to 6% of the company's annual worldwide turnover.