UK regulator Ofcom has opened a formal investigation into whether X met its legal duties after reports that Grok, X's chatbot, generated intimate images and sexualized depictions of minors. The government condemned the outputs, and if X is found non‑compliant with online‑safety rules, Ofcom can pursue remedies including court orders and measures targeting payment services or advertisers.

Ofcom launched the investigation to determine whether X complied with its UK online‑safety duties, following multiple reports that Grok, the chatbot operated by X, generated intimate images and sexualized depictions of children and adults without consent. The probe will assess the platform's governance, content‑moderation processes, and safety‑by‑design practices, as well as any failures to prevent the dissemination of harmful AI‑generated material. UK political leaders and ministers publicly condemned the incidents as unlawful and repugnant, increasing pressure on Ofcom to act swiftly.

If non‑compliance is found, Ofcom can order remediation and apply to the courts for enforcement measures, which could include fines, restrictions on advertising or payment services, and technical or connectivity interventions to mitigate harm.

The investigation underscores regulators' expectation that platforms deploying generative AI implement robust safeguards, logging, and oversight to prevent non‑consensual intimate‑image abuse. The case is likely to influence ongoing debates about platform liability, transparency requirements for AI systems, and obligations to retain records and audit model outputs for safety.