UK and EU regulators have opened investigations into Grok, the generative AI system developed by xAI and integrated into the X platform, following reports that it was used to create and circulate non-consensual deepfake images, including sexually explicit material. The probes mark one of the clearest applications to date of existing digital safety and criminal frameworks to generative AI systems operating at consumer scale.
The UK: Online Safety Act
In the UK, media regulator Ofcom confirmed in January that it had launched a formal investigation into X under the Online Safety Act. The inquiry is focused on whether the platform failed to put in place adequate safeguards to prevent users from generating and sharing illegal content using Grok, including non-consensual intimate images and child sexual abuse material. Both categories are already criminal offences under UK law, regardless of whether the imagery is real or AI-generated.
The investigation follows public criticism from senior UK officials, including Prime Minister Keir Starmer, who described AI-generated sexualized images of real people as unlawful and unacceptable. Under the Online Safety Act, Ofcom has the power to impose fines of up to 10% of global annual revenue if it finds systemic failures in risk management or content moderation. For AI providers and platforms, the case illustrates how model capabilities can directly translate into regulatory exposure when deployed without sufficiently robust controls.
The EU: Digital Services Act
At the EU level, the European Commission has said it is examining Grok’s outputs to determine whether they breach EU law, including obligations under the Digital Services Act (DSA). In December 2025, the Commission issued its first fine against X, of €120 million ($140 million), for breaching the DSA’s transparency obligations; the cited breaches included the deceptive design of its “blue checkmark”, the lack of transparency of its advertising repository, and the failure to provide researchers with access to public data. A further debate is expected on 20 January 2026, with calls for stronger and faster enforcement.
EU officials have stated publicly that the generation and distribution of explicit deepfakes involving real individuals have “no place” on platforms accessible in the bloc. Several member states have also taken parallel action. French prosecutors have expanded an existing investigation into X to assess whether Grok-generated content could constitute child sexual abuse material under national law.
Although the EU AI Act has not yet fully come into force, the Grok case shows how existing horizontal regulation is already being used to address AI-related harms. The DSA places responsibility on platforms to assess and mitigate systemic risks arising from their services, including the misuse of automated systems. For enterprises deploying generative models in consumer-facing or open-ended environments, this shifts the focus from theoretical compliance to demonstrable, operational risk controls.
X has responded by introducing restrictions on Grok’s image-editing features and limiting certain functions in jurisdictions with stricter legal regimes. Regulators, however, have indicated that post hoc changes do not preclude enforcement. The central question is whether foreseeable risks were adequately assessed and mitigated before deployment.
The US: Scrutiny Without a Single Federal Probe
In contrast to the formal investigations underway in the UK and EU, there is no single, unified federal investigation into Grok in the United States. However, the issue has drawn growing attention from state officials and federal lawmakers, reflecting fragmented but intensifying scrutiny of AI-generated deepfakes. In mid-January, California’s attorney general confirmed that his office was reviewing complaints related to the creation and distribution of sexually explicit, non-consensual AI-generated images, including those linked to Grok. The review is focused on potential violations of state consumer protection and harassment laws rather than AI-specific regulation.
At the federal level, pressure has so far taken the form of congressional inquiries and public statements rather than enforcement action. Lawmakers have questioned whether existing protections for online platforms, including Section 230 of the Communications Decency Act, should apply to AI systems that actively generate content rather than merely host it. Separately, the Federal Trade Commission is expected to play a larger role as new laws targeting the distribution of non-consensual intimate imagery come into force, even though these statutes are technology-agnostic and do not directly regulate model design.
For enterprise AI providers, the US landscape highlights a different risk profile from Europe. Enforcement is emerging through state-level action, civil liability, and reinterpretation of existing laws, rather than through a centralized AI regulatory regime. While this can delay formal investigations, it increases uncertainty for companies operating across multiple jurisdictions, where compliance obligations may be shaped by precedent-setting cases rather than prescriptive rules.
The Outlook for Enterprise
For enterprise AI leaders, the investigations carry broader implications. Generative capabilities that scale rapidly through consumer platforms can trigger liability not only for downstream misuse but also for upstream design choices, training data governance, and feature access controls. The cases reinforce the need for jurisdiction-aware deployment strategies, rigorous content filtering, and auditable risk assessments tied to real-world legal obligations.
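To make the operational point concrete, the sketch below shows what a minimal jurisdiction-aware feature gate with audit logging might look like. It is purely illustrative: the policy table, feature names, jurisdictions, and the check_feature helper are assumptions for the example, not drawn from any actual platform's configuration or from the investigations described above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json
import logging

logger = logging.getLogger("ai_feature_gate")

# Hypothetical policy table: which generative features are enabled per jurisdiction.
# Names and values are illustrative only.
FEATURE_POLICY = {
    "image_edit_real_person": {"UK": False, "EU": False, "US": True, "default": False},
    "image_generation":       {"UK": True,  "EU": True,  "US": True, "default": True},
}

@dataclass
class GateDecision:
    feature: str
    jurisdiction: str
    allowed: bool
    reason: str

def check_feature(feature: str, jurisdiction: str) -> GateDecision:
    """Resolve whether a feature is available in a jurisdiction and log the decision
    so it can later be surfaced in an audit or an information request."""
    policy = FEATURE_POLICY.get(feature, {})
    allowed = policy.get(jurisdiction, policy.get("default", False))
    decision = GateDecision(
        feature=feature,
        jurisdiction=jurisdiction,
        allowed=allowed,
        reason="explicit_policy" if jurisdiction in policy else "default_policy",
    )
    # Append-only audit record; in practice this would go to durable, tamper-evident storage.
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        **decision.__dict__,
    }))
    return decision

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    print(check_feature("image_edit_real_person", "UK"))  # blocked under the stricter illustrative policy
    print(check_feature("image_edit_real_person", "US"))  # allowed under the same illustrative policy
```

The design choice the sketch illustrates is that the restriction and the record of the decision are produced by the same code path, which is what makes risk controls demonstrable rather than merely asserted.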
More broadly, the Grok probes suggest that European regulators are moving decisively from policy development to enforcement. As generative AI systems become embedded in widely used platforms, regulatory scrutiny is increasingly focused on outcomes rather than intent. Governance frameworks that treat safety, compliance, and accountability as core system requirements are becoming operational necessities rather than optional safeguards.