The European Commission is the latest to launch a formal investigation into Elon Musk’s X amid rising scrutiny of the creation and distribution of deepfakes, this time over systemic risk handling under the Digital Services Act (DSA).

The new investigation will examine whether the company properly assessed and mitigated the risks associated with deploying Grok's functionalities on X in the EU, including risks related to the dissemination of illegal content such as manipulated sexually explicit images and content that may amount to child sexual abuse material.

To protect European citizens, the investigation will assess whether X complies with its DSA obligations to diligently assess and mitigate systemic risks, and whether the risk assessment for Grok's functionalities was properly conducted and the resulting report transmitted to the Commission before their deployment.

The investigation will focus on the dissemination of illegal content, content with potential negative effects in relation to gender-based violence, and content that could have serious negative consequences for physical and mental well-being, all stemming from the deployment of Grok's functionalities on the X platform.

In parallel, the Commission has extended its ongoing investigation, launched in December 2023, into X's compliance with its risk management obligations for recommender systems. The extended probe will establish whether X has properly assessed and mitigated all systemic risks, as defined in the DSA, associated with its recommender systems, including the impact of its recently announced switch to a Grok-based recommender system.

If proven, these failures would constitute infringements of Articles 34(1), 34(2), 35(1) and 42(2) of the DSA, and could result in a fine of up to 6% of the company's global annual turnover.


The most chaotic deepfake webinar in the world
When 40% of security professionals fail to spot deepfakes under test conditions, what chance do your employees have? Seven practitioners from the front lines (building detection systems, implementing governance frameworks, selling solutions, and cleaning up after attacks) deliver unfiltered reality about deepfake threats targeting enterprises right now.

Speakers:
• Bahadir “Bob” Yavuz (Global Telco Consult, fraud detection specialist)
• Alexandra Jorison (Identif.ai, deepfake detection)
• Ray Ellis (AI Security Lead, multinational FMCG)
• Richard Mendoza (Senior vCISO, Compass MSP, AIGP certified)
• Craig Clark (Director, Clark & Company, education/public sector)
• Aruneesh Salhotra (Founder, Investor, OWASP AIBOM Project Lead)
• David Clarke (vCISO, ISO27001-SOC2)

Key topics:
• Account takeover economics ($5K-$10K per incident)
• Network-layer authentication using signals attackers can’t fake
• Zero trust principles applied to human identity verification
• The Arup $25M case study
• Challenge-response protocols for video calls
• False positive crisis in detection platforms
• Shadow AI governance failures
• Resource constraints degrading security team attention spans
• Business case realities when proving ROI before incidents
• Education sector vulnerabilities, including Nudify apps targeting children
• Third-party risk from vendors overselling detection capabilities

Seven practitioners who’ve implemented systems, governed deployments, sold solutions, and handled the aftermath. The unfiltered reality of protecting organizations when seeing is no longer believing.

By registering you agree to share your information with our commercial partners.
