Members of the European Parliament reached a preliminary political agreement on amendments to the EU AI Act on 11 March, with a full committee vote scheduled for 18 March. Described as a step toward simplification, the deal extends several compliance deadlines: requirements for systems listed in Annex III would apply from December 2027, while those in Annex I would be pushed out to August 2028, giving organisations more time to prepare for obligations originally due to land this summer.

The agreement also includes provisions targeting non-consensual explicit deepfakes, banning AI systems that alter, manipulate, or artificially generate realistic images or videos depicting sexually explicit activities involving an identifiable person without their consent. The move follows the European Commission's investigation into X's AI tool Grok over similar concerns.

However, the deal has not landed well with everyone. Information Technology Industry Council Policy Director for Europe Marco Leto Barone warned of "worrying signals" in the agreement, citing the shortening of the grace period for generative AI transparency requirements to just three months as a source of legal uncertainty. He also criticised the reinstatement of registration requirements for certain non-high-risk systems as a missed simplification opportunity.

The discontent runs wider than one trade body. Forty-eight EU-based trade associations wrote to MEPs and the Council arguing for immediate delays on the August deadlines and exemptions for organisations already covered by AI rules under existing sectoral frameworks.

That joint letter makes a pointed case that the fast-moving Omnibus process is failing to resolve the compliance problems facing sectors like healthcare, manufacturing, automotive, and energy. Companies in these industries are already subject to robust sectoral frameworks but find themselves caught in a double or even triple layer of regulation, classified as high-risk under the AI Act despite existing sector-specific oversight. The letter calls explicitly for sectors already governed by product safety legislation to be moved from Section A to Section B of Annex I, so that AI requirements can be addressed through appropriate sectoral channels.

The financial stakes are stark. The Commission's own analysis estimates that an SME developing a high-risk AI system could face up to €319,000 in initial compliance costs and up to €150,000 per year thereafter, with real-world figures potentially significantly higher when certification and staffing costs are included.

The industry coalition is also pushing for the timelines of the AI and digital omnibus packages to be formally aligned, given the direct interactions between the AI Act, GDPR, and cybersecurity legislation. Irish MEP Michael McNamara, the AI Omnibus rapporteur, acknowledged that some technical negotiations remain ongoing ahead of the 18 March vote.

Whether the committee vote produces further concessions remains to be seen. Either way, the gap between what MEPs have agreed and what European industry is asking for is conspicuous.
