Panelists went toe-to-toe at the Agentic + Generative AI For Insurance Europe 2026 conference, hosted by Intelligent Insurer on 12 February 2026, over the evolution needed to bridge the gaping protection gap for ever-evolving AI products. The ‘Product liability and AI: Crafting Fit-For-Purpose AI Risk Products’ panel, part of the Frontier risks and product innovation - AI & Robotics workshop, saw Claire Davey, SVP - head of product innovation and emerging risk at Relm Insurance; Chris Moore, president of Apollo ibott Commercial; Ed Ventham, co-founder & head of broking at Assured; and George Holevas, SVP, cyber, media & technology practice, specialty UK at Marsh, debate how the insurance industry should best tackle AI risk.
Behind the conversation stood the question of liability amid underdeveloped and non-standard global regulations that are failing to keep pace with rapidly evolving technology. AI systems are now embedded in core business processes, customer-facing tools, underwriting, claims automation, and decision engines. As deployment scales, so too does exposure. Moore pointed out that “most regulatory bodies haven't even defined what artificial intelligence is. There's no universal definition,” leaving a question mark over accountability, often resulting in finger-pointing when liability is called into question.
For enterprises deploying AI across multiple jurisdictions, this lack of definitional clarity creates practical governance challenges. Contractual allocation of risk between developers, deployers, and end users is becoming more complex.
Moore believes that, given the lack of regulatory guardrails, the insurance industry needs to step in to set some ethical and moral boundaries around liability by shaping available coverage for risk. In effect, coverage terms may begin to influence what is considered responsible AI practice — particularly where regulation remains fragmented.
Davey noted that, much as cyber regulation developed largely off the back of privacy breaches, consumer data and consumer harm, AI regulation is becoming a growing topic of conversation because of the risks now emerging. To date, she said, AI adoption has largely been driven by corporations investing in the technology to modify their business operations and practices.
Davey said: “We're just on the cusp of starting to see how consumers and individuals, and more concerning, children, are starting to interact with this technology, and unfortunately, leading to some bad case examples. And I think there's a bigger conversation going on globally around harms that will really start to kick forward and drive AI regulation.”
This shift from internal efficiency use cases to consumer-facing AI materially increases liability exposure. As AI outputs directly affect individuals, the potential for claims related to discrimination, misinformation, or harm expands.
Davey noted that jurisdictions are approaching the issue differently, with Europe, at present, regulating more heavily through legislation when it comes to liability arising out of AI. She explained that “there's a core principle within the EU AI Act, in addition to the Product Liability Directive, that there needs to be human oversight. So, companies cannot just absolve themselves of liability, either as the management of the company that deploys the tool or as the developer of the tool. From my perspective, I feel like these discussions or concerns around where liabilities sit are a bit unnecessary or misleading.”
What this highlights is that governance frameworks, audit trails, and accountability structures must be designed into AI systems from the outset rather than retrofitted later.
Ventham raised the point that some cyber insurance can cover AI in a broader sense. He said: “Thinking as a client, I would argue that from a cyber perspective, AI is actually a component that you would expect to be picked up as it stands under the definition of computer system. It is part of your computer system if you're using it as a client. The next phase is to make it explicitly clear by stating the terminology for things like hallucinations in the policy wording.”
Davey countered this point with concerns: “A computer system definition is broad enough to capture perhaps what an AI system is, but we need to be thinking about what the insurable risks are, what are the loss scenarios that clients are concerned about? It’s issues like IP infringement and discrimination. If you look at a tech cyber policy, those things are often excluded.”
Davey argued that these policies would need to evolve far beyond their current standard to encapsulate the risk, returning to her stance that tailoring existing structures to include AI-specific coverage across existing lines would better protect the client. For enterprise AI operators, this distinction matters: assuming coverage exists under traditional cyber policies may leave material gaps, particularly where claims arise from model outputs rather than system breaches.
What is clear from this conversation is that regulation needs to catch up. In the meantime, enterprises deploying AI need to be cautious and implement their own guardrails for protection through clearer internal governance, stronger oversight mechanisms, and a realistic assessment of how existing insurance coverage aligns with emerging AI loss scenarios.