Would your curent AI governance framework survive a real audit, a regulatory inquiry, or an agentic system going off-script at machine speed?

Across six sessions on the AI-360 BrightTalk channel, practitioners from Google, PayPal, IBM, Crown Cards, Santa Clara University School of Law, and beyond have been working through exactly that question.

Here's what's on the channel.


DevSecOps for AI: Why 90% Stays the Same — and the 10% That Changes Everything Priya Pandey, Aron Eidelman, and Leonid Yankulin (Google Security) | 51 mins

Three members of Google's security advocacy team make a case that most of your existing DevSecOps pipeline is still valid — and then explain exactly where it isn't. The core problem: traditional shift-left testing was built for deterministic systems. AI isn't. That gap — prompt injection succeeding 50 times, then failing the next — is where risk actually lives. The session covers runtime security layers, the limits of WAFs against agentic workloads, and why AI logs capturing PII in plain text demand a completely new observability approach.


MCP Security in Production: Supply Chain Risk, Proven RCE & Identity Gaps Aron Eidelman (Google Security) and David Pierce (PayPal) | 57 mins

Researchers have already demonstrated remote code execution on publicly accessible MCP servers. There are no signed repos, no attestation, no chain of custody in the current supply chain. This session doesn't theorise — it walks through what the attack surface actually looks like, how indirect prompt injection manipulates agent behaviour without user awareness, and why token pass-through quietly breaks your audit trail. Practical mitigations are on the table: OAuth 2.1 delegation, SLSA-based attestation, and how to offer developers well-lit paths without becoming the department of no.


The AI Governance Illusion: Why 75–80% Compliance Is Easier (and Harder) Than You Think Henry Platten, Chief AI Officer, Crown Cards; Prashant Kumar, Head of Responsible AI Governance, Risk & Business Continuity | 56 mins

The threshold for multi-jurisdictional AI compliance is lower than most organisations think: if you can explain what data trained a model, who signed off, and what happens when it fails, you're most of the way there. So why are most still struggling? Platten and Kumar — between them covering multiple continents and industries — argue that the gap between governance policy and operational governance is where real risk lives. ISO 42001, shadow AI, automation bias, and minimum viable frameworks for scale-ups all get proper treatment here.


AI Governance Beyond PDFs: Building Auditable, Legally Defensible AI Systems James Greenwood (CognitiveInsight.ai); Alisha Outridge (Byte & Chord); Siddhi Gowaikar (IBM) | 60 mins

Most organisations have governance documentation. Very few can prove what their AI actually did at a given point in time — and that distinction is where legal exposure lives. This panel looks at what auditors actually need versus what model cards and PDFs provide, how cryptographic audit trails create externally verifiable evidence, and why the US regulatory landscape fragmenting across federal, state, and local levels makes a unified evidence architecture more important, not less.


Garbage In, Garbage Faster: Why Agentic AI Exposes Your Organisational Debt Laura Van Weegen, Business Architect | 40 mins

The shortest session on the channel is also the most direct. Business Architect Laura Van Weegen's argument: AI doesn't create new organisational problems — it removes your ability to ignore the ones that already exist. Undocumented workflows, undefined decision ownership, and human workarounds masking broken systems all get amplified at machine speed. If agentic AI follows your documented processes, and those processes don't reflect reality, you have a problem no model can solve.


AI Governance 2026: Why Technical Readiness Without Human Transformation Creates Hidden Liability Cha'Von Clarke Joell (CKC Cares); James McNeely (Striv.AI); Linsey Krolik (Santa Clara University School of Law) | 60 mins

DHS applied an enterprise AI risk framework to 400 use cases between 2023–2024. The finding: systems that tested perfectly in controlled environments failed unpredictably at scale — not because of technical limitations, but human factors. This panel extends that lesson into 2026, covering use case drift, psychological readiness for high-stakes AI decisions, and what "breathable compliance" looks like when regulatory requirements are evolving faster than governance frameworks can be written.


All six sessions are available on demand via the AI-360 BrightTalk channel.

Share this post
The link has been copied!