“You Can’t Trust Anything”: How Deepfakes Are Breaking Enterprise Verification Models
Security models are no longer enough as multi-modal attacks overwhelm traditional controls, forcing a rethink of enterprise trust systems.
Confluent deal highlights IBM’s focus on streaming data infrastructure to support AI deployment, governance, and hybrid cloud integration.
Project SnowWork introduces tooling to move AI from experimentation to execution, targeting enterprise-wide adoption and measurable ROI.
The Promptfoo deal underscores the importance of model evaluation, red-teaming, and reliability in scaling enterprise AI deployments.
Hyundai and Kia will integrate NVIDIA DRIVE to support scalable autonomous systems, from ADAS to robotaxi development.
New partner program from Anthropic funds training, technical support, and go-to-market collaboration to accelerate enterprise adoption of Claude AI.
Friday, 13 March 2026 | Enterprise AI Governance & Security
Would your current AI governance framework survive a real audit, a regulatory inquiry, or an agentic system going off-script at machine speed? Across six sessions on the AI-360 BrightTalk channel, practitioners from Google, PayPal, IBM, Crown Cards, Santa Clara University School
Google, PayPal, IBM, and beyond tackle AI governance, MCP security, and agentic risk — on demand via the AI-360 BrightTalk channel.
MCP is rapidly transforming how AI agents interact with enterprise systems, opening up a new class of supply chain, identity, and governance risks that security teams can’t ignore.
Hefty cash burn threatens OpenAI’s longevity in the face of self-funded competitor.
Google DeepMind CEO warns that defensive systems must outpace AI-powered attack vectors as AGI approaches.
From the EU AI Act to cyber policy wording, panelists examined how emerging regulation and insurance structures intersect with enterprise AI deployment.
Supreme Court allows appeal in Emotional Perception AI v. Comptroller General, mandating EPO-aligned test for computer-implemented inventions under UK law.
As GenAI scales across enterprises, quantum advances are compressing security timelines, challenging encryption lifetimes, governance models, and breach assumptions.
Under a $151 Billion SHIELD contract, IBM will bring governed, interoperable, mission-grade AI to accelerate threat detection and response.
In parallel with its existing inquiry, the European Commission has launched a new investigation into how risks are assessed and mitigated in the deployment of Grok's functionality on X.
IBM’s Cost of a Data Breach Report 2025 reveals faster detection offsets rising AI-driven attacks, though US breach costs hit a record high.
Experts discuss the practical steps organizations must take to secure AI, protect data, and operationalize responsible deployments.
Switzerland's AI guidelines emphasize a people-centric approach, transparency, and accountability, and the country aligns its AI policy with its broader digital strategy.
The collaboration between Georgia Tech and Meta in creating the OpenDAC database represents a significant milestone in the fight against climate change.
Thailand's AI strategy targets 10 sectors and emphasizes digital empowerment, but lacks explicit mention of fundamental rights and fairness in AI development.
Tunisia is in the nascent stages of developing a national AI strategy, with efforts underway to establish a framework for responsible AI development and use.
Finland leads in AI development with a national strategy balancing innovation and ethics, while actively shaping EU regulations and global AI norms.
Portugal leads in responsible AI governance with its AI Portugal 2030 strategy, focusing on ethical principles and innovation while addressing societal challenges.