As organizations accelerate AI adoption, security and governance have moved from back-office concerns to board-level priorities. In a recent Ai-360 webinar, AI Security Reality Check: Greenland, Guardrails, and What Boards Actually Need, industry leaders Cynthia Colbert (Senior Partner Development Sales Manager GPSI, AI and Security), Srini Kakkera (CTO), and Neil Oschlag-Michael (Head of AI Risk and Security) shared insights into the operational realities of securing AI in enterprise environments, highlighting the pitfalls, opportunities, and lessons learned from early deployments.

Foundations of AI Security: Identity, Data, and Visibility

Colbert emphasized that AI security is less about the technology itself and more about visibility and control. “I don't always view AI security as a problem; I view it as an identity, data, and whole visibility problem. If you don't know where your sensitive data lives, how can you know who has access, and what happens in your environment? AI just scales the damage faster,” she said. Identity, access management, and endpoint controls form the foundation of mitigating risk in AI deployments, particularly in highly regulated sectors.

Kakkera underscored that the challenge is systemic. Unlike traditional software, where users are vetted and data governance is clear, AI models introduce new intersections of identity, data, and operational risk. “AI is very different when it comes to application onboarding in enterprises,” Kakkera said, explaining that enterprises often rush to deploy generative AI without understanding who owns the data, where it is stored, or how it flows between systems, creating significant exposure.

Oschlag-Michael illustrated the principle with a concrete example from his work with conversational AI. Virtual agents interact with users, sometimes accessing sensitive information from backend systems. His firm mitigates risk by keeping confidential data in controlled systems and using guardrails such as prepared SQL statements, cross-site scripting prevention, and zero-data retention agreements with third-party AI providers. “The backbone of security is what we call the good old-fashioned security,” he said, noting that modern AI-specific controls complement, rather than replace, fundamental security hygiene.
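
To make the prepared-statement guardrail concrete, here is a minimal Python sketch: the virtual agent binds user-supplied values as parameters rather than splicing them into SQL text, so neither a user nor a model can rewrite the query. The table, identifiers, and sample data are illustrative assumptions, not details from the webinar.

```python
import sqlite3

# Minimal, self-contained example of the prepared-statement guardrail:
# the virtual agent binds user input as a parameter, never as SQL text.
# The accounts table and its contents are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (customer_id TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('c-1001', 250.0)")

def lookup_balance(customer_id: str) -> float | None:
    # The ? placeholder binds customer_id as data, so input like
    # "' OR 1=1 --" cannot alter the query's structure.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE customer_id = ?",
        (customer_id,),
    ).fetchone()
    return row[0] if row else None

print(lookup_balance("c-1001"))        # 250.0
print(lookup_balance("' OR 1=1 --"))   # None: the injection attempt is inert
```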

Operationalizing AI in Security Workflows

The discussion moved to operationalizing AI in security workflows. Colbert outlined what “good” looks like for AI-assisted SOCs: human-in-the-loop intervention, auditable actions, least-privileged access, and reversible remediation. AI can accelerate investigations, improve prioritization, and reduce mundane tasks such as compliance reporting, but only when humans remain engaged in decision-making. Kakkera added that agentic AI can transform enterprise operations by analyzing diverse datasets and delivering actionable insights at scale, freeing human analysts for higher-order judgment tasks and cross-team collaboration.
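
As a rough illustration of those four controls, the Python sketch below shows an AI-proposed remediation that is scoped to a single target, audit-logged, blocked until a human approves, and recorded with an undo path so it stays reversible. The action names, agent names, and log shape are hypothetical, not drawn from the webinar.

```python
import json
import time
from dataclasses import dataclass

# Rough sketch of human-in-the-loop, auditable, least-privileged,
# reversible remediation for an AI-assisted SOC. All names
# ("triage-agent", "isolate-host") are hypothetical.

@dataclass
class ProposedAction:
    actor: str             # which AI agent proposed the action
    action: str            # e.g. "isolate-host"
    target: str            # least-privilege scope: one host, not the fleet
    undo: str              # the reversal step, e.g. "restore-network"
    approved_by: str | None = None

AUDIT_LOG: list[dict] = []

def execute(proposal: ProposedAction, approver: str | None) -> bool:
    proposal.approved_by = approver
    AUDIT_LOG.append({"ts": time.time(), **proposal.__dict__})  # logged either way
    if approver is None:   # human-in-the-loop: no approver, no action
        return False
    print(f"executing {proposal.action} on {proposal.target} (undo: {proposal.undo})")
    return True

act = ProposedAction("triage-agent", "isolate-host", "host-42", "restore-network")
execute(act, approver=None)         # blocked: the AI cannot act alone
execute(act, approver="analyst-1")  # runs, with a recorded reversal path
print(json.dumps(AUDIT_LOG, indent=2))
```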

Despite these advances, enterprise adoption remains cautious. Colbert observed that many organizations, particularly in regulated industries, still prefer proof-of-concept deployments, highlighting a “land and expand” approach to scaling AI across business units once a foot is in the door. Kakkera echoed this perspective, noting that AI adoption in enterprises is deliberately methodical, balancing innovation with operational reliability and regulatory compliance.

Geopolitics, Sovereignty, and Cloud Dependence

Geopolitical factors further complicate adoption. European organizations face questions about digital sovereignty and reliance on US-based technology. Oschlag-Michael was candid: “There’s no chance in hell we can manage alone,” he said, emphasizing the dependence on American cloud infrastructure. Kakkera and Colbert noted that data protection remains paramount, with ownership, governance, and residency rules guiding deployment decisions. The consensus was that secure AI is not simply a matter of localizing technology but of ensuring proper guardrails and controls regardless of infrastructure location.

Emerging risks in AI extend beyond traditional security threats. Kakkera highlighted model hallucinations, misaligned outputs, and identity spoofing as new categories of risk. “It’s not a human-introduced error; it’s a systematic error,” he explained. Enterprises must integrate AI risk frameworks, adhere to standards like ISO and SOC 2, and educate employees on safe AI usage. Oschlag-Michael stressed accountability: “You actually know who’s responsible for AI when you find out who’s responsible when it goes wrong,” noting that clear governance and traceability are essential for mitigating board-level risk.

Agentic AI in Customer-Facing Workflows

The panel also explored agentic AI, particularly in customer-facing contexts such as contact centers. Oschlag-Michael explained the criteria for human intervention: by design, by risk, on error, and by user request. In practice, effective deployment depends on defining the business case, ensuring smooth hand-offs, and maintaining context for human agents. Kakkera emphasized that these agentic workflows, when implemented thoughtfully, enable enterprises to process data more effectively, automate repetitive tasks, and coordinate across compliance, legal, and security functions.
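
The decision logic behind those hand-offs can be sketched simply. The Python below encodes the four triggers; the intent names and the confidence threshold are assumptions for illustration, not details from the panel.

```python
from enum import Enum

# Sketch of the four hand-off triggers Oschlag-Michael listed for a
# contact-center virtual agent. Intent names and the 0.6 confidence
# threshold are illustrative assumptions.

class Handoff(Enum):
    BY_DESIGN = "by design"      # some intents always go to a human
    BY_RISK = "by risk"          # the model is unsure or the topic is sensitive
    ON_ERROR = "on error"        # backend failures or repeated misunderstandings
    BY_REQUEST = "by request"    # the user asked for a person

ALWAYS_HUMAN = {"close_account", "file_complaint"}  # hypothetical intents

def needs_human(intent: str, confidence: float, error_count: int,
                user_asked: bool) -> Handoff | None:
    if user_asked:
        return Handoff.BY_REQUEST
    if intent in ALWAYS_HUMAN:
        return Handoff.BY_DESIGN
    if error_count >= 2:
        return Handoff.ON_ERROR
    if confidence < 0.6:
        return Handoff.BY_RISK
    return None  # the virtual agent keeps handling the conversation

# On hand-off, the transcript and detected intent should travel with the
# user so the human agent inherits context instead of restarting.
print(needs_human("check_balance", 0.92, 0, False))   # None
print(needs_human("close_account", 0.95, 0, False))   # Handoff.BY_DESIGN
```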

Colbert concluded by stressing that enterprise AI adoption is here to stay and will scale faster than cloud adoption did, albeit with necessary safeguards. “We need to learn how to work with AI smarter and better,” she said, underscoring that risk mitigation, compliance, and end-user education are central to sustainable AI deployment.

From Guardrails to Governance: A Framework for Leaders

The conversation reinforced a critical takeaway: securing AI is not about flashy technological solutions or rapid rollout. It is about integrating robust identity management, data governance, and auditable controls into AI workflows, applying traditional security principles to new models, and maintaining accountability at every level of the enterprise. As Kakkera put it, the goal is to adopt AI incrementally, with clear guardrails and human oversight, ensuring that innovation does not compromise reliability, compliance, or operational integrity.

For boards and executives, the discussion offers a clear framework for action. Risk management, governance, and operational controls must be prioritized alongside AI’s functional capabilities. Enterprises that understand where risk resides — in data, models, and workflows — and build processes to monitor, audit, and remediate issues will be positioned to realize AI’s potential while protecting stakeholders, assets, and customers.

AI security has evolved from a niche technical concern to a strategic imperative. Organizations that embrace the lessons shared by Colbert, Kakkera, and Oschlag-Michael — focusing on foundational security, incremental adoption, and operational transparency — will have a clear advantage in deploying AI safely and effectively at scale. The future of AI in enterprise is not about avoiding technology; it is about applying it responsibly, with rigor, and in alignment with governance and risk management principles.

Subscribe to our channel and watch for the release of AI Security Reality Check: Greenland, Guardrails, and What Boards Actually Need.

