Deepfake risk is no longer theoretical, yet enterprises remain structurally unprepared to manage it. On that point, the seven practitioners speaking at a recent AI-360 webinar were unanimous.
The panel comprised Bahadir "Bob" Yavuz, fraud detection specialist at Global Telco Consult; Alexandra Jorison, deepfake detection, Identif.ai; Ray Ellis, AI Security Lead at a multinational FMCG; Richard Mendoza, Senior vCISO at Compass MSP (AIGP certified); Craig Clark, Director, Clark & Company (education and public sector); Aruneesh Salhotra, founder, investor, and OWASP AIBOM Project Lead; and David Clarke, vCISO (ISO 27001/SOC 2). Together, they described a threat that does not fit existing models: one that blends synthetic media, identity fraud, and social engineering into coordinated attacks capable of bypassing multiple layers of control.
The most immediate risk is not video but audio. Voice cloning has matured to the point where convincing impersonations can be created with minimal input, undermining voice-based verification across finance, customer service, and executive workflows. While video remains harder to scale, audio is already widely exploitable and is eroding trust in a previously reliable signal.
However, the greatest risk lies in combination. Enterprises are seeing attacks that layer text, audio, images, and pre-recorded video into cohesive narratives. These campaigns often begin with phishing messages and escalate with additional media to reinforce credibility. The result is a shift from isolated attacks to coordinated, multi-step fraud operations designed to bypass controls that evaluate signals individually.
Most of these attacks are not real-time. Instead, they rely on pre-prepared content distributed at scale, reducing complexity while maintaining effectiveness. This reflects a broader evolution from opportunistic phishing to structured identity attacks that are harder to detect and disrupt.
Detection capabilities remain limited. Many organizations are not identifying deepfake incidents through designed controls but by chance, suggesting the scale of the problem is underreported. Even where tools are deployed, performance is inconsistent: high false-positive rates and missed detections reduce trust in automated systems and create a false sense of coverage.
Human detection is also insufficient. Security teams often lack specialized expertise in synthetic media, and the broader workforce cannot reliably identify deepfakes. At the same time, increased workload and time pressure reduce attention to detail, making it more likely that subtle indicators are missed. In this context, traditional awareness training delivers limited value on its own.
This exposes a deeper issue: enterprises are still relying on outdated trust models. Visual and auditory cues, once considered reliable indicators of authenticity, can no longer be trusted. Organizations are being forced to rethink how identity and intent are verified.
Deepfakes also cut across organizational boundaries. Unlike traditional cyber threats, they do not sit within a single function. Incidents can begin as technical anomalies but quickly escalate into legal, reputational, and operational issues. Effective response requires coordination across security, legal, communications, HR, and leadership. In practice, many organizations lack clear ownership and escalation frameworks for these scenarios.
Governance remains underdeveloped. Few enterprises have defined what effective deepfake detection or mitigation looks like, making it difficult to evaluate tools or justify investment. This uncertainty contributes to a reactive posture, where action is taken only after a visible incident.

This pattern mirrors earlier phases of cybersecurity adoption. Organizations tend to delay investment until risk becomes tangible, despite clear indicators that exposure is increasing. In the case of deepfakes, this delay is particularly risky given the speed at which attack methods are evolving.
Framing the issue as a business problem is critical. Financial loss, reputational damage, and operational disruption provide clearer justification for investment than abstract technical risk. This also supports alignment across functions, ensuring that controls are embedded into workflows rather than treated as standalone solutions.
At the same time, organizations must balance security with usability. High-friction controls may reduce risk but can disrupt operations. As a result, there is growing emphasis on low-friction, background verification methods that analyze behavioral and contextual signals without requiring additional user input.
There are also trade-offs around data. Strengthening verification often requires collecting more information, which introduces new risks if that data is compromised. Enterprises must weigh the benefits of stronger identity assurance against the potential creation of additional attack surfaces.
The implications extend beyond security. Deepfakes challenge the reliability of digital evidence itself. The assumption that what can be seen or heard is inherently trustworthy no longer holds. This has consequences for customer interactions, brand integrity, and regulatory compliance.
In the near term, the priority is not perfect detection but basic preparedness. Organizations need to raise awareness, identify where deepfake risks may already exist, and implement clear validation and escalation processes. These capabilities must be embedded into business operations, not treated as isolated security measures.
Deepfakes are already present in enterprise environments, often going unnoticed. As detection improves, reported incidents will increase. The challenge is to act before that visibility arrives by building the structures, capabilities, and governance needed to operate in an environment where digital authenticity can no longer be assumed.
Watch the full webinar to hear how the panel would approach these growing challenges.