At the AI Impact Summit 2026, held in New Delhi, India from 16–20 February, Demis Hassabis, CEO of Google DeepMind, delivered a stark message to an enterprise audience: artificial intelligence is poised to usher in a “golden era” of scientific discovery — but without coordinated global guardrails, the risks could scale just as rapidly as the benefits.

Speaking to policymakers, researchers, and industry leaders, Hassabis framed the next decade as potentially the most transformative period in human history. AI systems, he argued, are on the cusp of dramatically accelerating research and development across disciplines, particularly in science and medicine. Hassabis believes that AI could function as the ultimate force multiplier for human expertise, compressing timelines for discovery and unlocking breakthroughs previously constrained by human cognitive and experimental limits within ten years.

Yet the same general-purpose capabilities that promise productivity and economic expansion also introduce new categories of systemic risk.

AGI on the Horizon — and the Governance Gap

Hassabis stated that artificial general intelligence (AGI) may be within reach in the next five years, noting that general-purpose systems are advancing “ almost week by week.” He characterized the moment as comparable to the discovery of fire — with the potential for impact 10 times that of the Industrial Revolution, unfolding at 10 times the speed.

For enterprises, the implication is not incremental transformation but structural disruption. Hassabis believes one of the largest impacts will be in cross-disciplinary competence, putting combined subject expertise at scientists’ fingertips.

However, Hassabis cautioned that the trajectory is not fully predictable. He said that the technology must be approached with humility, emphasizing that society has yet to determine how to ensure these systems benefit the whole world. Scientific methods, including measurable evaluation standards and iterative safety testing, must underpin AI governance frameworks.

Hassabis stressed that the governance challenge cannot be left solely to technologists. International dialogue and multilateral coordination are essential, particularly as digital systems and open-source models transcend national borders.

Near-Term Risks: Bio and Cyber

Among the most immediate risks, Hassabis identified biological and cybersecurity threats.

He noted that current frontier models are already highly capable in cyber domains. This creates a dual-use dynamic: AI can strengthen defensive systems but can also be leveraged to identify vulnerabilities, automate exploits, or scale malicious operations. The critical requirement, he stressed, is ensuring that AI-powered defense capabilities outpace AI-enabled attack vectors.

Organizations deploying advanced models should prioritize AI-driven cybersecurity reinforcement. This includes continuous red-teaming, adversarial testing, and investment in AI systems designed specifically for defensive monitoring and patching.

Biological risk represents another near-term concern. As AI tools increasingly support molecular modeling, protein design, and drug discovery — advances exemplified by DeepMind’s AlphaFold — they also raise questions about misuse in synthetic biology contexts. Hassabis underscored the importance of standards alignment and cross-border cooperation to manage such dual-use risks.

Open Source and the Recall Problem

Hassabis highlighted a structural challenge unique to AI: digital technologies are difficult to contain geographically, and open-source ecosystems accelerate diffusion. While open collaboration has historically driven innovation, AI introduces a new vulnerability paradigm.

If a critical flaw or exploit is discovered in an open-source AI system, there is no centralized recall mechanism. Unlike physical products, digital models cannot easily be withdrawn once distributed. Enterprises integrating open-source foundation models must consider lifecycle governance, monitoring, and patch management in ways that extend beyond traditional software update models.

Jagged Intelligence and the Human Role

Hassabis observed that current systems remain uneven in capability, demonstrating “jagged intelligence.” They can excel at structured, verifiable tasks while failing at consistency or creative reasoning.

Tasks with clear right-or-wrong answers provide clean training signals. In contrast, domains requiring judgment, creativity, or hypothesis generation lack objective datasets, making them more difficult for AI systems to master. Solving a mathematical conjecture, for example, is categorically different from evaluating a hypothesis with existing validation data. For enterprises, this reinforces the importance of human-in-the-loop architectures. He does, however, believe that these capabilities will come.

Learning-based systems will continue to improve, but Hassabis stated that programming alone is insufficient. Direct data-driven learning remains central to capability gains.

Robotics and Embodied Risk

Hassabis also projected significant advances in robotics over the next few years, including both humanoid and non-humanoid systems. While practical deployment at scale is not yet mature, the convergence of general-purpose AI and embodied systems materially increases risk exposure.

A humanoid robot running a powerful foundation model introduces physical-world consequences. Hassabis stressed that guardrails must be in place before such systems are widely deployed.

A Decade That Will Define Enterprise AI

Hassabis closed on an optimistic but conditional note. The coming decade could unlock unprecedented advances in medicine, materials science, climate research, and productivity. AI has the potential to dramatically expand the effective research time available to scientists, reshaping innovation economics.

But risk mitigation will require international standards, shared protocols, and sustained cooperation. AI’s dual-purpose nature demands governance mechanisms proportional to its power. The opportunity is historic, but so is the responsibility.


Share this post
The link has been copied!