Perplexity has launched the Secure Intelligence Institute (SII), a dedicated research center focused on advancing AI security, privacy, and trust in frontier intelligence systems.
The launch positions security as a primary operational constraint in the deployment of general-purpose AI systems. As models evolve from passive tools into autonomous agents capable of executing multi-step tasks, interacting with external systems, and operating in open environments, the attack surface expands materially. SII is intended to address these risks through integrated research spanning authentication, adversarial testing, robust machine learning, and defense of agentic architectures.
The institute’s core function is to develop and apply security research directly to production AI systems, particularly agentic and multimodal models. It builds on existing key collaborations, including external security firm Trail of Bits, academic research groups across cryptography and machine learning, and policy engagement with the National Institute of Standards and Technology (NIST).
Rather than operating as a standalone academic initiative, SII consolidates existing security work already embedded in Perplexity’s product stack. This includes a pre-release security audit of its Comet AI-native browser conducted with Trail of Bits, incorporating threat modeling and adversarial testing, as well as the deployment of a defense-in-depth architecture designed for open-world usage.
The institute also builds on BrowseSafe, an open-source benchmark and detection model designed to evaluate AI system vulnerabilities across more than 14,700 real-world attack scenarios. This dataset reflects a shift toward empirical security validation for AI systems, particularly those operating as autonomous agents exposed to untrusted inputs.
SII’s remit extends into policy alignment and standards development. Perplexity has already contributed to NIST’s request for information on securing autonomous AI agents, indicating an intent to influence emerging regulatory frameworks while aligning internal system design with anticipated compliance requirements.
Leadership of the institute has been assigned to Dr Ninghui Li, a Purdue University computer science professor with a background in security and privacy research. His appointment signals an emphasis on formal methods and academically grounded approaches to securing production AI systems, particularly in areas such as usable security and trustworthy machine learning.
Strategically, the launch reflects a broader industry transition from model capability scaling to system-level reliability and security engineering. For enterprise adoption, this shift is critical: agentic AI systems introduce new classes of failure modes, including indirect prompt injection, tool misuse, and cross-system privilege escalation. Addressing these risks requires continuous validation, standardized benchmarks, and integration of security controls at the architecture level.