OpenAI published its Outbound Coordinated Vulnerability Disclosure Policy on June 3, 2025, establishing how the company will responsibly report security issues discovered in third-party software.
OpenAI systems have already uncovered zero-day vulnerabilities in third-party and open-source software, and the policy was developed proactively in anticipation of further discoveries. The company will report vulnerabilities found through ongoing research, targeted audits of the open-source code it relies on, and automated analysis with AI tools, with the goal of disclosing them to the broader ecosystem in a cooperative, respectful, and helpful manner.
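OpenAI has not published details of its scanning tooling, so the following is only an illustrative sketch of what one narrow slice of automated code analysis can look like: a small Python script (names such as scan_repo are hypothetical) that walks a repository and flags calls to classic code-injection sinks using the standard-library ast module.

```python
# Hypothetical illustration only: OpenAI has not published its scanning
# pipeline. This sketch shows one simple form of automated analysis,
# flagging eval()/exec() calls in a Python codebase via the ast module.
import ast
import pathlib

RISKY_CALLS = {"eval", "exec"}  # classic code-injection sinks

def scan_file(path: pathlib.Path) -> list[str]:
    """Return human-readable findings for one Python source file."""
    findings = []
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except (SyntaxError, UnicodeDecodeError):
        return findings  # skip files that don't parse cleanly
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(f"{path}:{node.lineno}: call to {node.func.id}()")
    return findings

def scan_repo(root: str) -> list[str]:
    """Scan every .py file under root and aggregate the findings."""
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        findings.extend(scan_file(path))
    return findings

if __name__ == "__main__":
    for finding in scan_repo("."):
        print(finding)
```

A real pipeline would pair shallow pattern checks like this with far deeper AI-driven reasoning about code, but the basic shape, scan, validate, then report, is the same.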
The policy covers issues found in open-source and commercial software through automated and manual code review, as well as discoveries arising from internal use of third-party software and systems. It explains validation and prioritisation processes, vendor contact procedures, disclosure mechanics, and public disclosure timing, with non-public disclosure as the default unless circumstances demand otherwise.
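The policy does not publish an internal data model, so purely as a hedged sketch of how such a validation-and-prioritisation workflow could be tracked, the record below encodes the defaults described above: findings start non-public, and no fixed disclosure deadline is assumed. All field names and the severity scale are illustrative assumptions, not part of OpenAI's published policy.

```python
# Hypothetical sketch of a disclosure-tracking record; field names and the
# severity scale are assumptions, not part of OpenAI's published policy.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class DisclosureState(Enum):
    VALIDATING = "validating"            # reproducing and confirming impact
    VENDOR_NOTIFIED = "vendor_notified"
    PATCH_AVAILABLE = "patch_available"
    PUBLIC = "public"                    # only after coordination, or when
                                         # the public interest demands it

@dataclass
class Finding:
    component: str                       # affected project or product
    summary: str
    severity: float                      # e.g. a CVSS-style 0.0-10.0 score
    vendor_contact: str | None = None
    reported_on: date | None = None
    # Non-public by default, mirroring the policy's stated default.
    state: DisclosureState = DisclosureState.VALIDATING
    # Open-ended by default: no fixed disclosure deadline is assumed.
    disclosure_deadline: date | None = None

def prioritise(findings: list[Finding]) -> list[Finding]:
    """Order validated findings by severity, highest impact first."""
    return sorted(findings, key=lambda f: f.severity, reverse=True)
```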
OpenAI's principles include being impact-oriented, cooperative, discreet by default, high-scale and low-friction, and providing attribution when relevant. The company takes an intentionally developer-friendly stance on disclosure timelines, leaving them open-ended by default to reflect the evolving nature of vulnerability discovery as AI systems become more effective at reasoning about code and generating reliable patches.
The approach anticipates AI models detecting bugs in greater numbers and of greater complexity, potentially requiring deeper collaboration and more time for sustainable resolution. OpenAI will continue working with software maintainers to develop disclosure norms, balancing urgency with long-term resilience while reserving the right to disclose when the public interest demands it.
The policy establishes outbounddisclosures@openai.com as the contact point for questions about its disclosure practices. OpenAI emphasises that security is a continuous improvement journey, expresses gratitude to vendors, researchers, and community members, and hopes that transparent communication will support a healthier, more secure ecosystem.
The policy addresses the growing intersection of AI capabilities and cybersecurity, with automated vulnerability discovery potentially transforming software security practices. OpenAI's approach balances responsible disclosure with the reality that AI systems may discover vulnerabilities at unprecedented scale and complexity, requiring new collaboration frameworks with software vendors.
OpenAI's coordinated disclosure policy positions the company as a responsible leader in AI-assisted cybersecurity, while addressing potential liability concerns from automated vulnerability discovery. The framework establishes precedent for how AI companies should handle security discoveries, potentially influencing industry standards. The policy's emphasis on cooperation and developer-friendly timelines aims to maintain positive relationships with software vendors while demonstrating OpenAI's commitment to ecosystem security.