OpenAI has rotated its macOS code-signing certificates and required updates across its desktop applications following a software supply chain incident involving the open-source developer library Axios.
The issue originated from a compromised Axios package integrated into a GitHub Actions workflow used to sign OpenAI’s macOS applications, including ChatGPT Desktop, Codex, Codex CLI, and Atlas. The response centres on securing the software distribution pipeline and preventing the potential misuse of signing credentials.
The incident occurred on March 31, when the workflow downloaded and executed a malicious version of Axios as part of a broader ecosystem attack. This workflow had access to notarization materials and signing certificates used to verify the authenticity of OpenAI’s macOS applications. While OpenAI’s investigation found no evidence that the certificate was exfiltrated or misused, the company is treating it as compromised and has revoked and replaced it as a precaution.
In response, OpenAI introduced enforced client updates and a defined deprecation timeline. All macOS users are required to upgrade to versions signed with the new certificate, while older versions will lose support and functionality after May 8, 2026. This effectively invalidates previously trusted binaries and ensures that only software signed with updated credentials can run without user overrides.
The risk scenario underpinning the response is not data exfiltration but software impersonation. If the signing certificate had been successfully extracted, an attacker could have distributed malicious applications appearing as legitimate OpenAI software, bypassing standard macOS security controls. OpenAI confirmed there is no evidence that user data, internal systems, or intellectual property were accessed or altered.
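The macOS controls in question can be inspected directly. As a hedged illustration (the application path below is an assumption, and these commands are macOS-only), a user can check which Developer ID certificate signed an installed app and whether Gatekeeper still accepts it:

```shell
# Show the signing identity of an installed app; the Authority and
# TeamIdentifier fields reveal which Developer ID certificate signed it.
# (codesign writes this output to stderr, hence the redirect.)
codesign -dv --verbose=4 /Applications/ChatGPT.app 2>&1 | grep -E 'Authority|TeamIdentifier'

# Ask Gatekeeper whether the app passes current system policy; an app
# signed with a revoked certificate fails this assessment.
spctl --assess --verbose /Applications/ChatGPT.app
```

After a certificate rotation like OpenAI's, binaries signed with the old certificate would show the superseded Authority chain and, once revocation propagates, fail the `spctl` assessment.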
The root cause was traced to a misconfigured CI/CD workflow that referenced the dependency by a floating version range rather than a pinned version or commit hash, allowing the malicious Axios update to be pulled in and executed. OpenAI reports that the workflow has since been remediated, alongside broader reviews of build-pipeline controls and notarization processes.
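The remediation pattern here is version or hash pinning: a build step records a cryptographic digest of the vetted dependency and refuses anything that does not match, so a later malicious release of the same package cannot slip into the pipeline. A minimal sketch in Python (the function names and the stand-in tarball bytes are illustrative assumptions, not OpenAI's actual tooling):

```python
import hashlib

# Digest recorded at pin time, when the dependency version was vetted.
# In a real pipeline this would live in a lockfile, not be computed inline.
PINNED_SHA256 = hashlib.sha256(b"axios-1.x vetted tarball bytes").hexdigest()


def verify_artifact(artifact: bytes, expected_sha256: str) -> bool:
    """Return True only if the downloaded artifact matches the pinned digest."""
    return hashlib.sha256(artifact).hexdigest() == expected_sha256


# The vetted artifact passes; a tampered or swapped release does not,
# which is exactly the check a floating version reference skips.
assert verify_artifact(b"axios-1.x vetted tarball bytes", PINNED_SHA256)
assert not verify_artifact(b"malicious replacement bytes", PINNED_SHA256)
```

A floating reference (e.g. "latest compatible version") trades this guarantee for convenience: the registry, not the repository, decides what the build executes.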
The incident underscores the exposure of AI platforms to conventional software supply chain risks, particularly in build and release pipelines. As AI vendors increasingly distribute desktop and developer tools, trust in code-signing infrastructure becomes a critical control point. OpenAI's response aligns with standard containment practices but also highlights the operational impact of dependency-level compromises.