OpenAI has released GPT-5.4, introducing expanded context limits, improved reasoning performance, and new variants designed for professional and developer workflows. The model is rolling out across ChatGPT, the OpenAI API, and Codex development tools.
GPT-5.4 is positioned as a general-purpose frontier model for knowledge work, with improvements focused on complex multi-step tasks such as coding, document analysis, and automation. The update continues OpenAI’s pattern of incremental model releases aimed at improving reliability, efficiency, and enterprise usability.
Key updates include:
- Expanded context window: GPT-5.4 supports up to 1 million tokens of context, enabling analysis of large documents, datasets, and codebases in a single prompt.
- Multiple model variants:
  - GPT-5.4 Thinking is optimized for multi-step reasoning and complex problem solving.
  - GPT-5.4 Pro is designed for higher-performance workloads and production deployments.
- Improved accuracy: The model reportedly reduces factual errors by roughly 33% compared with GPT-5.2, targeting reliability improvements in professional workflows.
- Tool and system interaction: GPT-5.4 introduces expanded capabilities for tool use and computer interaction, including improved performance in benchmarks that test autonomous task execution across software environments.
- Efficiency gains: OpenAI says the model can complete comparable tasks using fewer tokens, improving cost efficiency for API users.
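To make the efficiency claim concrete, the sketch below shows how fewer tokens per task translate into lower API spend. The per-million-token prices and token counts are hypothetical placeholders for illustration, not published GPT-5.4 pricing.

```python
# Sketch: cost impact of a model completing the same task with fewer tokens.
# Prices and token counts below are hypothetical, not GPT-5.4's actual rates.

def task_cost(input_tokens: int, output_tokens: int,
              price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost of one request given per-million-token input/output prices."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# Assume a task that previously consumed 40k input / 8k output tokens
# now completes with 20% fewer tokens at the same hypothetical prices.
before = task_cost(40_000, 8_000, price_in_per_m=2.00, price_out_per_m=8.00)
after = task_cost(32_000, 6_400, price_in_per_m=2.00, price_out_per_m=8.00)
print(f"before=${before:.4f} after=${after:.4f} saving={1 - after / before:.0%}")
```

Because pricing is linear in tokens, a 20% token reduction yields a 20% cost reduction at fixed rates; the practical benefit compounds across high-volume batch workloads.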
GPT-5.4 reportedly performs more strongly on benchmarks tied to knowledge work and digital task execution. In internal evaluations of professional-style tasks, it scored higher than earlier GPT-5 series models, reflecting gains in reasoning and task planning.
Operationally, the release signals continued convergence between language models and agentic systems. GPT-5.4 is designed to work with external tools, browse information sources, and interact with software environments—capabilities that underpin emerging enterprise use cases such as automated research, software development assistance, and workflow orchestration.
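The tool-use pattern underpinning these agentic capabilities can be sketched generically: the model emits a structured tool call, the host application executes it, and the result is fed back into the conversation. The registry and tool names below are illustrative, not OpenAI's actual API surface.

```python
# Minimal sketch of a host-side tool-dispatch loop for agentic model use.
# Tool names, schemas, and the call format here are illustrative only.
import json
from typing import Callable

TOOLS: dict[str, Callable[..., object]] = {}

def tool(fn: Callable[..., object]) -> Callable[..., object]:
    """Register a host-side function the model is allowed to call."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_file(path: str) -> str:
    # Stub: a real agent would sandbox and validate filesystem access.
    return f"<contents of {path}>"

@tool
def search_docs(query: str) -> list[str]:
    # Stub: stands in for a real search backend.
    return [f"result for {query!r}"]

def dispatch(call_json: str) -> object:
    """Execute one model-emitted tool call shaped like
    {"name": ..., "arguments": {...}} and return its result."""
    call = json.loads(call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

print(dispatch('{"name": "search_docs", "arguments": {"query": "context limits"}}'))
```

In production the dispatch result would be appended to the message history and the model queried again, looping until it produces a final answer instead of another tool call.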
For developers, the large context window and improved tool integration are particularly significant. Many enterprise deployments require models to ingest large internal documents, logs, or repositories. Increasing context capacity reduces the need for retrieval pipelines or multi-stage prompting architectures, simplifying system design for some workloads.
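The design decision this enables can be sketched as a simple budget check: estimate the token footprint of a document set and, only if it exceeds the window, fall back to a retrieval or chunking pipeline. The 4-characters-per-token heuristic and the output reserve are rough working assumptions; a real system would use the model's actual tokenizer.

```python
# Sketch: choosing single-prompt ingestion vs. a retrieval pipeline.
# The chars-per-token heuristic and output reserve are rough assumptions;
# use the model's real tokenizer for accurate counts.

CONTEXT_WINDOW = 1_000_000   # tokens, per the reported GPT-5.4 limit
CHARS_PER_TOKEN = 4          # crude English-text estimate

def estimated_tokens(text: str) -> int:
    """Rough token estimate from character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(documents: list[str], reserve_for_output: int = 50_000) -> bool:
    """True if all documents plus an output reserve fit in one prompt."""
    budget = CONTEXT_WINDOW - reserve_for_output
    return sum(estimated_tokens(d) for d in documents) <= budget

small_corpus = ["short report " * 1_000] * 10   # ~130k chars, well under budget
huge_corpus = ["x" * 1_000_000] * 5             # ~5M chars, over 1M tokens

print(fits_in_context(small_corpus))  # single-prompt ingestion suffices
print(fits_in_context(huge_corpus))   # fall back to retrieval/chunking
```

Even with a 1M-token window, the fallback path matters: corpora such as full monorepos or log archives still exceed any fixed context, so the check above is a routing decision rather than a replacement for retrieval.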
At the same time, the release reflects the industry’s broader shift toward models optimized for production reliability rather than pure benchmark gains. Variants such as GPT-5.4 Pro emphasize stability and throughput for large-scale deployments, while reasoning-focused modes allow users to allocate additional compute to complex tasks.
GPT-5.4 is currently available through OpenAI’s API and development tools, and through ChatGPT for paid tiers. As enterprises expand generative AI deployments beyond experimentation into operational workflows, incremental model upgrades such as GPT-5.4 increasingly focus on scalability, long-context processing, and improved reliability rather than headline capability jumps.