Mistral AI has released Mistral Medium 3, positioning the model as delivering state-of-the-art performance at 8X lower cost, with simplified enterprise deployment. The company states the model "performs at or above 90% of Claude Sonnet 3.7 on benchmarks across the board" while costing significantly less, at $0.40 per million input tokens and $2 per million output tokens.
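At those rates, per-request cost is straightforward arithmetic. As a minimal sketch (the token counts below are hypothetical, not from the announcement):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = 0.40, out_rate: float = 2.00) -> float:
    """Cost in USD at the quoted per-million-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Hypothetical workload: 100k input tokens, 20k output tokens.
print(round(request_cost(100_000, 20_000), 2))  # → 0.08
```

So a fairly large request costs on the order of cents, which is the basis of the cost-efficiency claim.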

The model surpasses leading open models, including Llama 4 Maverick, and enterprise models such as Cohere Command A on performance benchmarks. According to the company's evaluation pipeline, Mistral Medium 3 also beats cost leaders such as DeepSeek v3 in both API and self-deployed configurations. The model can be deployed in any cloud environment, including self-hosted setups running on four or more GPUs.

Mistral Medium 3 is particularly strong in coding and STEM tasks, approaching the performance of what the company describes as "very large and much slower competitors." The model targets professional use cases with capabilities including hybrid or on-premises deployment, custom post-training, and integration into enterprise tools and systems.

Beta customers across financial services, energy, and healthcare sectors are currently using the model to enhance customer service with deep context, personalise business processes, and analyse complex datasets. Mistral's applied AI solutions enable the model to be continuously pretrained, fully fine-tuned, and integrated into enterprise knowledge bases for domain-specific training and adaptive workflows.

The Mistral Medium 3 API launched on Mistral's La Plateforme and Amazon SageMaker, with availability planned for IBM watsonx, NVIDIA NIM, Azure AI Foundry, and Google Cloud Vertex AI. The company indicated that, following the releases of Mistral Small in March and Mistral Medium today, it is developing a "large" model for release "over the next few weeks."
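For a sense of what calling the hosted API looks like, here is a sketch against Mistral's chat-completions endpoint; the model identifier `mistral-medium-latest` is an assumption and should be checked against the current model list:

```python
import json
import os
import urllib.request

# Mistral's OpenAI-style chat-completions endpoint on La Plateforme.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_payload(prompt: str, model: str = "mistral-medium-latest") -> dict:
    """Build a chat-completions request body; the model id is an assumption."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload: dict, api_key: str) -> dict:
    """POST the payload; requires an API key from La Plateforme."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_payload("Summarise our open support tickets.")
    key = os.environ.get("MISTRAL_API_KEY")
    if key:  # only make the network call when a key is configured
        print(send(payload, key)["choices"][0]["message"]["content"])
```

The same request shape should work against SageMaker or the other planned platforms, with only the endpoint and authentication changing.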

The model addresses enterprise requirements for balancing performance with cost-effectiveness while enabling comprehensive integration into existing business systems. Custom deployment options include in-VPC configurations and post-training customisation for specific organisational needs.

Mistral Medium 3's combination of frontier-class performance at reduced operational costs positions the company to compete directly with established enterprise AI providers. The model's ability to operate in self-hosted environments with minimal hardware requirements removes deployment barriers for organisations requiring on-premises AI capabilities.
