Mistral Releases Mistral Small 4, Claiming Significant Performance Jump
Mistral AI has released Mistral Small 4, a new model in its commercial API lineup. The announcement, made via a social media post from a company representative, describes the release as a "big jump" in performance over Mistral's previous models in the same tier.
The model is now available through Mistral's API platform. The company's model lineup is structured into tiers: Mistral Tiny, Mistral Small, Mistral Medium, and Mistral Large. The 'Small' tier is positioned as a cost-effective, general-purpose model suitable for a wide range of tasks.
What's New
The core claim from Mistral is that Mistral Small 4 delivers a substantial performance improvement over the previous model in the Small series. The announcement did not specify which previous model it is being compared against (e.g., Mistral Small 3 or an earlier iteration), nor did it provide quantitative benchmark results to define the "big jump."
Technical Details & Availability
- Model Tier: Small
- Version: 4
- Availability: Accessible immediately via the Mistral AI API.
- Pricing: As of the announcement, pricing details for Mistral Small 4 have not been separately disclosed. It is expected to follow the existing pricing structure for the Small tier, which is currently $0.20 per 1M input tokens and $0.60 per 1M output tokens.
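Since the model is live on Mistral's API, it can be exercised through the OpenAI-style chat-completions endpoint Mistral exposes. The sketch below builds a request payload and sends it only if an API key is present; note that the model identifier `mistral-small-4` is an assumption, since Mistral has not yet published the exact API name for this release.

```python
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"
MODEL = "mistral-small-4"  # hypothetical identifier; actual name not yet confirmed

def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for Mistral's API."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_request("Summarize the difference between TCP and UDP.")

api_key = os.environ.get("MISTRAL_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
else:
    # No key set: just show the payload that would be sent.
    print(json.dumps(payload, indent=2))
```

Swapping the tier is a one-line change to `MODEL`, which makes side-by-side comparisons against the rest of Mistral's lineup straightforward.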
Context
This release follows a pattern of rapid iteration from Mistral AI. The company has previously released models like Mistral 7B, Mixtral 8x7B, and the proprietary Mistral Large. The 'Small' tier models are designed to offer a strong balance of capability and cost, competing in a crowded market segment that includes offerings like OpenAI's GPT-3.5 Turbo and Anthropic's Claude 3 Haiku.
The lack of immediate, detailed benchmarks is not uncommon for initial API rollouts from commercial AI labs, which often prioritize developer access and real-world testing before publishing comprehensive evaluations.
What to Watch
Practitioners should monitor for the release of official benchmark data from Mistral AI, which will be necessary to validate the performance claims and accurately position Mistral Small 4 against its direct competitors. Key benchmarks to look for include MMLU (general knowledge), GSM8K (math), HumanEval (coding), and MT-Bench (chat).
The performance-per-dollar ratio will be a critical factor for its adoption, given the competitive pricing pressure in the small-to-medium model segment. Developers are advised to run their own task-specific evaluations to determine if the claimed improvements translate to their specific use cases.
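To put the performance-per-dollar question in concrete terms, the sketch below estimates evaluation cost at the Small-tier list prices quoted earlier ($0.20/$0.60 per 1M input/output tokens). Whether Mistral Small 4 keeps those rates is an assumption, and the token counts are illustrative.

```python
# Small-tier list prices quoted in the article; Mistral Small 4 keeping
# them is an assumption, not a confirmed fact.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the quoted rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Illustrative eval: 1,000 questions, ~500 input and ~200 output tokens each.
per_request = request_cost(500, 200)
print(f"per request: ${per_request:.6f}")                 # $0.000220
print(f"1,000-question eval: ${1000 * per_request:.2f}")  # $0.22
```

At these rates a full task-specific eval pass costs well under a dollar, so there is little reason not to re-run it whenever official benchmarks or new competitor models appear.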