What Happened
On May 31, 2025, Mistral AI removed three of its previously available models from the Hugging Face Hub. The removals were first reported on social media and are reflected in the commit history of Mistral's official Hugging Face organization. The affected models are:
- Magistral: A model specialized for reasoning tasks.
- Pixtral: A multimodal model capable of processing both text and images.
- Devst: A model whose specific function was not widely detailed prior to its removal.
The models' pages on Hugging Face now return a "404 - This model is unavailable" error. As of this reporting, Mistral AI has not issued a public statement, blog post, or changelog entry explaining the removal.
Context
Mistral AI has built a reputation for its open-weight strategy, frequently releasing model weights and technical details to the community. This move is atypical. The company's flagship models, like the recent Mistral Large 2 and its smaller variants, remain available. The deletion appears targeted at these three specific, less-documented offerings.
For developers, the immediate impact is the loss of access to these model files. Any project or application that loads weights through the mistralai/Magistral, mistralai/Pixtral, or mistralai/Devst identifiers, whether via the Hugging Face transformers library or the Hub API, will now fail.
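For teams affected by a removal like this, one defensive pattern is to try the Hub identifier first and fall back to a locally mirrored copy of the weights. A minimal sketch, with the loader passed in as a parameter (in real code this would be something like transformers' AutoModel.from_pretrained, which raises OSError when a Hub repo returns 404); the function name and mirror-path convention are illustrative, not from the article:

```python
from pathlib import Path


def load_with_fallback(repo_id: str, local_mirror: str, loader):
    """Try loading from the Hub; if the repo is gone, use a local mirror.

    `loader` is any callable that accepts a repo id or a local path,
    e.g. transformers.AutoModel.from_pretrained in a real project.
    """
    try:
        return loader(repo_id)
    except OSError:
        # transformers raises OSError when a Hub repo no longer exists
        mirror = Path(local_mirror)
        if mirror.is_dir():
            return loader(str(mirror))
        raise  # no mirror available: surface the original failure
```

The injectable loader keeps the fallback logic testable without network access; production code would simply pass the real from_pretrained function.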
What It Means
The unannounced removal of models from a primary distribution channel is a significant operational shift for a company like Mistral. It suggests one of several possible strategic decisions:
- Technical Deprecation: The models may have been superseded by internal developments or were found to have critical flaws, prompting a swift retraction.
- Product Consolidation: Mistral might be streamlining its public-facing model portfolio to reduce support complexity and focus developer attention on its core, flagship product lines.
- Commercial Repositioning: There is speculation that these models, particularly a reasoning model like Magistral, could be retooled and relaunched as part of a future, potentially closed-source or commercially licensed offering.
Without an official statement, the reasoning remains speculative. However, the action underscores the inherent instability of relying on specific model artifacts from AI companies, even those with strong open-source leanings. It serves as a practical reminder for engineering teams to implement robust model versioning and artifact storage if a particular model checkpoint is critical to a workflow.
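One concrete form of that versioning discipline is recording a checksum for each critical checkpoint when it is mirrored, then verifying the stored artifact before every run. A minimal sketch, assuming checkpoints are kept as local files (the function names are illustrative):

```python
import hashlib
from pathlib import Path


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, expected_digest: str) -> bool:
    """Return True only if the stored checkpoint matches the pinned digest."""
    return Path(path).is_file() and sha256_of(path) == expected_digest
```

Pinning digests alongside paths (for example, in a small manifest checked into version control) makes silent upstream removals or substitutions fail loudly at load time instead of at inference time.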