Mistral: Mistral 7B Instruct v0.1
Mistral 7B Instruct v0.1 is an early-generation model released in 2023. It has no benchmark data in our tracker and a very small context window of under 3K tokens; it has been superseded by newer, more capable Mistral models and is not recommended for business use.
Assessment date: March 14, 2026
Our methodology weighs a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Mistral |
| Instruct Type | mistral |
| Parameters | 7B |
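The `mistral` instruct type refers to the `[INST] ... [/INST]` prompt template used by Mistral's instruct models. A minimal sketch of that formatting is shown below; the helper function and message shape are illustrative, not an official API, so verify against Mistral's chat template before relying on it.

```python
def build_mistral_prompt(messages):
    """Format alternating user/assistant messages into the Mistral
    instruct template. `messages` is a list of dicts with 'role' and
    'content' keys (hypothetical shape, mirroring common chat APIs)."""
    prompt = "<s>"  # BOS token opens the conversation
    for m in messages:
        if m["role"] == "user":
            # Each user turn is wrapped in [INST] ... [/INST]
            prompt += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            # Completed assistant turns are closed with EOS
            prompt += f" {m['content']}</s>"
    return prompt
```

For example, a single user turn yields `"<s>[INST] Hello [/INST]"`, which the model then completes. Hosted endpoints such as OpenRouter apply this template server-side, so manual formatting is only needed when running the raw model yourself.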
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.11 | $0.000110 |
| Output | $0.19 | $0.000190 |
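Using the per-million rates from the table above, the cost of a single request can be estimated as follows (a sketch; the rates are copied from the pricing table and the function name is illustrative):

```python
# Rates per 1M tokens, from the pricing table above
INPUT_PER_M = 0.11
OUTPUT_PER_M = 0.19

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M
```

A request with 2,000 input tokens and 500 output tokens would cost roughly $0.0003 at these rates, i.e. about 3,000 such requests per dollar.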
External Resources
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm