Mistral: Mistral 7B Instruct v0.2
Mistral 7B Instruct v0.2 is a compact 2023-era model with no benchmark data available in this listing, offering a 32K context window at very low cost. It has been superseded by newer Mistral releases and is not recommended for demanding business tasks.
Assessment date: March 14, 2026
Our methodology weighs pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
A high-performing, industry-standard 7.3B-parameter model, optimized for speed and context length. It is an improved version of Mistral 7B Instruct v0.1, with the following changes:
- 32K context window (vs. 8K in v0.1)
- Rope-theta = 1e6
- No sliding-window attention
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Mistral |
| Instruct Type | mistral |
| Parameters | 7B |
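Given the specs above, a request to this model can be sketched as an OpenAI-compatible chat-completion payload. This is a minimal illustration, not official usage: the model slug `mistralai/mistral-7b-instruct-v0.2` follows OpenRouter's usual vendor/model naming convention, and the prompt text is a placeholder.

```python
import json

# Minimal sketch of an OpenAI-compatible chat-completion payload for this
# model via OpenRouter. The slug below is an assumption based on
# OpenRouter's usual "vendor/model" convention.
payload = {
    "model": "mistralai/mistral-7b-instruct-v0.2",
    "messages": [
        {"role": "user", "content": "Summarize this quarter's sales notes."}
    ],
    # Keep prompt + completion within the model's 32K-token context window.
    "max_tokens": 512,
}

# Serialized request body, ready to POST to a chat-completions endpoint.
body = json.dumps(payload)
```

The instruct-tuned chat template is applied server-side, so only plain `messages` are needed in the payload.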
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.20 | $0.000200 |
| Output | $0.20 | $0.000200 |
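Since input and output share the same flat rate, per-request cost is a single multiplication. A small sketch using the listed rates ($0.20 per 1M tokens, i.e. $0.000200 per 1K tokens); the function name is illustrative, not part of any API:

```python
# Listed rates: $0.20 per 1M tokens for both input and output.
INPUT_RATE_PER_M = 0.20
OUTPUT_RATE_PER_M = 0.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-token rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# A prompt filling the full 32K context plus a 1K-token reply:
# 0.20 * 33_000 / 1_000_000 = $0.0066
print(request_cost(32_000, 1_000))
```

At these rates, even a maximal 32K-token prompt costs well under a cent, which is consistent with the "very low cost" characterization above.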
External Resources
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 14, 2026 7:52 pm