Mistral: Mistral 7B Instruct v0.3
Review
Mistral 7B Instruct v0.3 adds function calling to the base 7B model but has no independent benchmark data; it is a low-cost option for very simple structured tasks but is outclassed by newer and more capable models.
Assessment date: April 16, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
A high-performing, industry-standard 7.3B-parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct v0.2, with the following changes:
- Extended vocabulary to 32,768 tokens
- Supports the v3 tokenizer
- Supports function calling (note: function-calling support depends on the provider)
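Because function-calling support varies by provider, the request itself is usually the standard OpenAI-style tool-calling shape. Here is a minimal sketch of such a payload, assuming an OpenAI-compatible endpoint such as OpenRouter's; the model slug follows OpenRouter's naming, and the `get_weather` tool is purely illustrative:

```python
import json

def build_tool_call_payload(user_message: str) -> dict:
    """Build a chat-completions payload with one tool definition."""
    return {
        # Slug as listed on OpenRouter; verify against your provider.
        "model": "mistralai/mistral-7b-instruct-v0.3",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool, for illustration only
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_tool_call_payload("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

POST this as JSON to the provider's chat-completions endpoint; providers that do not support function calling for this model may ignore or reject the `tools` field.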
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Mistral |
| Instruct Type | mistral |
| Parameters | 7B |
How does Mistral 7B Instruct v0.3 stack up?
Compare side-by-side with other legacy models.
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.20 | $0.000200 |
| Output | $0.20 | $0.000200 |
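With identical input and output rates, a request's cost is simply (input tokens + output tokens) × $0.20 / 1M. A quick sketch of the arithmetic, with made-up token counts for illustration:

```python
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.20  # USD per 1M output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost_usd(2_000, 500):.4f}")  # → $0.0005
```

At these prices, even a million such requests would cost about $500, which is why the model remains attractive for very simple, high-volume structured tasks.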
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 21, 2026 8:52 pm