Mistral: Mixtral 8x22B Instruct
Mixtral 8x22B Instruct offers a 65K-token context window and tool use support, but its benchmark scores are low and it has been outpaced by newer models at similar or lower price points. We recommend it only for legacy integrations.
Assessment date: March 12, 2026
Our methodology weighs pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Mistral's official instruct fine-tuned version of Mixtral 8x22B. As a sparse mixture-of-experts model, it activates 39B of its 141B total parameters per token, keeping inference cost well below that of a comparably sized dense model.
#moe
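The #moe tag refers to this routing scheme: each token is dispatched to a small subset of expert networks rather than the full parameter set. Mixtral 8x22B uses 8 experts with top-2 routing. Below is a minimal, illustrative top-2 gating sketch in Python with toy dimensions, not Mixtral's actual router or weights, showing why only a fraction of parameters is active per token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the real hidden sizes are far larger than this sketch.
HIDDEN, NUM_EXPERTS, TOP_K = 16, 8, 2

router = rng.normal(size=(HIDDEN, NUM_EXPERTS))           # gating weights
experts = rng.normal(size=(NUM_EXPERTS, HIDDEN, HIDDEN))  # one MLP-like matrix per expert

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-2 experts only."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]                          # the 2 highest-scoring experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over selected experts
    # Only TOP_K of NUM_EXPERTS expert matrices are touched for this token,
    # which is why active parameters are a small fraction of total parameters.
    return sum(w * (expert @ x) for w, expert in zip(weights, experts[top]))

token = rng.normal(size=HIDDEN)
print(moe_layer(token).shape)  # (16,)
```

This also explains why the "8x22B" name does not multiply out to the 141B total: attention and other non-expert layers are shared across experts, and only the feed-forward blocks are replicated, so two active experts plus the shared layers come to 39B rather than 44B.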
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Mistral |
| Instruct Type | mistral |
| Parameters | 141B total (39B active) |
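The summary above notes tool use support. Here is a hedged sketch of a tools-enabled chat completion request through the OpenRouter API, which this page cites as a data source. The model slug mistralai/mixtral-8x22b-instruct, the get_weather tool, and the OPENROUTER_API_KEY variable are assumptions for illustration; check the provider's current docs for the exact schema.

```python
import os
import requests

# Assumed model slug; verify against the live model list before relying on it.
MODEL = "mistralai/mixtral-8x22b-instruct"

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        # OpenAI-style tool definition; get_weather is a hypothetical function.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    },
    timeout=60,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
# If the model chose to call the tool, the call arrives in tool_calls.
print(message.get("tool_calls") or message["content"])
```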
Benchmark Scores
Benchmark data from Artificial Analysis and Hugging Face.
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $2.00 | $0.002000 |
| Output | $6.00 | $0.006000 |
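As a worked example of the rates above, a short sketch that converts token counts into dollar cost; the token counts are hypothetical.

```python
# Per-million-token rates from the pricing table above.
INPUT_PER_M, OUTPUT_PER_M = 2.00, 6.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Hypothetical request: 60K tokens of context in, 1K tokens out.
print(f"${request_cost(60_000, 1_000):.4f}")  # $0.1260
```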
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
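The live figures shown here come from the OpenRouter API this page cites. Below is a minimal sketch of pulling the same model metadata yourself, assuming the public /api/v1/models endpoint and the same model slug used above.

```python
import requests

# Public model list; no API key required for this endpoint.
models = requests.get("https://openrouter.ai/api/v1/models", timeout=30).json()["data"]

# Assumed slug; adjust if the listing uses a different id.
entry = next(m for m in models if m["id"] == "mistralai/mixtral-8x22b-instruct")
print(entry["context_length"], entry["pricing"])
```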
External Resources
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 13, 2026 7:52 pm