Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a capable mixture-of-experts model from Mistral with tool use and function calling support, though its intelligence benchmarks are modest by today's standards. At a very competitive price point it can handle lighter content and instruction-following tasks, but has been largely superseded by newer models.
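Most hosts expose Mixtral's function calling through an OpenAI-compatible `tools` field. The sketch below only builds such a request payload; the tool name, schema, and model slug are illustrative assumptions, and the actual HTTP call to a provider is omitted.

```python
# Hypothetical weather tool declared in the OpenAI-compatible "tools"
# format that most Mixtral hosts accept. This only constructs the
# request payload; no endpoint is contacted.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "mistralai/mixtral-8x7b-instruct",  # provider-dependent slug
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
```

If the model decides to use the tool, the response carries a `tool_calls` entry with the function name and JSON arguments, which the client executes and feeds back as a `tool` role message.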
Assessment date: March 14, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Mixtral 8x7B Instruct is a pretrained generative sparse Mixture-of-Experts model by Mistral AI, fine-tuned by Mistral for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of roughly 47 billion parameters, of which about 13 billion are active per token. #moe
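Sparse MoE means a router selects a small subset of the 8 experts for each token rather than running all of them; Mixtral uses top-2 routing, which is why only a fraction of the total parameters are active per token. A minimal sketch of that gating step, with made-up router logits:

```python
import math

NUM_EXPERTS = 8  # Mixtral 8x7B routes among 8 feed-forward experts
TOP_K = 2        # each token is processed by its top-2 experts

def top2_route(logits):
    """Pick the two highest-scoring experts and renormalize their
    gate weights with a softmax over just those two."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:TOP_K]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# Hypothetical router logits for one token, one per expert:
routes = top2_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3])
# experts 1 and 4 win; their gate weights sum to 1
```

The token's output is then the weighted sum of the two selected experts' outputs, so compute scales with the active parameters, not the full 47B.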
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Mistral |
| Instruct Type | mistral |
| Parameters | 47B total (~13B active per token) |
Benchmark Scores
Benchmark data from Artificial Analysis and Hugging Face
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.54 | $0.000540 |
| Output | $0.54 | $0.000540 |
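At the flat rates in the table above, per-request cost is a single multiplication, since input and output tokens are priced identically. A quick sketch:

```python
# Both input and output tokens are priced at $0.54 per 1M tokens
# (the rates listed in the pricing table).
PRICE_PER_TOKEN = 0.54 / 1_000_000  # USD

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens + output_tokens) * PRICE_PER_TOKEN

# e.g. a 10,000-token prompt with a 2,000-token completion:
print(f"${request_cost(10_000, 2_000):.5f}")  # $0.00648
```

At these rates, a million tokens round-trip (500K in, 500K out) costs $0.54, which is what makes the model attractive for high-volume, lighter-weight tasks.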
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm