WizardLM-2 8x22B
Review
WizardLM-2 8x22B from Microsoft has no benchmark data available and has been largely superseded by more capable open-source alternatives; it may suit experimental use but cannot be recommended for business-critical applications.
Assessment date: April 4, 2026
Our methodology weighs a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. According to Microsoft's release announcement, it demonstrates highly competitive performance against leading proprietary models and consistently outperforms existing state-of-the-art open-source models, though independent benchmark data is not available. It is an instruction-tuned fine-tune of Mixtral 8x22B, a mixture-of-experts (MoE) model.
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Mistral |
| Instruct Type | Vicuna |
| Parameters | 8×22B (MoE; 22B per expert) |
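The Vicuna instruct type listed above uses a simple USER/ASSISTANT turn convention. A minimal sketch of assembling such a prompt (the system message text and helper name are illustrative, not part of any official API):

```python
def build_vicuna_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble a Vicuna-style prompt: system line, prior USER/ASSISTANT
    turns (each closed with an end-of-sequence marker), then the new turn."""
    parts = [system]
    for user, assistant in turns:
        parts.append(f"USER: {user} ASSISTANT: {assistant}</s>")
    # The final turn ends with "ASSISTANT:" so the model continues from there.
    parts.append(f"USER: {user_msg} ASSISTANT:")
    return " ".join(parts)

prompt = build_vicuna_prompt(
    "A chat between a curious user and an artificial intelligence assistant.",
    [],
    "What is a mixture-of-experts model?",
)
```

Multi-turn conversations are flattened into one string this way, with each completed assistant turn terminated before the next user turn begins.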
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.62 | $0.000620 |
| Output | $0.62 | $0.000620 |
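At the listed rates, the cost of a single request can be estimated directly from its token counts. A quick sketch using the table's per-million prices (rates copied from the pricing table above):

```python
INPUT_PER_M = 0.62   # USD per 1M input tokens, from the pricing table
OUTPUT_PER_M = 0.62  # USD per 1M output tokens, from the pricing table

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a 2,000-token prompt with a 500-token reply:
cost = request_cost(2000, 500)  # 0.00155 USD
```

Because input and output are priced identically here, the estimate reduces to total tokens times $0.62 per million.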
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
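The live metrics are sourced from OpenRouter, whose API accepts OpenAI-compatible chat-completion payloads. A sketch of constructing such a request body (the model id shown is an assumption based on OpenRouter's naming convention; no network call is made):

```python
import json

# Hypothetical request payload for OpenRouter's OpenAI-compatible
# chat-completions endpoint; the model id is assumed, not verified here.
payload = {
    "model": "microsoft/wizardlm-2-8x22b",
    "messages": [{"role": "user", "content": "Summarize mixture-of-experts routing."}],
    "max_tokens": 256,
}
body = json.dumps(payload)
```

Sending `body` to the chat-completions endpoint with an API key would return a standard OpenAI-style completion object.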
External Resources
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 4, 2026 8:54 pm