Qwen: Qwen-Max
Qwen-Max offers tool use, function calling, and a competitive price point, but without benchmark data its capabilities cannot be objectively assessed. Businesses should consider newer, well-benchmarked Qwen models that offer verified performance alongside similar pricing.
Assessment date: March 14, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Qwen-Max, based on Qwen2.5, delivers the strongest inference performance among Qwen models, especially on complex multi-step tasks. It is a large-scale Mixture-of-Experts (MoE) model, pretrained on over 20 trillion tokens and post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). The parameter count has not been disclosed.
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Qwen |
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $1.04 | $0.001040 |
| Output | $4.16 | $0.004160 |
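Given the per-million-token rates above, the cost of a single request can be estimated with a short calculation. This is a minimal sketch; the function name and the example token counts are illustrative, not part of any official API.

```python
# Estimate the USD cost of one Qwen-Max request from the listed pricing.
INPUT_RATE_PER_M = 1.04   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 4.16  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # → $0.004160
```

Note that output tokens cost four times as much as input tokens, so completion length dominates the bill for generation-heavy workloads.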
External Resources
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm