Qwen: QwQ 32B
Qwen QwQ 32B is a strong-value reasoning model with notably high coding benchmark scores and solid MMLU-Pro performance, available at very competitive pricing. Its limited 32K context window and modest agentic scores hold it back from broader business use, but it's a compelling option for cost-conscious coding and analytical tasks.
Assessment date: March 14, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ is capable of thinking and reasoning, which yields significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model in the family and achieves competitive performance against state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Qwen |
| Instruct Type | qwq |
| Parameters | 32B |
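Given the 32K context window noted in the overview, a minimal sketch of budgeting prompt size before sending a request. This assumes 32K means 32,768 tokens and uses a crude 4-characters-per-token heuristic for English text, not the actual Qwen tokenizer; the headroom value is illustrative.

```python
# Rough check that a prompt fits within a 32K-token context window,
# reserving headroom for the model's reasoning/output tokens.
CONTEXT_WINDOW = 32_768   # assumes "32K" = 32,768 tokens
CHARS_PER_TOKEN = 4       # crude heuristic, not the Qwen tokenizer

def fits_context(prompt: str, reserve_for_output: int = 8_192) -> bool:
    """Estimate whether prompt + reserved output tokens fit the window."""
    est_prompt_tokens = len(prompt) // CHARS_PER_TOKEN + 1
    return est_prompt_tokens + reserve_for_output <= CONTEXT_WINDOW

print(fits_context("Summarize this report."))  # short prompts fit easily
```

For reasoning models, reserving generous output headroom matters more than usual, since the thinking trace consumes output tokens before the final answer.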
Benchmark Scores
Evaluations
Benchmark data from Artificial Analysis and Hugging Face
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.15 | $0.000150 |
| Output | $0.40 | $0.000400 |
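The per-token prices above can be turned into a per-request cost estimate. A minimal sketch, using the listed USD-per-1M-token rates; the token counts in the example are illustrative, not measured usage.

```python
# Estimate request cost from the per-1M-token prices in the table above.
PRICE_PER_M = {"input": 0.15, "output": 0.40}  # USD per 1M tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * PRICE_PER_M["input"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000

# Example: a 4,000-token prompt with a 1,500-token completion.
print(f"${request_cost(4_000, 1_500):.6f}")  # → $0.001200
```

Note that for a reasoning model the output side dominates: thinking tokens are billed as output, so long reasoning traces can make real costs several times higher than the prompt size alone suggests.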
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
External Resources
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm