Qwen: Qwen3 235B A22B
Qwen3 235B A22B is the flagship of the Qwen3 family, offering the strongest reasoning and coding performance in the range alongside a context window of up to 131K tokens and tool-use support at a reasonable price. It competes in the lower-mid tier of the overall landscape and suits businesses seeking capable open-weight models at accessible cost, though Western API availability may be more limited than with major providers.
Assessment date: March 12, 2026
Our methodology takes into account a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Qwen3-235B-A22B is a 235B-parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for efficient general conversation. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction following, and agentic tool-calling capabilities. It natively handles a 32K-token context window and extends up to 131K tokens using YaRN-based scaling.
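The extended context follows directly from the native window: YaRN stretches rotary position embeddings by the ratio of the target to the native length. A minimal sketch of that calculation, where the config dict shape mirrors the `rope_scaling` entries used by common serving stacks (field names here are illustrative, not authoritative):

```python
# Sketch: computing the YaRN scaling factor for extending Qwen3's
# native 32,768-token context to 131,072 tokens. The returned dict
# imitates the rope_scaling config shape used by popular serving
# frameworks; treat the exact field names as an assumption.

NATIVE_CONTEXT = 32_768
EXTENDED_CONTEXT = 131_072

def yarn_rope_scaling(native: int, target: int) -> dict:
    """Build a YaRN rope-scaling config for a given context extension."""
    return {
        "rope_type": "yarn",
        "factor": target / native,  # 4.0 for 32K -> 131K
        "original_max_position_embeddings": native,
    }

config = yarn_rope_scaling(NATIVE_CONTEXT, EXTENDED_CONTEXT)
print(config["factor"])  # -> 4.0
```

Because the factor is a plain ratio, the same helper covers any other extension target a deployment might choose.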
Capabilities
Architecture
| Property | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Qwen3 |
| Instruct Type | qwen3 |
| Parameters | 235B (22B active) |
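The gap between the 235B total and the 22B active parameters (the "A22B" suffix) is what makes the MoE design economical: per-token compute tracks the active count, not the total. A quick generic illustration of that ratio, not Qwen's internal routing code:

```python
# Sketch: relating total vs. active parameters in an MoE model such as
# Qwen3-235B-A22B. Only ~22B of the 235B parameters participate in any
# single forward pass, so per-token compute scales with the active set.

TOTAL_PARAMS = 235e9
ACTIVE_PARAMS = 22e9

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"{active_fraction:.1%} of parameters active per token")  # -> 9.4%
```

In other words, per-token compute is roughly that of a ~22B dense model, while the full 235B must still fit in (or be paged into) memory.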
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Evaluations
Benchmark data from Artificial Analysis and Hugging Face
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.46 | $0.000460 |
| Output | $1.82 | $0.001820 |
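The per-1K figures are just the per-1M rates divided by 1,000, and a request's cost is the token counts weighted by those rates. A small helper (function name and example token counts are hypothetical) using the listed prices:

```python
# Sketch: estimating per-request cost from the per-1M-token rates in
# the pricing table above. Helper name and token counts are illustrative.

INPUT_PER_M = 0.46    # USD per 1M input tokens
OUTPUT_PER_M = 1.82   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
cost = request_cost(10_000, 2_000)
print(f"${cost:.4f}")  # -> $0.0082
```

Note the roughly 4x premium on output tokens, which dominates cost for long "thinking"-mode completions.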
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 13, 2026 7:52 pm