Qwen: Qwen3 235B A22B
Analysis Summary
Qwen: Qwen3 235B A22B sits in the Efficient tier on our leaderboard, ranked #186 of 544 published models on overall intelligence. At $0.455 input and $1.82 output per 1M tokens, it is competitively priced for a model of its size. It offers a 131K-token context window and supports tool use, function calling, and reasoning.
Editorial notes
Qwen3 235B A22B is a large MoE model with tool use and a 131K context at moderate pricing, but its intelligence and coding indices are low relative to its size, and its ranking carries a regional accessibility penalty.
Assessed May 5, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and coding, and a "non-thinking" mode for efficient general-purpose dialogue.
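Qwen's published usage notes describe a soft-switch convention for multi-turn chats: appending `/think` or `/no_think` to a user message toggles the reasoning mode for that turn. A minimal sketch of that convention (the helper name is ours, not part of any official SDK):

```python
def tag_message(content: str, thinking: bool) -> dict:
    """Append Qwen3's soft-switch tag to a user message.

    `/think` requests the reasoning ("thinking") mode for this turn;
    `/no_think` requests the fast, non-reasoning mode.
    """
    tag = "/think" if thinking else "/no_think"
    return {"role": "user", "content": f"{content} {tag}"}

msg = tag_message("How many primes are below 100?", thinking=True)
print(msg["content"])  # → "How many primes are below 100? /think"
```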
Capabilities
Architecture Detail
| Field | Value |
|---|---|
| Instruct Type | qwen3 |
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Intelligence
Technical
Content
Benchmark data from Artificial Analysis and Hugging Face
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | qwen/qwen3-235b-a22b |
| Provider | qwen |
| Release Date | April 28, 2025 |
| Context Length | 131,072 tokens |
| Max Completion | 8,192 tokens |
| Status | Active |
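Given the OpenRouter ID above, a chat request is an OpenAI-style JSON POST to OpenRouter's `/api/v1/chat/completions` endpoint. A minimal sketch that only builds the payload (the prompt is a placeholder and no network request is made):

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "qwen/qwen3-235b-a22b"  # OpenRouter ID from the table above

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat payload for this model.

    max_tokens is clamped to the model's 8,192-token completion limit.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, 8192),
    }

payload = build_request("Summarize MoE routing in two sentences.")
print(json.dumps(payload, indent=2))
```

Sending it requires only an `Authorization: Bearer <key>` header with your OpenRouter API key.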
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.455 | $0.000455 |
| Output | $1.82 | $0.001820 |
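Per-request cost follows directly from the table: input tokens at $0.455 per 1M plus output tokens at $1.82 per 1M. A quick sketch:

```python
INPUT_PER_M = 0.455   # USD per 1M input tokens
OUTPUT_PER_M = 1.82   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at this model's list prices."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion:
cost = request_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0082
```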
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
External Resources
Explore Related Models
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: May 5, 2026 11:06 am