Qwen: Qwen3.5-35B-A3B
Qwen3.5-35B-A3B is a strong mid-size mixture-of-experts (MoE) model with excellent agentic scores, multimodal support, and a 262K context window at very competitive pricing. It is well suited to agentic workflows, though the -4 regional accessibility adjustment applies given its provider's limited Western business adoption.
Assessment date: April 4, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
Qwen3.5-35B-A3B is a native vision-language model built on a hybrid architecture that combines linear attention with a sparse mixture-of-experts (MoE) design, improving inference efficiency. Its overall performance is comparable to that of Qwen3.5-27B.
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text + Image + Video → Text |
| Tokenizer | Qwen3 |
| Parameters | 35B total, ~3B active per token (A3B) |
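The sparse-MoE design described above can be sketched with a toy top-k router: a gate scores every expert, only the k best run, and their outputs are combined. All sizes, weights, and the gating scheme here are illustrative assumptions, not Qwen's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 16, 2  # toy sizes, not the real config

# Each "expert" is a tiny linear layer; the router scores experts per token.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """Route a single token vector through its top-k experts."""
    logits = x @ router                      # one score per expert
    top = np.argsort(logits)[-top_k:]        # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only the chosen experts execute; this is why a 35B-parameter model
    # can activate only a few billion parameters per token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)  # (8,)
```

Total parameter count scales with `n_experts`, but per-token compute scales only with `top_k`, which is the efficiency argument behind the 35B/A3B split.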
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Intelligence, Technical, and Content benchmark data from Artificial Analysis and Hugging Face.
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.16 | $0.000160 |
| Output | $1.30 | $0.001300 |
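As a quick sanity check on the rates above, a minimal cost estimator; the request sizes in the example are hypothetical:

```python
# Per-1M-token rates from the pricing table above (USD).
INPUT_PER_M = 0.16
OUTPUT_PER_M = 1.30

def request_cost(input_tokens, output_tokens):
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 20K-token prompt producing a 2K-token reply:
cost = request_cost(20_000, 2_000)
print(f"${cost:.4f}")  # $0.0058
```

Note that output tokens dominate the bill at roughly 8x the input rate, so long generations cost far more than long prompts.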
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
External Resources
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 4, 2026 8:54 pm