inclusionAI: Ling-2.6-flash
Analysis Summary
At $0.080 per 1M input tokens and $0.240 per 1M output tokens, it is among the most affordable models on the market. It offers a generous context window for extended reasoning and code review and supports tool use and function calling.
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Ling-2.6-flash is an instruct model from inclusionAI with 104B total parameters and 7.4B active parameters, designed for real-world agents that require fast responses, strong execution, and high token efficiency.
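Since the model supports tool use and function calling, a request can attach an OpenAI-style tools array. Below is a minimal sketch that only builds the request payload, assuming OpenRouter's OpenAI-compatible chat-completions schema; `get_weather` is a hypothetical example tool, not something the model or API defines.

```python
def build_tool_call_request(user_message: str) -> dict:
    """Build a chat-completions payload with one example tool attached.

    Assumes the OpenAI-compatible schema used by OpenRouter; the
    `get_weather` tool is purely illustrative.
    """
    return {
        "model": "inclusionai/ling-2.6-flash",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool name
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }
```

The payload would then be POSTed to the provider's chat-completions endpoint with an API key; the model may respond with a `tool_calls` entry naming the function and its arguments.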
Capabilities
Performance Indices
Source: Artificial Analysis
This model was released recently. Independent benchmark evaluations are typically completed within days of release; these figures are preliminary and are likely to be updated as testing is finalised.
Benchmark Scores
Intelligence · Technical · Content
Benchmark data from Artificial Analysis and Hugging Face
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | inclusionai/ling-2.6-flash |
| Provider | inclusionai |
| Release Date | April 21, 2026 |
| Context Length | 262,144 tokens |
| Max Completion | 32,768 tokens |
| Status | Active |
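With a 262,144-token context window and a 32,768-token completion cap, a client has to make sure the prompt and the requested completion fit together. A minimal sketch of that budgeting check, using the limits from the table above (the function name and logic are illustrative, not part of any API):

```python
# Limits taken from the model information table above.
CONTEXT_LENGTH = 262_144   # total tokens (prompt + completion)
MAX_COMPLETION = 32_768    # hard cap on completion tokens

def completion_budget(prompt_tokens: int,
                      requested_max_tokens: int = MAX_COMPLETION) -> int:
    """Clamp max_tokens so prompt + completion fits the context window.

    Returns 0 when the prompt alone already fills (or overflows)
    the context window.
    """
    available = CONTEXT_LENGTH - prompt_tokens
    return max(0, min(requested_max_tokens, available))
```

For example, a 240,000-token prompt leaves only 22,144 tokens of headroom, so the completion must be clamped below the 32,768 cap.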
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.08 | $0.000080 |
| Output | $0.24 | $0.000240 |
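The per-token rates above translate directly into a per-request cost estimate. A small sketch of that arithmetic, using the listed prices (the helper function is illustrative):

```python
def estimate_cost(input_tokens: int,
                  output_tokens: int,
                  input_per_m: float = 0.08,
                  output_per_m: float = 0.24) -> float:
    """Estimate USD cost for one request at the listed per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m
```

For instance, a request with 10,000 input tokens and 2,000 output tokens would cost roughly $0.00128, illustrating why this pricing tier suits high-volume agent workloads.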
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 28, 2026 8:38 pm