inclusionAI: Ling-2.6-flash (free)
Analysis Summary
inclusionAI: Ling-2.6-flash (free) sits in the Efficient tier on our leaderboard, ranked #230 of 525 published models on overall intelligence. At $0.00 per 1M input tokens and $0.00 per 1M output tokens, it is among the cheapest models on the market. It offers a generous context window for extended reasoning and code review, and supports tool use and function calling.
Editorial notes
InclusionAI's Ling-2.6-Flash is a free model with tool use and a 262K context window, offering decent agentic capability for its tier, though its reasoning and coding scores are modest and best suited to lighter tasks.
Assessed April 23, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
Ling-2.6-flash is an instruct model from inclusionAI with 104B total parameters and 7.4B active parameters, designed for real-world agents that require fast responses, strong execution, and high token efficiency.
Capabilities
Performance Indices
Source: Artificial Analysis
This model was released recently. Independent benchmark evaluations are typically completed within days of release, so these figures are preliminary and likely to be updated as testing is finalised.
Benchmark Scores
Intelligence
Technical
Content
Benchmark data from Artificial Analysis and Hugging Face
How does inclusionAI: Ling-2.6-flash (free) stack up?
Compare side-by-side with other efficient models.
Model Information
| Field | Value |
| --- | --- |
| OpenRouter ID | `inclusionai/ling-2.6-flash:free` |
| Provider | inclusionai |
| Release Date | April 21, 2026 |
| Context Length | 262,144 tokens |
| Max Completion | 32,768 tokens |
| Status | Active |
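The OpenRouter ID and token limits above can be used directly against OpenRouter's chat-completions endpoint. A minimal sketch of building the request body, assuming the standard OpenRouter API shape (the prompt text and `max_tokens` value are illustrative placeholders, and you would supply your own API key when sending):

```python
import json

# Endpoint and model ID as listed in the table above.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "inclusionai/ling-2.6-flash:free"

payload = {
    "model": MODEL_ID,
    "messages": [
        # Illustrative prompt only.
        {"role": "user", "content": "Summarise this diff in one sentence."}
    ],
    # Keep the request within the model's advertised
    # 32,768-token max completion cap.
    "max_tokens": 1024,
}

body = json.dumps(payload)
print(body)
```

Sending the request is then a standard authenticated POST (e.g. with an `Authorization: Bearer <key>` header); the free tier means no per-token charge, but rate limits may still apply.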
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
External Resources
Explore Related Models
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 25, 2026 8:38 pm