Inception: Mercury Coder
Analysis Summary
Inception: Mercury Coder sits in the Legacy tier on our leaderboard, ranked #360 of 525 published models on overall intelligence. At $0.25 per 1M input tokens and $0.75 per 1M output tokens, it is priced toward the affordable end of the market. It offers a 128K-token context window and supports tool use and function calling.
Editorial notes
Inception's Mercury Coder is a diffusion-based coding model with tool use support and a 128K context window, but without benchmark data its performance cannot be assessed; it is an interesting experimental option rather than a proven business tool.
Assessed April 23, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability, and are refreshed as new models launch.
Performance Profile
Mercury Coder is the first diffusion large language model (dLLM). Built on a discrete diffusion approach, the model is reported to run 5-10x faster than even speed-optimized models like Claude 3.5 Haiku.
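The decoding style described above can be illustrated with a toy sketch. This is a conceptual stand-in, not Mercury's actual algorithm: the `toy_denoiser` picks random tokens where a real dLLM's transformer would predict all masked positions in parallel, and real models unmask by confidence rather than position.

```python
import random

MASK = "<mask>"

def toy_denoiser(tokens):
    # Stand-in for the real model: proposes a token for every masked
    # position at once (a dLLM predicts these in parallel).
    vocab = ["the", "cat", "sat", "on", "mat"]
    return [random.choice(vocab) if t == MASK else t for t in tokens]

def diffusion_decode(length, steps=4):
    """Start fully masked, then iteratively keep a growing share of the
    denoiser's proposals each step (coarse-to-fine refinement)."""
    tokens = [MASK] * length
    for step in range(1, steps + 1):
        proposal = toy_denoiser(tokens)
        keep = int(length * step / steps)  # unmask more positions per step
        for i in range(keep):
            if tokens[i] == MASK:
                tokens[i] = proposal[i]
    return tokens

sequence = diffusion_decode(8)
```

Because each denoising step fills many positions at once, the number of model calls is the step count, not the sequence length, which is where the speed advantage over token-by-token autoregressive decoding comes from.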
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | `inception/mercury-coder` |
| Provider | Inception |
| Release Date | April 30, 2025 |
| Context Length | 128,000 tokens |
| Max Completion | 32,000 tokens |
| Status | Active |
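The model information above is enough to call the model through OpenRouter, which exposes an OpenAI-compatible chat-completions endpoint. The sketch below builds a request payload from the listed OpenRouter ID and completion limit; the endpoint URL and payload shape follow OpenRouter's API convention, and the `API_KEY` is a placeholder you would supply.

```python
def build_chat_payload(prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble an OpenAI-style chat-completions payload for Mercury Coder."""
    # 32,000 is the model's max completion length from the table above.
    max_tokens = min(max_tokens, 32_000)
    return {
        "model": "inception/mercury-coder",  # OpenRouter ID from the table
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# To send it (requires an OpenRouter API key):
# import requests
# r = requests.post(
#     "https://openrouter.ai/api/v1/chat/completions",
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=build_chat_payload("Write a quicksort in Python"),
# )
```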
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.25 | $0.000250 |
| Output | $0.75 | $0.000750 |
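For budgeting, the per-1M rates in the table convert directly into a per-request cost estimate. A minimal sketch using the listed prices (the example token counts are illustrative):

```python
INPUT_PER_M = 0.25   # USD per 1M input tokens (from the pricing table)
OUTPUT_PER_M = 0.75  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# e.g. a 4,000-token prompt with a 1,000-token completion:
# request_cost(4_000, 1_000) -> 0.00175 (about a fifth of a cent)
```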
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 25, 2026 8:38 pm