Inception: Mercury Coder

inception · Released Apr 30, 2025 · Legacy tier
Intelligence: #360 / 525 — 23.8 Our Score
Speed: Not reported
Input price: #275 / 525 — $0.250 per 1M tokens
Output price: #263 / 525 — $0.750 per 1M tokens
Context: #266 / 525 — 128,000 tokens

Analysis Summary

Inception: Mercury Coder sits in the Legacy tier on our leaderboard, ranked #360 of 525 published models on overall intelligence. At $0.250 input and $0.750 output per 1M tokens, its pricing lands mid-pack (#275 and #263 of 525, respectively). It offers a standard large context window and supports tool use and function calling.

Editorial notes

Inception's Mercury Coder is a diffusion-based coding model with tool-use support and a 128K context window. Without benchmark data its performance cannot be assessed, making it an interesting experimental option rather than a proven business tool.

Assessed April 23, 2026

Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.

Performance Profile

Intelligence 0/10
Technical 0/10
Content 2.5/10
Value 7.8/10

Mercury Coder is the first commercially available diffusion large language model (dLLM). Its discrete diffusion approach generates tokens in parallel rather than one at a time, and Inception reports 5-10x faster throughput than even speed-optimized models like Claude 3.5 Haiku.

Capabilities

Tool Use Function Calling


Model Information

OpenRouter ID: inception/mercury-coder
Provider: inception
Release Date: April 30, 2025
Context Length: 128,000 tokens
Max Completion: 32,000 tokens
Status: Active
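
Since the model is listed under an OpenRouter ID, it can be queried through OpenRouter's OpenAI-compatible chat-completions endpoint. The sketch below only constructs the request body; the endpoint URL and field names follow OpenRouter's public API, and the prompt content is purely illustrative:

```python
import json

# Minimal chat-completions request body for OpenRouter's OpenAI-compatible
# endpoint (POST https://openrouter.ai/api/v1/chat/completions, with an
# "Authorization: Bearer <key>" header). Built but not sent here.
payload = {
    "model": "inception/mercury-coder",  # OpenRouter ID from the table above
    "messages": [
        # Illustrative prompt; any chat message list works
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 1024,  # must not exceed the model's 32,000-token completion cap
}

print(json.dumps(payload, indent=2))
```

Because the API is OpenAI-compatible, the same payload also works with standard OpenAI client libraries pointed at OpenRouter's base URL.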

Pricing

Token Type   Cost per 1M tokens   Cost per 1K tokens
Input        $0.25                $0.000250
Output       $0.75                $0.000750
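
Per-request cost follows directly from these rates. A minimal sketch, using the list prices from the table above (the token counts in the example are hypothetical):

```python
# List prices from the pricing table above, in USD per 1M tokens
INPUT_PER_M = 0.25
OUTPUT_PER_M = 0.75

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at Mercury Coder's list prices."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 4,000-token prompt with a 1,000-token completion
print(f"${request_cost(4_000, 1_000):.6f}")  # $0.001750
```

At these rates, one million requests of that size would cost $1,750, which is the "mid-pack" pricing the rankings above reflect.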

Leaderboard Categories