Inception: Mercury Coder

inception · Released Apr 30, 2025

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed means developers can stay in the flow while coding, with rapid chat-based iteration and responsive code-completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality.

Input Price: $0.25 / 1M tokens
Output Price: $0.75 / 1M tokens
Context Window: 128,000 tokens
Max Output: 32,000 tokens

Capabilities

Tool Use · Function Calling
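As an illustration of the function-calling capability, the sketch below builds an OpenAI-style `tools` payload of the kind OpenRouter forwards to models that support tool use. The `get_weather` function and its parameters are hypothetical examples, not something defined by this model card.

```python
# Hypothetical tool definition in the OpenAI-style schema used by OpenRouter.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Return current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# Request body pairing the tool list with the model's OpenRouter ID.
request_body = {
    "model": "inception/mercury-coder",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}
```

If the model decides to call the tool, the response contains a `tool_calls` entry whose arguments the caller executes before sending the result back in a follow-up message.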

Architecture

Modality: Text → Text
Tokenizer: Other

Model Information

OpenRouter ID: inception/mercury-coder
Provider: inception
Release Date: April 30, 2025
Context Length: 128,000 tokens
Max Completion: 32,000 tokens
Status: Active
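The OpenRouter ID above is what you pass as the `model` field when calling the model through OpenRouter's standard chat-completions endpoint. Below is a minimal sketch that only builds the request (no network call is made); the API key placeholder and prompt are assumptions for illustration.

```python
import json
import urllib.request

# Standard OpenRouter chat-completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for Mercury Coder."""
    body = json.dumps({
        "model": "inception/mercury-coder",  # OpenRouter ID from this page
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32000,  # the model's max completion length
    }).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(API_URL, data=body, headers=headers)

# Hypothetical usage; sending it would require a real OpenRouter API key:
req = build_request("sk-or-...", "Write a binary search in Python.")
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body whose `choices[0].message.content` holds the completion.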

Pricing

| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|------------|--------------------|--------------------|
| Input      | $0.25              | $0.000250          |
| Output     | $0.75              | $0.000750          |
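The per-request cost follows directly from the listed per-token prices. The sketch below is a minimal estimate using the $0.25 / 1M input and $0.75 / 1M output rates above; it is not an official billing formula.

```python
# Listed prices in USD per 1M tokens (from the pricing table above).
INPUT_PER_M = 0.25
OUTPUT_PER_M = 0.75

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request from its token counts."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${estimate_cost(2000, 500):.6f}")  # → $0.000875
```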

Leaderboard Categories