Inception: Mercury 2
Analysis Summary
Inception: Mercury 2 sits in the Specialist tier on our leaderboard, ranked #77 of 525 published models on overall intelligence. At $0.25 input and $0.75 output per 1M tokens, it is among the more affordable options on the market. It offers a 128,000-token context window and supports tool use and function calling.
Editorial notes
Inception's Mercury 2 offers solid reasoning and instruction-following capability at competitive pricing ($0.25/$0.75 per million tokens) with tool-use support, making it a decent mid-tier option for content and SEO tasks, though it trails the leading models on complex reasoning and coding.
Assessed April 23, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability, and are refreshed as new models launch.
Performance Profile
Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM). Instead of generating tokens one at a time, Mercury 2 produces and refines multiple tokens in parallel, achieving substantially higher generation throughput than comparable sequential decoders.
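To make the speed argument concrete, here is a toy sketch (not Inception's actual algorithm, and with a lookup standing in for the denoising network) contrasting diffusion-style parallel refinement, which reveals several masked positions per step, with autoregressive decoding, which commits exactly one token per step:

```python
# Toy illustration of parallel (diffusion-style) vs. sequential decoding.
# In a real dLLM the revealed tokens come from a denoising model; here we
# simply look them up in `target` to count refinement steps.

MASK = "_"

def diffusion_decode(target, positions_per_step=3):
    """Reveal up to `positions_per_step` masked positions per refinement step.

    Returns the number of steps needed to fully decode the sequence.
    """
    seq = [MASK] * len(target)
    steps = 0
    while MASK in seq:
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        for i in masked[:positions_per_step]:
            seq[i] = target[i]  # "denoise" a few positions at once
        steps += 1
    return steps

def autoregressive_decode(target):
    """One token per step, left to right: trivially N steps for N tokens."""
    return len(target)

tokens = ["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
print(diffusion_decode(tokens))       # 3 steps for 9 tokens at 3 per step
print(autoregressive_decode(tokens))  # 9 steps
```

The step-count gap is the intuition behind dLLM throughput claims; real-world speedups depend on model quality per refinement step, not just parallelism.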
Capabilities
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Intelligence
Technical
Content
Benchmark data from Artificial Analysis and Hugging Face
How does Inception: Mercury 2 stack up?
Compare side-by-side with other specialist models.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | `inception/mercury-2` |
| Provider | Inception |
| Release Date | March 4, 2026 |
| Context Length | 128,000 tokens |
| Max Completion | 50,000 tokens |
| Status | Active |
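As a minimal sketch of using the model ID above, the following builds a request body for OpenRouter's OpenAI-compatible chat-completions endpoint. The prompt and `max_tokens` value are illustrative; you would attach your own API key in an `Authorization: Bearer` header before sending.

```python
# Build (but do not send) a chat-completions request body for Mercury 2
# via OpenRouter's OpenAI-compatible API.
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt, max_tokens=1024):
    """Return the JSON body for a Mercury 2 completion request."""
    return {
        "model": "inception/mercury-2",  # OpenRouter ID from the table above
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,        # must not exceed the 50,000-token cap
    }

body = build_request("Summarize diffusion LLMs in one sentence.")
print(json.dumps(body, indent=2))
```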
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.25 | $0.000250 |
| Output | $0.75 | $0.000750 |
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
Leaderboard Categories
External Resources
Explore Related Models
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 25, 2026 8:38 pm