Inception: Mercury
Review
Inception's Mercury supports tool use and function calling at a competitive price point, but without published benchmark data its real-world capability is difficult to assess for professional business use.
Assessment date: April 4, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
Mercury is the first diffusion large language model (dLLM). Its discrete diffusion approach generates tokens in parallel rather than strictly one at a time, allowing it to run 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude.
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Other |
Model Information
| Attribute | Value |
|---|---|
| OpenRouter ID | `inception/mercury` |
| Provider | inception |
| Release Date | June 26, 2025 |
| Context Length | 128,000 tokens |
| Max Completion | 32,000 tokens |
| Status | Active |
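The OpenRouter ID above works with any OpenAI-compatible client pointed at `https://openrouter.ai/api/v1`. A minimal sketch of a function-calling request payload for Mercury, assuming OpenRouter's OpenAI-style `tools` schema; the `get_weather` tool is a hypothetical example, not part of the model:

```python
import json

def build_tool_call_payload(user_message: str) -> dict:
    """Build a chat-completions payload that offers Mercury one callable tool.

    The structure follows the OpenAI-compatible `tools` schema that
    OpenRouter exposes; `get_weather` is an illustrative placeholder.
    """
    return {
        "model": "inception/mercury",  # OpenRouter ID from the table above
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_tool_call_payload("What's the weather in Berlin?")
print(json.dumps(payload, indent=2))
```

Sending this payload (with an `Authorization: Bearer <key>` header) lets the model respond with either text or a structured tool call naming `get_weather`.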
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.25 | $0.000250 |
| Output | $0.75 | $0.000750 |
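The listed rates make per-request cost easy to estimate; a quick sketch using the table's per-1M-token prices:

```python
# Mercury's listed rates, in USD per 1M tokens.
INPUT_RATE = 0.25
OUTPUT_RATE = 0.75

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0040
```

At these rates, even a long 100K-token context costs only a few cents per call.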
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 14, 2026 8:52 pm