OpenAI: o4 Mini
Analysis Summary
OpenAI: o4 Mini sits in the Specialist tier on our leaderboard, ranked #74 of 525 published models on overall intelligence. At $1.10 per 1M input tokens and $4.40 per 1M output tokens, it is competitively priced for a reasoning model. It offers a 200,000-token context window for extended reasoning and code review and supports tool use, function calling, vision, and reasoning.
Editorial notes
OpenAI's o4 Mini punches well above its price point with exceptional math and coding benchmark scores, strong agentic capability, vision support, and tool calling — outstanding value for businesses needing reliable reasoning at scale.
Assessed April 23, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning performance for its size.
Capabilities
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Intelligence
Technical
Content
Benchmark data from Artificial Analysis and Hugging Face
How does OpenAI: o4 Mini stack up?
Compare side-by-side with other specialist models.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | openai/o4-mini |
| Provider | openai |
| Release Date | April 16, 2025 |
| Context Length | 200,000 tokens |
| Max Completion | 100,000 tokens |
| Status | Active |
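As a sketch of how the OpenRouter ID above is used in practice, the snippet below builds a chat-completions request payload for this model. It follows OpenRouter's OpenAI-compatible request shape; the prompt text and `max_tokens` value are illustrative assumptions, and no request is actually sent.

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "openai/o4-mini"  # OpenRouter ID from the table above

# Illustrative payload only -- nothing is sent over the network here.
payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "Summarize the key risks in this contract."}
    ],
    # Keep well under the model's 100,000-token completion limit.
    "max_tokens": 2048,
}

print(json.dumps(payload, indent=2))
```

To actually send the request, POST this payload to `OPENROUTER_URL` with an `Authorization: Bearer <your-api-key>` header.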
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $1.10 | $0.001100 |
| Output | $4.40 | $0.004400 |
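The rates above translate into per-request costs as follows; this is a minimal sketch using the published $1.10/$4.40 per-1M-token prices, with example token counts chosen purely for illustration.

```python
# Estimate the cost of one o4-mini request from the per-1M-token rates above.
INPUT_PER_M = 1.10   # USD per 1M input tokens
OUTPUT_PER_M = 4.40  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token completion.
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.4f}")  # $0.0198
```

Note that output tokens cost 4x as much as input tokens, so long completions dominate the bill for generation-heavy workloads.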
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
External Resources
Explore Related Models
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 25, 2026 8:38 pm