OpenAI: o4 Mini

openai · Released Apr 16, 2025 · Specialist tier

Intelligence: #74 / 525 (65.2 Our Score)
Speed: #52 / 244 (154.6 tokens/sec)
Input price: #410 / 525 ($1.10 per 1M tokens)
Output price: #417 / 525 ($4.40 per 1M tokens)
Context: #146 / 525 (200,000 tokens)

Analysis Summary

OpenAI: o4 Mini sits in the Specialist tier on our leaderboard, ranked #74 of 525 published models on overall intelligence. At $1.10 input and $4.40 output per 1M tokens, it is competitively priced for a reasoning model. It offers a generous 200,000-token context window for extended reasoning and code review, and supports tool use, function calling, vision, and reasoning.

Editorial notes

OpenAI's o4 Mini punches well above its price point with exceptional math and coding benchmark scores, strong agentic capability, vision support, and tool calling — outstanding value for businesses needing reliable reasoning at scale.

Assessed April 23, 2026

Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.

Performance Profile

Intelligence 6.2/10
Technical 5.1/10
Content 6.5/10
Value 7/10

OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and delivers competitive reasoning performance for its size.

Capabilities

Tool Use · Function Calling · Vision
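As an illustration of the tool-use and function-calling capability, a request can declare tools in the OpenAI-compatible Chat Completions format that OpenRouter also accepts. This is a minimal sketch only; the `get_weather` tool, its schema, and the `build_request` helper are hypothetical examples, not part of this listing:

```python
# Sketch: assembling a function-calling request payload for o4-mini,
# using the OpenRouter model ID from this listing. The "get_weather"
# tool and its schema are hypothetical examples.

def build_request(user_message: str) -> dict:
    """Build a Chat Completions-style payload with one tool declared."""
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": "openai/o4-mini",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [weather_tool],
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

payload = build_request("What's the weather in Oslo?")
```

If the model elects to call the tool, the response contains a `tool_calls` entry whose arguments the caller executes before sending the result back in a follow-up message.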

Performance Indices

Source: Artificial Analysis

33.1 Intelligence Index
25.6 Coding Index
35.4 Agentic Index
90.7 Math Index

Benchmark Scores

Intelligence

GPQA Diamond 78.4% Graduate-level scientific reasoning
HLE 17.5% Humanity's Last Exam
MMLU Pro 83.2% Multi-task language understanding
MATH 500 98.9% Mathematical problem-solving
AIME 94% Competition mathematics
AIME 2025 90.7% Competition mathematics (2025)
SciCode 46.5% Scientific computing

Technical

LiveCodeBench 85.9% Live coding evaluation
TerminalBench Hard 15.2% Agentic terminal tasks
τ²-Bench 55.6% Conversational agent benchmark

Content

IFBench 68.7% Instruction following
LCR 55% Long-context reasoning

Benchmark data from Artificial Analysis and Hugging Face

How does OpenAI: o4 Mini stack up?

Compare side-by-side with other specialist models.

Model Information

OpenRouter ID openai/o4-mini
Provider openai
Release Date April 16, 2025
Context Length 200,000 tokens
Max Completion 100,000 tokens
Status Active

Pricing

Token Type Cost per 1M tokens Cost per 1K tokens
Input $1.10 $0.001100
Output $4.40 $0.004400
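The per-1K figures follow directly from the per-1M rates divided by 1,000. A quick way to estimate the cost of a single request from the listed prices (the `request_cost` helper is a hypothetical convenience, not an official API):

```python
# Estimate per-request cost for o4-mini from the listed per-1M-token rates.
INPUT_PER_M = 1.10   # USD per 1M input tokens
OUTPUT_PER_M = 4.40  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request, rounded to 6 decimal places."""
    cost = (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M
    return round(cost, 6)

# e.g. a 2,000-token prompt with a 500-token completion:
# 2000/1e6 * 1.10 + 500/1e6 * 4.40 = 0.0022 + 0.0022 = 0.0044 USD
```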

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Avg Uptime: 100%
Best Latency (TTFT): 16,954 ms
Best Throughput: 94 tok/s
Active Endpoints: 1/1
Available via: OpenAI
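Given a time-to-first-token and a steady throughput, total response time can be roughly estimated as TTFT plus output tokens divided by tokens per second. A back-of-the-envelope sketch using the best-case figures above; real latency varies per request and these metrics are refreshed every 30 minutes:

```python
# Rough wall-clock estimate for a completion: time to first token plus
# generation time at the observed throughput. Figures from the live
# metrics above; treat the result as an order-of-magnitude estimate.
TTFT_MS = 16_954       # best observed time to first token, in ms
THROUGHPUT_TPS = 94    # best observed tokens per second

def estimated_seconds(output_tokens: int) -> float:
    """Approximate total response time in seconds for a given output size."""
    return TTFT_MS / 1000 + output_tokens / THROUGHPUT_TPS

# e.g. a 940-token completion: 16.954 s + 940/94 s = 26.954 s
```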
