OpenAI: GPT-4.1 Mini

openai · Released Apr 14, 2025
Our Score: 56

GPT-4.1 Mini is a mid-sized model delivering performance competitive with GPT-4o at substantially lower latency and cost. It retains the 1 million token context window and scores 45.1% on hard instruction-following evals, 35.8% on MultiChallenge, and 84.1% on IFEval. It also shows strong coding ability (e.g., 31.6% on Aider's polyglot diff benchmark) and solid vision understanding, making it well suited to interactive applications with tight performance constraints.

Input Price: $0.40 / 1M tokens
Output Price: $1.60 / 1M tokens
Context Window: 1M tokens
Max Output: 32,768 tokens

Capabilities

Tool Use · Function Calling · Vision

Architecture

Modality: Text + Image + File → Text
Tokenizer: GPT

Performance Indices

Source: Artificial Analysis

Intelligence Index: 22.9
Coding Index: 18.5
Agentic Index: 30.3
Math Index: 46.3

Benchmark Scores

Evaluations

GPQA Diamond: 66.4% (graduate-level scientific reasoning)
HLE: 4.6% (Humanity's Last Exam)
MMLU Pro: 78.1% (multi-task language understanding)
LiveCodeBench: 48.3% (live coding evaluation)
SciCode: 40.4% (scientific computing)
MATH 500: 92.5% (mathematical problem-solving)
AIME: 43% (competition mathematics)
AIME 2025: 46.3% (competition mathematics, 2025)
IFBench: 38.3% (instruction following)
LCR: 42.3% (long-context reasoning)
TerminalBench Hard: 7.6% (agentic terminal tasks)
τ²-Bench: 52.9% (conversational agent benchmark)

Benchmark data from Artificial Analysis and Hugging Face

Model Information

OpenRouter ID: openai/gpt-4.1-mini
Provider: openai
Model Family: GPT-4
Release Date: April 14, 2025
Context Length: 1,047,576 tokens
Max Completion: 32,768 tokens
Status: Active
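Given the OpenRouter ID above, a request to the model can be expressed as a standard OpenAI-style chat-completions body. The sketch below only builds the JSON payload (the helper name `build_request` and the default values are illustrative, not part of any SDK); actually sending it would require an OpenRouter API key and endpoint call, which is omitted here.

```python
# Minimal sketch: build an OpenAI-compatible chat-completions payload
# for GPT-4.1 Mini via its OpenRouter ID. Illustrative only.
import json

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    # Clamp to the model's 32,768-token completion limit (see table above).
    max_tokens = min(max_tokens, 32_768)
    return {
        "model": "openai/gpt-4.1-mini",  # OpenRouter ID from the table above
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_request("Summarize this log file.", max_tokens=50_000)
print(json.dumps(body, indent=2))
```

Note the clamp: requesting more than 32,768 completion tokens would exceed the model's maximum output, so the sketch caps it.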

Pricing

Token Type | Cost per 1M tokens | Cost per 1K tokens
Input      | $0.40              | $0.0004
Output     | $1.60              | $0.0016
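Per-request cost follows directly from these rates. A minimal sketch, using the listed prices (the function name and structure are illustrative, not part of any official SDK):

```python
# Rough cost estimator for GPT-4.1 Mini based on the pricing table above.
INPUT_PRICE_PER_M = 0.40   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.60  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token completion
print(f"${estimate_cost(10_000, 1_000):.4f}")  # $0.0056
```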

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Avg Uptime: 100%
Best Latency (TTFT): 629 ms
Best Throughput: 45 tok/s
Active Endpoints: 2/2
Available via: OpenAI, Azure
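The latency and throughput figures above give a back-of-the-envelope response-time estimate for a streamed completion: time to first token plus remaining tokens divided by throughput. A small sketch using the best-case numbers listed (real endpoints vary, so treat this as an illustration, not a guarantee):

```python
# Response-time estimate from the live metrics above
# (629 ms best TTFT, 45 tok/s best throughput).
def estimate_response_seconds(output_tokens: int,
                              ttft_ms: float = 629.0,
                              tokens_per_s: float = 45.0) -> float:
    """Time to first token plus streaming time for the output tokens."""
    return ttft_ms / 1000.0 + output_tokens / tokens_per_s

print(f"{estimate_response_seconds(500):.1f}s")  # ~11.7s for 500 tokens
```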
