OpenAI: o1

openai · Released Dec 17, 2024 · Specialist
Intelligence #102 / 523 · Our Score 56.7
Speed #122 / 236 · 89.6 tokens / sec
Input #511 / 523 · $15.00 per 1M tokens
Output #511 / 523 · $60.00 per 1M tokens
Context #144 / 523 · 200,000 tokens

Analysis Summary

OpenAI: o1 sits in the Specialist tier on our leaderboard, ranked #102 of 523 published models on overall intelligence. At $15.00 per 1M input tokens and $60.00 per 1M output tokens, it is among the most expensive models on the market. Its 200,000-token context window supports extended reasoning and code review, and the model offers tool use, function calling, vision, and reasoning.

Editorial notes

OpenAI's o1 is a strong reasoning model with impressive benchmark scores across logic, coding, and instruction-following, supported by multimodal input and a 200K context window. Its premium pricing is a drawback for cost-sensitive deployments, but for businesses requiring reliable deep reasoning and agentic capability, it remains a compelling choice.

Assessed April 23, 2026

Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.

Performance Profile

Intelligence 5.6/10
Technical 4.5/10
Content 6.5/10
Value 5/10

Billed at launch as OpenAI's strongest model family, o1 is designed to spend more time thinking before responding. The o1 model series is trained with large-scale reinforcement learning to reason through problems before answering.

Capabilities

Tool Use · Function Calling · Vision

Performance Indices

Source: Artificial Analysis

30.8 Intelligence Index
20.5 Coding Index
37.8 Agentic Index

Benchmark Scores

Intelligence

GPQA Diamond 74.7% Graduate-level scientific reasoning
HLE 7.7% Humanity's Last Exam
MMLU Pro 84.1% Multi-task language understanding
MATH 500 97% Mathematical problem-solving
AIME 72.3% Competition mathematics
SciCode 35.8% Scientific computing

Technical

LiveCodeBench 67.9% Live coding evaluation
TerminalBench Hard 12.9% Agentic terminal tasks
τ²-Bench 62.6% Conversational agent benchmark

Content

IFBench 70.3% Instruction following
LCR 59.3% Long-context reasoning

Benchmark data from Artificial Analysis and Hugging Face


Model Information

OpenRouter ID openai/o1
Provider openai
Model Family o1
Release Date December 17, 2024
Context Length 200,000 tokens
Max Completion 100,000 tokens
Status Active
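The OpenRouter ID and token limits above can be wired into a request body. This is a minimal sketch assuming OpenRouter's OpenAI-compatible chat-completions format; the exact field names (`model`, `messages`, `max_tokens`) are assumptions, not taken from this page.

```python
# Sketch: build a chat-completions payload for this model, clamping the
# requested completion length to the model's published maximum.
import json

MODEL_ID = "openai/o1"      # OpenRouter ID from the table above
MAX_COMPLETION = 100_000    # max completion tokens from the table above

def build_payload(prompt: str, max_tokens: int = 4_096) -> dict:
    """Build a request body, clamping max_tokens to the model's limit."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, MAX_COMPLETION),
    }

# Asking for more tokens than the model allows gets clamped to 100,000:
payload = build_payload("Summarize this diff.", max_tokens=200_000)
print(json.dumps(payload, indent=2))
```

The clamp matters because o1's context length (200,000 tokens) is larger than its completion limit (100,000 tokens), so a naive "use the whole context" request would be rejected.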

Pricing

Token Type Cost per 1M tokens Cost per 1K tokens
Input $15.00 $0.015000
Output $60.00 $0.060000
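The per-token rates above translate directly into a cost estimate for a single request. A small sketch using the published rates (note that for reasoning models like o1, hidden reasoning tokens typically bill at the output rate, so real costs can run higher than the visible answer suggests):

```python
# Estimate the dollar cost of one o1 request at the published rates:
# $15.00 per 1M input tokens, $60.00 per 1M output tokens.

INPUT_RATE = 15.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 60.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a request with the given token counts."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 10K-token prompt with a 2K-token answer:
print(f"${request_cost(10_000, 2_000):.2f}")  # $0.27
```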

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

11,008ms
Best Latency (TTFT)
98.5 tok/s
Best Throughput
0/1
Active Endpoints
Available via: OpenAI
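The two live metrics combine into a rough end-to-end estimate: total response time is approximately time-to-first-token plus output tokens divided by throughput. A sketch using the best-case figures above (real latency varies with load and reasoning effort):

```python
# Rough streaming-time estimate from the live metrics above:
# total ≈ TTFT + output_tokens / throughput.

TTFT_S = 11.008      # best latency (TTFT), in seconds
THROUGHPUT = 98.5    # best throughput, tokens per second

def response_time(output_tokens: int) -> float:
    """Estimated seconds until the full response finishes streaming."""
    return TTFT_S + output_tokens / THROUGHPUT

# A 1,000-token answer:
print(f"{response_time(1_000):.1f}s")  # ~21.2s
```

The 11-second TTFT reflects o1's design: it spends time on hidden reasoning before emitting the first visible token, so latency dominates short responses.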
