Anthropic: Claude Opus 4.5

Anthropic · Released Nov 24, 2025 · Professional tier

Intelligence: 82.5 (our score), ranked #18 / 525
Speed: 59.6 tokens/sec, ranked #178 / 244
Input price: $5.00 per 1M tokens, ranked #494 / 525
Output price: $25.00 per 1M tokens, ranked #498 / 525
Context: 200,000 tokens, ranked #146 / 525

Analysis Summary

Anthropic: Claude Opus 4.5 sits in the Professional tier on our leaderboard, ranked #18 of 525 published models on overall intelligence. At $5.00 input and $25.00 output per 1M tokens, it is among the most expensive on the market. It offers a generous context window for extended reasoning and code review and supports tool use, function calling, vision, and reasoning.

Editorial notes

Claude Opus 4.5 from Anthropic delivers strong coding and agentic performance with vision support and a solid reasoning profile, making it a capable choice for complex business and development tasks. It is superseded within the Opus family by later releases, and its premium pricing limits value, but it remains a highly competent model for professional use.

Assessed April 24, 2026

Rankings consider pricing, capabilities, benchmarks, and real-world applicability, and are refreshed as new models launch.

Performance Profile

Intelligence 7.4/10
Technical 8/10
Content 7.5/10
Value 6/10

Claude Opus 4.5 is Anthropic’s frontier reasoning model, optimized for complex software engineering, agentic workflows, and long-horizon computer use. It offers strong multimodal capabilities and competitive performance across real-world coding and agentic benchmarks.

Capabilities

Tool Use · Function Calling · Vision

Performance Indices

Source: Artificial Analysis

43.1 Intelligence Index
42.9 Coding Index
63.6 Agentic Index
62.7 Math Index

Benchmark Scores

Intelligence

GPQA Diamond: 81% (graduate-level scientific reasoning)
HLE: 12.9% (Humanity's Last Exam)
MMLU Pro: 88.9% (multi-task language understanding)
AIME 2025: 62.7% (competition mathematics, 2025)
SciCode: 47% (scientific computing)

Technical

LiveCodeBench: 73.8% (live coding evaluation)
TerminalBench Hard: 40.9% (agentic terminal tasks)
τ²-Bench: 86.3% (conversational agent benchmark)

Content

IFBench: 43% (instruction following)
LCR: 65.3% (long-context reasoning)

Benchmark data from Artificial Analysis and Hugging Face

How does Anthropic: Claude Opus 4.5 stack up?

Compare side-by-side with other professional models.


Model Information

OpenRouter ID: anthropic/claude-opus-4.5
Provider: anthropic
Release Date: November 24, 2025
Context Length: 200,000 tokens
Max Completion: 64,000 tokens
Status: Active
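The OpenRouter ID and completion limit above slot directly into an OpenAI-style chat-completions payload. A minimal sketch in Python, assuming OpenRouter's published chat-completions endpoint; the API key and prompt are placeholders, not values from this page:

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

headers = {
    "Authorization": "Bearer YOUR_OPENROUTER_API_KEY",  # placeholder credential
    "Content-Type": "application/json",
}

payload = {
    # Model ID exactly as listed in the table above.
    "model": "anthropic/claude-opus-4.5",
    # Must not exceed the model's 64,000-token completion cap.
    "max_tokens": 64_000,
    "messages": [
        {"role": "user", "content": "Summarize this diff in two sentences."}
    ],
}

# Actually sending the request is omitted here; with the `requests`
# library it would be roughly:
#   requests.post(OPENROUTER_URL, headers=headers, data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```

Keeping `max_tokens` at or below the listed completion limit avoids a rejected request; in practice most calls use a much smaller cap.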

Pricing

Token Type   Cost per 1M tokens   Cost per 1K tokens
Input        $5.00                $0.005
Output       $25.00               $0.025

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Avg Uptime: 99.4%
Best Latency (TTFT): 1,281 ms
Best Throughput: 43 tok/s
Active Endpoints: 3/4
Available via: Anthropic, Amazon Bedrock, Google
