Anthropic: Claude Haiku 4.5

Anthropic · Released Oct 15, 2025

Our Score: 66

Claude Haiku 4.5 is Anthropic’s fastest and most efficient model, delivering near-frontier intelligence at a fraction of the cost and latency of larger Claude models. Matching Claude Sonnet 4’s performance across reasoning, coding, and computer-use tasks, Haiku 4.5 brings frontier-level capability to real-time and high-volume applications. It introduces extended thinking to the Haiku line, enabling controllable reasoning depth, summarized or interleaved thought output, and tool-assisted workflows with full support for coding, bash, web search, and computer-use tools. Scoring >73% on SWE-bench Verified, Haiku 4.5 ranks among the world’s best coding models while maintaining exceptional responsiveness for sub-agents, parallelized execution, and scaled deployment.
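As a sketch of how the model is addressed in practice, the payload below targets OpenRouter's OpenAI-compatible chat completions endpoint using the model ID listed on this page; the prompt text and `max_tokens` value are illustrative assumptions, and sending the request would require your own OpenRouter API key.

```python
import json

# Hypothetical request payload for OpenRouter's OpenAI-compatible
# POST /api/v1/chat/completions endpoint, using the model ID listed below
# under Model Information.
payload = {
    "model": "anthropic/claude-haiku-4.5",
    "messages": [
        {
            "role": "user",
            "content": "Summarize the SWE-bench Verified benchmark in one sentence.",
        }
    ],
    # Illustrative cap; must stay within the model's 64,000-token output limit.
    "max_tokens": 1024,
}

# Actually sending it would look like:
#   POST https://openrouter.ai/api/v1/chat/completions
#   Authorization: Bearer <OPENROUTER_API_KEY>
#   Content-Type: application/json
body = json.dumps(payload)
print(body[:50])
```

The same model is also reachable directly through the Anthropic and Amazon Bedrock endpoints listed under Live Performance; only the model identifier and base URL change.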

$1.00 / 1M Input Price
$5.00 / 1M Output Price
200,000 tokens Context Window
64,000 tokens Max Output

Capabilities

Tool Use · Function Calling · Vision

Architecture

Modality: Text + Image → Text
Tokenizer: Claude

Performance Indices

Source: Artificial Analysis

37.1 Intelligence Index
32.6 Coding Index
41 Agentic Index
83.7 Math Index

Benchmark Scores

Evaluations

GPQA Diamond 67.2%
Graduate-level scientific reasoning
HLE 9.7%
Humanity's Last Exam
MMLU Pro 76%
Multi-task language understanding
LiveCodeBench 61.5%
Live coding evaluation
SciCode 43.3%
Scientific computing
AIME 2025 83.7%
Competition mathematics (2025)
IFBench 54.3%
Instruction following
LCR 70.3%
Long-context reasoning
TerminalBench Hard 27.3%
Agentic terminal tasks
τ²-Bench 54.7%
Conversational agent benchmark

Benchmark data from Artificial Analysis and Hugging Face

Model Information

OpenRouter ID: anthropic/claude-haiku-4.5
Provider: Anthropic
Release Date: October 15, 2025
Context Length: 200,000 tokens
Max Completion: 64,000 tokens
Status: Active

Pricing

Token Type | Cost per 1M tokens | Cost per 1K tokens
Input      | $1.00              | $0.001
Output     | $5.00              | $0.005
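At these rates, the cost of a request is a simple function of token counts. A minimal sketch, with the prices hardcoded from the table above (the function name is illustrative):

```python
def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate cost at $1.00 per 1M input tokens and $5.00 per 1M output tokens."""
    return input_tokens / 1_000_000 * 1.00 + output_tokens / 1_000_000 * 5.00

# A maximal single request: the full 200,000-token context window in,
# the full 64,000-token completion out.
cost = request_cost_usd(200_000, 64_000)  # 0.20 + 0.32
print(f"${cost:.2f}")  # → $0.52
```

So even a request that saturates both the context window and the output limit costs roughly half a dollar, which is what makes the model practical for sub-agents and high-volume pipelines.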

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

99.9%
Avg Uptime
489ms
Best Latency (TTFT)
90 tok/s
Best Throughput
3/3
Active Endpoints
Available via: Amazon Bedrock, Google, Anthropic
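Given the figures above, a rough end-to-end estimate for a streamed response is time-to-first-token plus decode time at the measured throughput. A back-of-the-envelope sketch, noting that 489 ms TTFT and 90 tok/s are the best-case numbers listed here and real latencies vary by endpoint and load:

```python
def estimated_response_seconds(output_tokens: int,
                               ttft_s: float = 0.489,
                               tokens_per_s: float = 90.0) -> float:
    """Rough wall-clock estimate: time to first token + token generation time."""
    return ttft_s + output_tokens / tokens_per_s

# e.g. a 900-token answer: 0.489 s + 900 / 90 s = ~10.5 s of streaming
print(round(estimated_response_seconds(900), 3))
```

This is only a lower bound: it ignores network overhead, queueing, and any extended-thinking tokens generated before the visible output.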
