Anthropic: Claude 3.5 Haiku
Analysis Summary
Anthropic: Claude 3.5 Haiku sits in the Specialist tier on our leaderboard, ranked #128 of 525 published models on overall intelligence. At $0.80 per 1M input tokens and $4.00 per 1M output tokens, it is among the more affordable options on the market. It offers a generous context window for extended reasoning and code review and supports tool use, function calling, and vision.
Editorial notes
Claude 3.5 Haiku is Anthropic's lightweight multimodal model with vision support, a 200K context window, and solid instruction-following, making it a capable option for content tasks on a budget. However, its reasoning and coding benchmarks are modest, limiting its appeal for more demanding business workflows.
Assessed April 23, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
Claude 3.5 Haiku offers enhanced speed, coding accuracy, and tool use. Engineered to excel in real-time applications, it delivers the quick response times essential for dynamic, interactive workloads.
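The tool use described above can be exercised through OpenRouter's OpenAI-compatible chat endpoint. A minimal sketch of a function-calling request payload; the `get_weather` tool is a hypothetical example, but real tool definitions follow the same JSON Schema shape:

```python
import json

# Sketch of a tool-use (function-calling) request payload for Claude 3.5 Haiku
# via OpenRouter's OpenAI-compatible chat API. `get_weather` is a hypothetical
# tool used only for illustration.
payload = {
    "model": "anthropic/claude-3.5-haiku",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris right now?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

If the model decides to call the tool, the response carries a `tool_calls` entry whose arguments your code executes before sending the result back in a follow-up message.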
Capabilities
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Scores are charted across three panels: Intelligence, Technical, and Content. Benchmark data from Artificial Analysis and Hugging Face.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | anthropic/claude-3.5-haiku |
| Provider | anthropic |
| Model Family | Claude 3.5 |
| Release Date | November 4, 2024 |
| Context Length | 200,000 tokens |
| Max Completion | 8,192 tokens |
| Status | Active |
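The table above gives everything needed to address the model programmatically. A minimal sketch of a chat request against OpenRouter's chat-completions endpoint, assuming an `OPENROUTER_API_KEY` environment variable; the network call is guarded so the payload can be inspected offline:

```python
import json
import os
import urllib.request

# Build a chat-completions request for Claude 3.5 Haiku via OpenRouter,
# using the OpenRouter ID and limits from the Model Information table.
URL = "https://openrouter.ai/api/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
    "Content-Type": "application/json",
}
payload = {
    "model": "anthropic/claude-3.5-haiku",
    "messages": [{"role": "user", "content": "Summarize the Pareto principle in one line."}],
    "max_tokens": 1024,  # must stay within the 8,192-token completion cap
}

# Only send the request when a key is actually configured.
if os.environ.get("OPENROUTER_API_KEY"):
    req = urllib.request.Request(
        URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Prompts up to the 200,000-token context window are accepted, but the completion itself is capped at 8,192 tokens, so `max_tokens` above that value is wasted.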
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.80 | $0.000800 |
| Output | $4.00 | $0.004000 |
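Per-request cost follows directly from the table. A small helper, assuming the listed rates:

```python
# Estimate the USD cost of a single request at Claude 3.5 Haiku's listed
# rates: $0.80 per 1M input tokens, $4.00 per 1M output tokens.
INPUT_PER_M = 0.80
OUTPUT_PER_M = 4.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost for one request at the listed per-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A 10K-token prompt with a 2K-token completion costs about $0.016.
print(f"${request_cost(10_000, 2_000):.4f}")
```

Note the 5x input/output price asymmetry: for long completions, output tokens dominate the bill.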
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 25, 2026 8:38 pm