Anthropic: Claude Opus 4
Claude Opus 4 is a capable multimodal model from Anthropic with strong instruction following, vision support, and tool use — but it has been superseded by Claude Opus 4.5 and 4.6, which score significantly higher. At $15/$75 per million tokens it is also expensive relative to its current benchmark standing, making it a less compelling choice than Anthropic's newer flagships.
Assessment date: March 14, 2026
Our methodology weighs pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
At release, Claude Opus 4 was benchmarked as the world's best coding model, with sustained performance on complex, long-running tasks and agent workflows. It set new marks in software engineering, achieving leading results on SWE-bench (72.5%) and Terminal-bench (43.2%), and it supports extended agentic workflows, handling thousands of task steps continuously for hours without degradation.
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text + Image + File → Text |
| Tokenizer | Claude |
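The "Text + Image + File → Text" modality means a single request can mix image and text content blocks. A minimal sketch of such a request body, assuming the content-block shape of Anthropic's Messages API and a hypothetical model ID (sending it would additionally require the `anthropic` SDK and an API key):

```python
import base64

# Build a Messages API payload pairing an image with a text question.
# The dict layout mirrors Anthropic's documented content-block format;
# the model ID below is an assumption for illustration.
def build_image_prompt(image_bytes: bytes, question: str) -> dict:
    """Return a request body with one base64 image block and one text block."""
    return {
        "model": "claude-opus-4-20250514",  # assumed model ID
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": base64.b64encode(image_bytes).decode()}},
                {"type": "text", "text": question},
            ],
        }],
    }

payload = build_image_prompt(b"\x89PNG...", "What does this chart show?")
```

The image travels inline as base64 rather than by URL, so the same payload works for local files and generated images alike.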
Performance Indices & Benchmark Scores
Performance-index and benchmark data sourced from Artificial Analysis and Hugging Face.
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $15.00 | $0.0150 |
| Output | $75.00 | $0.0750 |
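At these rates, per-request cost is a straightforward function of token usage. A small sketch using the listed $15/$75 per million input/output tokens:

```python
# Estimate the USD cost of one Claude Opus 4 request from its token counts,
# using the list prices above ($15 / $75 per million input / output tokens).
INPUT_PER_MTOK = 15.00
OUTPUT_PER_MTOK = 75.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost for a single request at Opus 4 list prices."""
    return (input_tokens * INPUT_PER_MTOK
            + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply:
# 2000 * 15/1e6 + 500 * 75/1e6 = 0.03 + 0.0375 = 0.0675
print(f"${request_cost(2000, 500):.4f}")  # → $0.0675
```

Note that output tokens dominate: at a 5:1 output-to-input price ratio, a 500-token reply costs more than a 2,000-token prompt.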
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm