Anthropic: Claude Opus 4.7
Analysis Summary
Anthropic: Claude Opus 4.7 sits in the Frontier tier on our leaderboard, ranked #1 of 551 published models on overall intelligence. At $5.00 input and $25.00 output per 1M tokens, it is among the most expensive on the market. It offers an exceptionally large context window suited to long-document workflows and supports tool use, function calling, vision, and reasoning.
Editorial notes
Claude Opus 4.7 from Anthropic leads the field on intelligence, coding, and agentic benchmarks, with vision, tool use, a 1M token context, and best-in-class instruction following for client-facing workflows.
Assessed May 5, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
Opus 4.7 is the next generation of Anthropic's Opus family, built for long-running, asynchronous agents. Building on the coding and agentic strengths of Opus 4.6, it delivers stronger performance on…
Capabilities

Performance Indices
[Performance index charts; source: Artificial Analysis]

Benchmark Scores
[Benchmark score charts: Intelligence, Technical, Content]

Benchmark data from Artificial Analysis and Hugging Face.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | anthropic/claude-opus-4.7 |
| Provider | anthropic |
| Release Date | April 16, 2026 |
| Context Length | 1,000,000 tokens |
| Max Completion | 128,000 tokens |
| Status | Active |
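The OpenRouter ID above is the identifier you pass as the `model` field when calling OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch of the request payload, assuming the standard OpenRouter API shape (the API key and the prompt are placeholders, not from this page):

```python
import json

# Minimal OpenRouter chat-completions payload for this model.
# The model ID comes from the Model Information table; the endpoint
# and payload shape follow OpenRouter's OpenAI-compatible API.
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"

payload = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [
        {"role": "user", "content": "Summarize this contract in three bullets."}
    ],
    "max_tokens": 1024,  # must stay within the 128,000-token completion cap
}

headers = {
    "Authorization": "Bearer <OPENROUTER_API_KEY>",  # placeholder key
    "Content-Type": "application/json",
}

print(json.dumps(payload, indent=2))
```

Sending this payload as a POST body to the endpoint (with a real key) returns a standard chat-completion response; any HTTP client will do.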
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $5.00 | $0.005000 |
| Output | $25.00 | $0.025000 |
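The per-token rates above translate directly into a per-request cost estimate. A small sketch using the listed rates ($5.00 input / $25.00 output per 1M tokens); the token counts in the example are illustrative:

```python
# Cost estimate at Claude Opus 4.7's listed rates (USD per 1M tokens).
INPUT_PER_M = 5.00
OUTPUT_PER_M = 25.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a 20k-token prompt with a 2k-token completion:
print(f"${estimate_cost(20_000, 2_000):.2f}")  # → $0.15
```

Note that output tokens cost 5x input tokens here, so long completions dominate the bill even for large prompts.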
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: May 11, 2026 8:38 pm