Anthropic: Claude Opus 4.7
Claude Opus 4.7 from Anthropic is a newly announced model with no published benchmark data, so a conservative score applies until performance results are available. It supports vision, tool use, and function calling with a 1M-token context window, suggesting strong capability breadth once verified.
Assessment date: April 16, 2026
Our methodology takes into account a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
Opus 4.7 is the next generation of Anthropic's Opus family, built for long-running, asynchronous agents. Building on the coding and agentic strengths of Opus 4.6, it delivers stronger performance on…
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text + Image → Text |
| Tokenizer | Claude |
Performance Indices
Source: Artificial Analysis
This model was released recently. Independent benchmark evaluations are typically completed within days of release; these figures are preliminary and likely to be updated as testing is finalized.
Benchmark Scores
Intelligence
Technical
Content
Benchmark data from Artificial Analysis and Hugging Face
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | `anthropic/claude-opus-4.7` |
| Provider | anthropic |
| Release Date | April 16, 2026 |
| Context Length | 1,000,000 tokens |
| Max Completion | 128,000 tokens |
| Status | Active |
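The OpenRouter ID above is what you pass as the `model` field in a chat-completions request. A minimal sketch of such a request body, assuming OpenRouter's standard chat-completions payload shape (the prompt text and the `max_tokens` value are placeholders):

```python
import json

# Build a chat-completions request body for this model.
# The "model" value is the OpenRouter ID from the table above;
# "max_tokens" must stay within the 128,000-token completion cap.
payload = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [
        {"role": "user", "content": "Summarize this document."}
    ],
    "max_tokens": 1024,
}

body = json.dumps(payload)  # serialized JSON body for the POST request
```

This body would then be sent to OpenRouter's chat-completions endpoint with an `Authorization: Bearer <key>` header; the endpoint routes the request to an Anthropic-backed provider based on the model ID.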
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $5.00 | $0.005000 |
| Output | $25.00 | $0.025000 |
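The listed rates make per-request cost a simple weighted sum. A quick sketch of the arithmetic, using the per-1M-token prices from the table (the token counts in the example are hypothetical):

```python
# Per-1M-token rates from the pricing table above (USD).
INPUT_PER_M = 5.00
OUTPUT_PER_M = 25.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion
# costs 10,000 * $5/1M + 2,000 * $25/1M = $0.05 + $0.05 = $0.10.
```

Note that output tokens cost 5x input tokens at these rates, so long completions dominate the bill even for large prompts.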
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 22, 2026 11:25 pm