DeepSeek: R1 0528
DeepSeek's R1 0528 is a reasoning-focused model with a large context window and tool/function calling support, but its agentic and coding benchmark scores are low and its overall benchmark performance is modest. Accessibility considerations for Western businesses also apply, which limits its recommended use for UK and European teams.
Assessment date: April 4, 2026
Our methodology takes into account a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
The May 28th update to the original DeepSeek R1. Performance is on par with OpenAI o1, but the model is open-source and exposes its reasoning tokens in full. It is 671B parameters in size, with 37B active.
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | DeepSeek |
| Instruct Type | deepseek-r1 |
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Intelligence
Technical
Content
Benchmark data from Artificial Analysis and Hugging Face
How does DeepSeek: R1 0528 stack up?
Compare side-by-side with other efficient models.
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.50 | $0.000500 |
| Output | $2.15 | $0.002150 |
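As a quick sanity check on the rates above, per-request cost can be estimated with a few lines of Python. The token counts in the example are illustrative, not from any real workload:

```python
# Estimate request cost for DeepSeek R1 0528 from the listed rates.
INPUT_PER_M = 0.50    # USD per 1M input tokens (from the table above)
OUTPUT_PER_M = 2.15   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Hypothetical request: 4,000 prompt tokens, 1,500 completion tokens.
# Note that reasoning models bill their reasoning tokens as output.
print(round(request_cost(4_000, 1_500), 6))  # → 0.005225
```

Because reasoning tokens are billed at the output rate, long chains of thought can dominate the cost of a request even when the visible answer is short.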
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
External Resources
Explore Related Models
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 15, 2026 8:53 pm