Compare AI Models
| Metric | Value |
|---|---|
| Overview | |
| Provider | Cohere |
| Our Score | 35 |
| Performance Tier | Efficient |
| Best For | — |
| Released | Mar 2025 |
| Knowledge Cutoff | — |
| Dimension Scores | |
| Intelligence | 3.0 / 10 |
| Technical | 1.5 / 10 |
| Content | 3.5 / 10 |
| Value | 6.5 / 10 |
| Pricing | |
| Input / 1M tokens | $2.50 |
| Output / 1M tokens | $10.00 |
| Performance | |
| Context Window | 256K |
| Throughput (tok/s) | — |
| Latency (TTFT ms) | — |
| Parameters | — |
| Intelligence Indices (Artificial Analysis) | |
| Intelligence | 13.5 |
| Coding | 9.9 |
| Math | 13 |
| Agentic | 8 |
| Benchmarks | |
| MMLU-Pro | 0.7 |
| MMLU | — |
| GPQA Diamond | 0.5 |
| GPQA (HF) | — |
| LiveCodeBench | 0.3 |
| HumanEval | — |
| MATH-500 | 0.8 |
| AIME '25 | 0.1 |
| AIME | 0.1 |
| MATH (HF) | — |
| Humanity's Last Exam | 0 |
| SWE-bench | — |
| IFBench | 0.4 |
| IFEval | — |
| Capabilities | |
| Tool Use | ✗ No |
| Function Calling | ✗ No |
| Vision | ✗ No |
| Open Source | ✗ No |
| License | — |
Looking for the full rankings? View the AI Model Leaderboard or try the Price Calculator to estimate monthly costs.
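The pricing rows above ($2.50 per 1M input tokens, $10.00 per 1M output tokens) are enough to estimate a monthly bill without the Price Calculator. A minimal sketch, assuming hypothetical monthly usage figures (the function name and token counts below are illustrative, not from this page):

```python
# Estimate monthly API cost at the listed rates:
# $2.50 per 1M input tokens, $10.00 per 1M output tokens.

INPUT_PRICE_PER_M = 2.50    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 10.00  # USD per 1M output tokens

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost for one month's token usage."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 20M input tokens and 5M output tokens in a month
print(f"${monthly_cost(20_000_000, 5_000_000):.2f}")  # $100.00
```

Output tokens cost 4x as much as input tokens here, so output-heavy workloads (long generations, chat) dominate the bill even when prompts are large.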