Google: Gemini 2.0 Flash
Google's Gemini 2.0 Flash offers an exceptional 1M token context window, multimodal support across text, image, audio, and video, and tool use at a very low price. Its reasoning benchmarks are modest, but its breadth of modality support and cost-efficiency make it a practical choice for content and automation workflows.
Assessment date: March 12, 2026
Our methodology weighs pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated as new models are released.
Gemini 2.0 Flash offers a significantly faster time to first token (TTFT) than Gemini 1.5 Flash, while maintaining quality on par with larger models like Gemini 1.5 Pro. It introduces notable improvements in multimodal understanding, coding, complex instruction following, and function calling, which together enable more seamless and robust agentic experiences.
Capabilities
Architecture
| Property | Value |
|---|---|
| Modality | Text + Image + File + Audio + Video → Text |
| Tokenizer | Gemini |
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.10 | $0.000100 |
| Output | $0.40 | $0.000400 |
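At the rates in the table above, per-request cost is simple arithmetic: tokens divided by one million, times the per-million rate for each token type. A minimal sketch (the function name and example token counts are illustrative, not part of any official SDK):

```python
# Listed Gemini 2.0 Flash rates, USD per 1M tokens.
INPUT_PER_M = 0.10
OUTPUT_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one request at the listed per-million rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a 100k-token prompt with a 2k-token reply costs about $0.0108.
print(f"${estimate_cost(100_000, 2_000):.4f}")
```

Even a prompt that fills a large fraction of the 1M-token context stays around ten cents of input cost at these rates, which is what makes the model attractive for long-document workflows.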
Live Performance
Live endpoint metrics are refreshed every 30 minutes.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 13, 2026 7:52 pm