Google: Gemini 2.0 Flash Lite
Analysis Summary
Google: Gemini 2.0 Flash Lite sits in the Specialist tier on our leaderboard, ranked #125 of 523 published models on overall intelligence. At $0.075 input and $0.300 output per 1M tokens, it is among the most affordable models on the market. It offers an exceptionally large context window suited to long-document workflows and supports tool use, function calling, and vision.
Editorial notes
Google's Gemini 2.0 Flash Lite is an exceptionally affordable multimodal model with a 1M token context window, vision support, and tool/function calling at just $0.075/1M input tokens. Its reasoning and coding capabilities are modest, but for high-volume, cost-sensitive content tasks it represents outstanding value.
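Since the model is typically reached through an OpenAI-compatible chat endpoint (e.g. via OpenRouter, our live-metrics source), a request is just a JSON payload naming the model and the messages. The sketch below builds such a payload; the model slug `google/gemini-2.0-flash-lite-001` and the endpoint URL are assumptions and may differ from the provider's current values.

```python
# Minimal sketch of an OpenAI-compatible chat request for this model.
# Assumed values: the model slug and the OpenRouter endpoint URL below.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed
MODEL_SLUG = "google/gemini-2.0-flash-lite-001"                   # assumed

def build_request(prompt: str) -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "model": MODEL_SLUG,
        "messages": [{"role": "user", "content": prompt}],
    }

# The payload would be POSTed to OPENROUTER_URL with an
# "Authorization: Bearer <api key>" header; the reply text sits at
# response["choices"][0]["message"]["content"] in the OpenAI schema.
payload = build_request("Summarize this contract in three bullet points.")
```

Tool/function calling follows the same schema: a `tools` array of function declarations is added to the payload, and the model may answer with a `tool_calls` entry instead of plain text.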
Assessed April 23, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
Gemini 2.0 Flash Lite offers a significantly faster time to first token (TTFT) than Gemini Flash 1.5, while maintaining quality on par with larger models like Gemini Pro 1.5.
Capabilities
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Intelligence
Technical
Content
Benchmark data from Artificial Analysis and Hugging Face
How does Google: Gemini 2.0 Flash Lite stack up?
Compare side-by-side with other specialist models.
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.075 | $0.000075 |
| Output | $0.30 | $0.000300 |
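To make the rates in the table concrete, the snippet below estimates the dollar cost of a single request from its token counts, using the listed prices of $0.075 per 1M input tokens and $0.30 per 1M output tokens.

```python
# Per-1M-token rates from the pricing table above.
INPUT_PER_M = 0.075   # dollars per 1M input tokens
OUTPUT_PER_M = 0.30   # dollars per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 10,000-token prompt with a 1,000-token reply
print(f"${estimate_cost(10_000, 1_000):.6f}")  # $0.001050
```

At these rates, even a million such requests would cost on the order of a thousand dollars, which is why the model suits high-volume, cost-sensitive workloads.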
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Leaderboard Categories
External Resources
Explore Related Models
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 24, 2026 12:17 pm