Meta: Llama 4 Maverick
Meta's Llama 4 Maverick pairs a 1M-token context window with multimodal input, tool use, and function calling at a competitive price, making it well suited to long-document and content-heavy business workflows. Its raw reasoning scores are modest, but the breadth of capabilities and the value proposition are strong.
Assessment date: April 4, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released. Issues with our rankings? Contact us
Performance Profile
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.
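The key idea behind an MoE layer is that a router sends each token to only a small subset of the experts, so compute per token stays far below the total parameter count. Below is a minimal, illustrative NumPy sketch of top-k expert routing; the expert count matches Maverick's 128, but the hidden size, top-k value, and weights are toy placeholders, not Meta's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 128   # matches Maverick's 128 experts
TOP_K = 1           # illustrative; the real routing config is not listed here
D_MODEL = 16        # toy hidden size

# Toy experts: each "expert" is a single weight matrix standing in for an FFN.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02

def moe_layer(x):
    """Route each token to its top-k experts; only those experts run."""
    logits = x @ router_w                       # (tokens, NUM_EXPERTS)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(logits[t])[::-1][:TOP_K]        # highest-scoring experts
        gate = np.exp(logits[t, top] - logits[t, top].max())
        gate /= gate.sum()                               # softmax over selected experts
        for e, g in zip(top, gate):
            out[t] += g * (x[t] @ experts[e])            # only active experts compute
    return out

tokens = rng.standard_normal((4, D_MODEL))
y = moe_layer(tokens)
```

With TOP_K = 1, each token touches one expert's weights per layer, which is why a model with hundreds of billions of total parameters can run with only 17B active per forward pass.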
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text + Image → Text |
| Tokenizer | Llama4 |
Performance Indices
[Performance index chart; source: Artificial Analysis]
Benchmark Scores
[Benchmark score charts: Intelligence, Technical, Content]
Benchmark data from Artificial Analysis and Hugging Face
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | meta-llama/llama-4-maverick |
| Provider | meta-llama |
| Model Family | Llama 4 |
| Release Date | April 5, 2025 |
| Context Length | 1,048,576 tokens |
| Max Completion | 16,384 tokens |
| Status | Active |
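The OpenRouter ID above is what you pass as the `model` field when calling OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch using only the standard library is below; the prompt text and `max_tokens` value are illustrative, and you would supply your own `OPENROUTER_API_KEY`.

```python
import json
import os
import urllib.request

MODEL_ID = "meta-llama/llama-4-maverick"
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt, max_tokens=512):
    """Build an OpenAI-style chat payload addressed to this model."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send(payload, api_key):
    """POST the payload to OpenRouter and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("Summarize this contract in three bullet points.")
# send(payload, os.environ["OPENROUTER_API_KEY"])  # uncomment with a real key
```

The 1,048,576-token context length means a request's prompt plus completion must fit within that window, while `max_tokens` for the completion itself is capped at 16,384 on this endpoint.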
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.15 | $0.000150 |
| Output | $0.60 | $0.000600 |
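Since input and output tokens are billed at different rates, estimating a request's cost takes one line of arithmetic. A small helper using the listed prices:

```python
# Listed per-1M-token prices for Llama 4 Maverick on OpenRouter.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.60

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a long-document job: 800k input tokens, a max-length 16,384-token reply.
estimate = request_cost(800_000, 16_384)
```

At these rates, even a near-full-context request stays well under a dollar, which is the value proposition for long-document workflows.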
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 15, 2026 8:53 pm