Meta: Llama 4 Maverick
Analysis Summary
Meta: Llama 4 Maverick sits in the Efficient tier on our leaderboard, ranked #155 of 551 published models on overall intelligence. At $0.150 input and $0.600 output per 1M tokens, it is among the most affordable models on the market. It offers an exceptionally large context window suited to long-document workflows and supports vision.
Editorial notes
Llama 4 Maverick from Meta brings vision support and a 1M token context at very low pricing, but its intelligence and agentic indices are weak, limiting its suitability for complex business tasks.
Assessed May 5, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability, and are refreshed as new models launch.
Performance Profile
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.
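To make the "128 experts, 17B active parameters" figure concrete, here is a minimal sketch of top-k mixture-of-experts routing: a gate scores all experts per token, but only the top-k actually run. The dimensions, the `TOP_K` value, and the gating details are illustrative assumptions, not Meta's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 128   # total experts, as in Llama 4 Maverick (128E)
TOP_K = 1         # experts activated per token (assumed for illustration)
D_MODEL = 16      # toy hidden size, far smaller than the real model

def route(token: np.ndarray, gate_w: np.ndarray, experts: list) -> np.ndarray:
    """Score all experts, run only the top-k, and mix their outputs."""
    scores = token @ gate_w                    # shape (N_EXPERTS,)
    top = np.argsort(scores)[-TOP_K:]          # indices of selected experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over selected experts only
    return sum(w * experts[i](token) for w, i in zip(weights, top))

gate_w = rng.standard_normal((D_MODEL, N_EXPERTS))
# Each "expert" is just a toy linear map here.
experts = [
    (lambda W: (lambda x: x @ W))(rng.standard_normal((D_MODEL, D_MODEL)))
    for _ in range(N_EXPERTS)
]

out = route(rng.standard_normal(D_MODEL), gate_w, experts)
print(out.shape)  # (16,)
```

Because only `TOP_K` of the 128 experts execute per token, compute scales with the active parameter count (the 17B figure) rather than the full parameter count.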
Capabilities
Performance indices sourced from Artificial Analysis.
Benchmark Scores (Intelligence, Technical, Content)
Benchmark data from Artificial Analysis and Hugging Face
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | meta-llama/llama-4-maverick |
| Provider | meta-llama |
| Model Family | Llama 4 |
| Release Date | April 5, 2025 |
| Context Length | 1,048,576 tokens |
| Max Completion | 16,384 tokens |
| Status | Active |
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.15 | $0.000150 |
| Output | $0.60 | $0.000600 |
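The per-1M-token rates above translate directly into per-request costs. A minimal sketch of that arithmetic, using the listed prices (the token counts in the example are made up):

```python
INPUT_PER_M = 0.15    # USD per 1M input tokens, from the table above
OUTPUT_PER_M = 0.60   # USD per 1M output tokens, from the table above

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one call at per-1M-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 20k-token prompt producing a 1k-token reply:
cost = request_cost(20_000, 1_000)
print(f"${cost:.4f}")  # $0.0036
```

At these rates, even filling the full 1,048,576-token context on input costs roughly $0.16, which is what places this model in the Efficient tier.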
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: May 11, 2026 8:38 pm