Meta: Llama 4 Scout
Analysis Summary
Meta: Llama 4 Scout sits in the Efficient tier on our leaderboard, ranked #190 of 551 published models on overall intelligence. At $0.08 input and $0.30 output per 1M tokens, it is among the least expensive models on the market. It offers a generous context window for extended reasoning and code review, and supports tool use, function calling, and vision.
Editorial notes
Meta Llama 4 Scout delivers vision, tool use, and a 327K context at very low pricing, with reasonable GPQA and MMLU scores, though coding and agentic benchmarks are limited.
Assessed May 5, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a 109-billion-parameter total spread across 16 experts. It supports native multimodal input.
Capabilities, performance indices, and benchmark scores (Intelligence, Technical, Content) are presented as charts; benchmark data from Artificial Analysis and Hugging Face.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | meta-llama/llama-4-scout |
| Provider | meta-llama |
| Model Family | Llama 4 |
| Release Date | April 5, 2025 |
| Context Length | 327,680 tokens |
| Max Completion | 16,384 tokens |
| Status | Active |
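The OpenRouter ID above is what you pass as the `model` field when calling OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the standard `https://openrouter.ai/api/v1/chat/completions` URL and an `OPENROUTER_API_KEY` environment variable (both conventions of OpenRouter's published API, not taken from this page):

```python
import json
import os
import urllib.request

# Build a chat request for the model listed in the table above.
payload = {
    "model": "meta-llama/llama-4-scout",
    "messages": [
        {"role": "user", "content": "Summarize this function in one sentence."}
    ],
    # Must stay within the 16,384-token completion cap from the table.
    "max_tokens": 512,
}
body = json.dumps(payload)

# Only send if a key is configured; otherwise just show the payload.
api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
else:
    print(body)
```

Because the endpoint is OpenAI-compatible, the same payload works with any OpenAI-style client library by pointing its base URL at OpenRouter.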
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.08 | $0.000080 |
| Output | $0.30 | $0.000300 |
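The per-1M-token rates in the table make per-request cost a simple weighted sum. A small sketch (the token counts in the example are illustrative, not from this page):

```python
# Rates from the pricing table above, in USD per 1M tokens.
INPUT_PER_M = 0.08
OUTPUT_PER_M = 0.30

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at Llama 4 Scout's rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 10,000-token prompt with a 1,000-token reply:
# 10,000 * $0.08/1M + 1,000 * $0.30/1M = $0.0011
print(f"${request_cost(10_000, 1_000):.6f}")
```

Note that output tokens cost nearly 4x input tokens here, so long completions dominate the bill even for large prompts.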
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Leaderboard Categories
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: May 11, 2026 8:38 pm