Meta: Llama 3.1 405B (base)
Meta's Llama 3.1 405B base model offers broad knowledge coverage and reasonable benchmark scores, but as a pre-trained base model with no instruction tuning it lacks the agentic and tool-use capabilities businesses need, and its pricing is high relative to its practical output quality.
Assessment date: April 16, 2026
Our methodology weighs a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
Meta's Llama 3.1 family launched in a range of sizes and variants; this is the pre-trained 405B base version. In human evaluations it has demonstrated strong performance against leading closed-source models. Usage of this model is subject to Meta's Acceptable Use Policy.
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Llama3 |
| Instruct Type | none |
| Parameters | 405B |
Performance Indices
Source: Artificial Analysis
Benchmark Scores
Scores span three categories: Intelligence, Technical, and Content. Benchmark data from Artificial Analysis and Hugging Face.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | meta-llama/llama-3.1-405b |
| Provider | meta-llama |
| Model Family | Llama 3 |
| Release Date | August 2, 2024 |
| Context Length | 32,768 tokens |
| Max Completion | 32,768 tokens |
| Status | Active |
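As a minimal sketch of how the OpenRouter ID above is used in practice, the snippet below builds a request payload for OpenRouter's OpenAI-compatible completions endpoint. The endpoint path, header scheme, and payload shape are assumptions here; check OpenRouter's current API documentation before relying on them. Note that because this is a base model, it takes a raw text prompt rather than chat messages.

```python
# Sketch of querying this model through OpenRouter's OpenAI-compatible
# completions endpoint (endpoint path and payload shape are assumptions;
# verify against OpenRouter's docs).
OPENROUTER_COMPLETIONS_URL = "https://openrouter.ai/api/v1/completions"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a completions payload using the OpenRouter ID from the table above."""
    return {
        "model": "meta-llama/llama-3.1-405b",
        "prompt": prompt,          # base model: raw continuation prompt, no chat roles
        "max_tokens": max_tokens,  # capped by the 32,768-token completion limit
    }

# To send it (API key is a placeholder, not a real credential):
#   import requests
#   resp = requests.post(
#       OPENROUTER_COMPLETIONS_URL,
#       headers={"Authorization": "Bearer <YOUR_OPENROUTER_KEY>"},
#       json=build_request("The capital of France is"),
#   )
```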
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $4.00 | $0.004000 |
| Output | $4.00 | $0.004000 |
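Since input and output tokens are billed at the same flat rate, estimating request cost is simple arithmetic. A small helper, using the per-1M-token prices from the table above (the function name and defaults are illustrative, not part of any API):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_per_m: float = 4.00,
                      output_per_m: float = 4.00) -> float:
    """Estimate request cost in USD from per-1M-token rates ($4.00/M for this model)."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# e.g. a 2,000-token prompt with an 800-token completion:
# estimate_cost_usd(2000, 800) -> 0.0112
```

The same rates expressed per 1K tokens ($0.004) follow directly by dividing by 1,000.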
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 16, 2026 8:54 pm