Meta: Llama 3.3 70B Instruct
Meta's Llama 3.3 70B Instruct is a widely adopted open-weight model with tool and function calling support at a very competitive price point, though no benchmark data is available in this listing. It is a practical choice for businesses that want a deployable, cost-effective model for general text tasks.
Assessment date: March 12, 2026
Our methodology takes into account a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model with 70B parameters (text in/text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many available open-source and closed chat models on common industry benchmarks. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. Model Card
Capabilities
Architecture
| Property | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Llama3 |
| Instruct Type | llama3 |
| Parameters | 70B |
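The "llama3" instruct type refers to the chat template the tokenizer applies, which wraps each turn in header and end-of-turn special tokens. A minimal sketch of that assembly, assuming the standard Llama 3 chat template (verify the exact tokens against the official model card; in practice the tokenizer's own template should be used):

```python
# Special tokens of the Llama 3 chat format (per the public model card).
BOS = "<|begin_of_text|>"
START = "<|start_header_id|>"
END = "<|end_header_id|>"
EOT = "<|eot_id|>"

def build_prompt(messages):
    """Assemble a raw prompt string from [{'role': ..., 'content': ...}] turns."""
    parts = [BOS]
    for m in messages:
        parts.append(f"{START}{m['role']}{END}\n\n{m['content']}{EOT}")
    # Open an assistant header so the model generates the next turn.
    parts.append(f"{START}assistant{END}\n\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

Serving stacks normally apply this template automatically; the sketch only illustrates what the tokenizer produces under the hood.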
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.10 | $0.000100 |
| Output | $0.32 | $0.000320 |
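The per-1M rates above make request costs easy to estimate. A back-of-the-envelope sketch using the listed prices (actual provider billing may differ, e.g. for cached or batched tokens):

```python
# USD per 1M tokens, taken from the pricing table above.
INPUT_PER_M = 0.10
OUTPUT_PER_M = 0.32

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
cost = estimate_cost(2000, 500)  # ≈ $0.00036
```

At these rates, output tokens cost roughly 3.2× input tokens, so long completions dominate the bill.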
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
External Resources
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 13, 2026 7:52 pm