Meta: Llama 3.2 11B Vision Instruct
Meta's Llama 3.2 11B Vision model brings multimodal capability at a very low price, but its benchmark scores are among the weakest in the current landscape, limiting its usefulness to simple vision-assisted tasks rather than substantive business applications.
Assessment date: April 4, 2026
Our methodology weighs pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
Llama 3.2 11B Vision is an 11-billion-parameter multimodal model designed to handle tasks that combine visual and textual data, such as image captioning and visual question answering.
Capabilities
Architecture
| Attribute | Value |
|---|---|
| Modality | Text + Image → Text |
| Tokenizer | Llama3 |
| Instruct Type | llama3 |
| Parameters | 11B |
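As a concrete illustration of the Text + Image → Text modality and the llama3 instruct format, the sketch below captions a local image with the Hugging Face `transformers` library. It is a minimal sketch, assuming a `transformers` release with Mllama support (≥ 4.45), `torch`, `Pillow`, `accelerate`, and approved access to the gated `meta-llama/Llama-3.2-11B-Vision-Instruct` repository; the image path and prompt are placeholders.

```python
# Minimal image-captioning sketch for Llama 3.2 11B Vision Instruct.
# Assumes: transformers >= 4.45 (Mllama support), torch, Pillow,
# accelerate, and access to the gated meta-llama repo on Hugging Face.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

MODEL_ID = "meta-llama/Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

image = Image.open("example.jpg")  # placeholder path

# The llama3 instruct format is applied via the chat template;
# the image is referenced with a {"type": "image"} content part.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image, prompt, add_special_tokens=False, return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```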
Performance Indices
[Performance index chart; source: Artificial Analysis.]
Benchmark Scores
[Benchmark scores across Intelligence, Technical, and Content categories; data from Artificial Analysis and Hugging Face.]
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | meta-llama/llama-3.2-11b-vision-instruct |
| Provider | meta-llama |
| Model Family | Llama 3 |
| Release Date | September 25, 2024 |
| Context Length | 131,072 tokens |
| Max Completion | 16,384 tokens |
| Status | Active |
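Because the model is served through OpenRouter, which exposes an OpenAI-compatible chat completions endpoint, a request can be issued with the standard `openai` Python client pointed at OpenRouter's base URL. The sketch below is a minimal example, assuming an `OPENROUTER_API_KEY` environment variable; the prompt and image URL are placeholders.

```python
# Minimal sketch: querying Llama 3.2 11B Vision Instruct through
# OpenRouter's OpenAI-compatible API. Assumes the `openai` package
# and an OPENROUTER_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    # placeholder URL
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    max_tokens=1024,  # must stay within the 16,384-token completion cap
)
print(response.choices[0].message.content)
```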
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.25 | $0.00025 |
| Output | $0.25 | $0.00025 |
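To make the unit conversion explicit: $0.25 per 1M tokens is $0.25 / 1,000 = $0.00025 per 1K tokens. The helper below estimates a request's cost from token counts; the function name and example counts are illustrative, not part of any published API.

```python
# Illustrative cost estimate from the per-1M-token prices above.
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.25  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (
        input_tokens * INPUT_PRICE_PER_M / 1_000_000
        + output_tokens * OUTPUT_PRICE_PER_M / 1_000_000
    )

# Example: a 2,000-token prompt (including image tokens) with a
# 500-token completion costs about $0.000625.
print(f"${estimate_cost(2_000, 500):.6f}")
```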
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 15, 2026 8:53 pm