NVIDIA: Llama 3.3 Nemotron Super 49B V1.5
Analysis Summary
NVIDIA: Llama 3.3 Nemotron Super 49B V1.5 sits in the Efficient tier on our leaderboard, ranked #166 of 525 published models on overall intelligence. At $0.10 per 1M input tokens and $0.40 per 1M output tokens, it is among the least expensive on the market. It offers a 128K-token context window and supports tool use, function calling, and reasoning.
Editorial notes
NVIDIA's Llama 3.3 Nemotron Super 49B V1.5 is a cost-effective open-weight model with tool use support, but its intelligence and coding benchmarks are notably weak for a 49B parameter model, limiting its usefulness for demanding business tasks. It may suit lightweight inference workloads where cost is the primary concern.
Assessed April 23, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability, and are refreshed as new models launch.
Performance Profile
Llama-3.3-Nemotron-Super-49B-v1.5 is a 49B-parameter, English-centric reasoning/chat model derived from Meta's Llama-3.3-70B-Instruct with a 128K context. It's post-trained for agentic workflows (RAG, tool calling) via SFT across math, code, science, and…
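Since the model is post-trained for tool calling, a request to it would typically carry an OpenAI-style `tools` array. Below is a minimal sketch of such a request payload; the `get_weather` function and its schema are hypothetical, invented purely for illustration.

```python
import json

MODEL_ID = "nvidia/llama-3.3-nemotron-super-49b-v1.5"

# OpenAI-style chat-completions payload with one hypothetical tool attached.
payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, not a real API
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

If the model decides to call the tool, the response would contain a `tool_calls` entry naming `get_weather` with JSON arguments matching the schema above.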
Benchmark data from Artificial Analysis and Hugging Face
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | nvidia/llama-3.3-nemotron-super-49b-v1.5 |
| Provider | nvidia |
| Release Date | October 10, 2025 |
| Context Length | 131,072 tokens |
| Max Completion | 16,384 tokens |
| Status | Active |
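With the OpenRouter ID above, the model can be reached through OpenRouter's OpenAI-compatible chat-completions endpoint. The following is a minimal sketch using only the standard library; it assumes an `OPENROUTER_API_KEY` environment variable and only sends the request when that key is present.

```python
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ.get("OPENROUTER_API_KEY")  # assumed to be set by the caller

body = {
    "model": "nvidia/llama-3.3-nemotron-super-49b-v1.5",
    "messages": [{"role": "user", "content": "Summarize RAG in one sentence."}],
    "max_tokens": 256,  # must stay within the 16,384-token completion cap
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Only fire the request when a key is actually configured.
if API_KEY:
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

The same payload works with any OpenAI-compatible client by pointing its base URL at OpenRouter.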
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.10 | $0.000100 |
| Output | $0.40 | $0.000400 |
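The listed rates make per-request cost estimates straightforward. A back-of-the-envelope helper (the function name and the sample token counts are illustrative, not from the source):

```python
# Listed rates in USD per 1M tokens.
INPUT_PER_M = 0.10
OUTPUT_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-1M-token rates."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# e.g. a long-context request: 100k tokens in, 16k tokens out
# (output stays under the 16,384-token completion cap)
print(f"${estimate_cost(100_000, 16_000):.4f}")  # → $0.0164
```

At these rates, even a request that fills most of the 131,072-token context costs only a few cents.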
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 25, 2026 8:38 pm