NVIDIA: Nemotron Nano 12B 2 VL (free)

nvidia · Released Oct 28, 2025
Our Score: 30

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba's memory-efficient sequence modeling for significantly higher throughput and lower latency. The model accepts text, multi-image document, and video inputs and produces natural-language text outputs. It is trained on high-quality NVIDIA-curated synthetic datasets optimized for optical character recognition, chart reasoning, and multimodal comprehension. Nemotron Nano 2 VL achieves leading results on OCRBench v2 and scores roughly 74 on average across MMMU, MathVista, AI2D, OCRBench, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, surpassing prior open VL baselines. With Efficient Video Sampling (EVS), it handles long-form videos while reducing inference cost. Open weights, training data, and fine-tuning recipes are released under a permissive NVIDIA open license, with deployment supported across NeMo, NIM, and major inference runtimes.

Context Window: 128,000 tokens
Max Output: 128,000 tokens
Parameters: 12B

Capabilities

Tool Use · Function Calling · Vision
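As a sketch of how the tool-use capability is typically exercised, the payload below follows the OpenAI-compatible chat-completions convention that OpenRouter exposes; the tool name and schema (`get_chart_value`) are hypothetical illustrations, not part of NVIDIA's or OpenRouter's documented API.

```python
# Sketch: advertise one callable tool to the model in an
# OpenAI-style chat-completions payload. The tool definition
# here is a made-up example for illustration only.

def build_tool_call_request(user_prompt: str) -> dict:
    """Build a chat payload that offers the model one function it may call."""
    return {
        "model": "nvidia/nemotron-nano-12b-v2-vl:free",
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_chart_value",  # hypothetical example tool
                    "description": "Look up a data point from a chart by label.",
                    "parameters": {
                        "type": "object",
                        "properties": {"label": {"type": "string"}},
                        "required": ["label"],
                    },
                },
            }
        ],
    }

payload = build_tool_call_request("What is the 2024 revenue bar worth?")
```

If the model decides to use the tool, the response's message carries a `tool_calls` entry instead of plain text, which the caller executes and feeds back as a `tool`-role message.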

Architecture

Modality: Text + Image + Video → Text
Tokenizer: Other
Parameters: 12B

Model Information

OpenRouter ID: nvidia/nemotron-nano-12b-v2-vl:free
Provider: nvidia
Release Date: October 28, 2025
Context Length: 128,000 tokens
Max Completion: 128,000 tokens
Status: Active
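A minimal sketch of a vision request using the OpenRouter ID above, assuming OpenRouter's OpenAI-compatible chat-completions format; the image URL is a placeholder, and actually sending the request requires an OpenRouter API key.

```python
# Sketch: pair a text question with one image in a single user
# message, addressed to this model by its OpenRouter ID.

MODEL_ID = "nvidia/nemotron-nano-12b-v2-vl:free"

def build_vision_request(question: str, image_url: str) -> dict:
    """Build a multimodal chat payload with one text part and one image part."""
    return {
        "model": MODEL_ID,
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "Summarize the table in this document page.",
    "https://example.com/page.png",  # placeholder image URL
)
# To send: POST this JSON to https://openrouter.ai/api/v1/chat/completions
# with an "Authorization: Bearer <OPENROUTER_API_KEY>" header.
```

The same message shape extends to multi-image documents by appending additional `image_url` parts to the content list, within the 128,000-token context window.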

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Avg Uptime: 100%
Best Latency (TTFT): 1,524ms
Best Throughput: 54 tok/s
Active Endpoints: 1/1
Available via: Nvidia