OpenAI: gpt-oss-120b (free)

openai · Released Aug 5, 2025
Our Score: 65

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.

Context Window: 131,072 tokens
Max Output: 131,072 tokens
Parameters: 120B
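The model is served through an OpenAI-compatible chat-completions API, with the OpenRouter ID listed below under Model Information. A minimal sketch of assembling a request body follows; the exact shape of the `reasoning` field (the configurable reasoning depth mentioned above) is an assumption about the provider's parameter naming, so check the live API docs before relying on it.

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint (for reference;
# this sketch only builds the request body, it does not send it).
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble the JSON body for a single-turn request to gpt-oss-120b."""
    return {
        "model": "openai/gpt-oss-120b:free",          # OpenRouter ID from this page
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": effort},              # assumed knob for reasoning depth
        "max_tokens": 1024,                           # well within the 131,072-token max output
    }

body = build_request("Summarize MoE routing in two sentences.", effort="high")
print(json.dumps(body, indent=2))
```

Sending this body with an `Authorization: Bearer <key>` header to `API_URL` would exercise the model; the same payload shape works for any OpenAI-compatible client.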

Capabilities

Tool Use · Function Calling
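Function calling works through the standard OpenAI-compatible tool schema: the request carries a list of JSON Schema tool definitions, and the model replies with a tool call whose arguments conform to that schema. A hedged sketch, with a hypothetical `get_weather` tool for illustration:

```python
import json

# One tool definition in the OpenAI-compatible function-calling schema.
# The tool itself ("get_weather") is hypothetical, for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",              # hypothetical tool name
            "description": "Return current weather for a city.",
            "parameters": {                     # JSON Schema for the arguments
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# This list is sent alongside the messages in the request body; structured
# output generation uses the same JSON Schema machinery.
print(json.dumps(tools, indent=2))
```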

Architecture

Modality: Text → Text
Tokenizer: GPT
Parameters: 120B

Performance Indices

Source: Artificial Analysis

Intelligence Index: 33.3
Coding Index: 28.6
Math Index: 93.4

Benchmark Scores

GPQA Diamond (graduate-level scientific reasoning): 78.2%
HLE (Humanity's Last Exam): 18.5%
MMLU Pro (multi-task language understanding): 80.8%
LiveCodeBench (live coding evaluation): 87.8%
SciCode (scientific computing): 38.9%

Benchmark data from Artificial Analysis and Hugging Face

Model Information

OpenRouter ID: openai/gpt-oss-120b:free
Provider: openai
Release Date: August 5, 2025
Context Length: 131,072 tokens
Max Completion: 131,072 tokens
Status: Active

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Avg Uptime: 99.7%
Best Latency (TTFT): 575ms
Best Throughput: 94 tok/s
Active Endpoints: 1/1
Available via: OpenInference
