OpenAI: gpt-oss-120b (free)
The free tier of OpenAI's gpt-oss-120b delivers strong maths and solid coding performance, with tool-use support, at no cost, making it an excellent option for budget-conscious developers. Its reasoning depth, however, is mid-tier compared with the top models on the market.
Assessment date: April 4, 2026
Our methodology weighs a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
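As a minimal sketch of how the configurable reasoning depth and native function calling described above are typically exercised, the snippet below builds an OpenAI-compatible chat-completions payload without sending it. The model slug, the `reasoning_effort` field name, and the `calculator` tool are assumptions for illustration; exact names vary by hosting provider.

```python
import json

# Sketch: an OpenAI-compatible chat-completions payload for gpt-oss-120b
# with a reasoning-depth setting and one function tool. Field names follow
# the OpenAI Chat Completions schema; the model slug and the exact
# reasoning-effort parameter name are assumptions, not verified values.
payload = {
    "model": "openai/gpt-oss-120b",   # provider-specific slug (assumed)
    "reasoning_effort": "high",       # configurable depth: low/medium/high (assumed name)
    "messages": [
        {"role": "user", "content": "What is 17 * 24?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "calculator",  # hypothetical tool, for illustration only
                "description": "Evaluate a basic arithmetic expression.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "expression": {"type": "string"}
                    },
                    "required": ["expression"],
                },
            },
        }
    ],
}

# Serialized body, ready to POST to a /v1/chat/completions endpoint.
body = json.dumps(payload)
print(len(body) > 0)
```

The point of the sketch is that tool use and reasoning depth are plain request-level switches: no model-specific SDK is required beyond an OpenAI-compatible client.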
Capabilities
Architecture
| Attribute | Detail |
| --- | --- |
| Modality | Text → Text |
| Tokenizer | GPT |
| Parameters | 117B total (5.1B active per forward pass) |
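The single-H100 claim above can be checked with back-of-envelope arithmetic: MXFP4 stores weights at roughly 4 bits (0.5 bytes) per parameter, ignoring quantization scales and runtime overhead, which are assumptions of this sketch.

```python
# Back-of-envelope check of why gpt-oss-120b fits on one 80 GB H100.
TOTAL_PARAMS = 117e9   # total parameters (from the model card)
ACTIVE_PARAMS = 5.1e9  # parameters active per forward pass
BYTES_PER_PARAM = 0.5  # 4-bit MXFP4 weights; scales/overhead ignored

weight_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
active_pct = ACTIVE_PARAMS / TOTAL_PARAMS * 100

print(f"quantized weights: ~{weight_gb:.1f} GB")              # ~58.5 GB, under 80 GB
print(f"active per token:  ~{active_pct:.1f}% of parameters") # ~4.4%
```

At roughly 58.5 GB of weights, the model leaves headroom on an 80 GB H100 for activations and KV cache, which is what makes the single-GPU deployment practical.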
Performance Indices
Performance index charts are sourced from Artificial Analysis.
Benchmark Scores
Intelligence and technical benchmark charts draw on data from Artificial Analysis and Hugging Face.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 4, 2026 8:54 pm