AllenAI: Olmo 3.1 32B Think
AllenAI's Olmo 3.1 32B Think posts strong math and instruction-following scores relative to its size and price, but its overall intelligence and coding capability remain limited — a niche open model better suited to research than general business deployment.
Assessment date: April 16, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
Olmo 3.1 32B Think is a large-scale, 32-billion-parameter model designed for deep reasoning, complex multi-step logic, and advanced instruction following. Building on the Olmo 3 series, version 3.1 delivers refined reasoning behavior and stronger performance across demanding evaluations and nuanced conversational tasks. Developed by Ai2 under the Apache 2.0 license, Olmo 3.1 32B Think continues the Olmo initiative’s commitment to openness, providing full transparency across model weights, code, and training methodology.
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Other |
| Parameters | 32B |
Performance Indices
Benchmark scores across the Intelligence, Technical, and Content categories are sourced from Artificial Analysis and Hugging Face.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | allenai/olmo-3.1-32b-think |
| Provider | allenai |
| Release Date | December 16, 2025 |
| Context Length | 65,536 tokens |
| Max Completion | 65,536 tokens |
| Status | Active |
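The OpenRouter ID above is what you pass as the `model` field when calling OpenRouter's OpenAI-compatible chat-completions endpoint. A minimal sketch follows, assuming an `OPENROUTER_API_KEY` environment variable (a placeholder name, not from the source); the endpoint URL and payload shape follow OpenRouter's documented API.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat-completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

# Request body: the model slug comes from the table above. max_tokens is
# an illustrative value, well under the 65,536-token completion limit.
payload = {
    "model": "allenai/olmo-3.1-32b-think",
    "messages": [
        {"role": "user", "content": "Prove that the sum of two even integers is even."}
    ],
    "max_tokens": 1024,
}

def ask(api_key: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY")  # hypothetical env var
    if key:
        print(ask(key))
```

Because the model is a "Think" variant, replies typically include a long reasoning trace before the final answer, so budget `max_tokens` accordingly.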
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.15 | $0.000150 |
| Output | $0.50 | $0.000500 |
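At these rates, per-request cost is simple arithmetic: tokens times the per-million rate. A small sketch using the listed prices (the example token counts are illustrative, not from the source):

```python
# Listed rates for Olmo 3.1 32B Think, USD per 1M tokens.
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt producing a 10,000-token reasoning trace:
print(f"${request_cost(2_000, 10_000):.4f}")  # $0.0053
```

Note that for a reasoning model, output (including the thinking trace) usually dominates the bill, since it is both longer and priced higher than input.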
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 16, 2026 8:54 pm