EleutherAI: Llemma 7B
Review
EleutherAI's Llemma 7B is a mathematics-focused research model. With no benchmark data available in this context, a very small context window, and relatively high pricing for its size, it is not suitable for general business use and is best left to academic or research settings.
Assessment date: April 4, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
Performance Profile
Llemma 7B is a language model specialized for mathematics. It was initialized from Code Llama 7B weights and further trained on the Proof-Pile-2 dataset for 200B tokens. Llemma models are particularly strong at chain-of-thought mathematical reasoning and at using computational tools for mathematics, such as Python and formal theorem provers.
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Other |
| Instruct Type | code-llama |
| Parameters | 7B |
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.80 | $0.000800 |
| Output | $1.20 | $0.001200 |
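The per-request cost implied by the rates above can be computed directly from the per-million-token prices. A minimal sketch, assuming the listed rates; the token counts in the example are hypothetical:

```python
# Llemma 7B pricing from the table above (USD per 1M tokens).
INPUT_RATE = 0.80
OUTPUT_RATE = 1.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
cost = request_cost(2_000, 500)
print(f"${cost:.6f}")  # 2000*0.80/1e6 + 500*1.20/1e6 = $0.002200
```

At these rates, output tokens cost 1.5x as much as input tokens, so completion length dominates the bill for generation-heavy workloads.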
External Resources
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 4, 2026 8:54 pm