EleutherAI: Llemma 7B
EleutherAI's Llemma 7B is a specialised mathematics-focused model with no available benchmark data in this evaluation, a very small 4K context window, and limited general-purpose capability. It is not well-suited for typical business applications.
Assessment date: March 14, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released. Issues with our rankings? Contact us
Llemma 7B is a language model for mathematics. It was initialized from Code Llama 7B weights and trained on Proof-Pile-2 for 200B tokens. Llemma models are particularly strong at chain-of-thought mathematical reasoning and at using computational tools for mathematics, such as Python and formal theorem provers.
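Since Llemma is a base model with no instruction-tuned chat template, it is typically steered with plain-text few-shot examples. The sketch below shows one way this might look using the public `EleutherAI/llemma_7b` Hugging Face checkpoint; the prompt format is an assumption for illustration, not an official template, and the expensive model download is gated behind a flag.

```python
# Minimal sketch of prompting Llemma 7B for chain-of-thought math.
# Assumption: Llemma has no instruction template, so we steer it with a
# plain-text worked example. The prompt wording here is illustrative only.

def build_cot_prompt(problem: str) -> str:
    """Wrap a math problem in a simple few-shot chain-of-thought prefix."""
    example = (
        "Problem: What is 3 + 4?\n"
        "Solution: We add 3 and 4 to get 7. The answer is 7.\n\n"
    )
    return example + f"Problem: {problem}\nSolution:"

# Set to True to actually download and run the ~7B-parameter checkpoint.
RUN_MODEL = False

if RUN_MODEL:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("EleutherAI/llemma_7b")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/llemma_7b")

    prompt = build_cot_prompt("What is the derivative of x^2?")
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    print(tok.decode(out[0], skip_special_tokens=True))
else:
    print(build_cot_prompt("What is the derivative of x^2?"))
```

Keeping the whole prompt under the model's 4K-token window matters more here than with general-purpose models, since there is little room for long few-shot prefixes plus a multi-step solution.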
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Other |
| Instruct Type | code-llama |
| Parameters | 7B |
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.80 | $0.000800 |
| Output | $1.20 | $0.001200 |
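At these rates, the cost of a request is a simple linear function of input and output token counts. The helper below (names are my own, not an API) computes it from the per-1M prices in the table above; the per-1K column is just those prices divided by 1,000.

```python
# Sketch: estimating request cost from the listed Llemma 7B token prices.
PRICE_PER_M = {"input": 0.80, "output": 1.20}  # USD per 1M tokens (table above)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * PRICE_PER_M["input"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000

# Example: a 3,000-token prompt with a 1,000-token completion costs
# 3000 * 0.80 / 1e6 + 1000 * 1.20 / 1e6 = $0.0036.
print(f"${request_cost(3000, 1000):.4f}")  # → $0.0036
```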
External Resources
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm