Goliath 120B
Goliath 120B is a 2023-era community model merge with no published benchmark data and a small 6K-token context window, which limits its utility for business workflows. Without performance evidence, it cannot be recommended for professional use.
Assessment date: March 14, 2026
Our methodology weighs a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
A large LLM created by merging two fine-tuned Llama 2 70B models, Xwin and Euryale, into a single 120B model. Credits to:
- @chargoddard for developing mergekit, the framework used to merge the models.
- @Undi95 for helping with the merge ratios.
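A merge of this kind can be expressed as a mergekit "passthrough" configuration, which stacks layer slices from the parent models into one deeper network. The sketch below is illustrative only: the model IDs and layer ranges are assumptions, not the actual recipe used to build Goliath.

```yaml
# Hypothetical mergekit passthrough config (not Goliath's actual recipe):
# interleave layer slices from two fine-tuned Llama 2 70B parents
# to produce a single, deeper ~120B model.
slices:
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1   # assumed parent A
        layer_range: [0, 40]
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B  # assumed parent B
        layer_range: [20, 60]
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1
        layer_range: [40, 80]
merge_method: passthrough
dtype: float16
```

Passthrough merging performs no weight averaging; it simply concatenates the chosen layer slices, which is why the resulting model has more parameters than either parent.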
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Llama2 |
| Instruct Type | airoboros |
| Parameters | 120B |
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $3.75 | $0.003750 |
| Output | $7.50 | $0.007500 |
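The per-request cost follows directly from the rates above. A minimal sketch (the function name and token counts are illustrative):

```python
# Estimate request cost for Goliath 120B at the listed rates.
INPUT_USD_PER_M = 3.75   # USD per 1M input tokens
OUTPUT_USD_PER_M = 7.50  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request, rounded to 6 decimal places."""
    cost = (input_tokens / 1_000_000) * INPUT_USD_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_USD_PER_M
    return round(cost, 6)

# A 4,000-token prompt with a 1,000-token completion:
# 0.004 * 3.75 + 0.001 * 7.50 = 0.015 + 0.0075 = 0.0225 USD
```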
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
External Resources
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm