Sao10K: Llama 3.1 70B Hanami x1
Sao10K's Llama 3.1 70B Hanami x1 has no published benchmark data and a small 16K-token context window, and its pricing does not reflect strong value. It is a niche fine-tune with little evidence of business-grade capability.
Assessment date: March 12, 2026
Our methodology weighs a range of factors, including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released. Issues with our rankings? Contact us.
This model is Sao10K's experimental fine-tune built on top of Euryale v2.2.
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Llama3 |
| Parameters | 70B |
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $3.00 | $0.003000 |
| Output | $3.00 | $0.003000 |
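With input and output priced identically at $3.00 per 1M tokens, estimating a request's cost is a single multiplication. A minimal sketch (the token counts in the example are hypothetical):

```python
# Price per 1M tokens for Hanami x1; input and output are billed at the same rate.
PRICE_PER_M_TOKENS = 3.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at $3.00 per 1M tokens each way."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M_TOKENS

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.4f}")  # → $0.0075
```

Because the rates are symmetric, only the total token count matters; for models with asymmetric pricing, input and output would need separate rates.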
External Resources
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 13, 2026 7:52 pm