SorcererLM 8x22B
SorcererLM 8x22B has no benchmark data, a small 16K context window, and high pricing, offering no clear advantage over better-documented alternatives for business use.
Assessment date: March 12, 2026
Our methodology takes into account a range of factors including pricing, functionality, capabilities, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released.
SorcererLM is an advanced roleplay (RP) and storytelling model, built as a low-rank 16-bit LoRA fine-tune of WizardLM-2 8x22B. Key strengths:
- Advanced reasoning and emotional intelligence for engaging, immersive interactions
- Vivid writing enriched with spatial and contextual awareness
- Enhanced narrative depth that promotes creative, dynamic storytelling
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Mistral |
| Instruct Type | vicuna |
| Parameters | 8x22B (Mixture of Experts) |
| Context Length | 16K tokens |
Model Information
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $4.50 | $0.004500 |
| Output | $4.50 | $0.004500 |
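At $4.50 per million tokens for both input and output, per-request cost is a simple linear function of token counts. A minimal sketch (the function name and default rates are illustrative, taken from the table above):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_per_m_in: float = 4.50,
                  price_per_m_out: float = 4.50) -> float:
    """Estimate the USD cost of one request at per-million-token rates."""
    return (input_tokens * price_per_m_in
            + output_tokens * price_per_m_out) / 1_000_000

# A 10K-token prompt with a 2K-token completion:
# 12,000 tokens x $4.50 / 1M = $0.054
cost = estimate_cost(10_000, 2_000)
```

Note that because input and output are priced identically here, only the total token count matters; for models with asymmetric pricing, the split between prompt and completion changes the bill.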
External Resources
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 12, 2026 7:52 pm