Z.ai: GLM 5.1

z-ai · Released Apr 7, 2026 · Professional
Our Score: 82

Performance Profile

Intelligence 8.7/10
Technical 8.3/10
Content 7.9/10
Value 7/10

GLM-5.1 delivers a major leap in coding capability, with particularly significant gains on long-horizon tasks. Unlike previous models built around minute-level interactions, GLM-5.1 can work independently and continuously on…

Input Price: $0.95 / 1M tokens
Output Price: $3.15 / 1M tokens
Context Window: 202,752 tokens
Max Output: 65,535 tokens

Capabilities

Tool Use Function Calling

Architecture

Modality: Text → Text
Tokenizer: Other

Performance Indices

Source: Artificial Analysis

51.4 Intelligence Index
43.4 Coding Index
70.5 Agentic Index

This model was released recently. Independent benchmark evaluations are typically completed within days of release — these figures are preliminary and are likely to be updated as testing is finalised.

Benchmark Scores

Intelligence

GPQA Diamond 86.8% Graduate-level scientific reasoning
HLE 28% Humanity's Last Exam
SciCode 43.8% Scientific computing

Technical

TerminalBench Hard 43.2% Agentic terminal tasks
τ²-Bench 97.7% Conversational agent benchmark

Content

IFBench 76.3% Instruction following
LCR 62.3% Long-context reasoning

Benchmark data from Artificial Analysis and Hugging Face

How does Z.ai: GLM 5.1 stack up?

Compare side-by-side with other professional models.


Model Information

OpenRouter ID: z-ai/glm-5.1
Provider: z-ai
Release Date: April 7, 2026
Context Length: 202,752 tokens
Max Completion: 65,535 tokens
Status: Active
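The model is addressed through OpenRouter's chat completions endpoint using the ID above. A minimal sketch that assembles a request; the API key and prompt are placeholders, and field names should be checked against OpenRouter's current docs:

```python
import json

API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "z-ai/glm-5.1"  # OpenRouter ID from the table above

def build_request(prompt: str, api_key: str, max_tokens: int = 1024):
    """Assemble (headers, payload) for a single-turn chat request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # must stay within the 65,535-token cap
    }
    return headers, payload

headers, payload = build_request("Summarise this diff.", api_key="sk-or-...")
print(payload["model"])
# To send: requests.post(API_URL, headers=headers, json=payload)
```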

Pricing

Token Type   Cost per 1M tokens   Cost per 1K tokens
Input        $0.95                $0.000950
Output       $3.15                $0.003150

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Avg Uptime: 98.3%
Best Latency (TTFT): 584ms
Best Throughput: 81 tok/s
Active Endpoints: 14/14
Available via: Chutes, Io Net, GMICloud, Novita, Parasail, Together, Fireworks, AtlasCloud +6 more
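The live metrics give a rough wall-clock estimate for a response: time ≈ TTFT + output_tokens / throughput. A sketch using the best-case figures above (584 ms TTFT, 81 tok/s); real endpoints will be slower:

```python
TTFT_S = 0.584       # best time-to-first-token, seconds
THROUGHPUT_TPS = 81  # best decode throughput, tokens/second

def estimated_seconds(output_tokens: int) -> float:
    """Best-case wall-clock time to stream a completion of this length."""
    return TTFT_S + output_tokens / THROUGHPUT_TPS

print(round(estimated_seconds(1_000), 2))  # ~ 12.93 s for 1,000 tokens
```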

Leaderboard Categories