LiquidAI: LFM2-24B-A2B

liquid · Released Feb 25, 2026
Our Score: 25

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
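The "fits within 32 GB of RAM" claim can be sanity-checked with a back-of-the-envelope memory estimate. The sketch below is an assumption on my part (the page does not state the quantization used); it simply converts a parameter count and a bytes-per-parameter figure into gigabytes of weight storage:

```python
def model_memory_gb(total_params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB (1 GiB = 2**30 bytes).

    Ignores KV cache, activations, and runtime overhead, so the real
    footprint is somewhat higher than this number.
    """
    return total_params_billions * 1e9 * bytes_per_param / 2**30

# 24B parameters at 8-bit quantization: roughly 22 GiB of weights,
# which is consistent with fitting in 32 GB of RAM. At 16-bit
# (2 bytes/param) the weights alone would exceed 32 GiB.
eight_bit = model_memory_gb(24, 1.0)
sixteen_bit = model_memory_gb(24, 2.0)
```

Note that only 2B parameters are *active* per token, which governs compute cost per token, not memory: all 24B expert weights still need to be resident (or pageable) for inference.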

Input Price: $0.03 / 1M tokens
Output Price: $0.12 / 1M tokens
Context Window: 32,768 tokens
Parameters: 24B

Architecture

Modality: Text → Text
Tokenizer: Other
Parameters: 24B

Performance Indices

Source: Artificial Analysis

Intelligence Index: 10.5
Coding Index: 3.6
Agentic Index: 11.1

Benchmark Scores

Evaluations

GPQA Diamond: 47.4% (graduate-level scientific reasoning)
HLE: 4.4% (Humanity's Last Exam)
SciCode: 10.9% (scientific computing)
IFBench: 45.9% (instruction following)
τ²-Bench: 11.1% (conversational agent benchmark)

Benchmark data from Artificial Analysis and Hugging Face

Model Information

OpenRouter ID: liquid/lfm-2-24b-a2b
Provider: liquid
Release Date: February 25, 2026
Context Length: 32,768 tokens
Status: Active

Pricing

Token Type   Cost per 1M tokens   Cost per 1K tokens
Input        $0.03                $0.000030
Output       $0.12                $0.000120
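Given the per-million-token prices above, the cost of a single request is a straightforward linear combination of input and output token counts. A minimal sketch (the function name and structure are my own, not part of any API on the page):

```python
INPUT_PRICE_PER_M = 0.03   # USD per 1M input tokens (from the pricing table)
OUTPUT_PRICE_PER_M = 0.12  # USD per 1M output tokens (from the pricing table)

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request at the listed per-1M-token prices."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# A 10,000-token prompt with a 2,000-token completion costs about $0.00054.
cost = request_cost_usd(10_000, 2_000)
```

Because output tokens cost 4x input tokens here, long completions dominate the bill even for prompt-heavy workloads.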

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Best Latency (TTFT): 357 ms
Best Throughput: 125 tok/s
Active Endpoints: 0/1
Available via: Together
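The two live metrics combine into a rough end-to-end latency estimate: time-to-first-token plus decode time at the streaming throughput. This is a simplified model of my own (it assumes constant decode speed and ignores network jitter), using the best-case figures shown above:

```python
def generation_time_s(output_tokens: int,
                      ttft_ms: float = 357.0,     # best TTFT from the page
                      tok_per_s: float = 125.0    # best throughput from the page
                      ) -> float:
    """Rough wall-clock estimate for one streamed completion:
    time-to-first-token plus tokens divided by decode throughput."""
    return ttft_ms / 1000.0 + output_tokens / tok_per_s

# 125 output tokens: 0.357 s TTFT + 1.0 s decode ≈ 1.357 s total.
estimate = generation_time_s(125)
```

At these figures, a 500-token answer would take roughly 4.4 seconds under best-case conditions; real latencies vary with load and provider.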