Tongyi DeepResearch 30B A3B

alibaba · Released Sep 18, 2025
Our Score: 28

Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab. It is a mixture-of-experts model with 30 billion total parameters, of which only 3 billion are activated per token. Optimized for long-horizon, deep information-seeking tasks, it reports state-of-the-art results on agentic benchmarks including Humanity's Last Exam, BrowseComp, BrowseComp-ZH, WebWalkerQA, GAIA, xbench-DeepSearch, and FRAMES, making it well suited to complex agentic search, reasoning, and multi-step problem-solving. Training combines a fully automated synthetic data pipeline for scalable pre-training, fine-tuning, and reinforcement learning; large-scale continual pre-training on diverse agentic data to strengthen reasoning and keep knowledge current; and end-to-end on-policy RL using a customized Group Relative Policy Optimization (GRPO) with token-level gradients and negative-sample filtering for stable training. At inference time, the model supports a standard ReAct loop for evaluating core abilities and an IterResearch-based 'Heavy' mode that scales test-time computation for maximum performance. It is a strong fit for advanced research agents, tool use, and heavy inference workflows.
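As a sketch of how the model's tool-calling capability might be exercised through an OpenAI-compatible chat-completions endpoint (the model ID below comes from this listing; the `web_search` tool schema is a hypothetical example, not part of the model card), a request payload could be assembled like this:

```python
import json

# OpenRouter model ID from the listing.
MODEL_ID = "alibaba/tongyi-deepresearch-30b-a3b"

# Hypothetical search tool in OpenAI function-calling schema (illustrative only).
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

def build_request(question: str) -> dict:
    """Assemble a chat-completions payload with tool calling enabled."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": question}],
        "tools": [SEARCH_TOOL],
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

payload = build_request("Summarize recent work on test-time scaling.")
print(json.dumps(payload, indent=2))
```

In an actual agent loop, the returned `tool_calls` would be executed and their results appended as `tool` messages before the next model turn, in standard ReAct fashion.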

Input Price: $0.09 / 1M tokens
Output Price: $0.45 / 1M tokens
Context Window: 131,072 tokens
Max Output: 131,072 tokens
Parameters: 30B

Capabilities

Tool Use · Function Calling

Architecture

Modality: Text → Text
Tokenizer: Other
Parameters: 30B

Model Information

OpenRouter ID: alibaba/tongyi-deepresearch-30b-a3b
Provider: alibaba
Release Date: September 18, 2025
Context Length: 131,072 tokens
Max Completion: 131,072 tokens
Status: Active

Pricing

Token Type | Cost per 1M tokens | Cost per 1K tokens
Input      | $0.09              | $0.000090
Output     | $0.45              | $0.000450
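Given these rates, the cost of a single request can be estimated directly. A small illustrative helper (not an official billing formula; actual invoicing is done by the provider):

```python
# Listed rates in USD per 1M tokens.
INPUT_PRICE_PER_1M = 0.09
OUTPUT_PRICE_PER_1M = 0.45

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_1M
            + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# Example: a 100k-token research context with a 10k-token answer.
cost = request_cost(100_000, 10_000)
print(f"${cost:.4f}")  # prints "$0.0135"
```

Note the asymmetry: output tokens cost 5x input tokens, so long generated reports dominate the bill even when the prompt fills most of the context window.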

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Avg Uptime: 100%
Best Latency (TTFT): 265 ms
Best Throughput: 139.5 tok/s
Active Endpoints: 1/1
Available via: AtlasCloud
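The latency and throughput figures above can be combined into a rough end-to-end estimate for a streamed response: total time ≈ time-to-first-token plus output tokens divided by decode throughput. A back-of-the-envelope sketch (best-case numbers; real latency varies with load and prompt length):

```python
# Best-case endpoint metrics from the listing.
TTFT_S = 0.265            # time to first token, seconds
THROUGHPUT_TOK_S = 139.5  # decode throughput, tokens per second

def estimated_latency(output_tokens: int) -> float:
    """Rough wall-clock seconds to stream a complete response."""
    return TTFT_S + output_tokens / THROUGHPUT_TOK_S

# A 1,000-token answer at best-case metrics:
print(f"{estimated_latency(1000):.1f} s")  # prints "7.4 s"
```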