Qwen: Qwen2.5 Coder 7B Instruct

qwen · Released Apr 15, 2025
Our Score: 30

Qwen2.5-Coder-7B-Instruct is a 7B-parameter instruction-tuned language model optimized for code-related tasks such as code generation, code reasoning, and bug fixing. Built on the Qwen2.5 architecture, it incorporates RoPE, SwiGLU, RMSNorm, and grouped-query attention (GQA), with support for contexts of up to 128K tokens via YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding data, giving it robust performance across programming languages and agentic coding workflows. The model is part of the Qwen2.5-Coder family, is compatible with tools like vLLM for efficient deployment, and is released under the Apache 2.0 license.
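Since the model is exposed through an OpenAI-compatible chat-completions interface under its OpenRouter ID, a request body can be sketched as a plain dictionary. This is a minimal illustration, not official client code; the prompt, `max_tokens`, and `temperature` values are arbitrary examples.

```python
def build_chat_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for this model.

    The model slug is the OpenRouter ID from this listing; the other
    fields are illustrative defaults, not recommended settings.
    """
    return {
        "model": "qwen/qwen2.5-coder-7b-instruct",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # a low temperature is common for code generation
    }

payload = build_chat_payload("Write a Python function that reverses a string.")
print(payload["model"])  # qwen/qwen2.5-coder-7b-instruct
```

The same payload shape works whether the model is reached through a hosted endpoint or a self-hosted OpenAI-compatible server.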

Input Price: $0.03 / 1M tokens
Output Price: $0.09 / 1M tokens
Context Window: 32,768 tokens
Parameters: 7B

Architecture

Modality: Text → Text
Tokenizer: Qwen
Parameters: 7B

Performance Indices

Source: Artificial Analysis

Intelligence Index: 10

Benchmark Scores

Evaluations

GPQA Diamond: 33.9% (graduate-level scientific reasoning)
HLE: 4.8% (Humanity's Last Exam)
MMLU Pro: 47.3% (multi-task language understanding)
LiveCodeBench: 12.6% (live coding evaluation)
SciCode: 14.8% (scientific computing)
MATH 500: 66.0% (mathematical problem-solving)
AIME: 5.3% (competition mathematics)

Benchmark data from Artificial Analysis and Hugging Face

Model Information

OpenRouter ID: qwen/qwen2.5-coder-7b-instruct
Provider: qwen
Release Date: April 15, 2025
Context Length: 32,768 tokens
Status: Active

Pricing

Token Type  Cost per 1M tokens  Cost per 1K tokens
Input       $0.03               $0.000030
Output      $0.09               $0.000090
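At these rates, the cost of a request is linear in token counts: input tokens and output tokens are billed separately. A small sketch of the arithmetic, using the rates from the table above (the 2,000/500 token split is just an example):

```python
INPUT_RATE = 0.03 / 1_000_000   # USD per input token ($0.03 / 1M)
OUTPUT_RATE = 0.09 / 1_000_000  # USD per output token ($0.09 / 1M)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with a 500-token completion
print(f"${request_cost(2_000, 500):.6f}")  # $0.000105
```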

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

Best Latency (TTFT): 234ms
Best Throughput: 27 tok/s
Active Endpoints: 0/1
Available via: Nebius

Leaderboard Categories