Mistral: Mixtral 8x7B Instruct

mistralai · Released Dec 10, 2023
Our Score: 38

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, built for chat and instruction-following use. Each MoE layer incorporates 8 experts (feed-forward networks), for a total of roughly 47 billion parameters, of which only a subset is active for any given token. The Instruct variant is fine-tuned by Mistral. #moe
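The sparse MoE design routes each token to only 2 of the 8 experts per layer, so a forward pass touches only a fraction of the 47B total parameters. Below is a minimal, illustrative sketch of top-2 expert routing in PyTorch; the module names are our assumptions for illustration, not Mistral's implementation (d_model/d_ff follow commonly reported Mixtral dimensions).

```python
import torch
import torch.nn.functional as F
from torch import nn

class SparseMoELayer(nn.Module):
    """Illustrative top-2 mixture-of-experts layer (a sketch, not Mistral's code)."""

    def __init__(self, d_model=4096, d_ff=14336, n_experts=8, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (n_tokens, d_model)
        logits = self.gate(x)                           # (n_tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # pick 2 experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over the pair
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens whose k-th pick is e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out
```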

$0.54 / 1M Input Price
$0.54 / 1M Output Price
32,768 tokens Context Window
16,384 tokens Max Output
47B Parameters

Capabilities

Tool Use · Function Calling
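Since the listing flags tool use, here is a hedged sketch of a function-calling request through OpenRouter's OpenAI-compatible chat completions endpoint. The get_weather tool and its schema are illustrative, not part of the model, and tool support can vary by provider.

```python
import json
import requests

# Hypothetical tool schema in the OpenAI-style "tools" format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool name
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "mistralai/mixtral-8x7b-instruct",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
    },
    timeout=60,
)
message = resp.json()["choices"][0]["message"]
# If the model chose to call a tool, the arguments arrive as a JSON string.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```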

Architecture

Modality Text → Text
Tokenizer Mistral
Instruct Type mistral
Parameters 47B (8×7B MoE)
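The "mistral" instruct type refers to the [INST] chat template used by Mistral's instruct models. Below is a hedged sketch of that wrapping in case you build raw prompts yourself; most serving stacks apply the chat template automatically, and exact whitespace/BOS handling varies by tokenizer version.

```python
def to_mixtral_prompt(turns: list[tuple[str, str]], user_msg: str) -> str:
    """Wrap a conversation in the Mistral [INST] instruct template (a sketch).

    `turns` is the (user, assistant) history; the final user message is left
    open for the model to complete.
    """
    out = "<s>"
    for user, assistant in turns:
        out += f"[INST] {user} [/INST] {assistant}</s>"
    out += f"[INST] {user_msg} [/INST]"
    return out

print(to_mixtral_prompt([("Hi!", "Hello! How can I help?")], "Tell me a joke."))
```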

Performance Indices

Source: Artificial Analysis

7.7 Intelligence Index

Benchmark Scores

GPQA Diamond 29.2% (graduate-level scientific reasoning)
HLE 4.5% (Humanity's Last Exam)
MMLU Pro 38.7% (multi-task language understanding)
LiveCodeBench 6.6% (live coding evaluation)
SciCode 2.8% (scientific computing)
MATH 500 29.9% (mathematical problem solving)

Benchmark data from Artificial Analysis and Hugging Face

Model Information

OpenRouter ID mistralai/mixtral-8x7b-instruct
Provider mistralai
Release Date December 10, 2023
Context Length 32,768 tokens
Max Completion 16,384 tokens
Status Active

Pricing

Token Type   Cost per 1M tokens   Cost per 1K tokens
Input        $0.54                $0.000540
Output       $0.54                $0.000540
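With a flat rate in both directions, a request's cost is simply (input_tokens + output_tokens) × $0.54 / 1M. A tiny helper:

```python
PRICE_PER_M = 0.54  # USD per 1M tokens, same for input and output per the table

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed flat rate."""
    return (input_tokens + output_tokens) * PRICE_PER_M / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.6f}")  # $0.001350
```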

Live Performance

Live endpoint metrics — refreshed every 30 minutes.

100% Avg Uptime
197ms Best Latency (TTFT)
84 tok/s Best Throughput
2/2 Active Endpoints
Available via: DeepInfra, Together
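The TTFT and throughput figures above are endpoint-side measurements; you can approximate them client-side by streaming a completion and timing the chunks, though network latency will inflate the numbers. A sketch against the same OpenAI-compatible endpoint, counting streamed chunks as a rough proxy for tokens:

```python
import json
import time

import requests

start = time.monotonic()
first_token_at = None
n_chunks = 0

with requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "mistralai/mixtral-8x7b-instruct",
        "messages": [{"role": "user", "content": "Count from 1 to 50."}],
        "stream": True,
    },
    stream=True,
    timeout=120,
) as resp:
    for line in resp.iter_lines():
        # OpenAI-style SSE: payload lines are prefixed with "data: "
        if not line.startswith(b"data: ") or line == b"data: [DONE]":
            continue
        chunk = json.loads(line[6:])
        if not chunk.get("choices"):
            continue
        if chunk["choices"][0]["delta"].get("content"):
            if first_token_at is None:
                first_token_at = time.monotonic()  # time to first token
            n_chunks += 1

if first_token_at is not None and n_chunks > 1:
    print(f"TTFT: {(first_token_at - start) * 1000:.0f} ms")
    gen_time = time.monotonic() - first_token_at
    print(f"~{n_chunks / gen_time:.1f} chunks/s (rough proxy for tok/s)")
```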
