Perplexity: Sonar
Analysis Summary
Perplexity: Sonar sits in the Efficient tier on our leaderboard, ranked #260 of 556 published models on overall intelligence. At $1.00 per 1M input tokens and $1.00 per 1M output tokens, it is among the most affordable on the market. It offers a mid-sized context window and supports vision.
Editorial notes
Perplexity Sonar provides vision and a 127K context at low symmetric pricing, but its intelligence index is limited and benchmark coverage is sparse, making it better suited to lightweight search-augmented tasks.
Assessed May 14, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
Sonar is lightweight, affordable, fast, and simple to use, and now features citations and customizable sources. It is designed for companies seeking to integrate lightweight question-and-answer features.
Capabilities
[Benchmark score charts: Intelligence and Technical. Benchmark data from Artificial Analysis and Hugging Face.]
How does Perplexity: Sonar stack up?
Compare side-by-side with other efficient models.
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | `perplexity/sonar` |
| Provider | perplexity |
| Release Date | January 27, 2025 |
| Context Length | 127,072 tokens |
| Status | Active |
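The OpenRouter ID above is what you pass as the `model` field when calling OpenRouter's OpenAI-compatible chat completions endpoint. The sketch below builds (but does not send) such a request; the endpoint shape is the standard one, but verify details against the OpenRouter documentation before relying on it.

```python
# Sketch: addressing perplexity/sonar through OpenRouter's
# OpenAI-compatible chat completions endpoint. Assumes the standard
# endpoint URL and payload shape; check the OpenRouter docs to confirm.
import json
import urllib.request

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request for Sonar."""
    payload = {
        "model": "perplexity/sonar",  # OpenRouter ID from the table above
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize today's AI news.", "YOUR_API_KEY")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires a valid OpenRouter API key.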
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $1.00 | $0.001000 |
| Output | $1.00 | $0.001000 |
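Because input and output are billed at the same rate, per-request cost is simply total tokens divided by one million. A minimal sketch of that arithmetic, using the rates from the table above (the helper name is illustrative):

```python
# Sketch: estimate request cost at Sonar's symmetric $1.00 per 1M token
# rate, taken from the pricing table above. Function name is illustrative.
INPUT_RATE_PER_M = 1.00   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 1.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# A 100K-token prompt with a 2K-token answer costs about $0.102.
print(round(estimate_cost(100_000, 2_000), 4))
```

At symmetric pricing, a prompt that nearly fills the 127,072-token context window still costs well under $0.15 per call.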
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: May 14, 2026 5:33 pm