OpenAI: GPT-4 Turbo
Analysis Summary
OpenAI: GPT-4 Turbo sits in the Efficient tier on our leaderboard, ranked #161 of 525 published models on overall intelligence. At $10.00 per 1M input tokens and $30.00 per 1M output tokens, it is among the most expensive models on the market. It offers a 128K-token context window and supports tool use, function calling, and vision input.
Editorial notes
GPT-4 Turbo from OpenAI offers multimodal input, a 128K context window, and tool/function calling support, with moderate benchmark scores for its era; however, it has been superseded by GPT-4o and newer models, and its premium pricing makes it poor value compared to current alternatives.
Assessed April 16, 2026
Rankings consider pricing, capabilities, benchmarks, and real-world applicability and are refreshed as new models launch.
Performance Profile
The most recent GPT-4 Turbo snapshot, with vision capabilities. Vision requests can use JSON mode and function calling. Training data cutoff: December 2023.
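Combining vision input with JSON mode is just a matter of request shape. Below is a minimal sketch of such a request body, assuming the OpenAI Chat Completions message format; the image URL and prompt text are placeholders, and no request is actually sent.

```python
# Sketch of a vision request that also enables JSON mode, shaped per the
# OpenAI Chat Completions API. Illustrative only -- nothing is sent.
payload = {
    "model": "gpt-4-turbo",
    "response_format": {"type": "json_object"},  # JSON mode
    "messages": [
        {
            "role": "user",
            "content": [
                # Text part: instruct the model to answer as JSON.
                {"type": "text",
                 "text": "Describe this image as JSON with keys 'objects' and 'scene'."},
                # Image part: placeholder URL.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    "max_tokens": 1024,  # well under the 4,096-token completion cap
}

print(payload["response_format"]["type"])
```

The same payload could carry a `tools` list for function calling; JSON mode and tool definitions are independent fields on the request.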
Capabilities
Benchmark Scores
Benchmark data from Artificial Analysis and Hugging Face
Model Information
| Field | Value |
|---|---|
| OpenRouter ID | openai/gpt-4-turbo |
| Provider | openai |
| Model Family | GPT-4 Turbo |
| Release Date | April 9, 2024 |
| Context Length | 128,000 tokens |
| Max Completion | 4,096 tokens |
| Status | Active |
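The context length and completion cap above interact: the 128,000-token window must hold both the prompt and the generated completion. A small hypothetical budget check, with illustrative token counts, could look like this:

```python
# Hypothetical pre-flight check against GPT-4 Turbo's published limits.
# Assumes the completion shares the 128K context window, and that the
# completion request can never exceed the 4,096-token cap.
CONTEXT_LENGTH = 128_000
MAX_COMPLETION = 4_096

def fits(prompt_tokens: int, completion_tokens: int = MAX_COMPLETION) -> bool:
    """True if the prompt plus the requested completion fit the window."""
    return (completion_tokens <= MAX_COMPLETION
            and prompt_tokens + completion_tokens <= CONTEXT_LENGTH)

print(fits(120_000))  # 120,000 + 4,096 <= 128,000 -> True
print(fits(125_000))  # 125,000 + 4,096 >  128,000 -> False
```

In practice prompt length would come from a tokenizer rather than a hand-supplied count; the numbers here are placeholders.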
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $10.00 | $0.010000 |
| Output | $30.00 | $0.030000 |
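At these rates, per-request cost is simple arithmetic over token counts. A quick sketch using the table's figures (the example token counts are illustrative):

```python
# Cost estimate from the pricing table: $10.00 per 1M input tokens,
# $30.00 per 1M output tokens.
INPUT_PER_M = 10.00
OUTPUT_PER_M = 30.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at GPT-4 Turbo's listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply:
print(f"${request_cost(2_000, 500):.4f}")  # $0.0350
```

Note how the 3x output premium dominates for generation-heavy workloads: a reply a quarter the length of the prompt still accounts for over 40% of the cost in the example above.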
Live Performance
Live endpoint metrics, refreshed every 30 minutes.
Data sourced from the OpenRouter API, Artificial Analysis, and the Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: April 25, 2026 8:38 pm