Relace: Relace Apply 3
No benchmark data is available for Relace Apply 3, and it comes from a lesser-known provider, so its capabilities cannot be objectively assessed. Without performance evidence, it cannot be recommended for professional business use, regardless of its pricing.
Assessment date: March 14, 2026
Our methodology weighs a range of factors, including pricing, functionality, benchmark performance, and real-world applicability. Rankings are reviewed and updated regularly as new models are released. Issues with our rankings? Contact us.
Relace Apply 3 is a specialized code-patching LLM that merges AI-suggested edits directly into your source files. It applies updates produced by models such as GPT-4o and Claude at an average of 10,000 tokens/sec. The model requires the prompt to be in the following format:

{instruction}
{initial_code}
{edit_snippet}

Zero Data Retention is enabled for Relace. Learn more about this model in their documentation.
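The three-part prompt layout above can be sketched as a small helper. This is a minimal sketch assuming a plain newline join between the sections; the function name is illustrative, and the exact delimiter expected by the API should be confirmed against the provider's documentation:

```python
def build_apply_prompt(instruction: str, initial_code: str, edit_snippet: str) -> str:
    # Assemble the documented layout: {instruction}, then {initial_code},
    # then {edit_snippet}, each on its own section.
    return f"{instruction}\n{initial_code}\n{edit_snippet}"

prompt = build_apply_prompt(
    "Rename the function to `greet`.",    # instruction
    "def hello():\n    print('hi')",      # initial_code: the current file
    "def greet():\n    print('hi')",      # edit_snippet: the suggested edit
)
print(prompt)
```

The model then returns the merged file with the edit applied, so the caller only needs to supply the original source and the (possibly partial) edit suggestion.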
Model Information
Architecture
| Attribute | Value |
|---|---|
| Modality | Text → Text |
| Tokenizer | Other |
Pricing
| Token Type | Cost per 1M tokens | Cost per 1K tokens |
|---|---|---|
| Input | $0.85 | $0.000850 |
| Output | $1.25 | $0.001250 |
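The per-million rates in the table translate directly into a per-request cost. A minimal sketch, assuming simple linear pricing with no minimum charge (the example token counts are hypothetical):

```python
INPUT_COST_PER_M = 0.85   # USD per 1M input tokens, from the pricing table
OUTPUT_COST_PER_M = 1.25  # USD per 1M output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    # Linear pricing: tokens / 1_000_000 * per-million rate, summed.
    return (input_tokens / 1_000_000) * INPUT_COST_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_COST_PER_M

# Example: ~8k tokens in (file + edit), ~6k tokens out (merged file).
print(f"${request_cost_usd(8_000, 6_000):.4f}")  # → $0.0143
```

At these rates, a code-apply call over a moderately sized file costs well under two cents, which is consistent with the model's positioning as a high-throughput utility rather than a general-purpose assistant.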
Live Performance
Live endpoint metrics — refreshed every 30 minutes.
External Resources
Data sourced from OpenRouter API, Artificial Analysis and Hugging Face Open LLM Leaderboard. Scores are editorially curated by our team.
Last updated: March 15, 2026 7:52 pm