AI Models
Explore and compare all available AI models through the KKoode API
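If you prefer to pull this catalog programmatically rather than browse it, a minimal sketch follows. It assumes an OpenAI-style `GET /models` endpoint, Bearer-token auth, and a `KKOODE_API_KEY` environment variable; the base URL, path, and response shape are assumptions rather than details taken from this page, so check the KKoode API reference for the real values.

```python
# Hypothetical sketch: list available models through the KKoode API.
# The base URL, endpoint path, and response shape are assumptions;
# consult the KKoode API reference for the actual values.
import os
import requests

BASE_URL = "https://api.kkoode.example/v1"  # placeholder, not a documented endpoint
API_KEY = os.environ["KKOODE_API_KEY"]      # assumed auth scheme (Bearer token)

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# Assumes an OpenAI-style response body: {"data": [{"id": ...}, ...]}
for model in resp.json().get("data", []):
    print(model.get("id"))
```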
GPT-4o (New)
by OpenAI
Input: $0.005 | Output: $0.015 | Latency: 580 ms | Context: 128K
Features: Vision, Function Calling, JSON Mode
Claude 3.5 Sonnet (New)
by Anthropic
Input: $0.003 | Output: $0.015 | Latency: 1.2 s | Context: 200K
Features: Vision, Tool Use, XML Mode
Gemini 1.5 Pro
by Google
Input: $0.0025 | Output: $0.0075 | Latency: 850 ms | Context: 1M
Features: Vision, Function Calling, Long Context
DeepSeek-V2
by DeepSeek
Input: $0.002 | Output: $0.01 | Latency: 720 ms | Context: 128K
Features: Vision, Function Calling
Mistral Large 2
by Mistral AI
Input: $0.0027 | Output: $0.0081 | Latency: 650 ms | Context: 32K
Features: Vision, Function Calling, JSON Mode
Claude 3 Opus
by Anthropic
Input: $0.015 | Output: $0.075 | Latency: 2.5 s | Context: 200K
Features: Vision, Tool Use, High Accuracy
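With the per-token prices above, you can estimate what a request will cost before sending it. The sketch below assumes the listed Input/Output figures are USD per 1K tokens (the cards do not state the unit explicitly) and uses illustrative dictionary keys rather than confirmed KKoode model identifiers.

```python
# Rough cost estimator built from the prices listed above.
# Assumption: prices are USD per 1K tokens; divide by 1_000_000 instead
# if the catalog actually quotes per-1M-token rates.
PRICES = {
    "gpt-4o":            {"input": 0.005,  "output": 0.015},
    "claude-3.5-sonnet": {"input": 0.003,  "output": 0.015},
    "gemini-1.5-pro":    {"input": 0.0025, "output": 0.0075},
    "deepseek-v2":       {"input": 0.002,  "output": 0.01},
    "mistral-large-2":   {"input": 0.0027, "output": 0.0081},
    "claude-3-opus":     {"input": 0.015,  "output": 0.075},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: a 2,000-token prompt with a 500-token reply on GPT-4o.
print(f"${estimate_cost('gpt-4o', 2000, 500):.4f}")  # -> $0.0175
```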
Model Comparison
Compare performance metrics across different models
| Model | Context Size | Latency | Input Cost | Output Cost | Features |
|---|---|---|---|---|---|
| GPT-4o | 128K | 580 ms | $0.005 | $0.015 | Vision, JSON Mode |
| Claude 3.5 Sonnet | 200K | 1.2 s | $0.003 | $0.015 | Vision, Tool Use |
| Gemini 1.5 Pro | 1M | 850 ms | $0.0025 | $0.0075 | Vision, Long Context |
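To choose a model from this table by constraint rather than by eye, you can encode the rows and filter them. The snippet below only reuses the three rows shown above; the selection criteria (minimum context, maximum latency) are illustrative.

```python
# Filter the comparison table above by hard constraints, then sort by input cost.
# Latencies are expressed in milliseconds and context sizes in tokens.
MODELS = [
    {"name": "GPT-4o",            "context": 128_000,   "latency_ms": 580,  "input": 0.005,  "output": 0.015},
    {"name": "Claude 3.5 Sonnet", "context": 200_000,   "latency_ms": 1200, "input": 0.003,  "output": 0.015},
    {"name": "Gemini 1.5 Pro",    "context": 1_000_000, "latency_ms": 850,  "input": 0.0025, "output": 0.0075},
]

def pick(min_context: int, max_latency_ms: int) -> list[dict]:
    """Models meeting both constraints, cheapest input cost first."""
    ok = [m for m in MODELS
          if m["context"] >= min_context and m["latency_ms"] <= max_latency_ms]
    return sorted(ok, key=lambda m: m["input"])

# Example: need at least 150K context and sub-second latency.
for m in pick(150_000, 1000):
    print(m["name"])  # -> Gemini 1.5 Pro
```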
Performance Benchmarks
Model performance on standard benchmarks
Performance benchmark chart will appear here