ZeroEval Documentation
Optimization
Models
Evaluate your agent’s performance across multiple models
ZeroEval lets you evaluate real production traces of specific agent tasks across different models, then rank the models over time. This helps you pick the best model for each part of your agent.
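To make the idea concrete, here is a minimal illustrative sketch (not the ZeroEval SDK; the model names, tasks, and scores are hypothetical) of how per-model evaluation scores gathered from traces might be aggregated and ranked:

```python
# Illustrative sketch only -- NOT the ZeroEval SDK.
# Aggregates hypothetical per-trace evaluation scores by model and ranks them.
from collections import defaultdict

# Hypothetical evaluation results: (model, task, score) per production trace.
results = [
    ("gpt-4o", "summarize", 0.92),
    ("gpt-4o", "classify", 0.88),
    ("claude-3-5-sonnet", "summarize", 0.95),
    ("claude-3-5-sonnet", "classify", 0.89),
]

def rank_models(results):
    """Average each model's scores across tasks and rank descending."""
    scores_by_model = defaultdict(list)
    for model, _task, score in results:
        scores_by_model[model].append(score)
    averages = {m: sum(s) / len(s) for m, s in scores_by_model.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

for model, avg in rank_models(results):
    print(f"{model}: {avg:.3f}")
```

In practice the scores would come from judge evaluations of traced runs rather than a hard-coded list, and the ranking could be recomputed per task so each part of the agent gets its own leaderboard.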