- Human feedback — thumbs-up/down, star ratings, corrections, and expected outputs submitted by users or reviewers
- AI feedback — automated evaluations from calibrated judges that score outputs against criteria you define
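Concretely, the two kinds might be stored as records like these. This is a hypothetical shape for illustration only, not ZeroEval's actual schema:

```python
# Illustrative feedback records -- field names are assumptions, not ZeroEval's schema.

human_feedback = {
    "trace_id": "tr_123",          # trace the feedback refers to (hypothetical id)
    "kind": "human",
    "rating": "thumbs_up",         # thumbs-up/down or a star rating
    "correction": None,            # optional corrected or expected output
    "source": "reviewer",          # submitted by a user or reviewer
}

ai_feedback = {
    "trace_id": "tr_123",
    "kind": "ai",
    "judge": "helpfulness-judge",  # a calibrated judge you define
    "score": 0.8,                  # judge's score against your criteria
}
```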
### How feedback flows
**1. Agent produces output.** Your agent runs and ZeroEval captures the full trace: inputs, outputs, model, and prompt version.
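A minimal instrumentation sketch, assuming the Python SDK imports as `zeroeval` and exposes an `init` call and a `span` decorator. These names are assumptions modeled on common tracing SDKs; check the SDK reference for the real surface:

```python
import zeroeval as ze  # assumption: the Python SDK imports as `zeroeval`

ze.init(api_key="YOUR_API_KEY")  # assumption: an init call that points the SDK at your project


@ze.span(name="answer_question")  # assumption: a decorator that records the span
def answer_question(question: str) -> str:
    # The span would capture inputs, outputs, model, and prompt version here.
    return f"Answer to: {question}"  # stand-in for your real model call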
**2. Feedback is attached.** Humans review outputs in the dashboard, or your app submits feedback programmatically. Judges evaluate outputs automatically against the criteria you define.
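A sketch of programmatic submission over HTTP; the endpoint URL, ids, and payload fields below are placeholders, not a verified API:

```python
import requests

# Placeholder endpoint and payload -- adapt to the actual feedback API.
resp = requests.post(
    "https://api.zeroeval.com/feedback",   # hypothetical URL, not verified
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "completion_id": "cmp_abc123",     # the output being rated (hypothetical id)
        "rating": "thumbs_down",
        "correction": "The expected output was X, not Y.",
    },
    timeout=10,
)
resp.raise_for_status()
```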
**3. Quality becomes measurable.** Feedback appears on spans, traces, and completions in the console. Filter by thumbs-up rate, judge scores, or tags to find patterns.
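The console computes these aggregates for you; for intuition, the thumbs-up rate itself is just a ratio per group. A self-contained sketch over exported records, using the hypothetical record shape from above:

```python
from collections import defaultdict


def thumbs_up_rate(records):
    """Thumbs-up rate per prompt version from a list of feedback records.

    Assumes the hypothetical record shape sketched earlier; this only
    illustrates the metric, not an actual export API.
    """
    ups, totals = defaultdict(int), defaultdict(int)
    for r in records:
        if r.get("kind") != "human":
            continue  # judge scores are averaged separately
        version = r.get("prompt_version", "unknown")
        totals[version] += 1
        if r.get("rating") == "thumbs_up":
            ups[version] += 1
    return {v: ups[v] / totals[v] for v in totals}


print(thumbs_up_rate([
    {"kind": "human", "rating": "thumbs_up", "prompt_version": "v2"},
    {"kind": "human", "rating": "thumbs_down", "prompt_version": "v2"},
]))  # {'v2': 0.5}
```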