Completion Feedback

Attach structured feedback to a specific LLM completion to power prompt optimization.

sendFeedback()

```typescript
await ze.sendFeedback({
  promptSlug: "support-bot",
  completionId: "550e8400-e29b-41d4-a716-446655440000",
  thumbsUp: false,
  reason: "Response was too verbose",
  expectedOutput: "A concise 2-3 sentence response",
});
```
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `promptSlug` | `string` | Yes | Prompt name (same as `ze.prompt({ name: ... })`) |
| `completionId` | `string` | Yes | UUID of the completion span |
| `thumbsUp` | `boolean` | Yes | Positive or negative feedback |
| `reason` | `string` | No | Explanation of the feedback |
| `expectedOutput` | `string` | No | What the output should have been |
| `metadata` | `Record<string, unknown>` | No | Additional metadata |
| `judgeId` | `string` | No | Judge automation ID (for judge feedback) |
| `expectedScore` | `number` | No | Expected score (for scored judges) |
| `scoreDirection` | `'too_high' \| 'too_low'` | No | Score direction for scored judges |
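Putting the judge-related fields together, a scored-judge payload might be shaped as follows. This is a sketch built only from the parameter table above; the `JudgeFeedbackPayload` interface name, the judge ID, and the score values are illustrative placeholders, not part of the SDK.

```typescript
// Shape of a judge-feedback payload, following the parameter table above.
// All concrete IDs and scores below are placeholder values.
interface JudgeFeedbackPayload {
  promptSlug: string;
  completionId: string;
  thumbsUp: boolean;
  judgeId?: string;
  expectedScore?: number;
  scoreDirection?: "too_high" | "too_low";
}

const judgeFeedback: JudgeFeedbackPayload = {
  promptSlug: "support-bot",
  completionId: "550e8400-e29b-41d4-a716-446655440000",
  thumbsUp: false,
  judgeId: "judge-123",       // placeholder judge automation ID
  expectedScore: 0.4,         // the score the judge should have produced
  scoreDirection: "too_high", // the judge scored above the expected value
};

console.log(judgeFeedback.scoreDirection); // "too_high"
```

A payload like this would then be passed to `ze.sendFeedback(judgeFeedback)` as in the example above.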

End-to-end example

```typescript
import * as ze from "zeroeval";
import { OpenAI } from "openai";

ze.init();
const client = ze.wrap(new OpenAI());

const systemPrompt = await ze.prompt({
  name: "support-bot",
  content: "You are a helpful customer support agent.",
});

const response = await client.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: "How do I reset my password?" },
  ],
});

// evaluateResponse() stands in for your own quality check (not part of the SDK).
const isGood = evaluateResponse(response.choices[0].message.content);

await ze.sendFeedback({
  promptSlug: "support-bot",
  completionId: response.id,
  thumbsUp: isGood,
  reason: isGood ? "Clear instructions" : "Missing reset link",
});
```
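Feedback often arrives later than the completion, for example from a thumbs control in your UI. One way to bridge that gap is to remember which prompt produced each completion at request time and build the payload once the user reacts. A minimal sketch, where the `pending` map and both helper functions are illustrative and not part of the zeroeval SDK:

```typescript
// Sketch: deferring feedback until the user reacts.
// The in-memory map keyed by completionId is illustrative only.
const pending = new Map<string, { promptSlug: string }>();

// At request time, remember which prompt produced the completion.
function trackCompletion(completionId: string, promptSlug: string): void {
  pending.set(completionId, { promptSlug });
}

// Later, turn a UI reaction into a sendFeedback() payload.
function feedbackFor(completionId: string, thumbsUp: boolean, reason?: string) {
  const entry = pending.get(completionId);
  if (!entry) return null; // unknown or already-handled completion
  pending.delete(entry ? completionId : "");
  return { promptSlug: entry.promptSlug, completionId, thumbsUp, reason };
}

trackCompletion("550e8400-e29b-41d4-a716-446655440000", "support-bot");
const payload = feedbackFor(
  "550e8400-e29b-41d4-a716-446655440000",
  true,
  "Helpful answer"
);
console.log(payload?.promptSlug); // "support-bot"
```

The returned payload can be handed straight to `ze.sendFeedback(payload)`; deleting the entry after the first reaction keeps each completion from receiving duplicate feedback.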