Completion Feedback

Attach structured feedback to a specific LLM completion to power prompt optimization.

send_feedback()

import zeroeval as ze

ze.send_feedback(
    prompt_slug="support-bot",
    completion_id="550e8400-e29b-41d4-a716-446655440000",
    thumbs_up=False,
    reason="Response was too verbose",
    expected_output="A concise 2-3 sentence response"
)
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| prompt_slug | str | Yes | Prompt name (same as ze.prompt(name=...)) |
| completion_id | str | Yes | UUID of the completion span |
| thumbs_up | bool | Yes | Positive or negative feedback |
| reason | str | No | Explanation of the feedback |
| expected_output | str | No | What the output should have been |
| metadata | dict | No | Additional metadata |
| judge_id | str | No | Judge automation ID (for judge feedback) |
| expected_score | float | No | Expected score (for scored judges) |
| score_direction | str | No | "too_high" or "too_low" |
| criteria_feedback | dict | No | Per-criterion feedback: {"criterion": {"expected_score": 4.0, "reason": "..."}} |
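
For judge feedback, combine the judge-specific parameters with the basics above. A minimal sketch, assuming a scored judge; the judge ID and criterion name below are hypothetical placeholders:

ze.send_feedback(
    prompt_slug="support-bot",
    completion_id="550e8400-e29b-41d4-a716-446655440000",
    thumbs_up=False,
    judge_id="judge-abc123",  # hypothetical judge automation ID
    expected_score=3.0,  # the score the judge should have assigned
    score_direction="too_high",  # the judge scored this completion above expectation
    criteria_feedback={
        "conciseness": {"expected_score": 4.0, "reason": "Answer was padded with filler"}
    }
)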

End-to-end example

import zeroeval as ze
from openai import OpenAI

ze.init()
client = OpenAI()

system_prompt = ze.prompt(
    name="support-bot",
    content="You are a helpful customer support agent."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I reset my password?"}
    ]
)

def evaluate_response(text: str) -> bool:
    # Placeholder quality check; replace with your own evaluation logic.
    return "reset" in text.lower()

is_good = evaluate_response(response.choices[0].message.content)

ze.send_feedback(
    prompt_slug="support-bot",
    completion_id=response.id,
    thumbs_up=is_good,
    reason="Clear instructions" if is_good else "Missing reset link"
)
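
Feedback can also carry arbitrary metadata for later filtering and analysis. A minimal sketch; the metadata keys below are illustrative, not a required schema:

ze.send_feedback(
    prompt_slug="support-bot",
    completion_id=response.id,
    thumbs_up=is_good,
    metadata={"reviewer": "qa-team", "environment": "staging"}  # illustrative keys only
)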