# Submitting Feedback
Feedback is the foundation of prompt optimization. You can submit feedback for completions through the ZeroEval dashboard, the Python SDK, or the public API. Feedback helps ZeroEval understand what good and bad outputs look like for your specific use case.

## Feedback through the dashboard
The easiest way to provide feedback is through the ZeroEval dashboard. Navigate to your task’s “Suggestions” tab, review incoming completions, and provide thumbs up/down feedback with optional reasons and expected outputs.

## Feedback through the SDK
For programmatic feedback submission, use the Python or TypeScript SDK. This is useful when you have automated evaluation systems or want to collect feedback from your application in production.

### Parameters
| Python | TypeScript | Type | Required | Description |
|---|---|---|---|---|
| `prompt_slug` | `promptSlug` | `str` / `string` | Yes | The slug/name of your prompt (same as used in `ze.prompt()`) |
| `completion_id` | `completionId` | `str` / `string` | Yes | The UUID of the completion to provide feedback on |
| `thumbs_up` | `thumbsUp` | `bool` / `boolean` | Yes | `True`/`true` for positive, `False`/`false` for negative |
| `reason` | `reason` | `str` / `string` | No | Optional explanation of why you gave this feedback |
| `expected_output` | `expectedOutput` | `str` / `string` | No | Optional description of what the expected output should be |
| `metadata` | `metadata` | `dict` / `object` | No | Optional additional metadata to attach to the feedback |
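The required/optional split in the table above can be sketched as a payload builder. This is an illustration of the parameter shape only — `build_feedback_payload` is a hypothetical helper, not the SDK's actual entry point:

```python
from typing import Any, Dict, Optional


def build_feedback_payload(
    prompt_slug: str,
    completion_id: str,
    thumbs_up: bool,
    reason: Optional[str] = None,
    expected_output: Optional[str] = None,
    metadata: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    """Assemble a feedback payload mirroring the parameter table.

    Hypothetical helper for illustration; the real SDK call may differ.
    """
    # Required fields are always present.
    payload: Dict[str, Any] = {
        "prompt_slug": prompt_slug,
        "completion_id": completion_id,
        "thumbs_up": thumbs_up,
    }
    # Optional fields are omitted entirely when not provided.
    if reason is not None:
        payload["reason"] = reason
    if expected_output is not None:
        payload["expected_output"] = expected_output
    if metadata is not None:
        payload["metadata"] = metadata
    return payload
```

Keeping optional fields out of the payload when unset (rather than sending `null`) keeps the submitted feedback minimal and unambiguous.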
The `completion_id` is automatically tracked when you use `ze.prompt()` with automatic tracing enabled. You can access it from the OpenAI response object’s `id` field, or retrieve it from your traces in the dashboard.

## Complete example with feedback
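The end-to-end flow can be sketched as follows. The two functions below are illustrative stand-ins — in real usage the completion would come from `ze.prompt()` with tracing enabled and the feedback call would come from the ZeroEval SDK; the names and return shapes here are assumptions for demonstration only:

```python
import uuid
from typing import Any, Dict, Optional


def fake_completion(prompt_slug: str, user_input: str) -> Dict[str, Any]:
    """Stand-in for a traced completion. In real usage, ze.prompt() runs the
    LLM call and the completion UUID is available on the response's id field."""
    return {"id": str(uuid.uuid4()), "output": f"echo: {user_input}"}


def send_feedback(
    prompt_slug: str,
    completion_id: str,
    thumbs_up: bool,
    reason: Optional[str] = None,
) -> Dict[str, Any]:
    """Stand-in for the SDK feedback submission described above."""
    return {
        "prompt_slug": prompt_slug,
        "completion_id": completion_id,
        "thumbs_up": thumbs_up,
        "reason": reason,
    }


# The flow: run a completion, inspect the output, then submit feedback on it.
response = fake_completion("support-bot", "How do I reset my password?")

feedback = send_feedback(
    prompt_slug="support-bot",
    completion_id=response["id"],  # UUID tracked automatically in real usage
    thumbs_up=True,
    reason="Accurate and concise answer",
)
```

The key point is that the `completion_id` flows from the completion response into the feedback call, which is what lets ZeroEval tie your judgment back to the exact output it applies to.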
**Auto-optimization:** When you use `ze.prompt()` with `content`, ZeroEval automatically fetches the latest optimized version from your dashboard if one exists. Your `content` serves as a fallback for initial setup. This means your prompts improve automatically as you tune them, without any code changes.

If you need to test the hardcoded `content` specifically (e.g., for debugging or A/B testing), use `from_="explicit"` (Python) or `from: "explicit"` (TypeScript).

## Feedback through the API
For integration with non-Python systems or direct API access, you can submit feedback using the public HTTP API.

### Endpoint
### Authentication
Requires API key authentication via the `Authorization` header: