Installation
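Assuming the SDK is published on PyPI as zeroeval:

```bash
pip install zeroeval
```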
Basic Setup
Replace hardcoded prompt strings with ze.prompt(). Your existing text becomes the fallback content that’s used until an optimized version is available.
ze.prompt() is tracked, versioned, and linked to the completions it produces. You’ll see production traces at ZeroEval → Prompts.
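A minimal sketch of the swap. The keyword names (name, content) are assumptions; check the SDK reference for the exact signature:

```python
import zeroeval as ze

# Before: a hardcoded string.
# system = "You are a support triage assistant. Classify the ticket."

# After: the same text becomes the fallback. If an optimized version of
# "support-triage" exists in the dashboard, it is used instead.
system = ze.prompt(
    name="support-triage",
    content="You are a support triage assistant. Classify the ticket.",
)
```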
When you provide content, ZeroEval automatically uses the latest optimized version from your dashboard if one exists. The content parameter serves as a fallback for when no optimized versions are available yet.
Version Control
- Auto-optimization (default)
- Explicit mode
- Latest mode: raises PromptRequestError if none is found.
- Pin to a specific version
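As a sketch, the explicit and pinned modes map onto the version and tag parameters of ze.get_prompt(), documented below:

```python
import zeroeval as ze

# Latest mode: presumably raises PromptRequestError when no version
# exists and no fallback is given.
latest = ze.get_prompt("support-triage", tag="latest")

# Pin to a specific version number.
pinned = ze.get_prompt("support-triage", version=3)
```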
Prompt Library
For more control, use ze.get_prompt() to fetch prompts from the Prompt Library with tag-based deployments and caching.
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| slug | str | — | Prompt slug (e.g. "support-triage") |
| version | int | None | Fetch a specific version number |
| tag | str | "latest" | Tag to fetch ("production", "latest", etc.) |
| fallback | str | None | Content to use if the prompt is not found |
| variables | dict | None | Template variables for {{var}} tokens |
| task_name | str | None | Override the task name for tracing |
| render | bool | True | Whether to render template variables |
| missing | str | "error" | What to do with missing variables: "error" or "ignore" |
| use_cache | bool | True | Use in-memory cache for repeated fetches |
| timeout | float | None | Request timeout in seconds |
Return value
Returns a Prompt object with:
| Field | Type | Description |
|---|---|---|
| content | str | The rendered prompt content |
| version | int | Version number |
| version_id | str | Version UUID |
| tag | str | Tag this version was fetched from |
| is_latest | bool | Whether this is the latest version |
| model | str | Model bound to this version (if any) |
| metadata | dict | Additional metadata |
| source | str | "api" or "fallback" |
| content_hash | str | SHA-256 hash of the content |
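A usage sketch combining the parameters and return fields above, assuming the fields are exposed as attributes:

```python
import zeroeval as ze

prompt = ze.get_prompt(
    "support-triage",
    tag="production",
    variables={"ticket": "My invoice is wrong"},  # fills {{ticket}} tokens
    fallback="Classify this ticket: {{ticket}}",  # used if the prompt is not found
)

print(prompt.content)  # rendered prompt text
print(prompt.version)  # version number the tag resolved to
print(prompt.source)   # "api" or "fallback"
```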
Model Deployments
When you deploy a model to a prompt version in the dashboard, the SDK automatically patches the model parameter in your LLM calls.
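A sketch of the effect with the OpenAI client; the ze.prompt keyword names are assumptions as above:

```python
from openai import OpenAI
import zeroeval as ze

client = OpenAI()
system = ze.prompt(name="support-triage", content="Classify the ticket.")

response = client.chat.completions.create(
    # If a model is deployed to the active prompt version in the dashboard,
    # the SDK swaps it in here; otherwise this default is used.
    model="gpt-4o-mini",
    messages=[{"role": "system", "content": system}],
)
```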
Multi-Artifact Runs
When a single prompt-linked run produces multiple judged outputs (e.g. a final decision and a visual card), use ze.artifact_span to mark each output as a named artifact. The primary artifact becomes the default completion preview; secondary artifacts are accessible in the detail view.
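A sketch with two artifacts. The positional name argument and the helpers (decide, render_card) are hypothetical, and how the primary artifact is designated is not shown here:

```python
import zeroeval as ze

# decide() and render_card() are hypothetical stand-ins for your own code.
with ze.artifact_span("decision"):       # e.g. the final decision
    decision = decide(ticket)

with ze.artifact_span("visual-card"):    # secondary output, visible in the detail view
    card = render_card(decision)
```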
Manual Prompt-Linked Spans
When you want ze.prompt() to manage a specific LLM interaction but do not want global auto-instrumentation (e.g. your agent makes many LLM calls and only one should be tracked as a prompt generation), disable integrations and create the span yourself.
When to use this
- Your codebase makes many LLM calls but only a subset should appear as prompt completions.
- You use a provider that has no auto-integration (a custom HTTP endpoint, an internal model service, etc.).
- You want full control over which codepath produces prompt-linked traces.
Setup
Disable all integrations, then initialize the SDK; a combined sketch follows the next paragraph.
Create a prompt-linked span
Call ze.prompt() inside an active span so the SDK writes task / zeroeval metadata onto the trace. Then open a child span with kind="llm" (or use ze.artifact_span()) around your provider call. The SDK automatically propagates prompt linkage to child spans in the same trace.
Calling ze.prompt() inside the active span writes the prompt metadata (task, zeroeval.prompt_version_id, etc.) onto every span in the trace, so the inner llm span is linked to the prompt version without any extra wiring. Judge evaluations, feedback, and the prompt completions page all work as if an auto-integration created the span.
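Putting the steps together. ze.init and its integration-disabling flag are assumptions (check the SDK reference for the real parameter), while ze.span(kind="llm") and ze.prompt() follow the text above:

```python
import zeroeval as ze
from openai import OpenAI

ze.init(disabled_integrations=["openai"])  # hypothetical flag: turn off auto-instrumentation

client = OpenAI()

with ze.span(name="triage-agent"):  # active span; ze.prompt() writes metadata onto the trace
    system = ze.prompt(name="support-triage", content="Classify the ticket.")
    with ze.span(name="triage-llm", kind="llm"):  # child span linked to the prompt version
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": system}],
        )
```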
You can also use ze.artifact_span() instead of ze.span(kind="llm") when you want the output to appear as a named completion artifact:
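A variant of the sketch above, with the same assumed signatures:

```python
with ze.artifact_span("decision"):  # replaces ze.span(kind="llm")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system}],
    )
```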
Sending Feedback
Attach feedback to completions to power prompt optimization:

| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt_slug | str | Yes | Prompt name (same as used in ze.prompt()) |
| completion_id | str | Yes | UUID of the completion |
| thumbs_up | bool | Yes | Positive or negative feedback |
| reason | str | No | Explanation of the feedback |
| expected_output | str | No | What the output should have been |
| metadata | dict | No | Additional metadata |
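A sketch of a feedback call. The helper name send_feedback is an assumption; the parameters mirror the table above:

```python
import zeroeval as ze

ze.send_feedback(                       # hypothetical helper name; see the SDK reference
    prompt_slug="support-triage",       # same name passed to ze.prompt()
    completion_id="<completion-uuid>",  # UUID of the completion being rated
    thumbs_up=False,
    reason="Misclassified a billing issue as technical.",
    expected_output="billing",
)
```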