Documentation Index
Fetch the complete documentation index at: https://docs.zeroeval.com/llms.txt
Use this file to discover all available pages before exploring further.
Installation
Basic Setup
Replace hardcoded prompt strings with ze.prompt(). Your existing text becomes the fallback content that's used until an optimized version is available.
Prompts created with ze.prompt() are tracked, versioned, and linked to the completions they produce. You'll see production traces at ZeroEval → Prompts.
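A minimal sketch of the swap, assuming the SDK is installed from npm as zeroeval and that ze.prompt() accepts an options object with the parameters documented below:

```typescript
// Assumes: npm install zeroeval openai
import OpenAI from "openai";
import * as ze from "zeroeval";

const openai = new OpenAI();

// Before: const systemPrompt = "You are a helpful support assistant.";
const systemPrompt = await ze.prompt({
  name: "support-agent",                           // task name for this prompt
  content: "You are a helpful support assistant.", // fallback until an optimized version exists
});

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "system", content: systemPrompt }],
});
```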
When you provide content, ZeroEval automatically uses the latest optimized version from your dashboard if one exists. The content parameter serves as a fallback for when no optimized versions are available yet.
Version Control
Auto-optimization (default)
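A sketch of the default call, reusing the ze import from the Basic Setup sketch and the same assumed options-object signature:

```typescript
// No `from` given: the latest optimized version from the dashboard is used
// if one exists; otherwise the inline content is the fallback.
const prompt = await ze.prompt({
  name: "support-agent",
  content: "You are a helpful support assistant.",
});
```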
Explicit mode
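A sketch of explicit mode, assuming from: "explicit" tells the SDK to use the inline content as written rather than any optimized version:

```typescript
// content is required here; the inline text is used verbatim.
const prompt = await ze.prompt({
  name: "support-agent",
  content: "You are a helpful support assistant.",
  from: "explicit",
});
```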
Latest mode
Uses the latest version from your dashboard and throws a PromptRequestError if none is found.
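A sketch of latest mode; no content fallback is supplied, so the call depends entirely on a version existing in the dashboard:

```typescript
const prompt = await ze.prompt({
  name: "support-agent",
  from: "latest",
}); // throws PromptRequestError if no versions exist yet
```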
Pin to a specific version
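A sketch of pinning; the hash below is a placeholder, not a real version from any dashboard:

```typescript
const prompt = await ze.prompt({
  name: "support-agent",
  // Full 64-char SHA-256 hash of the version to pin (placeholder value).
  from: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
});
```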
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | — | Task name for this prompt |
| content | string | No | undefined | Prompt content (fallback or explicit) |
| from | string | No | undefined | "latest", "explicit", or a 64-char SHA-256 hash |
| variables | Record<string, string> | No | undefined | Template variables for {{var}} tokens |
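A sketch of template variables, assuming they are interpolated into the {{var}} tokens of whichever prompt version is used (reusing the ze import from Basic Setup):

```typescript
const prompt = await ze.prompt({
  name: "welcome-email",
  content: "Hi {{name}}, thanks for trying {{product}}!",
  variables: { name: "Ada", product: "ZeroEval" },
});
```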
Return value
Returns Promise<string>: a decorated prompt string with metadata that integrations use to link completions to prompt versions and auto-patch models.
Errors
| Error | When |
|---|---|
| Error | Both content and from provided (except from: "explicit"), or neither |
| PromptRequestError | from: "latest" but no versions exist |
| PromptNotFoundError | from is a hash that doesn't exist |
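A sketch of handling these errors, assuming the error classes are exported from the same top-level zeroeval package:

```typescript
import * as ze from "zeroeval";
import { PromptNotFoundError, PromptRequestError } from "zeroeval"; // assumed export location

try {
  const prompt = await ze.prompt({ name: "support-agent", from: "latest" });
} catch (err) {
  if (err instanceof PromptRequestError) {
    // "latest" was requested but no versions exist yet
  } else if (err instanceof PromptNotFoundError) {
    // `from` was a hash that does not match any version
  } else {
    throw err;
  }
}
```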
Model Deployments
When you deploy a model to a prompt version in the dashboard, the SDK automatically patches the model parameter in your LLM calls:
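A sketch of the effect, reusing the imports from the Basic Setup sketch; the model names are illustrative:

```typescript
const systemPrompt = await ze.prompt({
  name: "support-agent",
  content: "You are a helpful support assistant.",
});

// You write one model here, but if a different model is deployed to this
// prompt's version in the dashboard, the integration substitutes it at call time.
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // may be patched to the deployed model, e.g. "gpt-4o"
  messages: [{ role: "system", content: systemPrompt }],
});
```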
Manual Prompt-Linked Spans
When you want ze.prompt() to manage a specific LLM interaction but do not want global auto-instrumentation (e.g. your agent makes many LLM calls and only one should be tracked as a prompt generation), disable integrations and create the span yourself.
When to use this
- Your codebase makes many LLM calls but only a subset should appear as prompt completions.
- You use a provider that has no auto-integration (Vapi, a custom HTTP endpoint, etc.).
- You want full control over which codepath produces prompt-linked traces.
Setup
Disable integrations you do not want, then initialize:
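A sketch of that setup; the exact init option for disabling integrations is an assumption, so check the SDK reference for the real shape:

```typescript
import * as ze from "zeroeval";

ze.init({
  apiKey: process.env.ZEROEVAL_API_KEY,
  // Assumed option name: the goal is to turn off the OpenAI auto-integration
  // so that only the span you create manually becomes a prompt completion.
  integrations: { openai: false },
});
```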
Create a prompt-linked span
- Call ze.prompt() to register the prompt version and get a decorated string.
- Use extractZeroEvalMetadata() to split the decorated string into clean content and linkage metadata.
- Send the clean content to your provider.
- Wrap the call in ze.withSpan() with kind: 'llm' and the metadata in attributes.zeroeval.
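An end-to-end sketch of these four steps; the return shape of extractZeroEvalMetadata() and the withSpan() signature are assumptions based on the description above:

```typescript
import OpenAI from "openai";
import * as ze from "zeroeval";
import { extractZeroEvalMetadata } from "zeroeval";

const openai = new OpenAI();

// 1. Register the prompt version and get the decorated string.
const decorated = await ze.prompt({
  name: "support-agent",
  content: "You are a helpful support assistant.",
});

// 2. Split into clean content and linkage metadata (assumed return shape).
const { content, metadata } = extractZeroEvalMetadata(decorated);

// 3 + 4. Send the clean content to the provider, wrapped in an llm span that
// carries the metadata under attributes.zeroeval (assumed withSpan signature).
const completion = await ze.withSpan(
  {
    name: "support-agent-completion",
    kind: "llm",
    attributes: { zeroeval: metadata },
  },
  () =>
    openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "system", content }],
    })
);
```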
The result is an llm span linked to the prompt version. Judge evaluations, feedback, and the prompt completions page all work as if an auto-integration created the span.
extractZeroEvalMetadata is exported from the top-level zeroeval package.
Sending Feedback
Attach feedback to completions to power prompt optimization:
| Parameter | Type | Required | Description |
|---|---|---|---|
| promptSlug | string | Yes | Prompt name (same as used in ze.prompt()) |
| completionId | string | Yes | UUID of the completion |
| thumbsUp | boolean | Yes | Positive or negative feedback |
| reason | string | No | Explanation of the feedback |
| expectedOutput | string | No | What the output should have been |
| metadata | Record<string, unknown> | No | Additional metadata |
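A sketch of sending feedback with these parameters; the function name sendFeedback and the UUID below are hypothetical placeholders, not the SDK's confirmed API:

```typescript
import * as ze from "zeroeval";

// Hypothetical helper name; the real export may differ.
await ze.sendFeedback({
  promptSlug: "support-agent",                          // same name used in ze.prompt()
  completionId: "4f9d2a1c-7e7b-4f0a-9c1d-2b8e5a6f3c10", // placeholder UUID
  thumbsUp: false,
  reason: "The answer ignored the customer's refund question.",
  expectedOutput: "Acknowledge the refund request and link the refund policy.",
});
```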