Installation

pip install zeroeval

Basic Setup

Replace hardcoded prompt strings with ze.prompt(). Your existing text becomes the fallback content that’s used until an optimized version is available.

import zeroeval as ze
from openai import OpenAI

ze.init()
client = OpenAI()

system_prompt = ze.prompt(
    name="support-bot",
    content="You are a helpful customer support agent for {{company}}.",
    variables={"company": "TechCorp"}
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I reset my password?"}
    ]
)

That’s it. Every call to ze.prompt() is tracked, versioned, and linked to the completions it produces. You’ll see production traces at ZeroEval → Prompts.
When you provide content, ZeroEval automatically uses the latest optimized version from your dashboard if one exists. The content parameter serves as a fallback for when no optimized versions are available yet.
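The auto-optimization behavior described above can be sketched in plain Python. This is an illustrative sketch, not the SDK's actual internals; the function name and the version store shape are hypothetical:

```python
def resolve_prompt(name, content, optimized_versions):
    """Return the latest optimized version for this prompt if one exists,
    otherwise fall back to the content passed in code."""
    versions = optimized_versions.get(name, [])
    return versions[-1] if versions else content

store = {"support-bot": []}  # no optimized versions yet -> fallback wins
assert resolve_prompt("support-bot", "You are a helpful agent.", store) == "You are a helpful agent."

store["support-bot"].append("You are TechCorp's expert support agent.")  # dashboard publishes a version
assert resolve_prompt("support-bot", "You are a helpful agent.", store) == "You are TechCorp's expert support agent."
```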

Version Control

Auto-optimization (default)

prompt = ze.prompt(
    name="customer-support",
    content="You are a helpful assistant."
)

Uses the latest optimized version if one exists, otherwise falls back to the provided content.

Explicit mode

prompt = ze.prompt(
    name="customer-support",
    from_="explicit",
    content="You are a helpful assistant."
)

Always uses the provided content. Useful for debugging or A/B testing a specific version.

Latest mode

prompt = ze.prompt(
    name="customer-support",
    from_="latest"
)

Requires an optimized version to exist; raises PromptRequestError if none is found.

Pin to a specific version

prompt = ze.prompt(
    name="customer-support",
    from_="a1b2c3d4..."  # 64-char SHA-256 hash
)
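The four from_ modes above can be summarized with a small dispatch sketch. This mirrors the documented behavior only; the function name and the version-record shape are assumptions, not the SDK's implementation:

```python
def pick_version(from_, content, versions):
    """Sketch of from_ dispatch: 'explicit' pins the provided content,
    'latest' requires an optimized version, a 64-char SHA-256 hash pins
    one version, and None (the default) auto-optimizes."""
    if from_ == "explicit":
        return content                          # always the provided content
    if from_ == "latest":
        if not versions:
            raise LookupError("no optimized version found")  # PromptRequestError in the SDK
        return versions[-1]["content"]
    if from_ is not None:                       # treat as a 64-char content hash pin
        for v in versions:
            if v["hash"] == from_:
                return v["content"]
        raise LookupError("pinned version not found")
    # default auto-optimization: latest if available, else the fallback content
    return versions[-1]["content"] if versions else content

versions = [{"hash": "a" * 64, "content": "Optimized v1"}]
assert pick_version(None, "Fallback", versions) == "Optimized v1"
assert pick_version("explicit", "Fallback", versions) == "Fallback"
assert pick_version("a" * 64, "Fallback", versions) == "Optimized v1"
```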

Prompt Library

For more control, use ze.get_prompt() to fetch prompts from the Prompt Library with tag-based deployments and caching.

prompt = ze.get_prompt(
    "support-triage",
    tag="production",
    fallback="You are a helpful assistant.",
    variables={"product": "Acme"},
)

print(prompt.content)
print(prompt.version)
print(prompt.model)

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| slug | str | — | Prompt slug (e.g. "support-triage") |
| version | int | None | Fetch a specific version number |
| tag | str | "latest" | Tag to fetch ("production", "latest", etc.) |
| fallback | str | None | Content to use if the prompt is not found |
| variables | dict | None | Template variables for {{var}} tokens |
| task_name | str | None | Override the task name for tracing |
| render | bool | True | Whether to render template variables |
| missing | str | "error" | What to do with missing variables: "error" or "ignore" |
| use_cache | bool | True | Use in-memory cache for repeated fetches |
| timeout | float | None | Request timeout in seconds |
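The render and missing parameters control {{var}} substitution. A minimal sketch of that behavior, assuming simple token replacement (the SDK's real templating may differ):

```python
import re

def render_template(template, variables, missing="error"):
    """Substitute {{var}} tokens; missing='error' raises on unknown names,
    missing='ignore' leaves the token in place."""
    def sub(match):
        name = match.group(1)
        if name in variables:
            return str(variables[name])
        if missing == "error":
            raise KeyError(f"missing template variable: {name}")
        return match.group(0)  # 'ignore': leave {{name}} untouched
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

assert render_template("Hello {{product}}!", {"product": "Acme"}) == "Hello Acme!"
assert render_template("Hello {{product}}!", {}, missing="ignore") == "Hello {{product}}!"
```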

Return value

Returns a Prompt object with:

| Field | Type | Description |
| --- | --- | --- |
| content | str | The rendered prompt content |
| version | int | Version number |
| version_id | str | Version UUID |
| tag | str | Tag this version was fetched from |
| is_latest | bool | Whether this is the latest version |
| model | str | Model bound to this version (if any) |
| metadata | dict | Additional metadata |
| source | str | "api" or "fallback" |
| content_hash | str | SHA-256 hash of the content |
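The content_hash field is a SHA-256 digest of the prompt content. A sketch of how such a hash can be computed; whether the SDK hashes the UTF-8 bytes exactly this way is an assumption:

```python
import hashlib

def content_hash(content: str) -> str:
    """SHA-256 hex digest of the prompt content (64 hex characters),
    usable as a from_ pin in ze.prompt()."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

digest = content_hash("You are a helpful assistant.")
assert len(digest) == 64  # same length as the from_ version pin
```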

Model Deployments

When you deploy a model to a prompt version in the dashboard, the SDK automatically patches the model parameter in your LLM calls:

system_prompt = ze.prompt(
    name="support-bot",
    content="You are a helpful customer support agent."
)

response = client.chat.completions.create(
    model="gpt-4",  # Gets replaced with the deployed model
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hello"}
    ]
)
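Conceptually, the patching step is a lookup: if the prompt has a deployed model, it overrides whatever model the call requested. A minimal sketch under that assumption (the function and the deployments mapping are hypothetical, not SDK internals):

```python
def patch_model(requested_model, deployments, prompt_name):
    """Return the model deployed to this prompt if one exists,
    otherwise keep the model requested in the call."""
    return deployments.get(prompt_name, requested_model)

deployments = {"support-bot": "gpt-4o-mini"}  # set via the dashboard
assert patch_model("gpt-4", deployments, "support-bot") == "gpt-4o-mini"  # overridden
assert patch_model("gpt-4", deployments, "other-bot") == "gpt-4"          # unchanged
```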

Sending Feedback

Attach feedback to completions to power prompt optimization:

ze.send_feedback(
    prompt_slug="support-bot",
    completion_id=response.id,
    thumbs_up=True,
    reason="Clear and concise response"
)

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| prompt_slug | str | Yes | Prompt name (same as used in ze.prompt()) |
| completion_id | str | Yes | UUID of the completion |
| thumbs_up | bool | Yes | Positive or negative feedback |
| reason | str | No | Explanation of the feedback |
| expected_output | str | No | What the output should have been |
| metadata | dict | No | Additional metadata |
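The feedback call boils down to a payload with three required fields and three optional ones. A sketch of assembling and validating such a payload locally (the helper is hypothetical; ze.send_feedback() handles this for you):

```python
def build_feedback(prompt_slug, completion_id, thumbs_up, **optional):
    """Assemble a feedback payload: prompt_slug, completion_id, and thumbs_up
    are required; reason, expected_output, and metadata pass through if set."""
    allowed = {"reason", "expected_output", "metadata"}
    unknown = set(optional) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    payload = {
        "prompt_slug": prompt_slug,
        "completion_id": completion_id,
        "thumbs_up": thumbs_up,
    }
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

payload = build_feedback("support-bot", "abc-123", True, reason="Clear and concise response")
assert payload["thumbs_up"] is True
assert payload["reason"] == "Clear and concise response"
```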