ZeroEval’s autotune feature lets you continuously improve your prompts and automatically deploy the best-performing models. The setup is simple and fast.

Getting started (<5 mins)

Replace hard-coded prompts with ze.prompt(), giving each one a name for the specific part of your agent that you want to tune.
# Before
prompt = "You are a helpful assistant"

# After - with autotune
prompt = ze.prompt(
    name="assistant",
    content="You are a helpful assistant"
)
That’s it! You’ll start seeing production traces in your dashboard for this specific task at ZeroEval › Tuning › [task_name].

Pushing models to production

Once you see a model that performs well, you can send it to production with a single click. Your specified model then gets replaced automatically any time you use the prompt from ze.prompt():
# You write this
response = client.chat.completions.create(
    model="gpt-4",  # ← Gets replaced!
    messages=[{"role": "system", "content": prompt}]
)
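Conceptually, the replacement is a lookup at call time: if a winning model has been deployed for the named task, it overrides whatever model string the caller passed. The sketch below illustrates that idea only; the `deployed_models` mapping and `patch_model` helper are illustrative names, not part of the ZeroEval API.

```python
# Hypothetical sketch of call-time model patching (not the real SDK internals).
# Maps a tuned task name to the model currently deployed for it.
deployed_models = {"assistant": "claude-3-sonnet"}

def patch_model(task_name: str, requested_model: str) -> str:
    """Return the deployed model for a task, falling back to the
    model the caller originally requested."""
    return deployed_models.get(task_name, requested_model)

print(patch_model("assistant", "gpt-4"))   # deployed override wins
print(patch_model("other-task", "gpt-4"))  # no deployment: unchanged
```

The key property is that the override is keyed by the prompt's `name`, so your application code never has to change when a new model is promoted.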

Example

Here’s autotune in action for a simple customer support bot:
import zeroeval as ze
from openai import OpenAI

ze.init()
client = OpenAI()

# Define your prompt with version tracking
system_prompt = ze.prompt(
    name="support-bot",
    content="""You are a customer support agent for {{company}}.
    Be helpful, concise, and professional.""",
    variables={"company": "TechCorp"}
)

# Use it normally - model gets patched automatically
response = client.chat.completions.create(
    model="gpt-4",  # This might run claude-3-sonnet in production!
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I need help with my order"}
    ]
)
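The `{{company}}` placeholder in the example above is filled in from the `variables` mapping before the prompt reaches the model. A minimal sketch of that substitution step, assuming simple `{{name}}`-style placeholders (the `render` helper is illustrative, not a ZeroEval function):

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace every {{name}} placeholder with its value from `variables`."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

prompt = render(
    "You are a customer support agent for {{company}}.",
    {"company": "TechCorp"},
)
print(prompt)  # You are a customer support agent for TechCorp.
```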

Understanding Prompt Versions

Every time you change your prompt content, a new version is created:
# Version 1 - Initial prompt
prompt_v1 = ze.prompt(
    name="customer-support",
    content="You are a helpful assistant."
)

# Version 2 - Updated prompt (automatically creates new version)
prompt_v2 = ze.prompt(
    name="customer-support", 
    content="You are a helpful customer support assistant."  # Changed!
)

# Fetch the latest tuned version
latest_prompt = ze.prompt(
    name="customer-support",
    from_="latest"  # "from" is a reserved word in Python; note the trailing underscore
)

# Or fetch a specific version by its content hash
specific_prompt = ze.prompt(
    name="customer-support",
    from_="a1b2c3d4..."  # 64-character SHA-256 hash
)
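Since versions are identified by a 64-character SHA-256 hash, the version identifier can be derived from the prompt text itself: any change to the content yields a new hash, and identical content always hashes to the same value. A minimal sketch of that idea (the `content_hash` helper is illustrative, not a ZeroEval function):

```python
import hashlib

def content_hash(content: str) -> str:
    """64-character SHA-256 hex digest of the prompt text."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

h1 = content_hash("You are a helpful assistant.")
h2 = content_hash("You are a helpful customer support assistant.")
print(len(h1))   # 64
print(h1 == h2)  # False: any content change produces a new version hash
```

Hashing the content makes versioning deterministic: re-registering an unchanged prompt does not create a new version.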