ZeroEval’s autotune feature lets you continuously improve your prompts and automatically deploy the best-performing models. The setup is simple and powerful.
Once you see a model that performs well, you can promote it to production with a single click. From then on, your specified model is replaced automatically any time you use the prompt returned by ze.prompt(), as shown below.
Here’s autotune in action for a simple customer support bot:
```python
import zeroeval as ze
from openai import OpenAI

ze.init()
client = OpenAI()

# Define your prompt with version tracking
system_prompt = ze.prompt(
    name="support-bot",
    content="""You are a customer support agent for {{company}}.
Be helpful, concise, and professional.""",
    variables={"company": "TechCorp"}
)

# Use it normally - model gets patched automatically
response = client.chat.completions.create(
    model="gpt-4",  # This might run claude-3-sonnet in production!
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I need help with my order"}
    ]
)
```
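To make the call-time patching concrete, here is a minimal sketch of the idea, not ZeroEval’s actual implementation: a lookup that swaps the requested model for the production override when one exists. The `production_overrides` mapping and `apply_override` function are hypothetical names used only for illustration.

```python
# Hypothetical illustration of call-time model overriding.
# In the real SDK this happens transparently inside the patched client.
production_overrides = {"support-bot": "claude-3-sonnet"}  # assumed mapping

def apply_override(prompt_name: str, requested_model: str) -> str:
    """Return the production model if an override exists, else the requested one."""
    return production_overrides.get(prompt_name, requested_model)

print(apply_override("support-bot", "gpt-4"))  # override applies
print(apply_override("other-bot", "gpt-4"))    # no override, model unchanged
```

The key design point is that your application code keeps asking for `gpt-4`; the override is resolved at call time, so promoting a new model never requires a code change.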
Every time you change your prompt content, a new version is created:
```python
# Version 1 - Initial prompt
prompt_v1 = ze.prompt(
    name="customer-support",
    content="You are a helpful assistant."
)

# Version 2 - Updated prompt (automatically creates new version)
prompt_v2 = ze.prompt(
    name="customer-support",
    content="You are a helpful customer support assistant."  # Changed!
)

# Fetch the latest tuned version
latest_prompt = ze.prompt(
    name="customer-support",
    from_="latest"  # "from" is a reserved word in Python
)

# Or fetch a specific version by its content hash
specific_prompt = ze.prompt(
    name="customer-support",
    from_="a1b2c3d4..."  # 64-character SHA-256 hash
)
```