Why track prompts
- Version history — every prompt change creates a new version you can compare and roll back to
- Production visibility — see exactly which prompt version is running, how often it’s called, and what it produces
- Feedback loop — attach thumbs-up/down feedback to completions, then use it to optimize prompts and evaluate models
- One-click deployments — push a winning prompt or model to production without redeploying your app
How it works
Replace hardcoded prompts
Swap string literals for ze.prompt() calls. Your existing prompt text becomes the fallback content.
Versions are created automatically
Each unique prompt string creates a tracked version. Changes to the prompt text in your code produce new versions without any extra work.
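To make the first two steps concrete, here is a minimal, self-contained sketch of the idea. This is not the real ZeroEval SDK: the `PromptRegistry` class and the assumption that `ze.prompt(name, content)` returns the content while registering a version are illustrative stand-ins for how content-based versioning can work.

```python
import hashlib

class PromptRegistry:
    """Illustrative stand-in for SDK-side version tracking (not the real API)."""

    def __init__(self):
        self.versions = {}  # prompt name -> {content hash: version number}

    def prompt(self, name, content):
        # Each unique prompt string under a name becomes a tracked version.
        versions = self.versions.setdefault(name, {})
        digest = hashlib.sha256(content.encode()).hexdigest()
        if digest not in versions:
            versions[digest] = len(versions) + 1
        # The string you passed in is returned as the fallback content.
        return content

ze = PromptRegistry()

# Before: system_prompt = "You are a helpful support agent."
# After: the same string, wrapped, so edits are tracked as versions.
text = ze.prompt("support-greeting", "You are a helpful support agent.")
ze.prompt("support-greeting", "You are a helpful support agent.")  # same string: no new version
ze.prompt("support-greeting", "You are a concise support agent.")  # changed string: version 2

print(len(ze.versions["support-greeting"]))  # → 2
```

Because versions are keyed on the prompt content itself, simply editing the string in your code and redeploying is enough to create the next version.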
Completions are linked to versions
When your LLM integration fires, ZeroEval links each completion to the exact prompt version and model that produced it.
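The linkage amounts to recording lineage metadata next to each output. The sketch below is a hypothetical stand-in, not the real SDK; the `log_completion` helper and record fields are assumptions that illustrate the kind of record kept per completion.

```python
# Illustrative: each completion is stored with the prompt version and model
# that produced it, so outputs can later be filtered and compared by version.
completion_log = []

def log_completion(prompt_name, prompt_version, model, output):
    """Record which prompt version and model produced a given completion."""
    entry = {
        "prompt": prompt_name,
        "version": prompt_version,
        "model": model,
        "output": output,
    }
    completion_log.append(entry)
    return entry

# When the LLM call fires, the wrapper records lineage alongside the output.
log_completion("support-greeting", 1, "gpt-4o", "Hi! How can I help you today?")
log_completion("support-greeting", 2, "gpt-4o", "How can I help?")

# Now you can see exactly which prompt version produced which outputs.
v2_outputs = [c["output"] for c in completion_log if c["version"] == 2]
print(v2_outputs)  # → ['How can I help?']
```

This per-completion record is also what makes the feedback loop possible: a thumbs-up or thumbs-down attached to a completion rolls up to a specific prompt version and model.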
Get started
Python
ze.prompt() and ze.get_prompt() for Python applications
TypeScript
ze.prompt() for TypeScript and JavaScript applications