- Content-based versioning: Each unique prompt content gets its own version via SHA-256 hashing
- Variable templating: Use `{{variable}}` syntax for dynamic content
- Automatic tracking: All interactions are traced for analysis
- One-click model deployments: Models update instantly without code changes
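The first two features above can be sketched in plain Python: a version id derived from a SHA-256 hash of the prompt content, and `{{variable}}` placeholders filled in with a regex substitution. This is an illustrative sketch of the idea, not ZeroEval's actual implementation:

```python
import hashlib
import re

def version_of(content: str) -> str:
    # Content-based versioning: identical content always maps to the
    # same version id; any edit to the text produces a new one.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()[:12]

def render(template: str, variables: dict) -> str:
    # Fill {{variable}} placeholders with values from `variables`.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

template = "Summarize the following text in {{style}} style:\n{{text}}"
v1 = version_of(template)
v2 = version_of(template + " Be concise.")
print(v1 != v2)  # True: edited content yields a new version
print(render(template, {"style": "bullet-point", "text": "..."}))
```

Because the version is derived from the content itself, two identical prompts can never end up with different version ids, and no manual version bookkeeping is needed.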
## How it works

1. Instrument your code: Replace hardcoded prompts with `ze.prompt()` calls
2. Every change creates a version: Each time you modify your prompt content, a new version is automatically created and tracked
3. Collect performance data: ZeroEval automatically tracks all LLM interactions and their outcomes
4. Tune and evaluate: Use the UI to run experiments, vote on outputs, and identify the best prompt/model combinations
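One generic way the "identify the best combination" part of step 4 can work (a sketch under assumed data, not ZeroEval's actual scoring): tally votes per (prompt version, model) pair and pick the pair with the highest win rate.

```python
from collections import Counter

# Hypothetical vote log: (prompt_version, model, won_the_comparison)
votes = [
    ("v1a2b3", "gpt-4o", True),
    ("v1a2b3", "gpt-4o", False),
    ("9f8e7d", "gpt-4o-mini", True),
    ("9f8e7d", "gpt-4o-mini", True),
    ("9f8e7d", "gpt-4o-mini", False),
]

wins, total = Counter(), Counter()
for version, model, won in votes:
    total[(version, model)] += 1
    wins[(version, model)] += won  # True counts as 1

# Highest win rate wins: 2/3 for ("9f8e7d", "gpt-4o-mini") beats 1/2.
best = max(total, key=lambda pair: wins[pair] / total[pair])
print(best)
```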
5. One-click model deployments: Winning configurations are automatically deployed to your application without code changes
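The "without code changes" property in step 5 typically comes from indirection: the application resolves its configuration by name at call time, so promoting a winner is just a server-side pointer update. A hypothetical sketch of that mechanism (not ZeroEval's API):

```python
# Hypothetical server-side state: deployment name -> active config.
deployments = {
    "assistant": {"prompt_version": "v1a2b3", "model": "gpt-4o"},
}

def get_deployed(name: str) -> dict:
    # The application looks up its config on every call, so it
    # always picks up the latest deployment without redeploying code.
    return deployments[name]

print(get_deployed("assistant")["model"])  # gpt-4o

# "One-click deploy": only the server-side pointer changes.
deployments["assistant"] = {"prompt_version": "9f8e7d", "model": "gpt-4o-mini"}
print(get_deployed("assistant")["model"])  # gpt-4o-mini
```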