Prompt optimization is a different approach from the traditional evals experience. Instead of setting up complex eval pipelines, we simply ingest your production traces and let you optimize your prompts based on your feedback.

How it works

1. Instrument your code
Replace hardcoded prompts with ze.prompt() calls in Python or ze.prompt({...}) in TypeScript (see the sketch after this list).
2. Every change creates a version
Each time you modify your prompt content, a new version is automatically created and tracked.

3. Collect performance data
ZeroEval automatically tracks all LLM interactions and their outcomes.

4. Tune and evaluate
Use the UI to run experiments, vote on outputs, and identify the best prompt/model combinations.

5. One-click model deployments
Winning configurations are automatically deployed to your application without code changes.
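
To make step 1 concrete, here is a minimal sketch of what instrumenting a hardcoded prompt might look like in Python, using the OpenAI client as the downstream model call. The ze.prompt() call is the one referenced above; its `name` and `content` keyword arguments, its string return value, and the ze.init() setup are assumptions for illustration, not the SDK's confirmed API.

```python
import zeroeval as ze
from openai import OpenAI

# Assumed initialization call; check the ZeroEval SDK docs for the exact form.
ze.init(api_key="YOUR_ZEROEVAL_API_KEY")

client = OpenAI()

# Before: a hardcoded prompt string buried in application code.
# system_prompt = "You are a helpful support agent. Answer concisely."

# After: the prompt is registered through ze.prompt(), so every edit to its
# content is tracked as a new version. The `name` and `content` keyword
# arguments (and the string return value) are assumptions for illustration.
system_prompt = ze.prompt(
    name="support-agent-system",
    content="You are a helpful support agent. Answer concisely.",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```

Because the prompt text is resolved through ze.prompt() at runtime rather than hardcoded, the later steps (versioning, tuning, and one-click deployments) can swap in a winning prompt/model configuration without further code changes.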