# ZeroEval Documentation

## Docs

- [Introduction](https://docs.zeroeval.com/autotune/introduction.md): Run evaluations on models and prompts to find the best variants for your agents
- [Reference](https://docs.zeroeval.com/autotune/reference.md): Parameters and configuration for `ze.prompt`
- [Setup](https://docs.zeroeval.com/autotune/setup.md): Getting started with autotune
- [Models](https://docs.zeroeval.com/autotune/tuning/models.md): Evaluate your agent's performance across multiple models
- [Prompts](https://docs.zeroeval.com/autotune/tuning/prompts.md): Use feedback on production traces to generate and validate better prompts
- [Introduction](https://docs.zeroeval.com/calibrated-judges/introduction.md): Continuously evaluate your production traffic with judges that learn over time
- [Setup](https://docs.zeroeval.com/calibrated-judges/setup.md): Create and calibrate an AI judge in minutes
- [A/B Tests](https://docs.zeroeval.com/evaluations/ab-tests.md): Run weighted A/B tests on models, prompts, or any variants in your code
- [Datasets](https://docs.zeroeval.com/evaluations/datasets.md): Create, version, and manage datasets with the ZeroEval Python SDK
- [Experiments](https://docs.zeroeval.com/evaluations/experiments.md): Run tasks and evaluations on your datasets using decorators: simple, clean, and powerful
- [Prompt Library](https://docs.zeroeval.com/evaluations/prompt-management.md): Fetch versioned prompts by slug with tags, variables, fallback, and task association
- [Introduction](https://docs.zeroeval.com/llm-gateway/introduction.md): A unified interface for accessing and switching between Large Language Models from different providers
- [Manual Instrumentation](https://docs.zeroeval.com/tracing/manual-instrumentation.md): Create spans manually for LLM calls and custom operations
- [OpenTelemetry](https://docs.zeroeval.com/tracing/opentelemetry.md): Send traces to ZeroEval using the OpenTelemetry collector
- [Quickstart](https://docs.zeroeval.com/tracing/quickstart.md): Get started with tracing and observability in ZeroEval
- [Reference](https://docs.zeroeval.com/tracing/reference.md): Environment variables and configuration parameters for the ZeroEval tracer
- [Integrations](https://docs.zeroeval.com/tracing/sdks/python/integrations.md): Automatic instrumentation for popular AI/ML frameworks
- [Reference](https://docs.zeroeval.com/tracing/sdks/python/reference.md): Complete API reference for the Python SDK
- [Setup](https://docs.zeroeval.com/tracing/sdks/python/setup.md): Get started with ZeroEval tracing in Python applications
- [Integrations](https://docs.zeroeval.com/tracing/sdks/typescript/integrations.md): Tracing integrations with popular libraries
- [Reference](https://docs.zeroeval.com/tracing/sdks/typescript/reference.md): Complete API reference for the TypeScript SDK
- [Setup](https://docs.zeroeval.com/tracing/sdks/typescript/setup.md): Get started with ZeroEval tracing in TypeScript and JavaScript applications
- [Sessions](https://docs.zeroeval.com/tracing/sessions.md): Group related spans into sessions for better organization and analysis
- [Signals](https://docs.zeroeval.com/tracing/signals.md): Capture real-world feedback and metrics to enrich your traces, spans, and sessions
- [Tags](https://docs.zeroeval.com/tracing/tagging.md): Simple ways to attach rich, queryable tags to your traces
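For quick orientation before opening the pages above, here is a minimal sketch of the tracing quickstart flow. It assumes the Python SDK ships as a `zeroeval` package exposing `ze.init` and a `@ze.span` decorator; those names and signatures are assumptions drawn from the SDK setup and reference pages listed above, not a verified API.

```python
# Minimal tracing sketch (assumed API surface; see the Python SDK
# setup and reference pages for the authoritative signatures).
import zeroeval as ze

# Assumption: init accepts an api_key, and the tracer's reference page
# documents a ZEROEVAL_API_KEY environment variable as an alternative.
ze.init(api_key="YOUR_API_KEY")

# Assumption: @ze.span wraps a function so each call is recorded as a span.
@ze.span(name="summarize")
def summarize(text: str) -> str:
    # An LLM call would go here; the integrations pages describe
    # automatic instrumentation for popular frameworks.
    return text[:100]

if __name__ == "__main__":
    summarize("ZeroEval groups related spans into sessions and traces.")
```

Sessions, signals, and tags (covered in the last three pages above) attach additional context to spans produced this way.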