ZeroEval Documentation

    Introduction

    Build reliable AI applications with evaluations, A/B testing, and monitoring

    ZeroEval is a comprehensive platform for building reliable AI applications. It provides the essential tools to evaluate, test, and monitor your AI systems in both development and production.

    Features

    Evaluations & Datasets

    Create versioned datasets and run experiments to systematically test your AI models

    A/B Testing

    Compare models in production with real user feedback to make data-driven decisions

    LLM Gateway

    A single endpoint for 100+ models across all major providers
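
    This page doesn't show the gateway's request format, but "a single endpoint for 100+ models" typically means an OpenAI-compatible chat-completions interface where switching providers is just a different model string. The sketch below illustrates that idea only; the endpoint URL, model names, and payload shape are assumptions, not taken from this page — check the LLM Gateway introduction for the real values.

    ```python
    import json
    import urllib.request

    # Hypothetical gateway endpoint -- an assumption for illustration,
    # not a documented ZeroEval URL.
    GATEWAY_URL = "https://api.zeroeval.com/v1/chat/completions"

    def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
        """Build a chat-completions request in the common OpenAI-style format."""
        payload = {
            # Provider-prefixed model names are assumed, e.g. "openai/gpt-4o";
            # swapping providers changes only this string.
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
        return urllib.request.Request(
            GATEWAY_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )

    # Same endpoint, same auth header, same payload shape for any provider.
    req = build_request("openai/gpt-4o", "Hello!", "YOUR_API_KEY")
    ```

    The point of the single-endpoint design is that your application code never changes per provider — only the `model` field does.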

    Monitoring & Tracing

    Track costs, latency, and errors, and debug issues with session replay and traces

    Next Steps

    Quickstart Guide

    Get up and running in under 5 minutes

    Console

    Explore your traces and datasets in the console
