Model Providers

OpenAI

If you use OpenAI models in your application, you can trace, monitor, and analyze them with Judgment. There are two ways to do this:

  1. Judgeval Wrap: Wrap your OpenAI client with judgeval's wrap function when you initialize it.
  2. OpenTelemetry: Use the OpenAIInstrumentor from the opentelemetry-instrumentation-openai package to instrument OpenAI SDK calls and send OpenTelemetry spans to Judgment.

Judgeval Wrap

from openai import OpenAI
from judgeval.tracer import wrap

client = OpenAI()
wrapped_client = wrap(client)  # Use wrapped_client from here on
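
Once wrapped, OpenAI calls made through the wrapped client are traced by Judgment. A minimal usage sketch; the model name and prompt below are illustrative, not part of the original example:

response = wrapped_client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
)
print(response.choices[0].message.content)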

OpenTelemetry

from judgeval.tracer import Tracer
# Make sure you've installed the dependency: opentelemetry-instrumentation-openai
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

tracer = Tracer(project_name="openai_agents_project")
OpenAIInstrumentor().instrument()

Always initialize the Tracer before calling OpenAIInstrumentor().instrument() to ensure proper trace routing.
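
After instrumentation, calls made with a standard OpenAI client emit spans without any wrapping. A minimal sketch, assuming an illustrative model name and prompt:

from openai import OpenAI

client = OpenAI()  # no wrap() needed; the instrumentor patches the OpenAI SDK
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
)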

Use only one of these options to trace OpenAI calls; using both may produce duplicate spans.