Tracing Providers

OpenLit Integration

Export OpenLit traces to the Judgment platform.

The OpenLit integration sends traces from your OpenLit-instrumented applications to the Judgment platform. If you're already using OpenLit for observability, your existing traces are forwarded to Judgment without any additional instrumentation.

Quickstart

Install Dependencies

# with uv
uv add openlit judgeval openai

# or with pip
pip install openlit judgeval openai
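judgeval also needs your Judgment credentials at runtime. A minimal shell setup, assuming the standard `JUDGMENT_API_KEY` and `JUDGMENT_ORG_ID` environment variables (confirm the exact variable names and key values in your Judgment account settings):

```shell
# Placeholder values – substitute the keys from your Judgment dashboard
export JUDGMENT_API_KEY="your_api_key"
export JUDGMENT_ORG_ID="your_org_id"
```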

Initialize Integration

setup.py
from judgeval.tracer import Tracer
from judgeval.integrations.openlit import Openlit

tracer = Tracer(project_name="openlit_project")
Openlit.initialize()

Always initialize the Tracer before calling Openlit.initialize() to ensure proper trace routing.

Add to Existing Code

Add these lines to your existing OpenLit-instrumented application:

from openai import OpenAI
from judgeval.tracer import Tracer  
from judgeval.integrations.openlit import Openlit  

tracer = Tracer(project_name="openlit-agent")  
Openlit.initialize()  

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Hello, world!"}]
)

print(response.choices[0].message.content)

All OpenLit traces are exported to the Judgment platform.

No OpenLit Initialization Required: When using Judgment's OpenLit integration, you don't need to call openlit.init() separately. The Openlit.initialize() call handles all necessary OpenLit setup automatically.

import openlit   # no longer needed – remove
openlit.init()   # no longer needed – Openlit.initialize() replaces this call

from judgeval.tracer import Tracer
from judgeval.integrations.openlit import Openlit

tracer = Tracer(project_name="your_project")
Openlit.initialize()  # handles all OpenLit setup

from openai import OpenAI
client = OpenAI()

Example: Multi-Workflow Application

Tracking Non-OpenLit Operations: Use @tracer.observe() to track any function or method that's not automatically captured by OpenLit. The multi-workflow example below applies @tracer.observe() to main() to monitor custom logic that happens outside your OpenLit-instrumented calls.

multi_workflow_example.py
from judgeval.tracer import Tracer
from judgeval.integrations.openlit import Openlit
from openai import OpenAI

tracer = Tracer(project_name="multi_workflow_app")
Openlit.initialize()

client = OpenAI()

def analyze_text(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": "You are a helpful AI assistant."},
            {"role": "user", "content": f"Analyze: {text}"}
        ]
    )
    return response.choices[0].message.content

def summarize_text(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": "You are a helpful AI assistant."},
            {"role": "user", "content": f"Summarize: {text}"}
        ]
    )
    return response.choices[0].message.content

def generate_content(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": "You are a creative AI assistant."},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content

@tracer.observe(span_type="function")  
def main():
    text = "The future of artificial intelligence is bright and full of possibilities."

    analysis = analyze_text(text)
    summary = summarize_text(text)
    story = generate_content(f"Create a story about: {text}")

    print(f"Analysis: {analysis}")
    print(f"Summary: {summary}")
    print(f"Story: {story}")

if __name__ == "__main__":
    main()