Model Providers

Ollama

If you use Ollama within your application, you can trace, monitor, and analyze those calls with Judgment using OllamaInstrumentor.

OpenTelemetry

pip install opentelemetry-instrumentation-ollama
from judgeval import Judgeval
# Make sure you installed the dependency: opentelemetry-instrumentation-ollama
from opentelemetry.instrumentation.ollama import OllamaInstrumentor

client = Judgeval(project_name="ollama_project")
tracer = client.tracer.create()
OllamaInstrumentor().instrument()

Always initialize the Tracer before calling OllamaInstrumentor().instrument() to ensure proper trace routing.

Ollama allows you to run large language models locally. This instrumentation automatically traces all Ollama API calls.
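Once instrumentation is active, application code calls Ollama as usual and spans are recorded automatically. The sketch below is a hypothetical example, not part of the Judgment API: the model name "llama3" and the helper names are placeholders, and it assumes the `ollama` Python client is installed and a local Ollama server is running.

```python
def build_messages(prompt):
    # Ollama's chat API expects a list of {"role", "content"} dicts.
    return [{"role": "user", "content": prompt}]

def traced_chat(prompt, model="llama3"):
    # Deferred import: requires `pip install ollama` and a running server.
    import ollama
    # No extra tracing code is needed here -- because
    # OllamaInstrumentor().instrument() has already patched the client,
    # this call is captured as a span automatically.
    return ollama.chat(model=model, messages=build_messages(prompt))
```

Note that the application code contains no references to the tracer; the instrumentor patches the Ollama client globally at `instrument()` time.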