ScorerData
Individual scorer result containing the score, reasoning, and metadata for a single scorer applied to an example.
name (required): str
  Name of the scorer that generated this result.

threshold (required): float
  Threshold value used to determine pass/fail for this scorer.

success (required): bool
  Whether this individual scorer succeeded (score >= threshold).

score: Optional[float]
  Numerical score returned by the scorer (typically 0.0-1.0).

minimum_score_range: float
  Minimum possible score value for this scorer.

maximum_score_range: float
  Maximum possible score value for this scorer.

reason: Optional[str]
  Human-readable explanation of why the scorer gave this result.
strict_mode: Optional[bool]
  Whether strict mode was enabled for this scorer.

evaluation_model: Optional[str]
  Model used for evaluation (e.g., "gpt-5.2", "claude-3").

error: Optional[str]
  Error message if the scorer failed to execute.

additional_metadata: Dict[str, Any]
  Extra information specific to this scorer or evaluation run.

id: Optional[str]
  Identifier for this scorer result.

Methods

to_dict(): Dict[str, Any]
  Convert the scorer data to a dictionary format for API serialization.
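Conceptually, to_dict() flattens the fields above into a plain dictionary suitable for an API payload. The sketch below is illustrative only, not the actual judgeval implementation: a minimal stand-in dataclass (with a subset of the fields) whose to_dict() is backed by dataclasses.asdict.

```python
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional

# Illustrative stand-in for ScorerData (NOT the real judgeval class):
# shows how to_dict() might serialize fields for an API payload.
@dataclass
class ScorerDataSketch:
    name: str
    threshold: float
    success: bool
    score: Optional[float] = None
    reason: Optional[str] = None
    error: Optional[str] = None
    additional_metadata: Dict[str, Any] = field(default_factory=dict)

    def to_dict(self) -> Dict[str, Any]:
        # Recursively convert all dataclass fields into a plain dict.
        return asdict(self)

data = ScorerDataSketch(
    name="answer_relevancy",
    threshold=0.7,
    success=True,
    score=0.92,
    reason="The answer directly addresses the question.",
)
payload = data.to_dict()
```

The resulting payload is a JSON-serializable dict keyed by field name, e.g. `payload["score"]` is `0.92` and `payload["success"]` is `True`.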
Usage Examples
from judgeval import Judgeval
from judgeval.v1.data.example import Example
client = Judgeval(project_name="default_project")
# Run an evaluation
example = Example.create(
    input="What is the capital of France?",
    expected_output="Paris",
    actual_output="Paris is the capital city of France."
)

results = client.evaluation.create().run(
    examples=[example],
    scorers=[client.scorers.built_in.answer_relevancy()]
)
# Access scorer data from a ScoringResult
for result in results:
    for scorer_data in result.scorers_data:
        print(f"Scorer: {scorer_data.name}")
        print(f"Score: {scorer_data.score} (threshold: {scorer_data.threshold})")
        print(f"Success: {scorer_data.success}")
        print(f"Reason: {scorer_data.reason}")
        if scorer_data.error:
            print(f"Error: {scorer_data.error}")
        if scorer_data.additional_metadata:
            print(f"Metadata: {scorer_data.additional_metadata}")
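Because each scorer result carries its own success flag and error field, you can roll per-scorer outcomes up into an overall verdict yourself. Below is a hypothetical helper (not part of judgeval) that operates on plain dicts in the shape produced by to_dict(), treating errored scorers as failures:

```python
from typing import Any, Dict, List

# Hypothetical aggregation helper (not a judgeval API): summarize the
# scorer results for one example, counting errored scorers as failures.
def summarize(scorers_data: List[Dict[str, Any]]) -> Dict[str, Any]:
    def ok(s: Dict[str, Any]) -> bool:
        return bool(s.get("success")) and not s.get("error")

    passed = [s["name"] for s in scorers_data if ok(s)]
    failed = [s["name"] for s in scorers_data if not ok(s)]
    return {"all_passed": not failed, "passed": passed, "failed": failed}

summary = summarize([
    {"name": "answer_relevancy", "score": 0.92, "threshold": 0.7, "success": True},
    {"name": "faithfulness", "score": 0.55, "threshold": 0.8, "success": False},
])
```

Here `summary["all_passed"]` is `False` because faithfulness scored below its threshold, while `summary["passed"]` still records the scorers that succeeded.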