
Constant

Constants for the gllm_evals package.

This module contains all string literals used throughout the package to ensure consistency and maintainability.

Authors

Christina Alexandra (christina.alexandra@gdplabs.id)

References

NONE

AttachmentConfigKeys

Attachment configuration keys.

BatchConstants

Batch processing constants.

ColumnNames

Column names for the fixed columns mapping.

CustomPromptKeys

Keys for custom prompt dictionaries.

These keys are used in the custom_prompts dictionary structure returned by _extract_custom_prompt() methods:

    {
        FEWSHOT: {EXAMPLES: str, MODE: str},
        EVALUATION_STEPS: list[str],
    }

Used by metrics that support custom prompts:

- LMBasedMetric
- DeepEvalGEvalMetric
- RAGASMetric
- LangChainOpenEvalsLLMAsAJudgeMetric
- LangChainAgentEvalsLLMAsAJudgeMetric
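
For reference, a concrete, purely illustrative instance of that dictionary shape; in code the keys are the CustomPromptKeys constants, and the raw string keys and example values shown here are assumptions:

    custom_prompts = {
        "fewshot": {                        # CustomPromptKeys.FEWSHOT (string value assumed)
            "examples": "Q: ...\nA: ...",   # custom few-shot examples as a single string
            "mode": "append",               # a CustomPromptModes value: "append" or "replace"
        },
        "evaluation_steps": [               # CustomPromptKeys.EVALUATION_STEPS (string value assumed)
            "Check that the answer addresses the question.",
            "Check that the answer is supported by the provided context.",
        ],
    }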

CustomPromptModes

Valid modes for custom prompt application via CSV columns.

These modes control how custom few-shot examples are applied:

- APPEND: Adds custom examples to existing examples
- REPLACE: Replaces all existing examples with custom examples

Used by metrics that support custom prompts:

- LMBasedMetric
- DeepEvalGEvalMetric
- RAGASMetric
- LangChainOpenEvalsLLMAsAJudgeMetric
- LangChainAgentEvalsLLMAsAJudgeMetric

validate_and_normalize(mode, metric_name='unknown') classmethod

Validate and normalize a mode value.

Parameters:

Name         Type  Description                                                          Default
mode         str   The mode string to validate (e.g., "append", "REPLACE", "Append ")  required
metric_name  str   Name of the metric for error messages                                'unknown'

Returns:

Type  Description
str   Normalized mode (lowercase, trimmed)

Raises:

Type        Description
ValueError  If mode is not in VALID_MODES after normalization

Examples:

>>> CustomPromptModes.validate_and_normalize("append", "completeness")
'append'
>>> CustomPromptModes.validate_and_normalize("REPLACE ", "completeness")
'replace'
>>> CustomPromptModes.validate_and_normalize("invalid", "completeness")
ValueError: Invalid mode 'invalid' for metric 'completeness'...
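
A minimal, self-contained sketch of the validation behavior described above, assuming VALID_MODES is a class-level set of the supported modes; the mode string values and the exact error wording are illustrative, not the package's actual implementation:

    class CustomPromptModes:
        APPEND = "append"                  # assumed string values
        REPLACE = "replace"
        VALID_MODES = {APPEND, REPLACE}

        @classmethod
        def validate_and_normalize(cls, mode: str, metric_name: str = "unknown") -> str:
            # Normalize: trim surrounding whitespace and lowercase the value.
            normalized = mode.strip().lower()
            # Reject anything that is not a supported mode after normalization.
            if normalized not in cls.VALID_MODES:
                raise ValueError(
                    f"Invalid mode '{mode}' for metric '{metric_name}'. "
                    f"Expected one of: {sorted(cls.VALID_MODES)}"
                )
            return normalized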

DefaultValues

Default values used throughout the package.

EnvironmentVarsKeys

Environment variable keys used in the package.

ErrorConstants

Error constants used throughout the package.

ErrorMessages

Error messages used throughout the package.

EvaluatorKeys

Evaluator identifiers used across rule engines and aggregation.

EvaluatorNames

Evaluator names used in the package.

ExportKeys

Export keys used for API responses.

ExportTypeKeys

Export type keys used for API responses.

GDriveKeys

Google Drive-related keys.

GLChatSDKKeys

GLChat SDK keys used for API calls.

GeneralClassConstants

General class constants used in the package.

GeneralConstants

General constants used in the package.

GeneralMetadataKeys

General metadata keys used for API responses.

GlobalExplanationMetricKeys

Global explanation metric keys used for API responses.

KwargsKeys

Kwargs keys used for API responses.

LangfuseAPIKeys

Keys used in Langfuse API calls.

LangfuseMetadataKeys

Langfuse metadata keys used for API calls.

LangfuseTraceKeys

Langfuse trace keys used for API calls.

MetricNames

Metric names used throughout the package.

PromptRoles

Prompt roles used in the package.

PromptTags

Constants for prompt tag markers used in few-shot example handling.

These tags are used to mark sections in prompts where few-shot examples should be inserted or replaced. The tags support both replace and append modes.

Used by metrics that support tagged few-shot examples:

- LMBasedMetric
- LangChainOpenEvalsLLMAsAJudgeMetric
- LangChainAgentEvalsLLMAsAJudgeMetric

Example usage in a prompt template:
You are an evaluator.

<FEW_SHOTS>
Example 1: ...
Example 2: ...
</FEW_SHOTS>

Now evaluate: {input}
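
A minimal, self-contained sketch of how the tagged section could be rewritten for the two modes; the helper name and the regex-based handling are assumptions for illustration, not the package's actual implementation:

    import re

    FEW_SHOTS_PATTERN = re.compile(r"<FEW_SHOTS>(.*?)</FEW_SHOTS>", re.DOTALL)

    def apply_few_shot_examples(prompt: str, custom_examples: str, mode: str) -> str:
        """Rewrite the <FEW_SHOTS> section with custom examples (illustrative helper)."""
        def _rewrite(match: re.Match) -> str:
            existing = match.group(1).strip()
            if mode == "replace":
                body = custom_examples                   # discard the existing examples
            else:  # "append"
                body = f"{existing}\n{custom_examples}"  # keep existing examples, add custom ones
            return f"<FEW_SHOTS>\n{body}\n</FEW_SHOTS>"
        return FEW_SHOTS_PATTERN.sub(_rewrite, prompt)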

RatingValues

Rating values used for evaluation results.

ResultKeys

Result dictionary keys used in evaluation outputs.

ResultMetricKeys

Result metric keys used for API responses.

S3Keys

GLChat S3-related keys.

ScoreResultKeys

Score result keys used for API responses.

ScoringThresholds

Scoring thresholds used for rating classification.

SuccessMessages

Success messages used throughout the package.

TestKeys

Test keys used in unit tests.

TrajectoryScoreValues

Score values used for trajectory evaluation results.