# Retrieve evaluator

Retrieves detailed information about a specific evaluator, including its full configuration.
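The endpoint is called with a `GET` request:

```bash
GET https://api.respan.ai/api/evaluators/{evaluator_id}/
```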
## Authentication
All endpoints require API key authentication:
```bash
Authorization: Bearer YOUR_API_KEY
```
## Path Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `evaluator_id` | string | The unique ID of the evaluator |
## Examples
```python Python
import requests

evaluator_id = "0f4325f9-55ef-4c20-8abe-376694419947"
url = f"https://api.respan.ai/api/evaluators/{evaluator_id}/"
headers = {
    "Authorization": "Bearer YOUR_API_KEY"
}

response = requests.get(url, headers=headers)
print(response.json())
```
```bash cURL
curl -X GET "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
-H "Authorization: Bearer YOUR_API_KEY"
```
## Response
**Status: 200 OK**
### LLM Evaluator Response (New Format)
```json
{
"id": "0f4325f9-55ef-4c20-8abe-376694419947",
"name": "Response Quality Evaluator",
"evaluator_slug": "response_quality_v2",
"type": "llm",
"score_value_type": "numerical",
"eval_class": "",
"description": "Evaluates response quality on a 1-5 scale",
"score_config": {
"min_score": 1,
"max_score": 5,
"choices": [
{"name": "Poor", "value": 1},
{"name": "Fair", "value": 2},
{"name": "Good", "value": 3},
{"name": "Great", "value": 4},
{"name": "Excellent", "value": 5}
]
},
"passing_conditions": {
"primary_score": {
"operator": "gte",
"value": 3
}
},
"llm_config": {
"model": "gpt-4o-mini",
"evaluator_definition": "Rate the quality:\n<input>{{input}}</input>\n<output>{{output}}</output>",
"scoring_rubric": "1=Poor, 5=Excellent",
"temperature": 0.1,
"max_tokens": 200
},
"code_config": null,
"configurations": {},
"created_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"updated_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"created_at": "2025-09-11T09:43:55.858321Z",
"updated_at": "2025-09-11T09:43:55.858331Z",
"custom_required_fields": [],
"categorical_choices": null,
"starred": false,
"organization": 2,
"tags": []
}
```
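The `passing_conditions` block uses the universal filter format described under Response Fields below. Here is a minimal sketch of how a client might apply it to a returned score; only `gte` appears in this document's examples, so the other operators in the mapping are assumptions following the same naming convention:

```python
import operator

# Operator mapping: only "gte" is confirmed by the examples in this
# document; the rest are assumed to follow the same convention.
OPERATORS = {
    "gte": operator.ge,
    "gt": operator.gt,
    "lte": operator.le,
    "lt": operator.lt,
    "eq": operator.eq,
}

def passes(passing_conditions, score):
    """Check a score against the primary_score condition."""
    condition = passing_conditions["primary_score"]
    compare = OPERATORS[condition["operator"]]
    return compare(score, condition["value"])

# For the evaluator above (gte 3), a score of 4 passes.
assert passes({"primary_score": {"operator": "gte", "value": 3}}, 4)
```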
### LLM Evaluator Response (Legacy Format)
```json
{
"id": "0f4325f9-legacy",
"name": "Response Quality Evaluator (Legacy)",
"evaluator_slug": "response_quality_v1",
"type": "llm",
"score_value_type": "numerical",
"eval_class": "",
"description": "Evaluates response quality on a 1-5 scale",
"configurations": {
"evaluator_definition": "Rate the response quality based on accuracy, relevance, and completeness.\n<llm_input>{{llm_input}}</llm_input>\n<llm_output>{{llm_output}}</llm_output>",
"scoring_rubric": "1=Poor, 2=Fair, 3=Good, 4=Very Good, 5=Excellent",
"llm_engine": "gpt-4o-mini",
"model_options": {
"temperature": 0.1,
"max_tokens": 200
},
"min_score": 1.0,
"max_score": 5.0,
"passing_score": 3.0
},
"score_config": null,
"passing_conditions": null,
"llm_config": null,
"code_config": null,
"created_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"updated_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"created_at": "2025-09-11T09:43:55.858321Z",
"updated_at": "2025-09-11T09:43:55.858331Z",
"custom_required_fields": [],
"categorical_choices": null,
"starred": false,
"organization": 2,
"tags": []
}
```
### Human Evaluator with LLM Assistance (New Format)
<Note>
This example shows how a **human** evaluator can have LLM automation configured, demonstrating the decoupling of annotation method from evaluator type.
</Note>
```json
{
"id": "human-llm-123",
"name": "Human Review with AI Assistance",
"evaluator_slug": "human_ai_assist_v1",
"type": "human",
"score_value_type": "numerical",
"eval_class": "",
"description": "Human review with LLM-suggested scores",
"score_config": {
"min_score": 1,
"max_score": 5
},
"passing_conditions": {
"primary_score": {
"operator": "gte",
"value": 3
}
},
"llm_config": {
"model": "gpt-4o-mini",
"evaluator_definition": "Suggest a quality score for this response",
"temperature": 0.1
},
"code_config": null,
"configurations": {},
"categorical_choices": null,
"created_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"updated_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"created_at": "2025-09-11T09:44:00.000000Z",
"updated_at": "2025-09-11T09:44:00.000000Z",
"custom_required_fields": [],
"starred": false,
"organization": 2,
"tags": []
}
```
### Human Single-Select Evaluator Response (Legacy Format)
```json
{
"id": "cat-eval-123",
"name": "Content Quality Assessment",
"evaluator_slug": "content_quality_categorical",
"type": "human",
"score_value_type": "single_select",
"eval_class": "",
"description": "Human assessment of content quality with predefined categories",
"configurations": {},
"score_config": null,
"passing_conditions": null,
"llm_config": null,
"code_config": null,
"categorical_choices": [
{ "name": "Excellent", "value": 5 },
{ "name": "Good", "value": 4 },
{ "name": "Average", "value": 3 },
{ "name": "Poor", "value": 2 },
{ "name": "Very Poor", "value": 1 }
],
"created_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"updated_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"created_at": "2025-09-11T09:44:00.000000Z",
"updated_at": "2025-09-11T09:44:00.000000Z",
"custom_required_fields": [],
"starred": false,
"organization": 2,
"tags": []
}
```
### Code Evaluator Response (New Format)
```json
{
"id": "bool-eval-456",
"name": "Length Check",
"evaluator_slug": "length_check_v1",
"type": "code",
"score_value_type": "boolean",
"eval_class": "",
"description": "Checks if response is longer than 10 characters",
"score_config": {},
"passing_conditions": null,
"llm_config": null,
"code_config": {
"eval_code_snippet": "def main(eval_inputs):\n output = eval_inputs.get('output', '')\n return len(str(output)) > 10"
},
"configurations": {},
"categorical_choices": [],
"created_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"updated_by": {
"first_name": "Respan",
"last_name": "Team",
"email": "admin@respan.ai"
},
"created_at": "2025-09-11T09:45:00.000000Z",
"updated_at": "2025-09-11T09:45:00.000000Z",
"custom_required_fields": [],
"starred": false,
"organization": 2,
"tags": ["automation", "validation"]
}
```
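The `eval_code_snippet` value is stored as an escaped string. Unescaped, it is an ordinary Python function; the sketch below tests it locally, assuming the platform calls `main()` with a dict of evaluation inputs and uses the return value as the boolean score (the exact runtime contract is not spelled out in this document):

```python
def main(eval_inputs):
    # Mirrors the snippet in the response above: pass when the
    # output is longer than 10 characters.
    output = eval_inputs.get("output", "")
    return len(str(output)) > 10

# Local smoke test with sample payloads.
print(main({"output": "short"}))                      # False
print(main({"output": "a sufficiently long reply"}))  # True
```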
## Response Fields
<Note>
**New Format**: Evaluators now include `score_config`, `passing_conditions`, `llm_config`, and `code_config` fields. These allow any evaluator type to have both LLM and code automation configured, decoupling annotation method from evaluator type.
</Note>
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique evaluator identifier |
| `name` | string | Display name of the evaluator |
| `evaluator_slug` | string | URL-friendly identifier |
| `type` | string | Evaluator type: `llm`, `human`, or `code` |
| `score_value_type` | string | Score format: `numerical`, `boolean`, `percentage`, `single_select`, `multi_select`, `json`, `text` |
| `eval_class` | string | Pre-built template class (if used) |
| `description` | string | Description of the evaluator |
| `score_config` | object | **New**: Score type configuration (min/max, choices, etc.) |
| `passing_conditions` | object | **New**: Passing conditions using universal filter format |
| `llm_config` | object | **New**: LLM automation config (if configured) |
| `code_config` | object | **New**: Code automation config (if configured) |
| `configurations` | object | Legacy type-specific configuration settings |
| `categorical_choices` | array | Legacy choices (use `score_config.choices` in new format) |
| `created_by` | object | User who created the evaluator |
| `updated_by` | object | User who last updated the evaluator |
| `created_at` | string | ISO timestamp of creation |
| `updated_at` | string | ISO timestamp of last update |
| `custom_required_fields` | array | Additional required fields |
| `starred` | boolean | Whether the evaluator is starred |
| `organization` | integer | Organization ID |
| `tags` | array | Tags associated with the evaluator |
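Because legacy evaluators return `null` for the new-format fields and populate `configurations` instead, a client that handles both shapes can branch on whichever block is present. A sketch, assuming the two shapes are mutually exclusive as in the examples above:

```python
def extract_llm_settings(evaluator):
    """Normalize model settings from either response shape."""
    if evaluator.get("llm_config"):
        # New format: flat llm_config block.
        cfg = evaluator["llm_config"]
        return {
            "model": cfg.get("model"),
            "temperature": cfg.get("temperature"),
            "max_tokens": cfg.get("max_tokens"),
        }
    # Legacy format: nested under configurations.model_options.
    cfg = evaluator.get("configurations", {})
    options = cfg.get("model_options", {})
    return {
        "model": cfg.get("llm_engine"),
        "temperature": options.get("temperature"),
        "max_tokens": options.get("max_tokens"),
    }
```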
## Configuration Fields by Type (Legacy)
### LLM Evaluators (`type: "llm"`)
| Field | Type | Description |
|-------|------|-------------|
| `evaluator_definition` | string | The evaluation prompt/instruction with template variables |
| `scoring_rubric` | string | Description of the scoring criteria |
| `llm_engine` | string | LLM model to use (e.g., "gpt-4o-mini", "gpt-4o") |
| `model_options` | object | LLM parameters like temperature, max_tokens |
| `min_score` | number | Minimum possible score |
| `max_score` | number | Maximum possible score |
| `passing_score` | number | Score threshold for passing |
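The legacy score fields map naturally onto the new-format blocks. A sketch of that migration, assuming `passing_score` translates to a `gte` condition as the paired examples above suggest:

```python
def migrate_scoring(legacy_config):
    """Map legacy score fields onto the new-format blocks."""
    score_config = {
        "min_score": legacy_config["min_score"],
        "max_score": legacy_config["max_score"],
    }
    # Assumed mapping: the legacy passing_score threshold becomes
    # a gte condition, matching this document's examples.
    passing_conditions = {
        "primary_score": {
            "operator": "gte",
            "value": legacy_config["passing_score"],
        }
    }
    return score_config, passing_conditions
```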
### Code Evaluators (`type: "code"`)
| Field | Type | Description |
|-------|------|-------------|
| `eval_code_snippet` | string | Python code defining a `main(eval_inputs)` function (see the code evaluator example above) |
### Human Evaluators (`type: "human"`)
- No type-specific configuration fields
- Use the `categorical_choices` field when `score_value_type` is `single_select` or `multi_select`
## Error Responses
### 404 Not Found
```json
{
"detail": "Not found."
}
```
### 401 Unauthorized
```json
{
"detail": "Your API key is invalid or expired, please check your API key at https://platform.respan.ai/platform/api/api-keys"
}
```
### 403 Forbidden
```json
{
"detail": "You do not have permission to access this evaluator."
}
```
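A client can branch on these status codes when retrieving an evaluator; each error body carries a `detail` message. A minimal sketch using `requests`:

```python
import requests

url = "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/"
response = requests.get(url, headers={"Authorization": "Bearer YOUR_API_KEY"})

if response.status_code == 200:
    evaluator = response.json()
elif response.status_code in (401, 403, 404):
    # The API returns a "detail" message for each documented error.
    raise RuntimeError(response.json()["detail"])
else:
    response.raise_for_status()
```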