Update evaluator

Updates specific fields of an evaluator. Supports partial updates of configuration fields.

## Authentication

All endpoints require API key authentication:

```bash
Authorization: Bearer YOUR_API_KEY
```

## Path Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `evaluator_id` | string | The unique ID of the evaluator to update |

## Request Body

<Note>
**New Format**: You can now update `score_config`, `passing_conditions`, `llm_config`, and `code_config` fields to add or modify automation for any evaluator type.
</Note>

You can update any of the following fields. Only provide the fields you want to update:

| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Display name for the evaluator |
| `description` | string | Description of what the evaluator does |
| `score_config` | object | **New**: Score type configuration (min/max, choices, etc.) |
| `passing_conditions` | object | **New**: Passing conditions using the universal filter format |
| `llm_config` | object | **New**: LLM automation config |
| `code_config` | object | **New**: Code automation config |
| `configurations` | object | Legacy type-specific configuration settings |
| `categorical_choices` | array | Legacy choices (use `score_config.choices` in the new format) |
| `custom_required_fields` | array | Additional required fields |
| `starred` | boolean | Whether the evaluator is starred |
| `tags` | array | Tags for organization |

**Note:** Configuration fields are merged with existing values. Non-null values take precedence over existing null values.

## Examples

### New Format: Add LLM Config to an Existing Evaluator

<Note>
This example shows adding LLM automation to any evaluator type, demonstrating the decoupling of annotation method from evaluator type.
</Note>

```python Python
import requests

evaluator_id = "0f4325f9-55ef-4c20-8abe-376694419947"
url = f"https://api.respan.ai/api/evaluators/{evaluator_id}/"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Add LLM automation to an existing evaluator
data = {
    "llm_config": {
        "model": "gpt-4o-mini",
        "evaluator_definition": "Rate the quality:\n<input>{{input}}</input>\n<output>{{output}}</output>",
        "scoring_rubric": "1=Poor, 5=Excellent",
        "temperature": 0.1,
        "max_tokens": 200
    },
    "passing_conditions": {
        "primary_score": {
            "operator": "gte",
            "value": 3
        }
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "llm_config": {
      "model": "gpt-4o-mini",
      "evaluator_definition": "Rate the quality:\n<input>{{input}}</input>\n<output>{{output}}</output>",
      "temperature": 0.1
    }
  }'
```

### New Format: Add Code Config to an Existing Evaluator

```python Python
# Add code automation to an existing evaluator
data = {
    "code_config": {
        "eval_code_snippet": "def main(eval_inputs):\n    output = eval_inputs.get('output', '')\n    return len(str(output)) > 10"
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code_config": {
      "eval_code_snippet": "def main(eval_inputs):\n    return len(str(eval_inputs.get(\"output\", \"\"))) > 10"
    }
  }'
```

### New Format: Update Score Config

```python Python
# Update score configuration
data = {
    "score_config": {
        "min_score": 0,
        "max_score": 10,
        "choices": [
            {"name": "Poor", "value": 0},
            {"name": "Average", "value": 5},
            {"name": "Excellent", "value": 10}
        ]
    }
}

response = requests.patch(url,
                           headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "score_config": {
      "min_score": 0,
      "max_score": 10
    }
  }'
```

### Legacy Format: Update LLM Evaluator Configuration

```python Python
import requests

evaluator_id = "0f4325f9-55ef-4c20-8abe-376694419947"
url = f"https://api.respan.ai/api/evaluators/{evaluator_id}/"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Update the scoring rubric and passing score
data = {
    "configurations": {
        "scoring_rubric": "Updated: 1=Very Poor, 2=Poor, 3=Fair, 4=Good, 5=Excellent",
        "passing_score": 4.0
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "configurations": {
      "scoring_rubric": "Updated: 1=Very Poor, 2=Poor, 3=Fair, 4=Good, 5=Excellent",
      "passing_score": 4.0
    }
  }'
```

### Update Name and Description

```python Python
data = {
    "name": "Enhanced Response Quality Evaluator",
    "description": "Advanced evaluator for response quality assessment with updated criteria"
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Enhanced Response Quality Evaluator",
    "description": "Advanced evaluator for response quality assessment with updated criteria"
  }'
```

### Update Categorical Choices

```python Python
# For categorical evaluators
categorical_evaluator_id = "cat-eval-123"
url = f"https://api.respan.ai/api/evaluators/{categorical_evaluator_id}/"

data = {
    "categorical_choices": [
        {"name": "Outstanding", "value": 5},
        {"name": "Very Good", "value": 4},
        {"name": "Good", "value": 3},
        {"name": "Fair", "value": 2},
        {"name": "Poor", "value": 1}
    ]
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/cat-eval-123/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "categorical_choices": [
      {"name": "Outstanding", "value": 5},
      {"name": "Very Good", "value": 4},
      {"name": "Good", "value": 3},
      {"name": "Fair", "value": 2},
      {"name": "Poor", "value": 1}
    ]
  }'
```

### Update Code Evaluator

```python Python
# For code evaluators
code_evaluator_id = "bool-eval-456"
url = f"https://api.respan.ai/api/evaluators/{code_evaluator_id}/"

data = {
    "name": "Enhanced Length Checker",
    "configurations": {
        "eval_code_snippet": "def evaluate(llm_input, llm_output, **kwargs):\n    '''\n    Enhanced length checker with word count.\n    Returns True if the response has >= 10 words, False otherwise.\n    '''\n    if not llm_output:\n        return False\n\n    word_count = len(llm_output.strip().split())\n    return word_count >= 10"
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/bool-eval-456/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Enhanced Length Checker",
    "configurations": {
      "eval_code_snippet": "def evaluate(llm_input, llm_output, **kwargs):\n    if not llm_output:\n        return False\n    word_count = len(llm_output.strip().split())\n    return word_count >= 10"
    }
  }'
```

### Update Tags and Starred Status

```python Python
data = {
    "starred": True,
    "tags": ["quality", "assessment", "production"]
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH \
  "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "starred": true,
    "tags": ["quality", "assessment", "production"]
  }'
```

### Update LLM Engine and Model Options

```python Python
data = {
    "configurations": {
        "llm_engine": "gpt-4o",
        "model_options": {
            "temperature": 0.2,
            "max_tokens": 300
        }
    }
}

response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```bash cURL
curl -X PATCH "https://api.respan.ai/api/evaluators/0f4325f9-55ef-4c20-8abe-376694419947/" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "configurations": {
      "llm_engine": "gpt-4o",
      "model_options": {
        "temperature": 0.2,
        "max_tokens": 300
      }
    }
  }'
```

## Response

**Status: 200 OK**

Returns the updated evaluator object with all current field values:

```json
{
  "id": "0f4325f9-55ef-4c20-8abe-376694419947",
  "name": "Enhanced Response Quality Evaluator",
  "evaluator_slug": "response_quality_v1",
  "type": "llm",
  "score_value_type": "numerical",
  "eval_class": "",
  "description": "Advanced evaluator for response quality assessment with updated criteria",
  "configurations": {
    "evaluator_definition": "Rate the response quality based on accuracy, relevance, and completeness.\n<llm_input>{{llm_input}}</llm_input>\n<llm_output>{{llm_output}}</llm_output>",
    "scoring_rubric": "Updated: 1=Very Poor, 2=Poor, 3=Fair, 4=Good, 5=Excellent",
    "llm_engine": "gpt-4o",
    "model_options": {
      "temperature": 0.2,
      "max_tokens": 300
    },
    "min_score": 1.0,
    "max_score": 5.0,
    "passing_score": 4.0
  },
  "created_by": {
    "first_name": "Respan",
    "last_name": "Team",
    "email": "admin@respan.ai"
  },
  "updated_by": {
    "first_name": "Respan",
    "last_name": "Team",
    "email": "admin@respan.ai"
  },
  "created_at": "2025-09-11T09:43:55.858321Z",
  "updated_at": "2025-09-11T10:15:22.123456Z",
  "custom_required_fields": [],
  "categorical_choices": null,
  "starred": true,
  "organization": 2,
  "tags":
    ["quality", "assessment", "production"]
}
```

## Configuration Update Rules

### Partial Configuration Updates

When updating `configurations`, the system performs a **merge operation**:

- Existing configuration fields are preserved unless explicitly overridden
- New fields are added to the configuration
- Setting a field to `null` removes it from the configuration
- Nested objects (like `model_options`) are completely replaced, not merged

### Example Configuration Merge

**Original Configuration:**

```json
{
  "evaluator_definition": "Original prompt",
  "scoring_rubric": "Original rubric",
  "llm_engine": "gpt-4o-mini",
  "min_score": 1.0,
  "max_score": 5.0
}
```

**Update Request:**

```json
{
  "configurations": {
    "scoring_rubric": "Updated rubric",
    "passing_score": 3.0
  }
}
```

**Resulting Configuration:**

```json
{
  "evaluator_definition": "Original prompt",
  "scoring_rubric": "Updated rubric",
  "llm_engine": "gpt-4o-mini",
  "min_score": 1.0,
  "max_score": 5.0,
  "passing_score": 3.0
}
```

## Error Responses

### 400 Bad Request

```json
{
  "configurations": [
    "Configuration validation failed: llm_engine 'invalid-model' is not supported"
  ]
}
```

### 401 Unauthorized

```json
{
  "detail": "Your API key is invalid or expired, please check your API key at https://platform.respan.ai/platform/api/api-keys"
}
```

### 403 Forbidden

```json
{
  "detail": "You do not have permission to update this evaluator."
}
```

### 404 Not Found

```json
{
  "detail": "Not found."
}
```

### 422 Unprocessable Entity

```json
{
  "categorical_choices": [
    "This field is required when score_value_type is 'categorical'."
  ]
}
```
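The merge rules in the Configuration Update Rules section can be sketched in a few lines of Python. This is an illustrative approximation of the documented behavior, not the server's actual implementation:

```python
def merge_configurations(existing: dict, update: dict) -> dict:
    """Sketch of the documented merge rules: top-level keys are merged,
    null (None) removes a key, and nested objects are replaced wholesale
    rather than deep-merged."""
    merged = dict(existing)
    for key, value in update.items():
        if value is None:
            merged.pop(key, None)   # null removes the field
        else:
            merged[key] = value     # nested objects are replaced, not merged
    return merged

# Mirrors the "Example Configuration Merge" above
original = {
    "evaluator_definition": "Original prompt",
    "scoring_rubric": "Original rubric",
    "llm_engine": "gpt-4o-mini",
    "min_score": 1.0,
    "max_score": 5.0,
}
update = {"scoring_rubric": "Updated rubric", "passing_score": 3.0}
result = merge_configurations(original, update)
print(result["scoring_rubric"])  # Updated rubric
print(result["passing_score"])   # 3.0
```

Note that because nested objects are replaced rather than merged, sending `"model_options": {"temperature": 0.2}` drops any previously set `max_tokens` inside `model_options`.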

## Authentication

| Header | Type | Description |
|--------|------|-------------|
| `Authorization` | Bearer | API key authentication. Get your API key from https://platform.respan.ai/platform/api-keys |

## Path parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `evaluator_id` | string (required) | Evaluator ID |

## Request

This endpoint expects an object.

## Response

Successful response for Update evaluator:

| Field | Type |
|-------|------|
| `id` | string |
| `name` | string |
| `evaluator_slug` | string |
| `type` | string |
| `score_value_type` | string |
| `eval_class` | string |
| `description` | string |
| `configurations` | object |
| `created_by` | object |
| `updated_by` | object |
| `created_at` | string |
| `updated_at` | string |
| `custom_required_fields` | list of strings |
| `categorical_choices` | list of objects or null |
| `starred` | boolean |
| `organization` | integer |
| `tags` | list of strings |

## Errors

| Status | Error |
|--------|-------|
| 401 | Unauthorized Error |
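The documented error responses (400, 401, 403, 404, 422) all return a small JSON body. As an illustration, a hypothetical helper (`describe_update_error` is not part of any official client) could map them to readable summaries:

```python
def describe_update_error(status_code: int, body: dict) -> str:
    """Summarize an error response from the update-evaluator endpoint.
    Statuses and body shapes are taken from the Error Responses section."""
    summaries = {
        400: "Bad request: a submitted field failed validation",
        401: "Unauthorized: API key is invalid or expired",
        403: "Forbidden: no permission to update this evaluator",
        404: "Not found: no evaluator with that ID",
        422: "Unprocessable entity: payload inconsistent with score_value_type",
    }
    summary = summaries.get(status_code, f"Unexpected status {status_code}")
    # Error bodies carry either a "detail" string or a field-name key
    # mapped to a list of validation messages.
    detail = body.get("detail") or next(iter(body.values()), None)
    if isinstance(detail, list):
        detail = "; ".join(detail)
    return f"{summary} ({detail})" if detail else summary

print(describe_update_error(
    403, {"detail": "You do not have permission to update this evaluator."}))
```

A caller would typically invoke this only when `response.status_code != 200`, after parsing the body with `response.json()`.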