Input & Output
input
string / object / array — Universal input to the model. Required. Format depends on the log type.

output
string / object / array — Model's response. Required. Format depends on the log type.

prompt_messages
array — (Legacy) Messages sent to the model. Use input instead.

completion_message
object — (Legacy) Final assistant message. Use output instead.

full_request
object — Complete request payload sent to the provider. Tool calls and function definitions are auto-extracted.

full_response
object — Full response object from the provider.
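Assuming chat-style messages, a minimal log body built from the universal input and output fields could look like this sketch (the message shapes mirror common chat-completion payloads; only the top-level field names come from this reference):

```python
# Minimal log payload: model is required, and input/output carry the
# conversation. Message contents here are invented for illustration.
payload = {
    "model": "gpt-4o",  # required
    "input": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
    "output": {
        "role": "assistant",
        "content": "The capital of France is Paris.",
    },
}
```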
Metrics

start_time
string — Request start time in RFC3339 format (e.g. "2025-09-08T07:46:14.007279Z").

timestamp
string — Request end time in RFC3339 format.

latency
number — Total request latency in seconds.

time_to_first_token
number — Time to first token in seconds. Useful for measuring streaming responsiveness.

tokens_per_second
number — Output token throughput (tokens per second).

cost
number — Total request cost in USD. Auto-calculated from model and token counts if omitted.

usage
object — Token usage breakdown.

| Sub-field | Type | Description |
|---|---|---|
| prompt_tokens | integer | Tokens in the prompt/input |
| completion_tokens | integer | Tokens in the model output |
| total_tokens | integer | Sum of prompt and completion tokens |
| prompt_tokens_details | object | Granular token breakdown (e.g., cached tokens) |
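The sub-fields relate by simple arithmetic: total_tokens is the sum of the prompt and completion counts. A usage object consistent with the table:

```python
# Example usage breakdown; the counts are invented, but the invariant
# (total = prompt + completion) comes from the table above.
usage = {
    "prompt_tokens": 128,
    "completion_tokens": 64,
    "total_tokens": 192,  # 128 + 64
}
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```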
prompt_cache_hit_tokens
integer — Number of tokens served from cache.

prompt_cache_creation_tokens
integer — Number of tokens added to cache.

prompt_unit_price
number — Custom price per 1M prompt tokens. Use for self-hosted or fine-tuned models.

completion_unit_price
number — Custom price per 1M completion tokens.
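Assuming the unit prices are applied per million tokens as described, a cost estimate for custom-priced models can be sketched as follows (the server's exact rounding may differ):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_unit_price: float, completion_unit_price: float) -> float:
    """Estimated cost in USD, given per-1M-token unit prices.

    Illustrative math only; it mirrors the field descriptions, not a
    documented server-side formula.
    """
    return (prompt_tokens * prompt_unit_price
            + completion_tokens * completion_unit_price) / 1_000_000

# 500k prompt tokens at $2/1M plus 100k completion tokens at $6/1M:
cost = estimate_cost(500_000, 100_000, 2.0, 6.0)  # → 1.6
```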
Identifiers & metadata

unique_id
string — Unique identifier for the log. Auto-generated if not provided.

model
string — Model name (e.g. "gpt-4o", "claude-3-5-sonnet-20240620"). Required.

log_type
string — Type of log. Defaults to "chat". See Log types.

provider_id
string — Provider identifier (e.g. "openai", "anthropic").

environment
string — Runtime environment (e.g. "test", "prod"). Used to separate test and production data.

customer_identifier
string — User or customer-level identifier. See Customer identifier.

customer_params
object — Extended customer info: customer_identifier, name, email.

metadata
object — Custom key-value pairs for tagging, analytics, and filtering.

custom_identifier
string — Indexed custom identifier for fast querying.

thread_identifier
string — Conversation thread identifier. Logs with the same value are grouped into a thread.

group_identifier
string — Group identifier for related logs.

prompt_id
string — Prompt template identifier. Auto-set when using prompt management.

prompt_name
string — Prompt template name.

prompt_version_number
integer — Prompt version number.

deployment_name
string — Deployment name.

organization_key_id
string — API key identifier used for the request.
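A sketch of a log enriched with identifiers and metadata; the identifier values and metadata keys below are invented for illustration:

```python
# Identifiers separate environments, attribute logs to users, and group
# turns of a conversation into a thread. metadata is free-form tags.
log = {
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "hi"}],
    "output": {"role": "assistant", "content": "Hello!"},
    "environment": "prod",                 # keep test data out of prod analytics
    "customer_identifier": "user_1234",    # per-user attribution
    "thread_identifier": "thread_abc",     # same value across a conversation
    "metadata": {"feature": "onboarding", "ab_bucket": "B"},  # example keys
}
```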
Tracing

trace_unique_id
string — Required. Trace identifier; all spans sharing this ID form one trace.

span_unique_id
string — Required. Unique identifier for this span within the trace.

span_parent_id
string — Parent span ID, which creates the hierarchical tree structure. Omit or set to null for root spans.

span_name
string — Descriptive name for the operation (e.g. "openai.chat", "retrieval.search").

span_workflow_name
string — The nearest workflow this span belongs to. Used to label the root-level workflow in trace views.

span_path
string — Nested path within the workflow hierarchy.

trace_group_identifier
string — Groups related traces together, even across different sessions or systems.

respan_params
object — Additional Respan parameters passed via the tracing SDK.
Status & errors

status_code
integer — HTTP status code. Defaults to 200.

status
string — Semantic status: "success" or "error".

error_message
string — Error description if the request failed.

warnings
string / object — Non-fatal issues encountered during the request.
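A failed request might be logged like this sketch (the empty output is an assumption for a call that produced no completion; the error text is invented):

```python
# Sketch of an error log: status_code carries the HTTP code, status the
# semantic outcome, and error_message the human-readable cause.
failed_log = {
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "hi"}],
    "output": "",  # assumption: empty output when no completion was produced
    "status_code": 429,
    "status": "error",
    "error_message": "Rate limit exceeded",
}
```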
LLM configuration

temperature
number — Randomness control (0–2).

max_tokens
integer — Maximum number of tokens to generate.

top_p
number — Nucleus sampling parameter (0–1).

frequency_penalty
number — Penalizes tokens based on frequency (0–2).

presence_penalty
number — Penalizes tokens already present (0–2).

stop
array — Sequences that halt generation.

n
integer — Number of completions to generate.

stream
boolean — Whether the response was streamed.

response_format
object — Output format: text, json_schema, or json_object.

tools
array — Available tool/function definitions.

tool_choice
string / object — Controls tool selection: "none", "auto", or a specific tool.
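A sketch of a log that records the sampling configuration alongside the conversation (all values here are illustrative):

```python
# Configuration fields sit at the top level next to input/output, so the
# exact generation settings are preserved with each log.
log = {
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "List three colors."}],
    "output": {"role": "assistant", "content": "Red, green, blue."},
    "temperature": 0.2,          # 0-2
    "max_tokens": 256,
    "top_p": 1.0,                # 0-1
    "stop": ["\n\n"],
    "stream": False,
    "response_format": {"type": "text"},
}
```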
Other

positive_feedback
boolean — User sentiment. true = positive, false = negative.

keywordsai_api_controls
object — Logging behavior controls.

| Sub-field | Type | Description |
|---|---|---|
| block | boolean | If false, the server returns immediately without awaiting log completion |
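Based on the table above, a latency-sensitive caller can log fire-and-forget by setting block to false (sketch; the other field values are illustrative):

```python
# With block=False the server acknowledges immediately instead of waiting
# for the log to be fully processed, per the sub-field table above.
log = {
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "hi"}],
    "output": {"role": "assistant", "content": "Hello!"},
    "keywordsai_api_controls": {
        "block": False,
    },
}
```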