Ingest a batch of spans to construct one or more traces. Use this to import historical data or programmatically build traces when SDK instrumentation isn’t feasible.
If you’re starting fresh, we recommend using an SDK/integration (e.g., OpenAI Agents SDK) to capture traces automatically. This endpoint is best for bulk import and migration workflows.
body array required: Array of span log objects, one per span within a trace. Spans sharing the same trace_unique_id are grouped into a single trace, and parent-child relationships are inferred from span_parent_id. This matches the sample payload in the logs_to_trace example.
trace_unique_id string required: Unique identifier for the trace. All spans with this value are grouped together.
span_unique_id string required: Unique identifier for the span.
span_parent_id string: Parent span ID; omit or set null for root spans.
span_name string: Name of the span (e.g., “openai.chat”, “workflow.start”).
span_workflow_name string: Nearest parent workflow name.
span_path string: Nested path within the workflow (e.g., “joke_creation.store_joke”).
start_time string: RFC3339 UTC start timestamp.
timestamp string: RFC3339 UTC end/event timestamp.
latency number: Latency in seconds for the span operation.
input string: Raw input string or JSON serialized string used by the span.
output string: Raw output string or JSON serialized string produced by the span.
model string: Model name used by the span (e.g., “gpt-3.5-turbo”, “gpt-4o-mini”).
encoding_format string: Embedding encoding format for spans that generate embeddings (e.g., “float”).
provider_id string: LLM or service provider ID (e.g., “openai”).
prompt_tokens integer: Number of prompt tokens used (if applicable).
completion_tokens integer: Number of completion tokens used (if applicable).
cost float: Cost associated with the span (if applicable).
metadata object: Custom attributes as a key-value object.
warnings string: Warnings or notes captured during span execution.
disable_log boolean: Set true to disable logging for this span in the observability system.
disable_fallback boolean: Disable fallback behavior for the span if supported.
respan_params object: Additional Respan parameters (e.g., has_webhook, environment).
temperature number: Controls randomness for LLM spans; typical range 0.0–1.0.
presence_penalty number: Presence penalty parameter used in some LLM requests.
frequency_penalty number: Frequency penalty parameter used in some LLM requests.
max_tokens integer: Maximum tokens requested for completion/embedding generations.
stream boolean: Whether streaming was enabled for the span.
prompt_messages array: Array of messages sent to the LLM (each with role and content). Present for chat spans.
completion_message object: Assistant message returned by the LLM (role/content). Present for chat spans.
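Putting the fields above together, here is a minimal sketch (in Python, using hypothetical IDs and values) of a two-span batch: a root workflow span plus a child LLM span that share a trace_unique_id, with the child linked to its parent via span_parent_id.

```python
import json

# Hypothetical two-span batch: spans sharing trace_unique_id form one trace.
batch = [
    {
        # Root span: span_parent_id is omitted, so it anchors the trace.
        "trace_unique_id": "trace-001",
        "span_unique_id": "span-root",
        "span_name": "workflow.start",
        "span_workflow_name": "joke_creation",
        "span_path": "joke_creation",
        "start_time": "2024-05-01T12:00:00Z",
        "timestamp": "2024-05-01T12:00:03Z",
        "latency": 3.0,
        "metadata": {"environment": "staging"},
    },
    {
        # Child LLM span: span_parent_id points at the root span above.
        "trace_unique_id": "trace-001",
        "span_unique_id": "span-llm-1",
        "span_parent_id": "span-root",
        "span_name": "openai.chat",
        "span_workflow_name": "joke_creation",
        "span_path": "joke_creation.store_joke",
        "start_time": "2024-05-01T12:00:00Z",
        "timestamp": "2024-05-01T12:00:02Z",
        "latency": 2.0,
        "model": "gpt-4o-mini",
        "provider_id": "openai",
        "prompt_tokens": 25,
        "completion_tokens": 40,
        "stream": False,
        "prompt_messages": [{"role": "user", "content": "Tell me a joke."}],
        "completion_message": {"role": "assistant", "content": "Why did the span cross the trace?"},
    },
]

body = json.dumps(batch)  # JSON request body for the ingestion endpoint
```

Because both objects carry trace_unique_id "trace-001", the endpoint would reconstruct them as one trace with a single root and one child span.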
Prerequisites for successful ingestion:
API key authentication. Get your API key from https://platform.respan.ai/platform/api-keys.
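To illustrate the authentication step, here is a minimal sketch of constructing the ingestion request with the Python standard library. The endpoint path and the Bearer header scheme are assumptions not confirmed by this page; substitute the values from your platform settings, and replace the placeholder key with one from the API keys page above.

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder; create a real key on the API keys page
# Hypothetical endpoint URL; check the platform docs for the actual path.
URL = "https://api.respan.ai/v1/traces/ingest"

# A trivially small batch; see the field reference above for the full schema.
payload = [{"trace_unique_id": "trace-001", "span_unique_id": "span-root"}]

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the batch; omitted in this sketch.
```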