Overview
log_batch_results() takes OpenAI Batch API request/result pairs and logs each as an individual chat completion span in Respan. This gives you the same visibility into batch completions as real-time calls.
Parameters
| Parameter | Type | Description |
|---|---|---|
| requests | list[dict] | Original batch request dicts (from the input JSONL). Each must have `custom_id` and `body.messages`. |
| results | list[dict] | Batch result dicts (from the output JSONL). Each must have `custom_id` and `response.body`. |
| trace_id | str \| None | Explicit trace ID for linking. Use for async batches where results arrive in a separate process. |
Trace linking
Results are linked to traces in this priority order:

- OTEL context — when called inside a `@task` or `@workflow`, auto-links to the active trace and nests completions under the current span.
- Explicit `trace_id` — for async batches where results arrive in a separate process (e.g. 24 hours later). Creates a `batch_results` task span in the original trace.
- Auto-generated — creates a new standalone trace if neither is available.
Examples
Same process (auto-links to active span)
When called inside a decorated function, each completion nests under the active span (e.g. a `process_batch` task span).
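A minimal sketch of the same-process flow. The `from respan import ...` path is an assumption; adjust it to the actual SDK layout:

```python
# Hypothetical import path -- adjust to match the actual Respan SDK.
from respan import task, log_batch_results

@task(name="process_batch")
def process_batch(requests: list[dict], results: list[dict]) -> None:
    # Called inside a @task, so each completion auto-links to the
    # active trace and nests under the process_batch span.
    log_batch_results(requests, results)
```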
Different process (explicit trace link)
For async batches where results arrive hours later in a separate job, pass the `trace_id` captured when the batch was submitted. The completions appear in the original trace under a `batch_results` group.
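A sketch of the two-job flow under the same assumed import path. The `get_current_trace_id` helper is hypothetical; substitute however your setup exposes the active trace ID:

```python
# Hypothetical import path -- adjust to match the actual Respan SDK.
from respan import workflow, log_batch_results, get_current_trace_id

# Job 1: submit the batch and persist the trace ID for later.
@workflow(name="submit_batch")
def submit_batch(requests: list[dict]) -> str:
    trace_id = get_current_trace_id()  # assumed helper -- persist this ID
    # ... upload the input JSONL and create the OpenAI batch here ...
    return trace_id

# Job 2 (hours later, separate process): link results back explicitly.
def handle_results(requests: list[dict], results: list[dict], trace_id: str) -> None:
    log_batch_results(requests, results, trace_id=trace_id)
```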
Standalone (no trace link)
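With no active span and no `trace_id`, a new standalone trace is created. A sketch, assuming the same hypothetical import path and that the batch input/output JSONL files are on disk:

```python
import json

# Hypothetical import path -- adjust to match the actual Respan SDK.
from respan import log_batch_results

# Read the original input JSONL and the batch output JSONL.
with open("batch_input.jsonl") as f:
    requests = [json.loads(line) for line in f]
with open("batch_output.jsonl") as f:
    results = [json.loads(line) for line in f]

# No active span, no trace_id: a new standalone trace is created.
log_batch_results(requests, results)
```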
Input format
Request format
Each request dict should match a line of the OpenAI Batch API input JSONL.

Result format
Each result dict should match a line of the OpenAI Batch API output JSONL. Results are matched to their requests by custom_id.
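As a sketch, a matching request/result pair in the shapes described above (field values are illustrative, not real IDs):

```python
# Illustrative shapes only; values are examples, not real IDs.
request = {
    "custom_id": "task-0",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
}

result = {
    "custom_id": "task-0",  # matches the request's custom_id
    "response": {
        "status_code": 200,
        "body": {
            "choices": [
                {"message": {"role": "assistant", "content": "Hi there!"}}
            ],
            "usage": {"prompt_tokens": 9, "completion_tokens": 3},
        },
    },
}

# Pairs are matched by custom_id before logging.
assert request["custom_id"] == result["custom_id"]
```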