log_batch_results()

Overview

log_batch_results() takes OpenAI Batch API request/result pairs and logs each as an individual chat completion span in Respan. This gives you the same visibility into batch completions as real-time calls.

```python
respan.log_batch_results(requests, results)
```

Parameters

```python
respan.log_batch_results(
    requests: list[dict],
    results: list[dict],
    trace_id: str | None = None,
)
```
| Parameter | Type | Description |
| --- | --- | --- |
| `requests` | `list[dict]` | Original batch request dicts (from the input JSONL). Each must have `custom_id` and `body.messages`. |
| `results` | `list[dict]` | Batch result dicts (from the output JSONL). Each must have `custom_id` and `response.body`. |
| `trace_id` | `str \| None` | Explicit trace ID for linking. Use for async batches where results arrive in a separate process. |

Trace linking

Results are linked to traces in this priority order:

  1. OTEL context — when called inside a @task or @workflow, auto-links to the active trace and nests completions under the current span.
  2. Explicit trace_id — for async batches where results arrive in a separate process (e.g. 24 hours later). Creates a batch_results task span in the original trace.
  3. Auto-generated — creates a new standalone trace if neither is available.
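The three-step priority above can be sketched as a small resolver. This is an illustration of the documented behavior only, not Respan's actual internals; the function name and signature are hypothetical.

```python
import uuid

def resolve_trace_id(active_trace_id=None, explicit_trace_id=None):
    """Sketch of the documented trace-linking priority (hypothetical helper)."""
    if active_trace_id is not None:
        # 1. An active OTEL context (inside @task / @workflow) wins.
        return active_trace_id
    if explicit_trace_id is not None:
        # 2. Otherwise an explicit trace_id links back to the original trace.
        return explicit_trace_id
    # 3. Otherwise a new standalone trace is created.
    return uuid.uuid4().hex
```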

Examples

```python
from respan import Respan, task
from respan_instrumentation_openai_agents import OpenAIAgentsInstrumentor

respan = Respan(
    api_key="your-api-key",
    instrumentations=[OpenAIAgentsInstrumentor()],
)

@task(name="process_batch")
def process_batch(output_file_id: str, original_requests: list):
    # download_batch_results is a placeholder for your own helper that
    # fetches and parses the batch output file.
    results = download_batch_results(output_file_id)
    respan.log_batch_results(original_requests, results)
```

The completions appear nested under the process_batch task span.

For async batches where results arrive hours later in a separate job:

```python
# At batch submission time — save the trace ID
trace_id = get_client().get_current_trace_id()
save_to_db(batch_id, trace_id)

# Hours later, in a separate process
saved_trace_id = load_from_db(batch_id)
respan.log_batch_results(requests, results, trace_id=saved_trace_id)
```

The completions appear in the original trace as a batch_results group.

```python
# No active span, no explicit trace_id
# Creates a new standalone trace
respan.log_batch_results(requests, results)
```

Input format

Request format

Each request dict should match the OpenAI Batch API input format:

```python
request = {
    "custom_id": "req-001",
    "body": {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "Summarize the text."},
            {"role": "user", "content": "The quick brown fox..."},
        ],
    },
}
```
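Since requests come from the batch input JSONL, you typically reconstruct the list by reading that file back. A minimal loader might look like this (hypothetical helper, not part of the Respan API):

```python
import json

def load_batch_jsonl(path):
    """Read one dict per non-empty line from a Batch API JSONL file."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

The same loader works for the output JSONL, since both files are line-delimited JSON.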

Result format

Each result dict should match the OpenAI Batch API output format:

```python
result = {
    "custom_id": "req-001",
    "response": {
        "status_code": 200,
        "body": {
            "model": "gpt-4o-mini",
            "choices": [
                {"message": {"role": "assistant", "content": "A fox jumped..."}}
            ],
            "usage": {
                "prompt_tokens": 25,
                "completion_tokens": 10,
            },
            "created": 1700000000,
        },
    },
}
```

Requests and results are matched by custom_id.
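Because pairing is by custom_id, the two lists do not need to be in the same order. Conceptually, the matching step looks like this (an illustrative sketch, not Respan's actual implementation):

```python
def pair_by_custom_id(requests, results):
    """Pair each request with its result via custom_id, ignoring list order."""
    results_by_id = {r["custom_id"]: r for r in results}
    return [
        (req, results_by_id[req["custom_id"]])
        for req in requests
        if req["custom_id"] in results_by_id
    ]
```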