Create response
Send a response request through the Respan gateway using the OpenAI Responses API format. Supports streaming, tool use, and prompt management.
Respan parameters can be passed the same three ways as in Create chat completion: as top-level fields in the request body, nested under respan_params, or JSON-encoded in the X-Respan-Params header.
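A minimal sketch of the three equivalent ways to attach gateway options, assuming a hypothetical customer_identifier field as the example Respan parameter (the field name is illustrative, not from this page):

```python
import json

# Hypothetical Respan option; "customer_identifier" is an assumed
# example field name, not confirmed by this reference.
respan_params = {"customer_identifier": "user_123"}

base_body = {"model": "gpt-4o", "input": "Hello"}

# Option 1: merged into the request body as top-level fields.
body_top_level = {**base_body, **respan_params}

# Option 2: nested under the documented respan_params key.
body_nested = {**base_body, "respan_params": respan_params}

# Option 3: JSON-encoded in the documented X-Respan-Params header.
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "X-Respan-Params": json.dumps(respan_params),
}

# All three carry the same gateway options.
assert json.loads(headers["X-Respan-Params"]) == body_nested["respan_params"]
```

Pick one mechanism per request; the header form keeps the request body identical to a plain OpenAI Responses API call.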
Headers
Bearer token authentication. Set the Authorization header to Bearer YOUR_API_KEY.
Comma-separated beta feature flags. Available: token-breakdown-2026-03-26, env-scoped-integrations-2026-03-28
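A sketch of the full header set, assuming the beta flags travel in a header named X-Respan-Beta (the header name is an assumption; only the flag values above are from this reference):

```python
# Assumed header name for beta flags: X-Respan-Beta (not confirmed here).
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    # Comma-separated beta feature flags from the list above.
    "X-Respan-Beta": "token-breakdown-2026-03-26,env-scoped-integrations-2026-03-28",
}

# Individual flags can be recovered by splitting on the comma.
flags = headers["X-Respan-Beta"].split(",")
```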
Request
Stream the response as server-sent events.
Sampling temperature (0-2).
ID of a previous response for multi-turn conversations.
Per-customer LLM provider credentials.
One-off credential overrides per provider.
Prompt template config. Properties: prompt_id (required), variables, version, echo. See Prompt management.
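A sketch of the prompt template config using the documented properties; the prompt_id and variables values are placeholder examples:

```python
# Prompt template config. Only prompt_id is required; the values
# below are illustrative placeholders.
prompt = {
    "prompt_id": "prompt_abc123",           # required: which template to render
    "variables": {"customer_name": "Ada"},  # fills template placeholders
    "version": 3,                           # optional: pin a template version
    "echo": True,                           # optional: echo the rendered prompt
}
```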
Retry config. Properties: retry_enabled (boolean), num_retries, retry_after (seconds).
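A sketch of the retry config using the documented properties; the specific values are illustrative:

```python
# Retry config with example values.
retry_params = {
    "retry_enabled": True,  # boolean: turn retries on
    "num_retries": 3,       # how many attempts after the first failure
    "retry_after": 0.5,     # seconds to wait between attempts
}
```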
When true, the request input and response output are omitted from the log. Metrics are still recorded.
Custom key-value metadata attached to the span.
User feedback. true = liked, false = disliked.
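Putting the parameters above together, a sketch of a complete request body. The stream, temperature, previous_response_id, and metadata fields follow the OpenAI Responses API; the disable_log and positive_feedback field names are assumptions for the log-omission and feedback parameters described above:

```python
import json

# End-to-end request body sketch. Gateway-specific names
# (disable_log, positive_feedback) are assumed, not confirmed.
body = {
    "model": "gpt-4o",
    "input": "Summarize our last exchange.",
    "stream": False,
    "temperature": 0.7,                    # sampling temperature, 0-2
    "previous_response_id": "resp_abc123", # link this turn to the prior response
    "respan_params": {
        "disable_log": True,               # omit input/output from the log
        "metadata": {"team": "billing"},   # custom span metadata
        "positive_feedback": True,         # true = liked, false = disliked
    },
}

# Serialized payload ready to send to the gateway.
payload = json.dumps(body)
```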