Installation

```shell
pip install respan-exporter-haystack
```

Classes

RespanConnector

Haystack component that connects pipelines to Respan for tracing.
```python
from respan_exporter_haystack import RespanConnector
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Respan API key. Falls back to the `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | API base URL. Falls back to the `RESPAN_BASE_URL` env var. |
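Because both parameters fall back to environment variables, the connector can be constructed without arguments in deployed environments. A minimal sketch of that resolution order (the `resolve_api_key` helper is illustrative only, not part of the package):

```python
import os

def resolve_api_key(api_key=None):
    # Mirrors the documented fallback: an explicit api_key wins;
    # otherwise the RESPAN_API_KEY environment variable is used;
    # otherwise the result is None.
    if api_key is not None:
        return api_key
    return os.environ.get("RESPAN_API_KEY")
```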

RespanTracer

Tracing support for Haystack pipeline execution.
```python
from respan_exporter_haystack import RespanTracer
```

RespanGenerator

Gateway component that routes LLM calls through the Respan gateway.
```python
from respan_exporter_haystack import RespanGenerator
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Respan API key. |
| `model` | `str` | | Model to use (e.g., `"gpt-4o-mini"`). |
| `base_url` | `str \| None` | `"https://api.respan.ai/api"` | Gateway base URL. |

RespanChatGenerator

Chat-specific gateway component.
```python
from respan_exporter_haystack import RespanChatGenerator
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Respan API key. |
| `model` | `str` | | Model to use. |
| `base_url` | `str \| None` | `"https://api.respan.ai/api"` | Gateway base URL. |

Usage

Tracing

```python
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from respan_exporter_haystack import RespanConnector

pipeline = Pipeline()
pipeline.add_component("respan", RespanConnector(api_key="your-api-key"))
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))
pipeline.connect("respan", "llm")

result = pipeline.run({"respan": {"prompt": "Tell me a joke"}})
```

Gateway

```python
from haystack import Pipeline
from respan_exporter_haystack import RespanGenerator

pipeline = Pipeline()
pipeline.add_component("llm", RespanGenerator(
    api_key="your-api-key",
    model="gpt-4o-mini",
))

result = pipeline.run({"llm": {"prompt": "Tell me a joke"}})
```
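The chat-specific component is not shown above. Assuming `RespanChatGenerator` follows Haystack's standard chat-generator interface (a `messages` input that takes a list of `ChatMessage` objects), usage might look like the following sketch:

```python
from haystack import Pipeline
from haystack.dataclasses import ChatMessage
from respan_exporter_haystack import RespanChatGenerator

pipeline = Pipeline()
pipeline.add_component("llm", RespanChatGenerator(
    api_key="your-api-key",
    model="gpt-4o-mini",
))

# Chat generators take ChatMessage objects rather than a raw prompt string.
result = pipeline.run({"llm": {"messages": [ChatMessage.from_user("Tell me a joke")]}})
```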