  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is Haystack?

Haystack is an end-to-end Python framework by deepset for building NLP and LLM pipelines. It provides composable components for retrieval, generation, and processing.

Setup

1. Install packages
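
The package names here are an assumption: the Respan SDK is taken to be published as respan, while Haystack 2.x ships as haystack-ai and the instrumentor as openinference-instrumentation-haystack.

pip install respan haystack-ai openinference-instrumentation-haystack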

2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
3. Initialize and run
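
A minimal sketch of this step, assembled from the snippets later on this page: initialize Respan with the Haystack instrumentor before building the pipeline, then run it as usual.

from respan import Respan
from openinference.instrumentation.haystack import HaystackInstrumentor
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator

# Initialize Respan with the Haystack instrumentor so each component run is traced
respan = Respan(instrumentations=[HaystackInstrumentor()])

pipeline = Pipeline()
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4.1-nano"))

result = pipeline.run({"llm": {"prompt": "What are the benefits of LLM observability?"}})
print(result["llm"]["replies"][0])

# Export any buffered spans before the process exits
respan.flush()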

4. View your trace

Open the Traces page to see your pipeline trace with individual component spans.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str \| None | None | Falls back to the RESPAN_API_KEY env var. |
| base_url | str \| None | None | Falls back to the RESPAN_BASE_URL env var. |
| instrumentations | list | [] | Plugin instrumentations to activate (e.g. HaystackInstrumentor()). |
| is_auto_instrument | bool \| None | False | Auto-discover and activate all installed instrumentors via OpenTelemetry entry points. |
| customer_identifier | str \| None | None | Default customer identifier for all spans. |
| metadata | dict \| None | None | Default metadata attached to all spans. |
| environment | str \| None | None | Environment tag (e.g. "production"). |
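
As an illustration, explicit configuration of the options not shown in the sections below; the parameter names come from the table above and the values are placeholders.

from respan import Respan

respan = Respan(
    api_key="YOUR_RESPAN_API_KEY",   # otherwise read from RESPAN_API_KEY
    environment="production",
    is_auto_instrument=True,         # auto-discover installed instrumentors
)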

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from openinference.instrumentation.haystack import HaystackInstrumentor

respan = Respan(
    instrumentations=[HaystackInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "haystack-app", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
from respan import Respan, workflow, propagate_attributes
from openinference.instrumentation.haystack import HaystackInstrumentor

respan = Respan(instrumentations=[HaystackInstrumentor()])

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "pro"},
    ):
        # `pipeline` is a Haystack Pipeline built elsewhere (see the Examples section)
        result = pipeline.run({"llm": {"prompt": question}})
        print(result["llm"]["replies"][0])

| Attribute | Type | Description |
| --- | --- | --- |
| customer_identifier | str | Identifies the end user in Respan analytics. |
| thread_identifier | str | Groups related messages into a conversation. |
| metadata | dict | Custom key-value pairs; merged with default metadata. |

Decorators

Use @workflow and @task to create structured trace hierarchies.
from respan import Respan, workflow, task
from openinference.instrumentation.haystack import HaystackInstrumentor
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator

respan = Respan(instrumentations=[HaystackInstrumentor()])

@task(name="generate_response")
def generate(prompt: str) -> str:
    pipeline = Pipeline()
    pipeline.add_component("llm", OpenAIGenerator(model="gpt-4.1-nano"))
    result = pipeline.run({"llm": {"prompt": prompt}})
    return result["llm"]["replies"][0]

@workflow(name="qa_pipeline")
def qa(question: str):
    answer = generate(question)
    print(answer)

qa("What are the benefits of LLM observability?")
respan.flush()

Examples

Basic pipeline

from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from haystack.components.builders import PromptBuilder

template = """Answer the question based on your knowledge.
Question: {{question}}
Answer:"""

pipeline = Pipeline()
pipeline.add_component("prompt", PromptBuilder(template=template))
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4.1-nano"))
pipeline.connect("prompt", "llm")

result = pipeline.run({"prompt": {"question": "What is the capital of France?"}})
print(result["llm"]["replies"][0])

Gateway

You can route LLM calls through the Respan gateway by configuring the OpenAI generator:
from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

generator = OpenAIGenerator(
    model="gpt-4.1-nano",
    # Haystack expects a Secret here; authenticate with your Respan API key
    api_key=Secret.from_env_var("RESPAN_API_KEY"),
    api_base_url="https://api.respan.ai/api",
)
With the gateway, no OPENAI_API_KEY is needed and you can switch models across providers.
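
Because the gateway is reached through the OpenAI-compatible generator, switching providers is just a change to the model string. This is a sketch only: the model identifiers actually available depend on which providers you have connected in Respan, and the one below is purely illustrative.

from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

generator = OpenAIGenerator(
    model="claude-sonnet-4",  # hypothetical model ID; use whatever your gateway exposes
    api_key=Secret.from_env_var("RESPAN_API_KEY"),
    api_base_url="https://api.respan.ai/api",
)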