  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is IBM watsonx?

IBM watsonx is IBM's enterprise AI platform, providing access to foundation models for text generation, embeddings, and more. Respan can auto-instrument all watsonx calls for tracing and observability.
Because watsonx uses a proprietary API format, only Tracing setup is available; Gateway routing is not supported.

Setup

1. Install packages

pip install respan-ai opentelemetry-instrumentation-watsonx ibm-watsonx-ai python-dotenv
2. Set environment variables

export WATSONX_API_KEY="YOUR_WATSONX_API_KEY"
export WATSONX_PROJECT_ID="YOUR_WATSONX_PROJECT_ID"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
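Because the script in the next step calls load_dotenv(), these values can alternatively live in a .env file next to your script instead of being exported in the shell:

```
WATSONX_API_KEY=YOUR_WATSONX_API_KEY
WATSONX_PROJECT_ID=YOUR_WATSONX_PROJECT_ID
RESPAN_API_KEY=YOUR_RESPAN_API_KEY
```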
3. Initialize and run

import os
from dotenv import load_dotenv

load_dotenv()

from ibm_watsonx_ai.foundation_models import Model
from respan import Respan

# Auto-discover and activate all installed instrumentors (Traceloop)
respan = Respan(is_auto_instrument=True)

# Configure watsonx credentials
credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": os.getenv("WATSONX_API_KEY"),
}

# Initialize the model
model = Model(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=credentials,
    project_id=os.getenv("WATSONX_PROJECT_ID"),
)

# Calls go directly to watsonx, auto-traced by Respan
response = model.generate_text("Say hello in three languages.")
print(response)
respan.flush()
4. View your trace

Open the Traces page to see your auto-instrumented LLM spans.

Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str \| None` | `None` | Falls back to the `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | Falls back to the `RESPAN_BASE_URL` env var. |
| `is_auto_instrument` | `bool \| None` | `False` | Auto-discover and activate all installed instrumentors via OpenTelemetry entry points. |
| `customer_identifier` | `str \| None` | `None` | Default customer identifier for all spans. |
| `metadata` | `dict \| None` | `None` | Default metadata attached to all spans. |
| `environment` | `str \| None` | `None` | Environment tag (e.g. `"production"`). |
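The "falls back to the env var" behavior in the table can be sketched in plain Python. `resolve_setting` below is a hypothetical helper that only illustrates the documented precedence (explicit argument first, then environment variable), not Respan's actual implementation:

```python
import os

def resolve_setting(explicit, env_var, default=None):
    """Return the explicit value if given, else the env var, else the default.

    Mirrors the documented precedence for api_key / base_url: an argument
    passed to Respan() wins; otherwise the environment variable is used.
    """
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ["RESPAN_API_KEY"] = "sk-from-env"

# An explicit argument wins over the environment variable
print(resolve_setting("sk-explicit", "RESPAN_API_KEY"))  # sk-explicit

# No explicit value: falls back to RESPAN_API_KEY
print(resolve_setting(None, "RESPAN_API_KEEY" if False else "RESPAN_API_KEY"))  # sk-from-env
```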

Attributes

Attach customer identifiers, thread IDs, and metadata to spans.

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan

respan = Respan(
    is_auto_instrument=True,
    customer_identifier="user_123",
    metadata={"service": "watsonx-api", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
import os

from ibm_watsonx_ai.foundation_models import Model
from respan import Respan, workflow, propagate_attributes

respan = Respan(
    is_auto_instrument=True,
    metadata={"service": "watsonx-api", "version": "1.0.0"},
)

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": os.getenv("WATSONX_API_KEY"),
}
model = Model(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=credentials,
    project_id=os.getenv("WATSONX_PROJECT_ID"),
)

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "enterprise"},
    ):
        response = model.generate_text(question)
        print(response)

| Attribute | Type | Description |
|---|---|---|
| `customer_identifier` | `str` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `str` | Groups related messages into a conversation. |
| `metadata` | `dict` | Custom key-value pairs. Merged with default metadata. |
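The "merged with default metadata" behavior can be illustrated with plain dicts. This is a sketch of the documented semantics, not Respan internals, and it assumes per-request keys win on conflict:

```python
def merge_metadata(defaults: dict, per_request: dict) -> dict:
    # Start from the defaults set in Respan(), then layer the per-request
    # metadata from propagate_attributes on top. Assumption: per-request
    # values override defaults when the same key appears in both.
    return {**defaults, **per_request}

defaults = {"service": "watsonx-api", "version": "1.0.0"}
per_request = {"plan": "enterprise", "version": "1.0.1"}

print(merge_metadata(defaults, per_request))
# {'service': 'watsonx-api', 'version': '1.0.1', 'plan': 'enterprise'}
```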

Decorators

Use @workflow and @task to create structured trace hierarchies.
import os

from ibm_watsonx_ai.foundation_models import Model
from respan import Respan, workflow, task

respan = Respan(is_auto_instrument=True)

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": os.getenv("WATSONX_API_KEY"),
}
model = Model(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=credentials,
    project_id=os.getenv("WATSONX_PROJECT_ID"),
)

@task(name="generate_outline")
def outline(topic: str) -> str:
    return model.generate_text(f"Create a brief outline about: {topic}")

@workflow(name="content_pipeline")
def pipeline(topic: str):
    plan = outline(topic)
    response = model.generate_text(f"Write content from this outline: {plan}")
    print(response)

pipeline("Benefits of API gateways")
respan.flush()

Examples

Basic generate

response = model.generate_text("Say hello in three languages.")
print(response)