Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}
What is Aleph Alpha?
Aleph Alpha is a European AI company providing sovereign AI solutions. The Aleph Alpha Python client supports completions, embeddings, and semantic search. Respan can auto-instrument all Aleph Alpha calls for tracing and observability.
Aleph Alpha uses a proprietary API format, so only the Tracing setup is available; Gateway routing is not supported.
Setup
Install packages
pip install respan-ai opentelemetry-instrumentation-alephalpha aleph-alpha-client python-dotenv
Set environment variables
export ALEPH_ALPHA_API_KEY="YOUR_ALEPH_ALPHA_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
Initialize and run
import os
from dotenv import load_dotenv

load_dotenv()

from aleph_alpha_client import Client, CompletionRequest, Prompt
from respan import Respan

# Auto-discover and activate all installed instrumentors (Traceloop)
respan = Respan(is_auto_instrument=True)

# Initialize Aleph Alpha client
client = Client(token=os.getenv("ALEPH_ALPHA_API_KEY"))

# Calls go directly to Aleph Alpha, auto-traced by Respan
request = CompletionRequest(
    prompt=Prompt.from_text("Say hello in three languages."),
    maximum_tokens=128,
)
response = client.complete(request, model="luminous-supreme-control")
print(response.completions[0].completion)

respan.flush()
View your trace
Open the Traces page to see your auto-instrumented LLM spans.
Configuration
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Falls back to the `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | Falls back to the `RESPAN_BASE_URL` env var. |
| `is_auto_instrument` | `bool \| None` | `False` | Auto-discover and activate all installed instrumentors via OpenTelemetry entry points. |
| `customer_identifier` | `str \| None` | `None` | Default customer identifier for all spans. |
| `metadata` | `dict \| None` | `None` | Default metadata attached to all spans. |
| `environment` | `str \| None` | `None` | Environment tag (e.g. `"production"`). |
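The fallback behavior above can be sketched as follows. This is an illustration of the documented precedence (an explicit argument wins over the environment variable), not the actual Respan implementation; `resolve_api_key` is a hypothetical helper.

```python
import os

# Hypothetical helper illustrating the documented fallback:
# an explicit argument wins; otherwise RESPAN_API_KEY is used.
def resolve_api_key(api_key=None):
    return api_key if api_key is not None else os.getenv("RESPAN_API_KEY")

os.environ["RESPAN_API_KEY"] = "key-from-env"

print(resolve_api_key())                # falls back to the env var
print(resolve_api_key("explicit-key"))  # explicit argument wins
```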
Attributes
Attach customer identifiers, thread IDs, and metadata to spans.
In Respan()
Set defaults at initialization — these apply to all spans.
from respan import Respan

respan = Respan(
    is_auto_instrument=True,
    customer_identifier="user_123",
    metadata={"service": "completion-api", "version": "1.0.0"},
)
With propagate_attributes
Override per-request using a context manager.
import os

from aleph_alpha_client import Client, CompletionRequest, Prompt
from respan import Respan, workflow, propagate_attributes

respan = Respan(
    is_auto_instrument=True,
    metadata={"service": "completion-api", "version": "1.0.0"},
)

client = Client(token=os.getenv("ALEPH_ALPHA_API_KEY"))

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "enterprise"},
    ):
        request = CompletionRequest(
            prompt=Prompt.from_text(question),
            maximum_tokens=128,
        )
        response = client.complete(request, model="luminous-supreme-control")
        print(response.completions[0].completion)
| Attribute | Type | Description |
| --- | --- | --- |
| `customer_identifier` | `str` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `str` | Groups related messages into a conversation. |
| `metadata` | `dict` | Custom key-value pairs. Merged with default metadata. |
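The metadata merge in the table can be sketched as a standard dict merge. The exact conflict resolution is an assumption here; this sketch assumes per-request keys override the defaults set in `Respan(...)`.

```python
# Defaults set in Respan(...) and per-request metadata from propagate_attributes
default_metadata = {"service": "completion-api", "version": "1.0.0"}
request_metadata = {"plan": "enterprise", "version": "1.1.0"}

# Assumed merge semantics: request-level keys win on conflict
merged = {**default_metadata, **request_metadata}
print(merged)
# {'service': 'completion-api', 'version': '1.1.0', 'plan': 'enterprise'}
```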
Decorators
Use @workflow and @task to create structured trace hierarchies.
import os

from aleph_alpha_client import Client, CompletionRequest, Prompt
from respan import Respan, workflow, task

respan = Respan(is_auto_instrument=True)
client = Client(token=os.getenv("ALEPH_ALPHA_API_KEY"))

@task(name="generate_outline")
def outline(topic: str) -> str:
    request = CompletionRequest(
        prompt=Prompt.from_text(f"Create a brief outline about: {topic}"),
        maximum_tokens=256,
    )
    response = client.complete(request, model="luminous-supreme-control")
    return response.completions[0].completion

@workflow(name="content_pipeline")
def pipeline(topic: str):
    plan = outline(topic)
    request = CompletionRequest(
        prompt=Prompt.from_text(f"Write content from this outline: {plan}"),
        maximum_tokens=512,
    )
    response = client.complete(request, model="luminous-supreme-control")
    print(response.completions[0].completion)

pipeline("Benefits of API gateways")
respan.flush()
Examples
Basic completion
request = CompletionRequest(
    prompt=Prompt.from_text("Say hello in three languages."),
    maximum_tokens=128,
)
response = client.complete(request, model="luminous-supreme-control")
print(response.completions[0].completion)