Guardrails AI

Guardrails AI is a framework for adding structural, type, and quality guarantees to LLM outputs. It validates, corrects, and structures LLM responses to ensure they meet your requirements. Respan gives you full observability over every guard validation, re-ask loop, validator execution, and LLM call, and can also route requests through its OpenAI-compatible gateway endpoint.
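To make the validate-correct-reask cycle concrete, here is a minimal plain-Python sketch of the loop Guardrails runs for an `on_fail="reask"` validator. Everything in it (the `call_llm` stub, the `guarded_call` helper, the loop bounds) is illustrative only, not Respan or Guardrails API:

```python
import re

def call_llm(prompt: str, attempt: int) -> str:
    # Stub standing in for a real LLM call; returns a failing answer first,
    # then a passing one, so the re-ask path is exercised.
    return "needs a capital" if attempt == 0 else "Testing catches bugs early."

def guarded_call(prompt: str, pattern: str, max_reasks: int = 2) -> str:
    # Validate each output against the pattern; on failure, re-ask with the
    # failure reason appended to the prompt, up to max_reasks extra attempts.
    for attempt in range(max_reasks + 1):
        output = call_llm(prompt, attempt)
        if re.match(pattern, output):
            return output
        prompt += f"\n(Previous answer failed pattern {pattern!r}; try again.)"
    raise ValueError("validation failed after all re-asks")

print(guarded_call("Write one sentence about testing.", r"^[A-Z].*\.$"))
# The second attempt passes: it starts with a capital and ends with a period.
```

Respan records each iteration of this loop (the validator check, the re-ask, the underlying LLM call) as spans in one trace.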

Create an account at platform.respan.ai and grab an API key. To use the gateway, also add credits or a provider key.

Run `npx @respan/cli setup` to set up with your coding agent.

Setup

1. Install packages

```shell
pip install respan-ai openinference-instrumentation-guardrails guardrails-ai
```
2. Set environment variables

```shell
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
```

`OPENAI_API_KEY` is used for LLM requests; `RESPAN_API_KEY` is used to export traces to Respan.

3. Initialize and run

```python
from guardrails import Guard
from guardrails.hub import RegexMatch
from respan import Respan
from openinference.instrumentation.guardrails import GuardrailsInstrumentor

respan = Respan(instrumentations=[GuardrailsInstrumentor()])

guard = Guard().use(
    RegexMatch(regex=r"^[A-Z].*\.$", on_fail="reask")
)

result = guard(
    model="gpt-4.1-nano",
    messages=[{
        "role": "user",
        "content": "Write a single sentence about the importance of testing."
    }],
)
print(result.validated_output)
respan.flush()
```
4. View your trace

Open the Traces page to see your Guardrails workflow with validation passes, re-ask loops, and LLM calls.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Falls back to the `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | Falls back to the `RESPAN_BASE_URL` env var. |
| `instrumentations` | `list` | `[]` | Plugin instrumentations to activate (e.g. `GuardrailsInstrumentor()`). |
| `customer_identifier` | `str \| None` | `None` | Default customer identifier for all spans. |
| `metadata` | `dict \| None` | `None` | Default metadata attached to all spans. |
| `environment` | `str \| None` | `None` | Environment tag (e.g. `"production"`). |

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.

```python
from respan import Respan
from openinference.instrumentation.guardrails import GuardrailsInstrumentor

respan = Respan(
    instrumentations=[GuardrailsInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "guardrails-api", "version": "1.0.0"},
)
```

With propagate_attributes

Override per-request using a context scope.

```python
from respan import Respan, propagate_attributes
from openinference.instrumentation.guardrails import GuardrailsInstrumentor

respan = Respan(instrumentations=[GuardrailsInstrumentor()])

def handle_request(user_id: str, prompt: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_abc_123",
        metadata={"plan": "pro"},
    ):
        result = guard(model="gpt-4.1-nano", messages=[{"role": "user", "content": prompt}])
        print(result.validated_output)
```
| Attribute | Type | Description |
| --- | --- | --- |
| `customer_identifier` | `str` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `str` | Groups related messages into a conversation. |
| `metadata` | `dict` | Custom key-value pairs. Merged with default metadata. |
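The "merged with default metadata" behavior can be sketched with `contextvars`, the standard mechanism for per-request scoping like `propagate_attributes`. The names below are illustrative, not the SDK's internals:

```python
from contextlib import contextmanager
from contextvars import ContextVar

# Defaults set at initialization apply to all spans.
DEFAULT_METADATA = {"service": "guardrails-api", "version": "1.0.0"}
_scoped: ContextVar[dict] = ContextVar("scoped_metadata", default={})

@contextmanager
def propagate_metadata(**metadata):
    # Per-request overrides live in a context variable for the scope's duration.
    token = _scoped.set({**_scoped.get(), **metadata})
    try:
        yield
    finally:
        _scoped.reset(token)

def span_metadata() -> dict:
    # Scoped values are merged over the defaults, overriding on key clashes.
    return {**DEFAULT_METADATA, **_scoped.get()}

with propagate_metadata(plan="pro", version="2.0.0"):
    print(span_metadata())  # defaults plus {"plan": "pro"}, version overridden
print(span_metadata())      # back to the defaults outside the scope
```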