Guardrails AI

Trace Guardrails AI validation workflows with Respan.
  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

```json
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}
```

What is Guardrails AI?

Guardrails AI is a framework for adding structural, type, and quality guarantees to LLM outputs. It validates, corrects, and structures LLM responses to ensure they meet your requirements.
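To make the validate-and-correct idea concrete, here is a plain-Python sketch of the loop a guard runs conceptually: validate the model's output, and on failure re-ask the model before giving up. This is illustrative only, not the Guardrails API; the `guard_with_reask` helper and the regex check are stand-ins for the library's validators and re-ask machinery.

```python
import re


def validate(output: str) -> bool:
    # Stand-in validator: output must start with a capital letter
    # and end with a period (mirrors a RegexMatch-style check).
    return re.match(r"^[A-Z].*\.$", output) is not None


def guard_with_reask(llm, prompt: str, max_reasks: int = 2):
    # Ask once, then re-ask up to max_reasks times on validation failure.
    output = llm(prompt)
    for _ in range(max_reasks):
        if validate(output):
            return output
        # Re-ask: feed the failure back to the model and try again.
        output = llm(f"{prompt}\nYour previous answer failed validation; try again.")
    return output if validate(output) else None


# Demo with a fake LLM that fails validation once, then succeeds.
responses = iter(["lowercase start", "Testing matters."])
result = guard_with_reask(lambda _prompt: next(responses), "Write one sentence.")
```

Guardrails performs this loop (and much more, such as structured-output parsing) internally; Respan's instrumentation records each validation pass and re-ask as spans in the trace.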

Setup

1. Install packages

```shell
pip install respan-ai openinference-instrumentation-guardrails guardrails-ai
```
2. Set environment variables

```shell
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
```
3. Initialize and run

```python
import os
from dotenv import load_dotenv

load_dotenv()

from respan import Respan
from respan_instrumentation_openinference import OpenInferenceInstrumentor
from openinference_instrumentation_guardrails import GuardrailsInstrumentor
from guardrails import Guard
from guardrails.hub import RegexMatch

# Initialize Respan with Guardrails instrumentation
respan = Respan(
    instrumentations=[
        OpenInferenceInstrumentor(instrumentor=GuardrailsInstrumentor())
    ]
)

# Create a guard with validators
guard = Guard().use(
    RegexMatch(regex=r"^[A-Z].*\.$", on_fail="reask")
)

# Run the guard with an LLM call
result = guard(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write a single sentence about the importance of testing."
    }],
)

print(result.validated_output)
respan.flush()
```
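The `RegexMatch` validator in the example enforces that the output starts with a capital letter and ends with a period. You can check the pattern's behavior outside Guardrails with the standard `re` module (the sample strings below are illustrative):

```python
import re

# The same pattern passed to RegexMatch in the guard above.
pattern = r"^[A-Z].*\.$"

# Passes: capitalized and ends with a period.
assert re.match(pattern, "Testing catches bugs early.")

# Fails: lowercase first letter.
assert not re.match(pattern, "testing catches bugs early.")

# Fails: missing trailing period.
assert not re.match(pattern, "Testing catches bugs early")
```

When the pattern fails, `on_fail="reask"` tells Guardrails to send the failure back to the model and request a corrected answer, which shows up as a re-ask loop in the trace.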
4. View your trace

Open the Traces page to see your Guardrails workflow with validation passes, re-ask loops, and LLM calls.

What gets traced

All Guardrails AI operations are auto-instrumented:

  • Guard validation passes and failures
  • LLM calls with model, tokens, and input/output
  • Re-ask loops when validation fails
  • Individual validator execution
  • Output parsing and correction

Traces appear in the Traces dashboard.
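The "individual validator execution" events above can be pictured as a guard running each validator in turn and recording a pass/fail result, which is roughly what becomes a span in the trace. The sketch below is plain Python for illustration, not the Guardrails or Respan APIs:

```python
def starts_capitalized(text: str) -> bool:
    # Validator 1: first character must be uppercase.
    return text[:1].isupper()


def ends_with_period(text: str) -> bool:
    # Validator 2: text must end with a period.
    return text.endswith(".")


def run_validators(text, validators):
    # Run each validator and record a (name, outcome) event,
    # analogous to the per-validator spans in a trace.
    return [(v.__name__, "pass" if v(text) else "fail") for v in validators]


events = run_validators("testing matters.", [starts_capitalized, ends_with_period])
```

Here the first validator fails and the second passes; in a real run, any failure would trigger the guard's `on_fail` policy (such as a re-ask), and both outcomes would be visible on the Traces page.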

Learn more