Guardrails AI
Guardrails AI is a framework for adding structural, type, and quality guarantees to LLM outputs. It validates, corrects, and structures LLM responses to ensure they meet your requirements. Respan gives you full observability over every guard validation, re-ask loop, validator execution, and LLM call, plus gateway routing through the OpenAI-compatible Respan endpoint.
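The validate-and-re-ask loop at the heart of Guardrails can be sketched in plain Python. This is an illustration of the control flow only, not the Guardrails API: call_llm, validate_json, and guarded_call are hypothetical stand-ins.

```python
import json

def call_llm(prompt, attempt):
    # Stand-in for a real LLM call; returns invalid JSON on the first try
    # so the re-ask path is exercised.
    return "not json" if attempt == 0 else '{"city": "Paris"}'

def validate_json(raw):
    # Structural check: the output must parse as a JSON object.
    try:
        value = json.loads(raw)
        return (True, value) if isinstance(value, dict) else (False, None)
    except json.JSONDecodeError:
        return (False, None)

def guarded_call(prompt, max_reasks=2):
    # Validate each response; on failure, re-ask with the error appended.
    for attempt in range(max_reasks + 1):
        raw = call_llm(prompt, attempt)
        ok, value = validate_json(raw)
        if ok:
            return value
        prompt += "\nYour last answer was not valid JSON. Try again."
    raise ValueError("validation failed after all re-asks")

print(guarded_call("Return a JSON object with the capital of France."))
# → {'city': 'Paris'}
```

With Respan tracing enabled, each iteration of a loop like this shows up as its own validator execution and LLM call in the trace.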
Set up Respan
Create an account at platform.respan.ai and generate an API key. To use the gateway, also add credits or a provider key.
Run npx @respan/cli setup to configure Respan with your coding agent.
Example projects
Tracing
Gateway
Setup
Set environment variables
OPENAI_API_KEY is used for LLM requests. RESPAN_API_KEY is used to export traces to Respan.
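For example, both keys can be set from Python before the app initializes (in practice you would usually export them in your shell or a .env file; the key values below are placeholders):

```python
import os

# Placeholder values; substitute your real keys.
os.environ["OPENAI_API_KEY"] = "sk-..."      # authenticates LLM requests
os.environ["RESPAN_API_KEY"] = "respan-..."  # exports traces to Respan

print("keys set:", bool(os.environ["OPENAI_API_KEY"]))
```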
View your trace
Open the Traces page to see your Guardrails workflow with validation passes, re-ask loops, and LLM calls.
Configuration
Attributes
In Respan()
Set defaults at initialization; these apply to all spans.
With propagate_attributes
Override per-request using a context scope.
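The two scopes above can be sketched with contextvars. This is a stdlib illustration of the defaults-plus-override pattern, not the Respan SDK: the Tracer class and its method names are hypothetical stand-ins (propagate_attributes mirrors the name used above).

```python
import contextvars
from contextlib import contextmanager

# Hypothetical stand-in for the Respan client, illustrating how
# init-time defaults combine with context-scoped overrides.
_overrides = contextvars.ContextVar("overrides", default={})

class Tracer:
    def __init__(self, **defaults):
        self.defaults = defaults  # applied to every span

    @contextmanager
    def propagate_attributes(self, **attrs):
        # Attributes set here win over init-time defaults within the scope.
        token = _overrides.set({**_overrides.get(), **attrs})
        try:
            yield
        finally:
            _overrides.reset(token)

    def span_attributes(self):
        return {**self.defaults, **_overrides.get()}

tracer = Tracer(environment="prod", customer_id="default")
with tracer.propagate_attributes(customer_id="acme"):
    print(tracer.span_attributes())  # customer_id overridden inside scope
print(tracer.span_attributes())      # defaults restored outside scope
```

The context variable makes the override safe under concurrency: each request's scope sees its own attributes, while spans outside any scope fall back to the initialization defaults.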