RubyLLM
RubyLLM provides a unified Ruby interface for GPT, Claude, Gemini, and more. Since Respan is OpenAI-compatible, you can route all RubyLLM requests through the Respan gateway by pointing the OpenAI base URL to Respan and get full observability automatically.
Set up Respan
Create an account at platform.respan.ai and grab an API key. To use the gateway, also add credits or a provider key.
Run npx @respan/cli setup to set up with your coding agent.
Gateway
Setup
Set environment variables
No provider key needed — the Respan gateway handles provider authentication.
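Once your key is set, point RubyLLM's OpenAI base URL at the Respan gateway. The sketch below is a minimal example: the `RESPAN_API_KEY` environment variable name and the `https://api.respan.ai/v1` base URL are assumptions here — substitute the values shown in your Respan dashboard.

```ruby
require "ruby_llm"

RubyLLM.configure do |config|
  # The Respan gateway handles provider authentication,
  # so only your Respan key is needed.
  config.openai_api_key  = ENV["RESPAN_API_KEY"]          # assumed env var name
  config.openai_api_base = "https://api.respan.ai/v1"     # assumed gateway URL
end

# Make a first call to produce a trace.
chat = RubyLLM.chat(model: "gpt-4o-mini")
puts chat.ask("Say hello in one word").content
```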
View your trace
Open the Traces page to see your gateway-routed calls with prompts, tokens, and cost.
Switch models
OpenAI models work through the gateway directly. For non-OpenAI models (Claude, Gemini, etc.), add provider: :openai and assume_model_exists: true so the requests are routed through the Respan gateway.
provider: :openai doesn’t mean the model is from OpenAI — it tells RubyLLM to use the OpenAI API protocol to send the request. Without it, RubyLLM would call the provider directly, bypassing Respan. assume_model_exists: true skips RubyLLM’s local model registry check.
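A sketch of both cases (the exact model IDs below are illustrative — use whatever IDs your Respan account exposes):

```ruby
# OpenAI models need no extra options:
openai_chat = RubyLLM.chat(model: "gpt-4o")

# Non-OpenAI models: force the OpenAI wire protocol so the request
# goes to the Respan gateway rather than the provider's own API,
# and skip RubyLLM's local model registry check.
claude_chat = RubyLLM.chat(
  model: "claude-sonnet-4-20250514",   # assumed model ID
  provider: :openai,
  assume_model_exists: true
)
```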
See the full model list.
Streaming
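Streaming works the same through the gateway. RubyLLM yields chunks to a block passed to ask; a minimal sketch, assuming the configuration above is in place:

```ruby
chat = RubyLLM.chat(model: "gpt-4o")

# Each chunk arrives as it is generated; print tokens as they stream in.
chat.ask("Write a haiku about Ruby") do |chunk|
  print chunk.content
end
```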
Multi-tenancy with contexts
Use RubyLLM contexts to isolate per-tenant configuration.
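A context carries its own configuration without touching the global one, so each tenant can use its own credentials. In this sketch, per-tenant Respan keys and the base URL are assumptions — adapt them to however your app stores tenant credentials:

```ruby
# Isolated configuration for one tenant; the global RubyLLM
# configuration is left untouched.
tenant_context = RubyLLM.context do |config|
  config.openai_api_key  = tenant.respan_api_key            # assumed per-tenant key
  config.openai_api_base = "https://api.respan.ai/v1"       # assumed gateway URL
end

# Chats created from the context use that tenant's settings.
chat = tenant_context.chat(model: "gpt-4o")
chat.ask("Summarize this tenant's latest report")
```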
Rails integration
Set your Respan config in an initializer.
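For example (the env var name and base URL are assumptions — use the values from your Respan dashboard):

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key  = ENV["RESPAN_API_KEY"]        # assumed env var name
  config.openai_api_base = "https://api.respan.ai/v1"   # assumed gateway URL
end
```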
Use acts_as_chat as normal — all LLM calls will be routed through Respan.
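A minimal sketch of the Rails side, assuming you have run RubyLLM's install generator so the Chat and Message tables exist:

```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat   # persists messages and tool calls automatically
end

# Anywhere in your app: calls go through the Respan gateway
# configured in the initializer, and are persisted to the database.
chat = Chat.create!(model_id: "gpt-4o")
chat.ask("What is the capital of France?")
```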