Prerequisites

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page, or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is Instructor?

Instructor is a Python library for getting structured, validated outputs from LLMs using Pydantic models. It patches LLM clients (OpenAI, Anthropic, etc.) to return typed responses with automatic retries on validation failures. The Respan integration uses the OpenInference instrumentor to capture all Instructor calls, retries, and validations as traced spans.

Setup

1. Install packages

pip install respan-ai openinference-instrumentation-instructor instructor openai python-dotenv
2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
3. Initialize and run

from dotenv import load_dotenv

load_dotenv()

from respan import Respan
from openinference.instrumentation.instructor import InstructorInstrumentor
from openai import OpenAI
import instructor
from pydantic import BaseModel

# Initialize Respan with Instructor instrumentation
respan = Respan(instrumentations=[InstructorInstrumentor()])

# Patch OpenAI client with Instructor
client = instructor.from_openai(OpenAI())


class User(BaseModel):
    name: str
    age: int


user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=User,
    messages=[
        {"role": "user", "content": "Extract: John is 30 years old."}
    ],
)
print(f"Name: {user.name}, Age: {user.age}")
respan.flush()
4. View your trace

Open the Traces page to see your Instructor extraction calls, validations, and retries as traced spans.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str \| None | None | Falls back to the RESPAN_API_KEY env var. |
| base_url | str \| None | None | Falls back to the RESPAN_BASE_URL env var. |
| instrumentations | list | [] | Plugin instrumentations to activate (e.g. InstructorInstrumentor()). |
| customer_identifier | str \| None | None | Default customer identifier for all spans. |
| metadata | dict \| None | None | Default metadata attached to all spans. |
| environment | str \| None | None | Environment tag (e.g. "production"). |
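Putting the table together, a fully explicit initialization might look like the sketch below. All values are placeholders, and the base_url shown is an assumption for illustration; omit api_key and base_url to fall back to the env vars.

```python
from respan import Respan
from openinference.instrumentation.instructor import InstructorInstrumentor

# Explicit configuration sketch; every parameter here is optional and
# falls back to its env var (RESPAN_API_KEY, RESPAN_BASE_URL) or default.
respan = Respan(
    api_key="YOUR_RESPAN_API_KEY",
    base_url="https://api.respan.ai",  # placeholder, an assumption for illustration
    instrumentations=[InstructorInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "extraction-api", "version": "1.0.0"},
    environment="production",
)
```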

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from openinference.instrumentation.instructor import InstructorInstrumentor

respan = Respan(
    instrumentations=[InstructorInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "extraction-api", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
from respan import Respan, propagate_attributes
from openinference.instrumentation.instructor import InstructorInstrumentor

respan = Respan(instrumentations=[InstructorInstrumentor()])

# `client` and `User` are defined as in the Setup section
def extract_for_user(user_id: str, text: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="session_001",
        metadata={"plan": "pro"},
    ):
        user = client.chat.completions.create(
            model="gpt-4o-mini",
            response_model=User,
            messages=[{"role": "user", "content": f"Extract: {text}"}],
        )
        return user
| Attribute | Type | Description |
| --- | --- | --- |
| customer_identifier | str | Identifies the end user in Respan analytics. |
| thread_identifier | str | Groups related messages into a conversation. |
| metadata | dict | Custom key-value pairs. Merged with default metadata. |
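The metadata merge can be pictured as a plain dict union. The table only says the dicts are "merged", so the precedence shown here, with per-request keys winning over defaults, is an assumption:

```python
# Defaults set in Respan(...) at init time.
default_metadata = {"service": "extraction-api", "version": "1.0.0"}

# Per-request metadata passed to propagate_attributes(...).
request_metadata = {"plan": "pro", "version": "1.1.0"}

# Assumed merge semantics: per-request keys override defaults on conflict.
merged = {**default_metadata, **request_metadata}
print(merged)
# {'service': 'extraction-api', 'version': '1.1.0', 'plan': 'pro'}
```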

Examples

Basic structured extraction

Extract typed data from unstructured text with automatic validation.
from pydantic import BaseModel
import instructor
from openai import OpenAI
from respan import Respan
from openinference.instrumentation.instructor import InstructorInstrumentor

# Initialize Respan so the calls below are traced and flush() works
respan = Respan(instrumentations=[InstructorInstrumentor()])
client = instructor.from_openai(OpenAI())


class ContactInfo(BaseModel):
    name: str
    email: str
    phone: str


contact = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=ContactInfo,
    messages=[
        {
            "role": "user",
            "content": "Reach out to Sarah at sarah@example.com or call 555-0123.",
        }
    ],
)
print(f"{contact.name}: {contact.email}, {contact.phone}")
respan.flush()

Nested models

Use nested Pydantic models for complex, hierarchical extractions.
from pydantic import BaseModel, Field
from typing import List
import instructor
from openai import OpenAI
from respan import Respan
from openinference.instrumentation.instructor import InstructorInstrumentor

# Initialize Respan so the calls below are traced and flush() works
respan = Respan(instrumentations=[InstructorInstrumentor()])
client = instructor.from_openai(OpenAI())


class Ingredient(BaseModel):
    name: str
    quantity: str
    unit: str


class Recipe(BaseModel):
    title: str
    servings: int
    prep_time_minutes: int
    ingredients: List[Ingredient]
    steps: List[str] = Field(description="Ordered cooking steps")


recipe = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Recipe,
    messages=[
        {
            "role": "user",
            "content": "Give me a simple recipe for chocolate chip cookies.",
        }
    ],
)
print(f"{recipe.title} (serves {recipe.servings})")
for i, step in enumerate(recipe.steps, 1):
    print(f"  {i}. {step}")
respan.flush()

Retries with validation

Instructor automatically retries when validation fails. All retry attempts are captured in the trace.
from pydantic import BaseModel, field_validator
import instructor
from openai import OpenAI
from respan import Respan
from openinference.instrumentation.instructor import InstructorInstrumentor

# Initialize Respan so the calls below are traced and flush() works
respan = Respan(instrumentations=[InstructorInstrumentor()])
client = instructor.from_openai(OpenAI())


class SentimentResult(BaseModel):
    text: str
    sentiment: str
    confidence: float

    @field_validator("sentiment")
    @classmethod
    def validate_sentiment(cls, v: str) -> str:
        allowed = {"positive", "negative", "neutral"}
        if v.lower() not in allowed:
            raise ValueError(f"Sentiment must be one of {allowed}")
        return v.lower()

    @field_validator("confidence")
    @classmethod
    def validate_confidence(cls, v: float) -> float:
        if not 0.0 <= v <= 1.0:
            raise ValueError("Confidence must be between 0.0 and 1.0")
        return v


result = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=SentimentResult,
    max_retries=3,  # re-send with the validation error up to 3 times
    messages=[
        {
            "role": "user",
            "content": "Analyze sentiment: 'This product is absolutely amazing!'",
        }
    ],
)
print(f"Sentiment: {result.sentiment} ({result.confidence:.0%})")
respan.flush()
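Conceptually, the retry loop Instructor runs for you resembles this simplified, stdlib-only sketch. The function names (fake_llm, validate, create_with_retries) are hypothetical stand-ins, not Instructor's actual implementation; the point is that each failed validation is fed back to the model and each attempt becomes a span in the trace.

```python
import json


def fake_llm(messages):
    # Stand-in for the model call: returns an invalid sentiment first,
    # then a valid one once the validation error appears in the prompt.
    if any("must be one of" in m["content"] for m in messages):
        return '{"sentiment": "positive", "confidence": 0.9}'
    return '{"sentiment": "great", "confidence": 0.9}'


def validate(raw):
    # Stand-in for Pydantic validation of the raw model output.
    data = json.loads(raw)
    allowed = {"positive", "negative", "neutral"}
    if data["sentiment"] not in allowed:
        raise ValueError(f"Sentiment must be one of {allowed}")
    return data


def create_with_retries(messages, max_retries=3):
    for attempt in range(1, max_retries + 1):
        raw = fake_llm(messages)
        try:
            return validate(raw), attempt
        except ValueError as exc:
            # Re-prompt with the validation error so the model can
            # correct itself on the next attempt.
            messages = messages + [{"role": "user", "content": str(exc)}]
    raise RuntimeError("validation failed after retries")


result, attempts = create_with_retries(
    [{"role": "user", "content": "Analyze sentiment: 'Amazing!'"}]
)
print(result["sentiment"], attempts)
# positive 2
```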