AgentSpec

Trace AgentSpec agent workflows with Respan.
  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

```json
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}
```

What is AgentSpec?

AgentSpec is a specification and framework for defining and running AI agents. It provides a declarative way to describe agent capabilities, tools, and workflows.
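To make the declarative shape concrete, here is a minimal self-contained sketch using plain Python dataclasses. It illustrates what a spec bundles together (a name, a model, instructions, and tools); it is not the AgentSpec API itself, which appears in the Setup section below.

```python
from dataclasses import dataclass, field

# Illustrative only: plain dataclasses sketching the shape of a
# declarative agent specification. NOT the real AgentSpec classes.
@dataclass
class ToolSketch:
    name: str
    description: str

@dataclass
class AgentSpecSketch:
    name: str
    model: str
    instructions: str
    tools: list = field(default_factory=list)

spec = AgentSpecSketch(
    name="research-assistant",
    model="gpt-4o-mini",
    instructions="You are a research assistant.",
    tools=[ToolSketch("search", "Search for information on a topic")],
)
print(spec.name, [t.name for t in spec.tools])  # → research-assistant ['search']
```

The point of the declarative style is that everything the agent needs is captured in one data structure, which is what makes the workflow easy to instrument and trace.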

Setup

1. Install packages

```shell
pip install respan-ai openinference-instrumentation-agentspec agentspec
```
2. Set environment variables

```shell
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
```
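A common failure mode is initializing tracing with one of these keys missing from the process environment. A small sketch that fails fast before setup; the helper name `missing_env_vars` is ours for illustration, not part of Respan, but the variable names match the exports above:

```python
import os

def missing_env_vars(required=("RESPAN_API_KEY", "OPENAI_API_KEY")):
    """Return the names of any required variables not set (or empty) in the environment."""
    return [name for name in required if not os.environ.get(name)]

# Fail fast before initializing tracing if a key is absent.
missing = missing_env_vars()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```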
3. Initialize and run

```python
from dotenv import load_dotenv

load_dotenv()

from respan import Respan
from respan_instrumentation_openinference import OpenInferenceInstrumentor
from openinference_instrumentation_agentspec import AgentSpecInstrumentor
from agentspec import AgentSpec, Tool

# Initialize Respan with AgentSpec instrumentation
respan = Respan(
    instrumentations=[
        OpenInferenceInstrumentor(instrumentor=AgentSpecInstrumentor())
    ]
)

# Define an agent specification
spec = AgentSpec(
    name="research-assistant",
    model="gpt-4o-mini",
    instructions="You are a research assistant that helps find and summarize information.",
    tools=[
        Tool(name="search", description="Search for information on a topic"),
    ],
)

# Run the agent
result = spec.run("What are the key principles of AI safety?")
print(result)

# Flush buffered spans before the process exits
respan.flush()
```
4. View your trace

Open the Traces page to see your AgentSpec workflow with specification execution, tool calls, and LLM generations.

What gets traced

All AgentSpec operations are automatically instrumented:

  • Agent specification execution
  • Tool calls and results
  • LLM calls with model, tokens, and input/output
  • Workflow orchestration

Traces appear in the Traces dashboard.

Learn more