BeeAI

Trace BeeAI agent workflows with Respan.
  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is BeeAI?

BeeAI is an open-source framework for building, deploying, and serving production-ready AI agents. It provides a structured approach to agent development with built-in memory, tools, and LLM integration.

Setup

1. Install packages

$ pip install respan-ai openinference-instrumentation-beeai beeai-framework
2. Set environment variables

$ export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
$ export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
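The initialization script in the next step calls `load_dotenv()`, so as an alternative to exporting these variables you can keep them in a `.env` file next to your script (standard python-dotenv behavior):

```
RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
```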
3. Initialize and run

from dotenv import load_dotenv

# Load API keys before importing instrumented libraries
load_dotenv()

from respan import Respan
from respan_instrumentation_openinference import OpenInferenceInstrumentor
from openinference_instrumentation_beeai import BeeAIInstrumentor
from beeai_framework import Agent  # the beeai-framework package installs the beeai_framework module

# Initialize Respan with BeeAI instrumentation
respan = Respan(
    instrumentations=[
        OpenInferenceInstrumentor(instrumentor=BeeAIInstrumentor())
    ]
)

# Create and run an agent
agent = Agent(
    model="gpt-4o-mini",
    instructions="You are a helpful research assistant.",
)

response = agent.run("Explain the benefits of AI observability in production.")
print(response)

# Flush buffered spans before the process exits
respan.flush()
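Because `respan.flush()` only runs if the agent call returns normally, a common hardening step is to flush in a `finally` block so buffered spans are exported even when the agent raises. A minimal sketch of the pattern, using a stub in place of the Respan client (`StubTracer` and `run_agent_and_flush` are illustrative names, not part of the Respan API):

```python
class StubTracer:
    """Stand-in for the Respan client; only flush() matters for this sketch."""
    def __init__(self):
        self.flushed = False

    def flush(self):
        self.flushed = True


def run_agent_and_flush(tracer, fail=False):
    """Run a (simulated) agent call and always flush the tracer afterwards."""
    try:
        if fail:
            raise RuntimeError("simulated agent error")
        return "ok"
    finally:
        tracer.flush()  # runs on success and on error alike


tracer = StubTracer()
try:
    run_agent_and_flush(tracer, fail=True)
except RuntimeError:
    pass
print(tracer.flushed)  # True: spans were flushed despite the error
```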
4. View your trace

Open the Traces page to see your BeeAI agent trace with execution loops, tool calls, and LLM generations.

What gets traced

All BeeAI operations are auto-instrumented:

  • Agent execution loops
  • Tool calls and results
  • Memory operations
  • LLM calls with model, tokens, and input/output
  • Agent workflows

Traces appear in the Traces dashboard.

Learn more