Vercel AI SDK

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is the AI SDK?

The AI SDK (by Vercel) is a TypeScript toolkit for building AI-powered applications with Next.js, React, and other frameworks. It provides unified APIs for text generation, streaming, tool use, and structured outputs across multiple LLM providers. Respan gives you full observability over every LLM call, tool execution, and multi-step workflow.

Setup

1. Install packages

$ npm install ai @ai-sdk/openai @respan/respan @respan/tracing @respan/instrumentation-vercel
2. Set environment variables

$ export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
$ export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
3. Create instrumentation file

Create instrumentation.ts in your project root (same level as package.json):

instrumentation.ts
import { Respan } from "@respan/respan";
import { VercelAIInstrumentor } from "@respan/instrumentation-vercel";

export async function register() {
  const respan = new Respan({
    apiKey: process.env.RESPAN_API_KEY,
    appName: "my-app",
    instrumentations: [new VercelAIInstrumentor()],
  });
  await respan.initialize();
}
4. Configure Next.js (Next.js only)

Skip this step if you’re not using Next.js.

Add serverExternalPackages to next.config.ts so Next.js loads OpenTelemetry packages as Node.js modules instead of bundling them:

next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  serverExternalPackages: [
    "@respan/respan",
    "@respan/tracing",
    "@respan/respan-sdk",
    "@respan/instrumentation-vercel",
  ],
};

export default nextConfig;
5. Initialize and run

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Tell me a joke about AI",
  experimental_telemetry: {
    isEnabled: true,
    metadata: { customer_identifier: "user-123" },
  },
});
6. View your trace

Open the Traces page to see your AI calls with full input/output, token usage, and cost.

This step applies to Tracing and Both setups. The Gateway-only setup logs requests on the Logs page.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string \| undefined | RESPAN_API_KEY env var | Respan API key. |
| appName | string \| undefined | "default" | Service name shown in traces. |
| baseURL | string \| undefined | "https://api.respan.ai" | API base URL. |
| instrumentations | RespanInstrumentation[] | [] | Plugin instrumentations to activate. |
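
The defaults in the table resolve in a straightforward way. The sketch below is illustrative only; resolveRespanConfig is a hypothetical helper for this doc, not an SDK export:

```typescript
// Hypothetical sketch of how the constructor options in the table above
// could resolve to effective values. Not the actual SDK implementation.
interface RespanOptions {
  apiKey?: string;
  appName?: string;
  baseURL?: string;
  instrumentations?: unknown[];
}

function resolveRespanConfig(
  opts: RespanOptions,
  env: Record<string, string | undefined> = process.env
) {
  return {
    apiKey: opts.apiKey ?? env.RESPAN_API_KEY, // falls back to the env var
    appName: opts.appName ?? "default", // service name shown in traces
    baseURL: opts.baseURL ?? "https://api.respan.ai", // API base URL
    instrumentations: opts.instrumentations ?? [], // no plugins by default
  };
}

const cfg = resolveRespanConfig({ appName: "my-app" }, { RESPAN_API_KEY: "sk-test" });
console.log(cfg.appName, cfg.baseURL);
```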

Attributes

With experimental_telemetry metadata

Pass metadata directly on each AI SDK call. The instrumentation maps these to Respan fields automatically:

const result = await generateText({
  model: provider("gpt-4o-mini"),
  prompt: "Hello",
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      customer_identifier: "user-123",
      customer_name: "John Doe",
      customer_email: "john@example.com",
      thread_identifier: "thread-abc",
      trace_group_identifier: "onboarding-flow",
    },
  },
});

With propagateAttributes

Override per-request using a context scope. All AI SDK calls within the scope inherit these attributes:

import { generateText } from "ai";
import { Respan } from "@respan/respan";
import { VercelAIInstrumentor } from "@respan/instrumentation-vercel";

const respan = new Respan({
  instrumentations: [new VercelAIInstrumentor()],
});
await respan.initialize();

async function handleRequest(userId: string, message: string) {
  return respan.propagateAttributes(
    {
      customer_identifier: userId,
      thread_identifier: "conv_abc_123",
      metadata: { plan: "pro" },
    },
    async () => {
      const result = await generateText({
        model: provider("gpt-4o-mini"),
        prompt: message,
        experimental_telemetry: { isEnabled: true },
      });
      return result.text;
    }
  );
}

| Attribute | Type | Description |
| --- | --- | --- |
| customer_identifier | string | Identifies the end user in Respan analytics. |
| thread_identifier | string | Groups related messages into a conversation. |
| trace_group_identifier | string | Groups related traces into a workflow group. |
| customer_name | string | Customer display name. |
| customer_email | string | Customer email. |
| metadata | Record<string, string> | Custom key-value pairs attached to spans. |
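
If you attach the same attributes on many calls, it can help to assemble the experimental_telemetry object in one place. The helper below is a hypothetical sketch for this doc (buildTelemetry is not part of the SDK); it only combines the attribute names from the table above:

```typescript
// Illustrative helper (not an SDK export) that builds the
// experimental_telemetry option from the Respan attributes listed above.
interface UserContext {
  id: string;
  name?: string;
  email?: string;
  threadId?: string;
}

function buildTelemetry(user: UserContext, extra: Record<string, string> = {}) {
  return {
    isEnabled: true,
    metadata: {
      customer_identifier: user.id,
      ...(user.name ? { customer_name: user.name } : {}),
      ...(user.email ? { customer_email: user.email } : {}),
      ...(user.threadId ? { thread_identifier: user.threadId } : {}),
      ...extra, // arbitrary key-value pairs become span metadata
    },
  };
}

// Usage: pass the result to any AI SDK call, e.g.
// await generateText({ model, prompt, experimental_telemetry: buildTelemetry(user) });
const t = buildTelemetry({ id: "user-123", threadId: "thread-abc" }, { plan: "pro" });
console.log(t.metadata.customer_identifier);
```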

Decorators (optional)

Decorators are not required. All generateText, streamText, tool calls, and agent steps are auto-traced by the instrumentation. Use withWorkflow and withTask to add structure when you want to group AI calls into named workflows with nested tasks.

import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";
import { Respan, withWorkflow, withTask, propagateAttributes } from "@respan/respan";
import { VercelAIInstrumentor } from "@respan/instrumentation-vercel";

const respan = new Respan({
  apiKey: process.env.RESPAN_API_KEY,
  instrumentations: [new VercelAIInstrumentor()],
});
await respan.initialize();

const provider = createOpenAI({
  apiKey: process.env.RESPAN_API_KEY!,
  baseURL: "https://api.respan.ai/api",
});

await propagateAttributes(
  { customer_identifier: "user-123" },
  () =>
    withWorkflow({ name: "joke_pipeline" }, async () => {
      const intent = await withTask({ name: "classify_intent" }, () =>
        generateText({
          model: provider("gpt-4o-mini"),
          prompt: 'Classify this intent in one word: "Tell me a joke"',
          experimental_telemetry: { isEnabled: true },
        })
      );

      const joke = await withTask({ name: "generate_joke" }, () =>
        generateText({
          model: provider("gpt-4o-mini"),
          prompt: `The intent is "${intent.text}". Tell a short joke.`,
          experimental_telemetry: { isEnabled: true },
        })
      );

      console.log(joke.text);
    })
);

await respan.flush();

Examples

Streaming with tools

import { streamText, tool } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { z } from "zod";

const provider = createOpenAI({
  apiKey: process.env.RESPAN_API_KEY!,
  baseURL: "https://api.respan.ai/api",
});

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: provider("gpt-4o-mini"),
    messages,
    tools: {
      getWeather: tool({
        description: "Get weather for a city",
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => `${city}: sunny, 72°F`,
      }),
    },
    maxSteps: 5,
    experimental_telemetry: {
      isEnabled: true,
      metadata: { customer_identifier: "user-123" },
    },
  });

  return result.toTextStreamResponse();
}

Multi-step workflow

import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { withWorkflow, withTask, withTool, propagateAttributes } from "@respan/respan";

const provider = createOpenAI({
  apiKey: process.env.RESPAN_API_KEY!,
  baseURL: "https://api.respan.ai/api",
});

export async function POST(req: Request) {
  const { message } = await req.json();

  return propagateAttributes(
    { customer_identifier: "user-123", thread_identifier: `thread_${Date.now()}` },
    () =>
      withWorkflow({ name: "support_chatbot" }, async () => {
        // Step 1: Classify intent
        const intentResult = await withTask({ name: "classify_intent" }, () =>
          generateText({
            model: provider("gpt-4o-mini"),
            prompt: `Classify: "${message}"`,
            experimental_telemetry: { isEnabled: true },
          })
        );

        // Step 2: Execute tool
        const toolResult = await withTool({ name: "lookup" }, async () => {
          return { answer: "Found the answer" };
        });

        // Step 3: Generate response
        const response = await withTask({ name: "generate_response" }, () =>
          generateText({
            model: provider("gpt-4o-mini"),
            prompt: `Intent: ${intentResult.text}. Data: ${JSON.stringify(toolResult)}. Respond helpfully.`,
            experimental_telemetry: { isEnabled: true },
          })
        );

        return Response.json({ response: response.text });
      })
  );
}

Gateway features

The features below require the Gateway or Both setup from Step 4.

Switch models

Change the model parameter to use 250+ models from different providers through the same gateway:

import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

const provider = createOpenAI({
  apiKey: process.env.RESPAN_API_KEY!,
  baseURL: "https://api.respan.ai/api",
});

// OpenAI
const result1 = await generateText({ model: provider("gpt-4o-mini"), prompt: "Hello" });

// Anthropic (via gateway)
const result2 = await generateText({ model: provider("claude-sonnet-4-5-20250929"), prompt: "Hello" });

// DeepSeek (via gateway)
const result3 = await generateText({ model: provider("deepseek/deepseek-chat"), prompt: "Hello" });

See the full model list.
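
One way to keep call sites provider-agnostic is to centralize the gateway model ids in a single map. This is an illustrative sketch, not an SDK feature; the ids are the ones used in the example above, and the full set is on the model list:

```typescript
// Illustrative sketch: keep gateway model ids in one map so call sites
// switch providers by name. Ids taken from the examples in this doc;
// consult the Respan model list for the full set.
const GATEWAY_MODELS = {
  openai: "gpt-4o-mini",
  anthropic: "claude-sonnet-4-5-20250929",
  deepseek: "deepseek/deepseek-chat",
} as const;

type GatewayProvider = keyof typeof GATEWAY_MODELS;

function gatewayModel(name: GatewayProvider): string {
  return GATEWAY_MODELS[name];
}

// e.g. await generateText({ model: provider(gatewayModel("anthropic")), prompt: "Hello" });
console.log(gatewayModel("deepseek"));
```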

Troubleshooting

If traces don't appear, check the following:

  1. Verify experimental_telemetry: { isEnabled: true } is set on every AI SDK call
  2. Check that instrumentation.ts is in your project root (same level as package.json)
  3. Ensure RESPAN_API_KEY is set in your environment
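
The third check above can be automated with a small preflight helper. This is an illustrative sketch for this doc (missingEnvVars is not part of the SDK):

```typescript
// Hypothetical preflight check for item 3 above: report which required
// environment variables are missing or blank before starting the app.
function missingEnvVars(
  required: string[],
  env: Record<string, string | undefined> = process.env
): string[] {
  return required.filter((name) => !env[name] || env[name]!.trim() === "");
}

const missing = missingEnvVars(["RESPAN_API_KEY", "OPENAI_API_KEY"]);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(", ")}`);
}
```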

For streaming routes (streamText), set maxDuration in your route handler:

export const maxDuration = 30;

If Next.js warns that it inferred the wrong workspace root (common in monorepos), set the root explicitly in next.config.ts:

import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  turbopack: { root: __dirname },
};

export default nextConfig;