# Vercel AI SDK

The AI SDK (by Vercel) is a TypeScript toolkit for building AI-powered applications with Next.js, React, and other frameworks. It provides unified APIs for text generation, streaming, tool use, and structured outputs across multiple LLM providers. Respan gives you full observability over every LLM call, tool execution, and multi-step workflow — and gateway routing to 250+ models.

Create an account at platform.respan.ai and grab an API key. If you plan to use the gateway, also add credits or a provider key.

Run `npx @respan/cli setup` to set up with your coding agent.

## Setup

### 1. Install packages

```shell
npm install ai @ai-sdk/openai @respan/respan @respan/instrumentation-vercel
```
### 2. Set environment variables

```shell
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
```

`OPENAI_API_KEY` is used for LLM requests. `RESPAN_API_KEY` is used to export traces to Respan.

### 3. Initialize Respan

Create `instrumentation.ts` in your project root (same level as `package.json`). Next.js calls `register()` automatically at startup.

```typescript
// instrumentation.ts
import { Respan } from "@respan/respan";
import { VercelAIInstrumentor } from "@respan/instrumentation-vercel";

export async function register() {
  const respan = new Respan({
    apiKey: process.env.RESPAN_API_KEY,
    instrumentations: [new VercelAIInstrumentor()],
  });
  await respan.initialize();
}
```

Then add `serverExternalPackages` to `next.config.ts`:

```typescript
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  serverExternalPackages: [
    "@respan/respan",
    "@respan/instrumentation-vercel",
  ],
};

export default nextConfig;
```
### 4. Initialize and run

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4.1-nano"),
  prompt: "Tell me a joke about AI",
  experimental_telemetry: {
    isEnabled: true,
    metadata: { customer_identifier: "user-123" },
  },
});

console.log(result.text);
```
### 5. View your trace

Open the Traces page to see your AI calls with full input/output, token usage, tool calls, and cost.

## Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string \| undefined` | `RESPAN_API_KEY` env var | Respan API key. |
| `baseURL` | `string \| undefined` | `"https://api.respan.ai"` | API base URL. |
| `instrumentations` | `RespanInstrumentation[]` | `[]` | Plugin instrumentations to activate. |
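Taken together, the options above map onto the constructor like this (a config sketch; the values shown are illustrative, and omitting a field falls back to the default listed in the table):

```typescript
import { Respan } from "@respan/respan";
import { VercelAIInstrumentor } from "@respan/instrumentation-vercel";

// Illustrative values — omit apiKey to read the RESPAN_API_KEY env var,
// and omit baseURL to use the default "https://api.respan.ai".
const respan = new Respan({
  apiKey: process.env.RESPAN_API_KEY,
  baseURL: "https://api.respan.ai",
  instrumentations: [new VercelAIInstrumentor()],
});
await respan.initialize();
```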

## Attributes

### With `experimental_telemetry` metadata

Pass metadata directly on each AI SDK call. The instrumentation maps these to Respan fields automatically.

```typescript
const result = await generateText({
  model: openai("gpt-4.1-nano"),
  prompt: "Hello",
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      customer_identifier: "user-123",
      thread_identifier: "thread-abc",
      trace_group_identifier: "onboarding-flow",
    },
  },
});
```

### With `propagateAttributes`

Override attributes per request using a context scope. All AI SDK calls within the scope inherit these attributes.

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Respan } from "@respan/respan";
import { VercelAIInstrumentor } from "@respan/instrumentation-vercel";

const respan = new Respan({
  instrumentations: [new VercelAIInstrumentor()],
});
await respan.initialize();

async function handleRequest(userId: string, message: string) {
  return respan.propagateAttributes(
    {
      customer_identifier: userId,
      thread_identifier: "conv_abc_123",
      metadata: { plan: "pro" },
    },
    async () => {
      const result = await generateText({
        model: openai("gpt-4.1-nano"),
        prompt: message,
        experimental_telemetry: { isEnabled: true },
      });
      return result.text;
    }
  );
}
```

| Attribute | Type | Description |
| --- | --- | --- |
| `customer_identifier` | `string` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `string` | Groups related messages into a conversation. |
| `metadata` | `Record<string, string>` | Custom key-value pairs attached to spans. |

## Decorators (optional)

Decorators are not required: the instrumentor auto-traces all `generateText` and `streamText` calls, tool executions, and agent steps. Use `withWorkflow` and `withTask` to add structure when you want to group AI calls into named workflows with nested tasks.

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Respan, withWorkflow, withTask } from "@respan/respan";
import { VercelAIInstrumentor } from "@respan/instrumentation-vercel";

const respan = new Respan({
  apiKey: process.env.RESPAN_API_KEY,
  instrumentations: [new VercelAIInstrumentor()],
});
await respan.initialize();

await withWorkflow({ name: "joke_pipeline" }, async () => {
  const intent = await withTask({ name: "classify_intent" }, () =>
    generateText({
      model: openai("gpt-4.1-nano"),
      prompt: 'Classify this intent in one word: "Tell me a joke"',
      experimental_telemetry: { isEnabled: true },
    })
  );

  const joke = await withTask({ name: "generate_joke" }, () =>
    generateText({
      model: openai("gpt-4.1-nano"),
      prompt: `The intent is "${intent.text}". Tell a short joke.`,
      experimental_telemetry: { isEnabled: true },
    })
  );

  console.log(joke.text);
});

await respan.flush();
```

## Examples

### Streaming with tools

```typescript
import { streamText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = streamText({
  model: openai("gpt-4.1-nano"),
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: {
    getWeather: tool({
      description: "Get weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => `${city}: sunny, 72F`,
    }),
  },
  maxSteps: 5,
  experimental_telemetry: { isEnabled: true },
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```