Vertex AI

Trace Vertex AI LLM calls with Respan.

Setup

$ pip install respan-ai openinference-instrumentation-vertexai

Set your environment variables:

$ export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
$ export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.respan.ai/api"
$ export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer $RESPAN_API_KEY"

Once enabled in your application, the instrumentation patches Vertex AI SDK calls and sends traces to Respan via OpenTelemetry.

What gets traced

  • Model name and provider
  • Prompt and completion tokens
  • Input/output content
  • Latency and cost

Traces appear in the Traces dashboard.

Learn more