# Vertex AI
Trace Vertex AI LLM calls with Respan.
## Setup
Set your environment variables:
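The exact variable names depend on your account setup; a typical configuration (assuming a `RESPAN_API_KEY` variable for Respan, plus the standard Google Cloud variables read by the Vertex AI SDK) looks like:

```shell
# Respan API key (hypothetical name -- check your Respan dashboard for the exact variable)
export RESPAN_API_KEY="your-respan-api-key"

# Standard Google Cloud settings used by the Vertex AI SDK
export GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```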
The instrumentation auto-patches Vertex AI SDK calls and sends traces to Respan via OpenTelemetry.
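Conceptually, auto-patching means wrapping the SDK's request methods so every call is recorded as a span before the result is returned to your code. A minimal sketch of that pattern (illustrative only, not Respan's actual implementation; the function names here are stand-ins):

```python
import functools
import time

recorded_spans = []  # stand-in for an OpenTelemetry span exporter

def trace_llm_call(func):
    """Wrap an SDK method so each call is recorded as a span-like dict."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        recorded_spans.append({
            "name": func.__name__,
            "latency_s": time.perf_counter() - start,
            "output": result,
        })
        return result
    return wrapper

# Simulated SDK method standing in for a Vertex AI generate call
def generate_content(prompt):
    return f"echo: {prompt}"

# The "patch": replace the original method with the traced wrapper
generate_content = trace_llm_call(generate_content)

generate_content("hello")
print(recorded_spans[0]["name"])    # generate_content
print(recorded_spans[0]["output"])  # echo: hello
```

Because the wrapper preserves the original signature and return value, application code is unchanged; only the span recording is added.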
## What gets traced
- Model name and provider
- Prompt and completion tokens
- Input/output content
- Latency and cost
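As an illustration, the cost attribute reduces to simple arithmetic over the traced token counts and per-token rates. The rates and attribute names below are made up for the sketch; real Vertex AI pricing varies by model:

```python
# Hypothetical per-1K-token rates in USD; real Vertex AI pricing differs by model.
RATES = {"gemini-pro": {"prompt": 0.000125, "completion": 0.000375}}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Cost = tokens/1000 * per-1K rate, summed over prompt and completion."""
    r = RATES[model]
    return (prompt_tokens / 1000) * r["prompt"] + (completion_tokens / 1000) * r["completion"]

# Example span attributes for one traced call (names are illustrative)
span_attrs = {
    "llm.model": "gemini-pro",
    "llm.prompt_tokens": 2000,
    "llm.completion_tokens": 1000,
}
span_attrs["llm.cost_usd"] = estimate_cost(
    span_attrs["llm.model"],
    span_attrs["llm.prompt_tokens"],
    span_attrs["llm.completion_tokens"],
)
print(round(span_attrs["llm.cost_usd"], 6))  # 0.000625
```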
Traces appear in the Traces dashboard.
## Learn more
- OpenTelemetry integration - How Respan processes OTel spans
- Respan Python SDK - Full SDK reference