# Metrics
## Set up Respan

1. **Sign up:** Create an account at platform.respan.ai.
2. **Create an API key:** Generate one on the API keys page.
3. **Add credits or a provider key:** Add credits on the Credits page, or connect your own provider key on the Integrations page.
## Use AI
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
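Most MCP-capable coding tools register servers through a JSON config file. A hedged sketch of what the entry might look like; the server name, the config shape, and especially the URL are placeholders here, so check your tool's MCP documentation and the Respan docs for the real endpoint:

```json
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}
```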
## Why observability matters

- **Performance monitoring** tracks response times and model performance so regressions surface early.
- **Cost management** identifies expensive prompts and optimizes spending across LLM providers.
- **Quality assurance** detects issues and unexpected outputs before they reach users.
- **Debugging** lets you pinpoint problems quickly by examining complete sessions.
Without proper observability, LLM applications become expensive black boxes that are impossible to systematically improve.
## What are LLM usage metrics?
LLM usage metrics give you quantitative monitoring for your AI applications: track key indicators like total requests, token usage, errors, latency, and costs.
Break down analytics by model, user, API key, and prompt for complete visibility into your operations.
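The breakdowns described above amount to grouping request records by a dimension (model, user, API key, or prompt) and summing their counters. A minimal illustration with hypothetical record fields — this is not Respan's actual data schema, just a sketch of the aggregation idea:

```python
from collections import defaultdict

# Hypothetical request log records; field names are illustrative only.
records = [
    {"model": "gpt-4o", "user": "u1", "tokens": 1200, "cost": 0.012, "latency_ms": 840, "error": False},
    {"model": "gpt-4o", "user": "u2", "tokens": 300, "cost": 0.003, "latency_ms": 410, "error": True},
    {"model": "claude-sonnet", "user": "u1", "tokens": 900, "cost": 0.009, "latency_ms": 620, "error": False},
]


def breakdown_by(rows, dimension):
    """Aggregate requests, tokens, cost, errors, and mean latency per group."""
    groups = defaultdict(
        lambda: {"requests": 0, "tokens": 0, "cost": 0.0, "errors": 0, "latencies": []}
    )
    for r in rows:
        g = groups[r[dimension]]
        g["requests"] += 1
        g["tokens"] += r["tokens"]
        g["cost"] += r["cost"]
        g["errors"] += int(r["error"])
        g["latencies"].append(r["latency_ms"])
    # Replace the raw latency list with its mean for reporting.
    return {
        name: {**g, "avg_latency_ms": sum(g.pop("latencies")) / g["requests"]}
        for name, g in groups.items()
    }


per_model = breakdown_by(records, "model")
```

The same function handles any dimension: `breakdown_by(records, "user")` yields the per-user view.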
## Need help?

Join our Discord and we'll help you find the best fit for your use case.