Moda is a monitoring and reliability platform purpose-built for AI agents, positioned as "Datadog for agent workflows." Part of YC W2026, it was founded by Mohammad Al-Rasheed and Pranav Bedi, both University of Waterloo dropouts with AI agent production experience at Shopify, Notion, and Clio.
In production, AI agents fail silently: tool calls error or time out, agents claim completed actions without executing them, prompt injections cause data leakage, and long conversations hide the real failure point. Traditional APM tools miss these behavioral failures entirely. Moda detects hallucinations, tool misuse, dropped conversations, forgotten context, and user frustration signals.
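One of the failure modes above, an agent claiming a completed action that never executed, can be caught by cross-checking the agent's claimed actions against the actual tool-call log. The sketch below is purely illustrative (the function and record shapes are assumptions, not Moda's actual API):

```python
# Minimal sketch of one behavioral check: flag actions the agent claims
# to have completed that have no matching successful tool call.
# All names and record shapes here are illustrative, not Moda's API.

def find_unexecuted_claims(claimed_actions, executed_calls):
    """Return claimed actions with no successful tool call backing them."""
    executed = {call["tool"] for call in executed_calls
                if call.get("status") == "ok"}
    return [action for action in claimed_actions if action not in executed]

# Example turn: the agent tells the user it sent an email and created a
# ticket, but the email tool call silently timed out.
claimed = ["send_email", "create_ticket"]
calls = [
    {"tool": "create_ticket", "status": "ok"},
    {"tool": "send_email", "status": "timeout"},  # silent failure
]

print(find_unexecuted_claims(claimed, calls))  # -> ['send_email']
```

A traditional APM tool would record the timeout as a single slow request; the behavioral signal is the mismatch between what the agent *said* and what actually ran.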
Teams define custom monitoring criteria in plain language (e.g., "Flag when the agent promises a timeline it cannot verify") without writing code. The platform includes real-time alerting via Slack and webhooks, agent replay for stepping through and editing past conversations, batch testing against known failure patterns, and built-in security monitoring for prompt injection, jailbreak attempts, and RAG poisoning.
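The flow from a plain-language criterion to a webhook alert can be sketched as below. The rule evaluator here is a keyword stub standing in for the LLM-based judgment a real system would use; every name in it is hypothetical and not Moda's actual API:

```python
# Illustrative sketch: a natural-language rule feeding a Slack-style
# webhook alert. violates_rule() is a keyword stub; a production system
# would ask an LLM whether the message matches the rule. Hypothetical
# names throughout -- not Moda's API.

RULE = "Flag when the agent promises a timeline it cannot verify"

def violates_rule(agent_message: str) -> bool:
    # Stub evaluator for the plain-language rule above.
    promise_phrases = ("within 24 hours", "by tomorrow", "guaranteed by")
    return any(p in agent_message.lower() for p in promise_phrases)

def build_alert(agent_message: str, conversation_id: str) -> dict:
    # Slack incoming webhooks accept a top-level "text" field.
    return {
        "text": (f"Rule violated: {RULE}\n"
                 f"Conversation: {conversation_id}\n"
                 f"Agent said: {agent_message!r}")
    }

msg = "Your refund will arrive within 24 hours."
if violates_rule(msg):
    payload = build_alert(msg, conversation_id="conv_123")
    # In production this payload would be POSTed to the configured
    # Slack or webhook URL in real time.
    print(payload["text"].splitlines()[0])
```

The point of the design is that the rule itself stays human-readable: non-engineers can author and revise criteria without touching evaluation code.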
Moda's primary users are teams running conversational AI agents in production.
Moda monitors AI agent behavior while Respan monitors the underlying LLM calls. Together they provide both behavioral observability and infrastructure-level monitoring.
Last verified: March 27, 2026