Cascade builds custom evaluation infrastructure that makes AI agents reliable by learning from their real production behavior. Part of YC W2026, it was founded by Adam AlSayyad (CEO) and Haluk Cem Demirhan (CTO), both researchers from the Berkeley AI Research (BAIR) Lab — the same lab behind Databricks and Perplexity.
Most deployed AI agents remain static after launch, with teams manually adjusting prompts and inspecting logs without reliable ways to measure alignment. Cascade solves this by observing real production runs, training evaluator models that learn what "correct" looks like for a company's specific workflows, and converting those judgments into training signal for continuous improvement.
The platform targets the gap between general-purpose LLMs and enterprise-specific operational needs, helping organizations develop specialized models aligned to their unique data and processes. Cascade is already deployed in legal reasoning workflows and high-volume customer support. The AI guardrails market Cascade targets is projected to grow from $0.7B in 2024 into the billions of dollars by 2034.
Teams that want proprietary-model quality at lower cost
Cascade evaluates and improves AI agent behavior over time, while Respan provides real-time monitoring of the LLM calls driving those agents. Together they enable both continuous improvement and real-time observability.
Top companies in the Foundation Models category that you can use instead of Cascade.
Companies from adjacent layers in the AI stack that work well with Cascade.
Last verified: March 27, 2026