Patronus AI is a San Francisco startup founded by former Meta machine learning researchers Anand Kannappan and Rebecca Qian, focused on automatically detecting costly and dangerous LLM mistakes at scale. The company raised USD 17 million in Series A funding led by Notable Capital, bringing total funding to USD 20 million. Patronus AI developed a first-of-its-kind automated evaluation platform that identifies errors such as hallucinations, copyright infringement, and safety violations in LLM outputs. Pricing is pay-as-you-go, starting at USD 10-20 per 1,000 API calls, with USD 5 in free credits for new users. Trusted by companies including OpenAI, HP, Pearson, AngelList, and Etsy, Patronus AI has processed millions of requests and caught hundreds of thousands of hallucinations. Customers praise its research-first approach and evaluation performance reported as 20% better than competing methods, though as a startup-stage company, many of its processes are still being built.
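The quoted rates work out to roughly USD 0.01-0.02 per call, and the USD 5 free credit covers about 250-500 calls depending on the rate. A minimal sketch of that arithmetic (only the prices and credit amount come from the description above; the function names are illustrative):

```python
def estimate_cost(num_calls: int, price_per_1000: float) -> float:
    """Estimated spend in USD for a batch of evaluation API calls."""
    return num_calls * price_per_1000 / 1000

def free_calls(credit: float, price_per_1000: float) -> int:
    """Number of calls the free credit covers at a given rate."""
    return int(credit * 1000 / price_per_1000)

# At the quoted USD 10-20 per 1,000 API calls:
print(estimate_cost(1000, 10.0), estimate_cost(1000, 20.0))  # 10.0 20.0
# USD 5 in free credits covers 500 calls at the low rate, 250 at the high:
print(free_calls(5.0, 10.0), free_calls(5.0, 20.0))  # 500 250
```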
Free trial available
AI teams that need rigorous, automated quality evaluation and safety testing
Integrate Patronus AI's evaluation platform with Respan to automatically detect hallucinations, copyright violations, and safety issues in your LLM outputs. Add comprehensive quality gates to AI workflows with research-backed evaluation. With Respan orchestrating Patronus AI alongside your LLM providers, you can ship reliable, safe AI applications.
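The pattern described above is a quality gate: each LLM output passes through evaluation checks before it reaches the user. A minimal sketch with a stubbed evaluator; the `evaluate` function, check names, and threshold are illustrative assumptions, not the actual Patronus AI or Respan API:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    check: str      # e.g. "hallucination", "safety"
    passed: bool
    score: float    # 0.0-1.0, higher is better

def evaluate(output: str) -> list[EvalResult]:
    """Stub standing in for a call to an evaluation service.
    Here we simply flag outputs containing an overconfident claim."""
    hallucinated = "guaranteed" in output.lower()
    return [
        EvalResult("hallucination", not hallucinated, 0.2 if hallucinated else 0.95),
        EvalResult("safety", True, 0.99),
    ]

def quality_gate(output: str, threshold: float = 0.7) -> str:
    """Block any output that fails a check or scores below threshold."""
    results = evaluate(output)
    if all(r.passed and r.score >= threshold for r in results):
        return output
    failed = [r.check for r in results if not r.passed or r.score < threshold]
    return f"[blocked: failed checks: {', '.join(failed)}]"

print(quality_gate("Our product is guaranteed to cure everything."))
print(quality_gate("Our product may help with some symptoms."))
```

In a real deployment, `evaluate` would call the evaluation service over HTTP, and the gate would sit between the LLM provider and the application, which is the orchestration role the paragraph above assigns to Respan.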
Top alternatives to Patronus AI from companies in the Observability, Prompts & Evals category.
Companies from adjacent layers in the AI stack that work well with Patronus AI.
Last verified: March 10, 2026