Promptfoo
Open-source CLI and library for testing and evaluating LLM prompts.
About Promptfoo
"Test your LLM app before it breaks in production"
Promptfoo is an open-source framework for evaluating, testing, and red-teaming LLM applications. It helps AI developers systematically measure prompt quality, detect regressions, and identify safety vulnerabilities before deployment. It supports automated testing across multiple LLM providers simultaneously, side-by-side comparison of model versions, and adversarial probing for jailbreaks and harmful outputs. AI engineering teams use Promptfoo as the CI/CD layer for prompt engineering, ensuring that every change to a prompt or model is rigorously evaluated before it reaches production users.
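Evaluations like those described above are driven by a declarative config file. The sketch below is a minimal, illustrative `promptfooconfig.yaml`; the provider IDs, ticket text, and assertion values are assumptions chosen for the example, not part of this listing:

```yaml
# promptfooconfig.yaml — minimal evaluation sketch (values are illustrative)
prompts:
  - "Summarize the following support ticket in one sentence: {{ticket}}"

# Compare two providers side by side (assumes these models are available to you)
providers:
  - openai:gpt-4o-mini
  - anthropic:messages:claude-3-5-haiku-20241022

tests:
  - vars:
      ticket: "My invoice from March was charged twice."
    assert:
      - type: contains
        value: "invoice"
      - type: llm-rubric
        value: "A single, accurate summary sentence"
```

Running `promptfoo eval` in the same directory executes every prompt/provider/test combination and reports per-assertion pass/fail results; `promptfoo view` opens the local results viewer.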
Similar to Promptfoo
Opik
Open-source LLM evaluation and observability platform by Comet.
BentoML
Open-source platform for AI model deployment
SambaNova Cloud
Ultra-fast inference for large frontier AI models on custom dataflow processors
Replicate
Run AI models in the cloud via API
Firecrawl
Turn any website into clean data for AI applications
Aider in Browser
Aider AI coding assistant as a web application
