Langfuse
Open-source LLM observability platform for debugging AI applications.
About Langfuse
"Debug, trace, and improve your LLM app"
Langfuse is an open-source LLM observability and evaluation platform that helps AI engineering teams trace, monitor, evaluate, and debug their language model applications in production. It captures detailed traces of every LLM call, prompt version, and user interaction — providing the visibility needed to understand latency, cost, quality regressions, and user satisfaction across complex AI pipelines. AI product teams at startups and enterprises use Langfuse to systematically improve their LLM applications post-deployment, catching quality issues before they impact users and informing data-driven improvements to prompts and models.
Key Features
- LLM application tracing
- Prompt version management
- Cost and latency monitoring
- Evaluation pipelines
- User feedback collection
- Open-source and self-hostable
Related Tools
- Replicate: Run AI models in the cloud via API
- Forethought AI: AI customer support platform with human-like resolution automation
- Langbase: Serverless AI platform for building and deploying LLM pipelines and agents at scale
- Hugging Face: The GitHub of AI, with models, datasets, and spaces
- Devin: The world's first fully autonomous AI software engineer
- Flowise: Open-source drag-and-drop AI workflow builder
