Langfuse
Open-source LLM observability platform for debugging AI applications.
About Langfuse
"Debug, trace, and improve your LLM app"
Langfuse is an open-source LLM observability and evaluation platform that helps AI engineering teams trace, monitor, evaluate, and debug their language model applications in production. It captures detailed traces of every LLM call, prompt version, and user interaction, providing the visibility needed to understand latency, cost, quality regressions, and user satisfaction across complex AI pipelines. AI product teams at startups and enterprises use Langfuse to improve their LLM applications after deployment, catching quality issues before they reach users and grounding changes to prompts and models in production data.
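To make the tracing workflow described above concrete, here is a minimal, hypothetical sketch of instrumenting one LLM call with the Langfuse Python SDK's trace/generation API (v2-style; newer SDK versions rename some methods). The keys, host, names, and the simulated model response are placeholders, not values from this page.

```python
from langfuse import Langfuse

# Placeholder credentials and host; real values come from your Langfuse project settings.
langfuse = Langfuse(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="https://cloud.langfuse.com",
)

# One trace per user interaction; generations and spans nest underneath it.
trace = langfuse.trace(name="support-chat", user_id="user-123")

# Record a single LLM call as a "generation" with its prompt and model.
generation = trace.generation(
    name="answer-question",
    model="gpt-4o",
    input=[{"role": "user", "content": "How do I reset my password?"}],
)

# ...the actual model call would go here; the output below is simulated.
answer = "You can reset your password from the account settings page."

# Close the generation with its output so latency and cost can be attributed to it.
generation.end(output=answer)

# Send buffered events to the Langfuse backend before the process exits.
langfuse.flush()
```

Traces captured this way then appear in the Langfuse UI, where latency, cost, and prompt versions can be compared across releases.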
Key Features
Best For
Official Links
Similar to Langfuse
Replicate
Run AI models in the cloud via API
Intercom Fin AI
Fin is Intercom's AI customer service agent built on GPT-4. It instantly resolves customer support questions by reading your help center content, support articles, and documentation — with human-quality answers.
Paradox
Paradox's Olivia is an AI recruiting assistant that handles candidate screening, scheduling, onboarding, and FAQ conversations via text and chat — automating the most repetitive parts of high-volume hiring.
Forethought AI
AI customer support platform with human-like resolution automation.
Langbase
Serverless AI platform for building and deploying LLM pipelines and agents at scale
Hugging Face
The GitHub of AI — models, datasets, and spaces
