Langfuse

Open-source LLM observability platform for debugging AI applications.

About Langfuse

"Debug, trace, and improve your LLM app"

Langfuse is an open-source LLM observability and evaluation platform that helps AI engineering teams trace, monitor, evaluate, and debug their language model applications in production. It captures detailed traces of every LLM call, prompt version, and user interaction, giving teams the visibility needed to understand latency, cost, quality regressions, and user satisfaction across complex AI pipelines. AI product teams at startups and enterprises use Langfuse to systematically improve their LLM applications after deployment, catching quality issues before they reach users and informing data-driven improvements to prompts and models.
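To make the idea of "tracing an LLM call" concrete, the sketch below shows the kind of record such a platform captures per call: timing, token counts, and an estimated cost, grouped under a trace for one user session. This is a minimal illustration in plain Python, not the Langfuse SDK; every name here (`Trace`, `Span`, `record_call`, `fake_llm`) and the per-token price are illustrative inventions.

```python
# Illustrative sketch of per-call LLM tracing, NOT the Langfuse SDK.
# All class and function names here are invented for this example.
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """One LLM call: inputs, outputs, and the metrics captured for it."""
    name: str
    model: str
    prompt: str
    output: str
    latency_ms: float
    input_tokens: int
    output_tokens: int
    cost_usd: float

@dataclass
class Trace:
    """Groups the spans belonging to one user interaction."""
    user_id: str
    spans: list = field(default_factory=list)

    def record_call(self, name, model, prompt, llm_fn, usd_per_1k_tokens=0.002):
        """Invoke llm_fn(prompt), timing it and logging tokens and cost."""
        start = time.perf_counter()
        output = llm_fn(prompt)
        latency_ms = (time.perf_counter() - start) * 1000
        in_tok = len(prompt.split())    # crude whitespace token proxy
        out_tok = len(output.split())
        cost = (in_tok + out_tok) / 1000 * usd_per_1k_tokens  # assumed flat rate
        self.spans.append(Span(name, model, prompt, output,
                               latency_ms, in_tok, out_tok, cost))
        return output

def fake_llm(prompt):
    """Stand-in for a real model client, so the sketch runs offline."""
    return "stub answer to: " + prompt

trace = Trace(user_id="u-42")
trace.record_call("summarize", "gpt-stub", "Summarize the report", fake_llm)
print(len(trace.spans), trace.spans[0].output_tokens)
```

In a real deployment the SDK records these spans automatically around your model client and ships them to a backend for the dashboards described above; the point of the sketch is only what gets measured per call.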

Key Features

  • LLM application tracing
  • Prompt version management
  • Cost and latency monitoring
  • Evaluation pipelines
  • User feedback collection
  • Open-source and self-hostable

Best For

  • Debugging LLM application issues
  • Tracking prompt performance over time
  • Monitoring AI costs in production
  • A/B testing prompts

Tool Details

Pricing: Freemium (free plan available)
Verified Tool: Reviewed by our team
Last verified: Feb 18, 2026