Tracing

LLM apps use increasingly complex abstractions (chains, agents with tools, advanced prompts). The nested traces in Langfuse help you understand what is happening and get to the root cause of problems.

Introduction

Langfuse objects:

  • Each backend execution is logged with a single trace.
  • Each trace can contain multiple observations to log the individual steps of the execution.
    • Observations are of different types:
      • Events are the basic building block. They are used to track discrete events in a trace.
      • Spans represent durations of units of work in a trace.
      • Generations are spans used to log generations of AI models. They contain additional attributes about the model, the prompt, and the completion. For generations, token usage is automatically calculated.
    • Observations can be nested.
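
The object hierarchy above can be sketched as a simple data model. This is illustrative only, not the actual SDK types; all class and field names here are assumptions made for the example:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class ObservationType(Enum):
    EVENT = "event"            # discrete point-in-time event
    SPAN = "span"              # unit of work with a duration
    GENERATION = "generation"  # model call with prompt/completion

@dataclass
class Observation:
    name: str
    type: ObservationType
    # Generations carry additional model attributes
    model: Optional[str] = None
    prompt: Optional[str] = None
    completion: Optional[str] = None
    # Observations can be nested arbitrarily
    children: list["Observation"] = field(default_factory=list)

@dataclass
class Trace:
    name: str
    observations: list[Observation] = field(default_factory=list)

# One backend execution -> one trace containing nested observations
trace = Trace(name="qa-request")
retrieval = Observation(name="retrieve-docs", type=ObservationType.SPAN)
retrieval.children.append(
    Observation(
        name="answer",
        type=ObservationType.GENERATION,
        model="gpt-4",
        prompt="What is tracing?",
        completion="Tracing records the steps of an execution...",
    )
)
trace.observations.append(retrieval)
```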

Follow the integration docs to send traces to Langfuse. You can use the Langfuse SDKs, the Langchain integration, or the API directly.

Example

Automatically traced with Langchain integration:

[Screenshot: agent trace]

Detect and fix problems

  1. Collect user feedback from the frontend
  2. Filter down to executions that had poor quality
  3. Use the debugging UI to get to the root cause of the problem
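
Step 2 can be sketched as filtering traces on a user-feedback score. The record shape and threshold below are hypothetical, purely to illustrate the idea; in practice this filtering happens in the Langfuse UI:

```python
# Hypothetical trace records with a user-feedback score
# (-1 = negative feedback, 1 = positive feedback)
traces = [
    {"id": "t1", "user_feedback": 1},
    {"id": "t2", "user_feedback": -1},
    {"id": "t3", "user_feedback": -1},
]

def poor_quality(traces, threshold=0):
    """Return the traces whose user feedback is below the threshold."""
    return [t for t in traces if t["user_feedback"] < threshold]

flagged = poor_quality(traces)
print([t["id"] for t in flagged])  # -> ['t2', 't3']
```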

Share via public link

You can share a trace with anyone via a public link. The link is read-only.

Example: https://cloud.langfuse.com/public/traces/lf.docs.conversation.u6Wl2hG

[Screenshot: share trace via public link]

Get trace URL in SDK

Sometimes it is useful to get the trace URL directly in the SDK, e.g. to add it to your logs or to look at a trace interactively when running experiments in notebooks.

# Get the URL from a trace object
trace_url = trace.get_trace_url()

# Get the URL from the Langchain callback handler
trace_url = handler.get_trace_url()
