SDKs

The Langfuse SDKs are the recommended way to integrate with Langfuse.

Exception: if you use Langchain, use the Langchain integration for automated tracing of your chains and agents.

Properties:

  • Fully async requests; using Langfuse adds almost no latency
  • Accurate latency tracking using synchronous timestamps
  • IDs available for downstream use
  • Great DX when nesting observations
  • Cannot break your application; all errors are caught and logged
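The async, fail-safe behavior can be pictured as a small batching queue: calls record a synchronous timestamp and return immediately, while a later flush performs the network I/O with all errors caught. A toy sketch of this pattern (illustrative only, not the SDK's actual internals):

```typescript
// Toy sketch of an async, fail-safe event queue (illustrative only,
// not the real Langfuse SDK implementation).
type QueuedEvent = { name: string; timestamp: Date };

class EventQueue {
  private queue: QueuedEvent[] = [];

  // Enqueue returns immediately; the caller is never blocked on network I/O.
  enqueue(name: string): void {
    // The timestamp is taken synchronously, so latency tracking stays
    // accurate even though the actual upload happens later.
    this.queue.push({ name, timestamp: new Date() });
  }

  // Flush sends all buffered events; errors are caught and logged so they
  // cannot break the host application.
  async flush(send: (batch: QueuedEvent[]) => Promise<void>): Promise<void> {
    const batch = this.queue.splice(0);
    try {
      await send(batch);
    } catch (err) {
      console.error("flush failed, events dropped:", err);
    }
  }
}
```

The key design choice this illustrates: instrumentation calls are decoupled from network requests, so a slow or failing ingestion endpoint cannot add latency to, or crash, the instrumented application.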

JS/TS

GitHub repository: langfuse/langfuse-js (npm: langfuse, langfuse-node)
npm install langfuse
 
# Node.js < 18
npm install langfuse-node
  • Fully typed
  • Edge-ready, e.g., Vercel, Cloudflare, Deno
  • Works client-side to report user feedback (with only the public_key)

→ For more information, see JS/TS (Node.js, Edge) and JS/TS (Web) docs.

Python

GitHub repository: langfuse/langfuse-python (PyPI: langfuse)
pip install langfuse
  • Uses Pydantic for typing and validation
  • Langchain callback handler

→ For more information, see Python docs.

Example

This example implementation of the JS/TS and Web SDKs illustrates how to instrument an application and add scores from the frontend (e.g., user feedback).

An integration with your application might look very different depending on the backend logic and the scores you want to capture. In case of questions, join the Discord to discuss your use case.

1. Backend tracing

Monitoring LLM applications requires capturing the context. This can be the full user session of a chat application, the retrieval results of a QA chain, or the full execution trace of an agent. Langfuse was designed to capture this full context, to be flexibly extensible, and to be incrementally adoptable.
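The hierarchy Langfuse captures can be sketched as nested data: a trace holds observations (spans, events, generations), and spans can nest further observations. A simplified model of one chat turn (types are illustrative, not the SDK's):

```typescript
// Simplified, illustrative model of a Langfuse trace hierarchy.
type Observation =
  | { type: "event"; name: string; output?: unknown }
  | { type: "generation"; name: string; prompt: unknown; completion: string }
  | { type: "span"; name: string; input?: unknown; children: Observation[] };

type Trace = { id: string; userId?: string; observations: Observation[] };

// One chat turn: a span containing a retrieval event and an LLM generation.
const turn: Trace = {
  id: "conversation_123",
  observations: [
    {
      type: "span",
      name: "single-response",
      children: [
        { type: "event", name: "context-retrieved", output: ["doc-1"] },
        { type: "generation", name: "chat-completion", prompt: [], completion: "…" },
      ],
    },
  ],
};
```

Incremental adoption falls out of this shape: you can start by reporting only top-level generations and later nest them under spans and traces as you instrument more of the application.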

Example: Chat application

Chat conversation with repeated user interactions and LLM completions.

Full reference integration: route.ts in Vercel ai-chatbot (TypeScript, Next.js, streaming responses from the edge).

Integration

route.ts
import { Langfuse } from "langfuse";
// more imports
 
const langfuse = new Langfuse({ publicKey, secretKey });
 
export async function POST(req: Request) {
  const { messages, conversationId, userId } = await req.json();
 
  const langfuseConversation = langfuse.trace({
    id: `conversation_${conversationId}`, // creates/upserts trace
    userId,
  });
 
  const execution = langfuseConversation.span({
    name: "single-response",
    input: messages.slice(-1),
  });
 
  const additionalContext = await getContext(messages);
  execution.event({
    name: "context-retrieved",
    output: additionalContext,
  });
 
  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages,
  });
 
  const stream = OpenAIStream(res, {
    async onCompletion(completion) {
      execution.generation({
        name: "chat-completion",
        prompt: messages,
        completion,
      });
      await langfuse.flush();
    },
  });
 
  return new StreamingTextResponse(stream);
}
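Note how the trace id is derived deterministically from the conversation id: every POST for the same conversation upserts into the same trace, so successive responses appear as sibling spans. A minimal sketch of that mapping (helper name hypothetical):

```typescript
// Hypothetical helper: derive a stable Langfuse trace id from a
// conversation id, so repeated requests upsert into one trace.
const traceIdForConversation = (conversationId: string): string =>
  `conversation_${conversationId}`;

// Requests in the same conversation target the same trace ("conversation_abc"),
// while different conversations get distinct traces.
```

Any stable, collision-free mapping works; the point is that the backend does not need to persist a Langfuse-specific id alongside the conversation.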

Trace

https://cloud.langfuse.com/...
Trace
  Id: conversation_<conversation_id>
  Span: single-response
    Input: What do users like about Langfuse?
    Event: context-retrieved
      Output: <output>
    Generation: chat-completion
      prompt: [ ... ]
      completion: <completion>
    Output: That it makes chat interactions easily observable
  Span: single-response
    Input: What features of Langfuse are helpful for developers of chat applications?
    Event: context-retrieved
      Output: <output>
    Generation: chat-completion
      prompt: [ ... ]
      completion: <completion>
    Output: (1) Grouping of executions into traces (sessions), (2) nested tracking of intermediary steps that help with debugging, (3) SDKs for simple integration with their application.

2. Add scores (via user feedback)

In this example, we add a score based on user feedback in the frontend using the Langfuse Web SDK in a React application. The score is associated with the trace via the traceId.

User feedback on individual responses

Chat application

Integration

UserFeedbackComponent.tsx
import { LangfuseWeb } from "langfuse";
 
export function UserFeedbackComponent(props: { traceId: string }) {
  const langfuseWeb = new LangfuseWeb({
    publicKey: env.NEXT_PUBLIC_LANGFUSE_PUBLIC_KEY,
  });
 
  const handleUserFeedback = async (value: number) =>
    await langfuseWeb.score({
      traceId: props.traceId,
      name: "user_feedback",
      value,
    });
 
  return (
    <div>
      <button onClick={() => handleUserFeedback(1)}>👍</button>
      <button onClick={() => handleUserFeedback(0)}>👎</button>
    </div>
  );
}
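The component above encodes 👍 as 1 and 👎 as 0, so the average user_feedback value per trace reads directly as an approval rate. The payload passed to langfuseWeb.score can be sketched as a plain object (builder name hypothetical):

```typescript
// Hypothetical builder for the payload passed to langfuseWeb.score().
type ScorePayload = { traceId: string; name: string; value: number };

const userFeedbackScore = (traceId: string, positive: boolean): ScorePayload => ({
  traceId,
  name: "user_feedback",
  value: positive ? 1 : 0, // 👍 → 1, 👎 → 0
});
```

Because only the public key is used client-side, exposing this call in the browser does not leak credentials that could read or modify other project data.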

