SDKs
The Langfuse SDKs are the recommended way to integrate with Langfuse.
Exception: if you use Langchain, use the Langchain integration instead for automated tracing of your chains and agents.
Properties:
- Fully asynchronous requests; using Langfuse adds almost no latency
- Accurate latency tracking using synchronous timestamps
- IDs available for downstream use (see the sketch after this list)
- Great DX when nesting observations
- Cannot break your application; all errors are caught and logged
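For example, because IDs are generated client-side, a trace created with your own ID can be referenced downstream right away. A minimal sketch, assuming API keys are provided via environment variables (the conversation ID is illustrative):

```typescript
import { Langfuse } from "langfuse";

// Assumption: keys are read from environment variables.
const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
});

// Events are queued and sent asynchronously in the background; this call
// does not block on the network and cannot throw into application code.
const trace = langfuse.trace({ id: "conversation_123" }); // own id; omit to auto-generate

// The id is available immediately (no network round-trip) and can be handed
// downstream, e.g. to the frontend to attach user-feedback scores.
console.log(trace.id); // "conversation_123"
```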
JS/TS
```bash
npm install langfuse

# Node.js < 18
npm install langfuse-node
```
- Fully typed
- Edge-ready, e.g., Vercel, Cloudflare, Deno
- Works client-side to report user feedback (with only the public_key)
→ For more information, see JS/TS (Node.js, Edge) and JS/TS (Web) docs.
Python
```bash
pip install langfuse
```
- Uses Pydantic for typing and validation
- Langchain callback handler
→ For more information, see Python docs.
Example
This example uses the JS/TS SDK on the backend and the Web SDK on the frontend to illustrate how to instrument an application and add scores (e.g., user feedback) from the frontend.
An integration with your own application might look very different depending on your backend logic and the scores you want to capture. If you have questions, join the Discord to discuss your use case.
1. Backend tracing
Monitoring LLM applications requires context: the full user session of a chat application, the retrieval results of a QA chain, or the complete execution trace of an agent. Langfuse is designed to capture this full context while being flexibly extensible and incrementally adoptable.
Example: Chat application
Chat conversation with repeated user interactions and LLM completions.
Full reference integration: route.ts in the Vercel ai-chatbot repository (TypeScript, Next.js, streaming responses from the edge).
Integration
```typescript
import { Langfuse } from "langfuse";
// more imports

const langfuse = new Langfuse({ publicKey, secretKey });

export async function POST(req: Request) {
  const { messages, conversationId, userId } = await req.json();

  const langfuseConversation = langfuse.trace({
    id: `conversation_${conversationId}`, // creates/upserts trace; reusing the conversation id groups all responses in one trace
    userId,
  });

  const execution = langfuseConversation.span({
    name: "single-response",
    input: messages.slice(-1),
  });

  const additionalContext = await getContext(messages);
  execution.event({
    name: "context-retrieved",
    output: additionalContext,
  });

  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages,
  });

  const stream = OpenAIStream(res, {
    async onCompletion(completion) {
      execution.generation({
        name: "chat-completion",
        prompt: messages,
        completion,
      });
      // Ensure all queued events are sent before the edge function exits.
      await langfuse.flush();
    },
  });

  return new StreamingTextResponse(stream);
}
```
Trace
→ The resulting trace can be inspected in the Langfuse UI: https://cloud.langfuse.com/...
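For the frontend to attach scores in step 2, the backend has to expose the trace ID. A minimal sketch, assuming the ID is returned alongside the stream in a response header (the header name is illustrative, not part of the SDK):

```typescript
// Hypothetical: hand the trace id to the client via a response header so the
// frontend can pass it to LangfuseWeb for scoring (see step 2 below).
return new StreamingTextResponse(stream, {
  headers: { "X-Langfuse-Trace-Id": `conversation_${conversationId}` },
});
```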
2. Add scores (via user feedback)
In this example, we add a score based on user feedback in the frontend, using the Langfuse Web SDK in a React application. The score is associated with the trace via its traceId.
User feedback on individual responses
Integration
```tsx
import { LangfuseWeb } from "langfuse";

export function UserFeedbackComponent(props: { traceId: string }) {
  // The Web SDK only needs the public key and can therefore run client-side.
  const langfuseWeb = new LangfuseWeb({
    publicKey: env.NEXT_PUBLIC_LANGFUSE_PUBLIC_KEY,
  });

  const handleUserFeedback = async (value: number) =>
    await langfuseWeb.score({
      traceId: props.traceId,
      name: "user_feedback",
      value,
    });

  return (
    <div>
      <button onClick={() => handleUserFeedback(1)}>👍</button>
      <button onClick={() => handleUserFeedback(0)}>👎</button>
    </div>
  );
}
```
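The score value is an arbitrary number, here 1 for 👍 and 0 for 👎. A hypothetical usage, assuming the traceId was passed down from the backend (e.g., via the response header sketched in step 1):

```tsx
// Hypothetical usage: render the feedback buttons once the trace id is known.
{traceId && <UserFeedbackComponent traceId={traceId} />}
```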