
JS/TS Langchain SDK

GitHub repository: langfuse/langfuse-js · npm: langfuse-langchain

If you are working with Node.js, Deno, or Edge functions, the langfuse-langchain library is the simplest way to integrate Langfuse into your Langchain application. The library queues calls and sends them in the background to keep them non-blocking.

Supported runtimes:

  • Node.js
  • Edge: Vercel, Cloudflare, ...
  • Deno

Want to work without Langchain? Use Langfuse for tracing and LangfuseWeb to capture feedback from the browser.
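As a rough sketch of that path (assuming the langfuse package; the trace and generation names are illustrative):

import { Langfuse } from "langfuse";
 
const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
});
 
// create a trace and log an LLM call manually, without Langchain
const trace = langfuse.trace({ name: "manual-trace" });
const generation = trace.generation({
  name: "llm-call",
  input: "Tell me a joke",
});
generation.end({ output: "..." });
 
await langfuse.shutdownAsync();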

Installation

npm i langfuse-langchain
# or
yarn add langfuse-langchain
 
# Deno
import CallbackHandler from 'https://esm.sh/langfuse-langchain'

In your application, set the API keys to create a client.

import { CallbackHandler } from "langfuse-langchain";
 
const handler = new CallbackHandler({
  secretKey: process.env.LANGFUSE_SECRET_KEY, // sk-lf-...
  publicKey: process.env.LANGFUSE_PUBLIC_KEY, // pk-lf-...
  // options
});

Options

  • baseUrl: Base URL of the Langfuse API. Default: "https://cloud.langfuse.com"
  • release: The release number/hash of the application, used to group analytics by release. Default: process.env.LANGFUSE_RELEASE or common system environment names
  • userId: Used for user-level analytics (docs). Default: undefined
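Applied to the handler, the options from the table look roughly like this (a sketch; the userId value is a placeholder):

const handler = new CallbackHandler({
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  baseUrl: "https://cloud.langfuse.com", // point this at your own deployment if self-hosting
  release: process.env.LANGFUSE_RELEASE, // e.g. a git commit hash
  userId: "user-1234", // attribute traces to a user
});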
ℹ️ In short-lived environments (e.g. serverless functions), make sure to always call handler.shutdownAsync() at the end to flush the queue and await all pending requests. (Learn more)
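A minimal sketch of what this can look like in an edge-style function (the request handling is illustrative; the shutdownAsync() call in the finally block is the point):

import { CallbackHandler } from "langfuse-langchain";
 
export async function handleRequest(req: Request): Promise<Response> {
  const langfuseHandler = new CallbackHandler({
    publicKey: process.env.LANGFUSE_PUBLIC_KEY,
    secretKey: process.env.LANGFUSE_SECRET_KEY,
  });
  try {
    // ... invoke your chains with { callbacks: [langfuseHandler] } here ...
    return new Response("ok");
  } finally {
    // flush the queue and await all pending requests before the runtime suspends
    await langfuseHandler.shutdownAsync();
  }
}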

Create a simple LLM call using Langchain

import { LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
 
import { CallbackHandler } from "langfuse-langchain";
 
// create a handler
const langfuseHandler = new CallbackHandler({
  publicKey: LF_PUBLIC_KEY,
  secretKey: LF_SECRET_KEY,
});
 
// create a model
const model = new OpenAI({
  temperature: 0,
  openAIApiKey: "sk-...",
});
// create a prompt
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
// create a chain
const chain = new LLMChain({
  llm: model,
  prompt,
  callbacks: [langfuseHandler],
});
 
// execute the chain
const resA = await chain.call(
  { product: "colorful hockey sticks" },
  { callbacks: [langfuseHandler] }
);

There are two ways to integrate callbacks into Langchain:

  • Constructor Callbacks: Set when initializing an object, like new LLMChain({ ..., callbacks: [langfuseHandler] }). This approach will use the callback for every call made on that specific object. However, it won't apply to its child objects, making it limited in scope.
  • Request Callbacks: Defined when issuing a request, like chain.run(..., { callbacks: [langfuseHandler] }). This not only uses the callback for that specific request but also for any subsequent sub-requests it triggers.

For comprehensive data capture, especially for complex chains or agents, it is advised to use both approaches, as demonstrated above (docs).

Stateful Langchain callbacks

The Langchain callback handler can also be constructed with a Trace or Span as its root. This allows you to nest Langchain executions anywhere in a trace and hence to attach any metadata and ids that are important.

import { OpenAI } from "langchain/llms/openai";
import { CallbackHandler, Langfuse } from "langfuse-langchain";
 
// configure the normal Langchain sdk
const langfuse = new Langfuse({
  publicKey: LF_PUBLIC_KEY,
  secretKey: LF_SECRET_KEY,
  baseUrl: LF_HOST,
});
 
// create a trace and a handler nested into the trace.
const trace = langfuse.trace({ name: "joke-trace" });
const handler = new CallbackHandler({ root: trace });
 
// call the llm with the handler
const llm = new OpenAI({ callbacks: [handler] });
await llm.call("Tell me a joke", { callbacks: [handler] });
 
// create a span within the trace
const span = trace.span({ name: "joke-span" });
const spanHandler = new CallbackHandler({ root: span });
 
// call the llm with the handler
const llmSpan = new OpenAI({ callbacks: [spanHandler] });
await llmSpan.call("Tell me a better joke", { callbacks: [spanHandler] });
 
await handler.flushAsync();

Shutdown

The Langfuse SDKs buffer events and flush them asynchronously to the Langfuse server. Call shutdown before your application exits to make sure all buffered events are sent.

handler.shutdown();
// or
await handler.shutdownAsync();

Execution identifier

The SDK provides getTraceId() to return the traceId of the trace that is currently used. Similarly, getLangchainRunId() exposes the Langchain runId. This runId is the id of the latest top-level Langchain run and is also used to create Spans or Generations in Langfuse.

Both of these ids can be used to create scores for the correct Span in Langfuse.

import { OpenAI } from "langchain/llms/openai";
import { CallbackHandler, Langfuse } from "langfuse-langchain";
 
// configure the normal Langchain sdk
const langfuse = new Langfuse({
  publicKey: LF_PUBLIC_KEY,
  secretKey: LF_SECRET_KEY,
  baseUrl: LF_HOST,
});
 
// create a trace and a handler nested into the trace.
const trace = langfuse.trace({ id: "special-id" });
const handler = new CallbackHandler({ root: trace });
 
handler.getTraceId(); // returns "special-id"
 
// call the llm with the handler
const llm = new OpenAI({ callbacks: [handler] });
await llm.call("Tell me a joke", { callbacks: [handler] });
 
handler.getLangchainRunId(); // returns the latest run id
 
await llm.call("Tell me a better joke", { callbacks: [handler] });
 
handler.getLangchainRunId(); // returns the latest run id, different from the first one
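Both ids could then feed into a score, for example (a sketch; the score name and value are illustrative):

const traceId = handler.getTraceId();
const observationId = handler.getLangchainRunId();
 
// score the Span/Generation created by the latest run
if (traceId) {
  langfuse.score({
    traceId,
    observationId,
    name: "joke-quality",
    value: 1,
  });
}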

Flushing, shutdown, and debugging

langfuse-langchain uses the Langfuse JS/TS (Node.js, Edge) SDK under the hood and offers the same functionality regarding flushing, shutdown, and debugging. Please refer to the JS/TS SDK documentation for more information.
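For example, during development you might enable verbose logging and force an immediate flush on any handler instance (a sketch; debug() is assumed to be forwarded from the underlying SDK, as flushAsync() is):

// log all events to the console while developing
// (assumes the handler exposes the same debug() as the JS/TS SDK)
handler.debug();
 
// send all currently queued events without shutting down
await handler.flushAsync();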
