Sept 6, 2023

Langfuse Update — August 2023

Improved data ingestion, integrations and UI

Hi everyone 👋, over the last 4 weeks we doubled down on integrations and on pushing more trace context to Langfuse:

  • JS/TS Langchain integration
  • Trace context for the Python Langchain integration
  • Releases and versions on traces and observations
  • Improved traces table with new filters
  • USD cost calculation for tokens
  • GET API for usage metrics and raw traces
  • Docker images for simplified self-hosting

... and many small improvements and bug fixes.


The details 👇

🦜🔗 JS/TS Langchain integration

Last month we released the Python integration for Langchain, and we've now shipped the same for teams building with JS/TS. The new package langfuse-langchain exposes a CallbackHandler that automatically traces your complex Langchain chains and agents. Simply pass it as a callback.

import CallbackHandler from "langfuse-langchain";
import { OpenAI } from "langchain/llms/openai";
 
// Initialize the Langfuse handler
const handler = new CallbackHandler({
  secretKey: process.env.LANGFUSE_SECRET_KEY, // sk-lf-...
  publicKey: process.env.LANGFUSE_PUBLIC_KEY, // pk-lf-...
  // options
});
 
// Set up Langchain
const llm = new OpenAI();
 
// Add the Langfuse handler as a callback
const res = await llm.call("<user-input>", { callbacks: [handler] });

Integration docs

⛓️ Langchain integration with trace context

When using the Langchain Python integration, you can now add more context to the traces you create: attach a userId and metadata, or set your own trace id so you can later attach scores to the trace.

import uuid
 
from langfuse.client import Langfuse
from langfuse.model import CreateTrace
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
 
# initialise langfuse client
langfuse = Langfuse(ENV_PUBLIC_KEY, ENV_SECRET_KEY, ENV_HOST)
 
# create trace_id for future reference
trace_id = str(uuid.uuid4())
# create the Trace
trace = langfuse.trace(CreateTrace(id=trace_id))
# get a handler bound to the Trace
handler = trace.getNewHandler()
 
# setup Langchain
llm = OpenAI()
chain = LLMChain(llm=llm, prompt=PromptTemplate(...))
 
chain.run("<your-input>", callbacks=[handler])
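
Because you control trace_id, you can attach a score to the same trace later, e.g. once user feedback arrives. A minimal sketch, assuming the CreateScore model follows the same pattern as the other pydantic models above:

from langfuse.model import CreateScore
 
# attach user feedback to the trace created above; the field names
# (traceId, name, value) are assumptions based on the pattern above
langfuse.score(CreateScore(traceId=trace_id, name="user-feedback", value=1))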

Docs

📦 Trace context: Releases and versions

When iterating quickly on an LLM app, understanding which change or release led to a certain behavior is crucial. We therefore added the ability to attach release and version information to traces and observations. This helps you understand which version of your app (e.g. git sha) or LLM feature (e.g. prompt version) was used in a given trace.

Releases

Releases are available for all SDKs. They can be added in three ways (in order of precedence):

  1. SDK initialization

    # Python
    langfuse = Langfuse(ENV_PUBLIC_KEY, ENV_SECRET_KEY, ENV_HOST, release='ba7816b')
    // TypeScript
    const langfuse = new Langfuse({
      publicKey: ENV_PUBLIC_KEY,
      secretKey: ENV_SECRET_KEY,
      host: ENV_HOST,
      release: "ba7816b",
    });
  2. Via environment variable

    LANGFUSE_RELEASE="ba7816b..." # <- git sha or other identifier
  3. Automatically from a list of known release environment variables, e.g. Vercel, Heroku, Netlify. See the full list of supported environment variables for JS/TS and Python.

[Screenshot: release shown in the traces table]

Learn more → Python, JS/TS

Versions

When making changes to prompts and chains, you can add a version parameter to spans, generations, or events. The version can then be used to understand the effect of changes using Langfuse analytics.
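
For illustration, a rough sketch with the low-level Python SDK, assuming a CreateGeneration model that accepts the version field described here (names and values are illustrative):

from langfuse.client import Langfuse
from langfuse.model import CreateTrace, CreateGeneration
 
langfuse = Langfuse(ENV_PUBLIC_KEY, ENV_SECRET_KEY, ENV_HOST)
trace = langfuse.trace(CreateTrace(name="summarize"))
 
# tag this generation with the prompt version that produced it;
# CreateGeneration and its version field are assumed per the docs above
trace.generation(
    CreateGeneration(
        name="summary",
        model="gpt-3.5-turbo",
        version="prompt-v2",  # illustrative value, e.g. a prompt version
    )
)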

[Screenshot: version on a single generation]

🔎 Improved traces table

Our users spend a lot of time in the traces table finding the traces they want to take a closer look at. We added filter options on metadata and userId to make navigation easier.


📈 USD Cost Calculation

We've added USD cost calculation for tokens. This helps you understand the cost of your LLM app per execution, broken down by the individual LLM calls in the app. We calculate the cost based on the model and the number of tokens used.
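
For intuition: cost = tokens × the model's per-token USD price. A toy calculation in Python (the prices below are examples, not Langfuse's actual pricing table):

# illustrative cost calculation; per-token prices are example values
input_price_usd = 0.0015 / 1000   # USD per prompt token (example)
output_price_usd = 0.002 / 1000   # USD per completion token (example)
 
prompt_tokens, completion_tokens = 6783, 5627
cost_usd = prompt_tokens * input_price_usd + completion_tokens * output_price_usd
print(f"${cost_usd:.4f}")  # $0.0214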


Here is a list of all the models we support so far. If you are missing a model, please let us know on Discord.

Token usage chart

We improved our analytics by adding a chart to the dashboard that visualizes token usage by model over time.


📊 GET API

Build on top of the data in Langfuse using the new GET API.

Usage by user, model and date

Use cases

  • Display usage in product
  • Integrate with billing system to charge users based on consumed tokens
  • Integrate with internal analytics system
GET /api/public/metrics/usage

{
  "id": "<userId>",
  "usage": [
    {
      "day": "2023-08-01",
      "model": "claude-...",
      "promptTokens": 6783,
      "completionTokens": 5627,
      "totalTokens": 12410
    }
  ]
}

Other usage APIs

GET /api/public/users
GET /api/public/metrics/usage
GET /api/public/metrics/usage?group_by=trace_name
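
A quick sketch of consuming the usage endpoint from Python; the HTTP basic auth scheme (public key as username, secret key as password) is an assumption here:

import os
import requests
 
host = os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com")
auth = (os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"])
 
res = requests.get(f"{host}/api/public/metrics/usage", auth=auth)
res.raise_for_status()
print(res.json())  # per-user token usage, shaped as shown above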

Raw traces

Use cases

  • Fine tuning (see the sketch below)
  • Metadata ingestion into data warehouse
GET /api/public/traces
GET /api/public/traces/:traceId
GET /api/public/observations/:observationId
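
And a sketch for the fine-tuning use case that exports traces to a JSONL file; the response fields (data, input, output) are assumptions, see the API reference for the actual shape:

import json
import os
import requests
 
host = os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com")
auth = (os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"])
 
res = requests.get(f"{host}/api/public/traces", auth=auth)
res.raise_for_status()
 
# write one JSON line per trace; field names are assumptions
with open("traces.jsonl", "w") as f:
    for trace in res.json().get("data", []):
        f.write(json.dumps({"input": trace.get("input"), "output": trace.get("output")}) + "\n")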

API reference

🐳 Simplified self-hosting via Docker

To reduce the friction of self-hosting, we now publish Docker images for the Langfuse server. You can pull the latest image from GitHub Container Registry.

docker pull ghcr.io/langfuse/langfuse

For detailed instructions, see self-hosting and local setup documentation.

🚢 What's Next?

There is more coming in September: we'll focus on shipping analytics to all users and on further improving the UI/DX of the core platform. Stay tuned! Anything you'd like to see? Join us on Discord and share your thoughts.

Subscribe to get monthly updates via email.

Follow along on Twitter (@Langfuse, @marcklingen)
