
Token usage

Token usage numbers are used across the Langfuse interface and reports.

generation = {
  ...
  usage: {
    promptTokens: number,     // tokens in the prompt/input
    completionTokens: number, // tokens in the completion/output
    totalTokens: number,      // total tokens of the generation
  },
  ...
}

Ingestion of usage

When ingesting LLM generations into Langfuse, you can add token usage numbers to the generation object. If usage is available in the LLM response, this is the preferred way to track it with Langfuse.
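As a minimal sketch of this step, the helper below maps the snake_case usage fields returned by OpenAI-style APIs onto the camelCase usage shape shown above. The helper name `toLangfuseUsage` and the input type are assumptions for illustration, not part of the Langfuse SDK.

```typescript
// Usage shape expected on a Langfuse generation (see schema above).
interface LangfuseUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

// OpenAI-style usage object as returned in a chat/completions response.
interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Hypothetical helper: convert provider usage to the Langfuse shape
// before attaching it to the generation object at ingestion time.
function toLangfuseUsage(u: OpenAIUsage): LangfuseUsage {
  return {
    promptTokens: u.prompt_tokens,
    completionTokens: u.completion_tokens,
    totalTokens: u.total_tokens,
  };
}
```

The resulting object can then be set as the `usage` attribute of the ingested generation.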

Built-in token calculation

For ingested generations without usage attributes, Langfuse automatically calculates token counts. The correct tokenizer is selected based on the model attribute of the generation.

This is helpful for LLM APIs that do not return usage information in the response, e.g., when streaming OpenAI completions.

Model          | Tokenizer   | Package
gpt*           | cl100k_base | tiktoken
text-davinci*  | p50k_base   | tiktoken
claude*        | claude      | @anthropic-ai/tokenizer
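The model-to-tokenizer selection can be sketched as a prefix match, as suggested by the wildcard patterns in the table. The function name `tokenizerForModel` is an assumption for illustration; the actual selection logic inside Langfuse may differ.

```typescript
// Sketch of tokenizer selection by model attribute (prefix match,
// mirroring the wildcard patterns "gpt*", "text-davinci*", "claude*").
function tokenizerForModel(model: string): string | null {
  if (model.startsWith("text-davinci")) return "p50k_base";
  if (model.startsWith("gpt")) return "cl100k_base";
  if (model.startsWith("claude")) return "claude";
  return null; // unknown model: no built-in token calculation
}
```

Note that the more specific `text-davinci` prefix is checked before the broader `gpt` prefix, so each model maps to exactly one tokenizer.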

Need another tokenizer? Create an issue on GitHub.
