Tracking token usage
This notebook goes over how to track your token usage for specific calls. This is currently only implemented for the OpenAI API.
Here's an example of tracking token usage for a single Chat model call:
- npm: `npm install @langchain/openai`
- Yarn: `yarn add @langchain/openai`
- pnpm: `pnpm add @langchain/openai`
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  modelName: "gpt-4",
  callbacks: [
    {
      handleLLMEnd(output) {
        console.log(JSON.stringify(output, null, 2));
      },
    },
  ],
});

await chatModel.invoke("Tell me a joke.");
/*
  {
    "generations": [
      [
        {
          "text": "Why don't scientists trust atoms?\n\nBecause they make up everything!",
          "message": {
            "lc": 1,
            "type": "constructor",
            "id": [
              "langchain_core",
              "messages",
              "AIMessage"
            ],
            "kwargs": {
              "content": "Why don't scientists trust atoms?\n\nBecause they make up everything!",
              "additional_kwargs": {}
            }
          },
          "generationInfo": {
            "finish_reason": "stop"
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 13,
        "promptTokens": 12,
        "totalTokens": 25
      }
    }
  }
*/
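Often you only need the token counts rather than the entire generation payload. Here is a minimal sketch of a helper that pulls the usage numbers out of the `handleLLMEnd` payload, assuming the `llmOutput.tokenUsage` shape shown in the output above (the `extractTokenUsage` name and `TokenUsage` interface are illustrative, not part of the LangChain API):

```typescript
// Shape of the usage data reported above by the OpenAI integration.
interface TokenUsage {
  completionTokens: number;
  promptTokens: number;
  totalTokens: number;
}

// Pull tokenUsage out of a handleLLMEnd payload, tolerating missing fields.
function extractTokenUsage(output: {
  llmOutput?: { tokenUsage?: TokenUsage };
}): TokenUsage {
  return (
    output.llmOutput?.tokenUsage ?? {
      completionTokens: 0,
      promptTokens: 0,
      totalTokens: 0,
    }
  );
}
```

Inside the callback above, you could then log just `extractTokenUsage(output).totalTokens` instead of the full JSON.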
API Reference:
- ChatOpenAI from `@langchain/openai`
If this model is passed to a chain or agent that invokes it multiple times, the handler will fire and log an output for each call.
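If you want a running total across all of those calls rather than one log line per call, one option is to keep the sum in the handler itself. The sketch below assumes each `handleLLMEnd` payload carries the `llmOutput.tokenUsage` fields shown earlier; the `TokenUsageTracker` class is illustrative, not a LangChain API:

```typescript
interface TokenUsage {
  completionTokens: number;
  promptTokens: number;
  totalTokens: number;
}

// A stateful callback handler that accumulates token usage across every
// handleLLMEnd invocation, e.g. over a multi-step chain or agent run.
class TokenUsageTracker {
  totals: TokenUsage = { completionTokens: 0, promptTokens: 0, totalTokens: 0 };

  handleLLMEnd(output: { llmOutput?: { tokenUsage?: TokenUsage } }) {
    const usage = output.llmOutput?.tokenUsage;
    if (usage) {
      this.totals.completionTokens += usage.completionTokens;
      this.totals.promptTokens += usage.promptTokens;
      this.totals.totalTokens += usage.totalTokens;
    }
  }
}
```

An instance of this class can be passed in the `callbacks` array in place of the inline handler above, and `tracker.totals` inspected once the chain or agent finishes.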