Function calling
A growing number of chat models, like OpenAI, Mistral, etc., have a function-calling API that lets you describe functions and their arguments, and have the model return a JSON object with a function to invoke and the inputs to that function. Function-calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.
LangChain comes with a number of utilities to make function-calling easy. Namely, it comes with:
- simple syntax for binding functions to models
- converters for formatting various types of objects to the expected function schemas
- output parsers for extracting the function invocations from API responses
- chains for getting structured outputs from a model, built on top of function calling
We'll focus here on the first two points. For a detailed guide on output parsing, check out the OpenAI Tools output parsers; for structured output chains, check out the Structured output guide (OpenAI, Mistral).
Before getting started, make sure you have @langchain/core installed.
- npm
- yarn
- pnpm
npm i @langchain/core
yarn add @langchain/core
pnpm add @langchain/core
We'll also use zod-to-json-schema frequently throughout this doc for converting Zod schemas to JSON schemas. Make sure you have it installed:
- npm
- yarn
- pnpm
npm i zod-to-json-schema
yarn add zod-to-json-schema
pnpm add zod-to-json-schema
A number of models implement helper methods that will take care of formatting and binding different function-like objects to the model. Let's take a look at how we might take the following Zod function schema and get different models to invoke it:
import { z } from "zod";

/**
 * Note that the descriptions here are crucial, as they will be passed along
 * to the model along with the class name.
 */
const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});
Set up dependencies and API keys:
npm install @langchain/openai
OPENAI_API_KEY=your-api-key
We can use the ChatOpenAI.bind() method to handle converting calculatorSchema to an OpenAI function and binding it to the model (i.e., passing it in each time the model is invoked).
import { ChatOpenAI } from "@langchain/openai";
import { zodToJsonSchema } from "zod-to-json-schema";

const llm = new ChatOpenAI({
  modelName: "gpt-3.5-turbo-0125",
  temperature: 0,
});

const llmWithTools = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "calculator",
        description: "A simple calculator tool",
        parameters: zodToJsonSchema(calculatorSchema),
      },
    },
  ],
});
await llmWithTools.invoke("What is 3 * 12");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        {
          id: "call_cLBi7NjrehSEPoXr21i08NER",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      {
        id: "call_cLBi7NjrehSEPoXr21i08NER",
        type: "function",
        function: {
          name: "calculator",
          arguments: '{"operation":"multiply","number1":3,"number2":12}'
        }
      }
    ]
  }
}
See the LangSmith trace here.
We can add a tool parser to extract the tool calls from the generated message to JSON:
import { JsonOutputToolsParser } from "@langchain/core/output_parsers/openai_tools";
const toolChain = llmWithTools.pipe(new JsonOutputToolsParser());
await toolChain.invoke("What is 3 * 12");
[
  {
    type: "calculator",
    args: { operation: "multiply", number1: 3, number2: 12 }
  }
]
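Once parsed into this shape, the tool calls can be executed by plain application code. A minimal sketch, assuming the parser output above (names like `runCalculator` are illustrative, not part of LangChain, and the `parsed` array is hard-coded to match the output rather than produced by a live model call):

```typescript
// Shape produced by JsonOutputToolsParser, as shown above.
type ParsedToolCall = {
  type: string;
  args: {
    operation: "add" | "subtract" | "multiply" | "divide";
    number1: number;
    number2: number;
  };
};

// Hypothetical executor mapping the parsed args onto actual arithmetic.
function runCalculator(call: ParsedToolCall): number {
  const { operation, number1, number2 } = call.args;
  switch (operation) {
    case "add":
      return number1 + number2;
    case "subtract":
      return number1 - number2;
    case "multiply":
      return number1 * number2;
    case "divide":
      return number1 / number2;
  }
}

// Hard-coded to match the parser output above; in a real chain this
// would be the result of `await toolChain.invoke(...)`.
const parsed: ParsedToolCall[] = [
  { type: "calculator", args: { operation: "multiply", number1: 3, number2: 12 } },
];

console.log(parsed.map(runCalculator)); // [ 36 ]
```

Dispatching on the parsed `type` field like this is the core of any tool-using chain: the model chooses the function and arguments, and your code performs the actual work.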
See the LangSmith trace here.
If we want to force the model to use a tool (and to use it exactly once), we can set the tool_choice argument:
const llmWithMultiply = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "calculator",
        description: "A simple calculator tool",
        parameters: zodToJsonSchema(calculatorSchema),
      },
    },
  ],
  tool_choice: {
    type: "function" as const,
    function: {
      name: "calculator",
    },
  },
});
await llmWithMultiply.invoke("What is 3 * 12");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        {
          id: "call_6Zh6rnj4W8pvfrOxzc4720Pw",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      {
        id: "call_6Zh6rnj4W8pvfrOxzc4720Pw",
        type: "function",
        function: {
          name: "calculator",
          arguments: '{"operation":"multiply","number1":3,"number2":12}'
        }
      }
    ]
  }
}
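Because tool_choice forces exactly one call to the named tool, downstream code can verify that shape before decoding the JSON-encoded arguments string. A hedged sketch over a plain object mirroring the additional_kwargs.tool_calls field above (`extractForcedCall` is illustrative, not a LangChain API):

```typescript
// Simplified shape of an entry in additional_kwargs.tool_calls above.
type RawToolCall = {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
};

// Hypothetical guard: insist on exactly one call to the expected tool,
// then decode its JSON-encoded arguments string.
function extractForcedCall(
  toolCalls: RawToolCall[] | undefined,
  expectedName: string
): Record<string, unknown> {
  if (!toolCalls || toolCalls.length !== 1) {
    throw new Error("expected exactly one tool call");
  }
  if (toolCalls[0].function.name !== expectedName) {
    throw new Error(`expected a call to ${expectedName}`);
  }
  return JSON.parse(toolCalls[0].function.arguments);
}

// Hard-coded to match the message above.
const args = extractForcedCall(
  [
    {
      id: "call_6Zh6rnj4W8pvfrOxzc4720Pw",
      type: "function",
      function: {
        name: "calculator",
        arguments: '{"operation":"multiply","number1":3,"number2":12}',
      },
    },
  ],
  "calculator"
);

console.log(args); // { operation: "multiply", number1: 3, number2: 12 }
```

Note that in the raw message the arguments field is a JSON string, not an object; parsing (and failing loudly on an unexpected shape) is still your code's responsibility even when tool_choice constrains the model.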
Set up dependencies and API keys:
npm install @langchain/mistralai
MISTRAL_API_KEY=your-api-key
We can use the ChatMistralAI.bind() method to handle converting calculatorSchema to a function and binding it to the model (i.e., passing it in each time the model is invoked).
import { ChatMistralAI } from "@langchain/mistralai";
import { zodToJsonSchema } from "zod-to-json-schema";

const llm = new ChatMistralAI({
  modelName: "mistral-large-latest",
  temperature: 0,
});

const llmWithTools = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "calculator",
        description: "A simple calculator tool",
        parameters: zodToJsonSchema(calculatorSchema),
      },
    },
  ],
});
await llmWithTools.invoke("What is 3 * 12");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    additional_kwargs: {
      tool_calls: [ { id: "null", type: "function", function: [Object] } ]
    }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    tool_calls: [
      {
        id: "null",
        type: "function",
        function: {
          name: "calculator",
          arguments: '{"operation": "multiply", "number1": 3, "number2": 12}'
        }
      }
    ]
  }
}
See the LangSmith trace here.
We can add a tool parser to extract the tool calls from the generated message to JSON:
import { JsonOutputToolsParser } from "@langchain/core/output_parsers/openai_tools";
const toolChain = llmWithTools.pipe(new JsonOutputToolsParser());
await toolChain.invoke("What is 3 * 12");
[
  {
    type: "calculator",
    args: { operation: "multiply", number1: 3, number2: 12 }
  }
]
See the LangSmith trace here.
Set up dependencies and API keys:
npm install @langchain/community
TOGETHER_AI_API_KEY=your-api-key
We can use the ChatTogetherAI.bind() method to handle converting calculatorSchema to a function and binding it to the model (i.e., passing it in each time the model is invoked).
import { ChatTogetherAI } from "@langchain/community/chat_models/togetherai";
import { zodToJsonSchema } from "zod-to-json-schema";

const llm = new ChatTogetherAI({
  modelName: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  temperature: 0,
});

const llmWithTools = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "calculator",
        description: "A simple calculator tool",
        parameters: zodToJsonSchema(calculatorSchema),
      },
    },
  ],
});
await llmWithTools.invoke("What is 3 * 12");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        {
          id: "call_97uau7pkgam7n25q19fq4htp",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      {
        id: "call_97uau7pkgam7n25q19fq4htp",
        type: "function",
        function: {
          name: "calculator",
          arguments: '{"operation":"multiply","number1":3,"number2":12}'
        }
      }
    ]
  }
}
See the LangSmith trace here.
We can add a tool parser to extract the tool calls from the generated message to JSON:
import { JsonOutputToolsParser } from "@langchain/core/output_parsers/openai_tools";
const toolChain = llmWithTools.pipe(new JsonOutputToolsParser());
await toolChain.invoke("What is 3 * 12");
[
  {
    type: "calculator",
    args: { operation: "multiply", number1: 3, number2: 12 }
  }
]
See the LangSmith trace here.
If we want to force the model to use a tool (and to use it exactly once), we can set the tool_choice argument:
const llmWithMultiply = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "calculator",
        description: "A simple calculator tool",
        parameters: zodToJsonSchema(calculatorSchema),
      },
    },
  ],
  tool_choice: {
    type: "function" as const,
    function: {
      name: "calculator",
    },
  },
});
await llmWithMultiply.invoke("What is 3 * 12");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        {
          id: "call_vcc7nar0r2doz26jnnsojlls",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      {
        id: "call_vcc7nar0r2doz26jnnsojlls",
        type: "function",
        function: {
          name: "calculator",
          arguments: '{"operation":"multiply","number1":3,"number2":12}'
        }
      }
    ]
  }
}
Defining function schemas
If you need to access function schemas directly, LangChain has a built-in converter that can turn LangChain tools (defined with Zod schemas) into the OpenAI function-format JSON schema:
import { StructuredTool } from "@langchain/core/tools";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
import { z } from "zod";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

class CalculatorTool extends StructuredTool {
  schema = calculatorSchema;

  name = "calculator";

  description = "A simple calculator tool";

  async _call(params: z.infer<typeof calculatorSchema>) {
    return "The answer";
  }
}

const asOpenAITool = convertToOpenAITool(new CalculatorTool());

asOpenAITool;
{
  type: "function",
  function: {
    name: "calculator",
    description: "A simple calculator tool",
    parameters: {
      type: "object",
      properties: {
        operation: {
          type: "string",
          enum: [Array],
          description: "The type of operation to execute."
        },
        number1: {
          type: "number",
          description: "The first number to operate on."
        },
        number2: {
          type: "number",
          description: "The second number to operate on."
        }
      },
      required: [ "operation", "number1", "number2" ],
      additionalProperties: false,
      "$schema": "http://json-schema.org/draft-07/schema#"
    }
  }
}
Next steps
- Output parsing: See OpenAI Tools output parsers to learn about extracting the function calling API responses into various formats.
- Structured output chains: Some models have constructors that handle creating a structured output chain for you (OpenAI, Mistral).
- Tool use: See how to construct chains and agents that actually call the invoked tools in these guides.