
Improved function calling with the Semantic Kernel

A new way to call functions using the latest Semantic Kernel SDK

Yash Worlikar · Tue Nov 05 2024 · 5 min read

Semantic Kernel’s latest version revisits how developers handle function calling in their AI applications. While language models excel at generating text, they can’t directly interact with external services. This limitation is where function calling becomes essential - it allows AI models to trigger real-world actions through your code.

In this blog, let’s see how you can implement function calling in your projects. At its core, function calling is a structured way of telling the model: “Here are the tools you can use. Respond with one if you need it.” You define function signatures with clear inputs and outputs, and the model then chooses when to use these tools and fills in the arguments each function needs.
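For a quick illustration of what such a definition looks like, here is a minimal sketch using KernelFunctionFactory. The GetWeather function and its behavior are purely illustrative and not part of the example used later in this post:

using Microsoft.SemanticKernel;

// Illustrative only: a function exposed to the model as a name, a description,
// and a typed parameter the model fills in when it decides to call it.
var getWeather = KernelFunctionFactory.CreateFromMethod(
    (string city) => $"It is sunny in {city}",
    functionName: "GetWeather",
    description: "Returns the current weather for the given city");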

The Old Way of Function Calling

In previous versions of Semantic Kernel, function calling was implemented using vendor-specific approaches.

For instance, when working with OpenAI, you would use the OpenAIPromptExecutionSettings and the ToolCallBehavior Enum to configure how functions should be handled:

// Build the kernel with an OpenAI chat completion service
Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("MODEL", "APIKEY")
    .Build();

var settings = new OpenAIPromptExecutionSettings
{ ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions };

Console.WriteLine(await kernel.InvokePromptAsync(
    "Given that it is now the 9th of September 2024, 11:29 AM, what is the likely color of the sky in Boston?",
    new(settings)));

This approach had limitations:

  • Vendor-specific implementations made it difficult to switch between different AI providers
  • Less flexibility in controlling function execution
  • Inconsistent behavior across different AI services

The New Way of Function Calling

The new function-calling approach aims to provide a common abstraction across supported models. Since it’s currently an experimental feature, it doesn’t support all providers yet. You can check the latest supported models here: Function Calling behavior supported models

The updated implementation introduces the FunctionChoiceBehavior class, which provides three distinct behaviors:

  1. AutoFunctionChoiceBehavior
  2. RequiredFunctionChoiceBehavior
  3. NoneFunctionChoiceBehavior

The above options determine how a model behaves when function calling is enabled.

Let’s see them in action through an example. First, we create a new class named MathPlugin with a simple function that checks whether a number is prime.

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class MathPlugin
{
    [KernelFunction]
    [Description("Checks if a number is prime")]
    public bool IsPrime(int number)
    {
        if (number <= 1) return false;
        if (number == 2) return true;
        if (number % 2 == 0) return false;

        int boundary = (int)Math.Floor(Math.Sqrt(number));
        for (int i = 3; i <= boundary; i += 2)
        {
            if (number % i == 0) return false;
        }
        return true;
    }
}

Now let’s create a basic kernel and add the above class as a plugin.  

// Build the kernel and add any model service that supports function calling
Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("MODEL", "APIKEY")
    .Build();

// Add the plugin to the kernel
kernel.Plugins.AddFromType<MathPlugin>();

Auto Function Choice Behavior

The Auto behavior gives the AI model the freedom to decide whether to call functions and which ones to call. This is ideal for scenarios where you want the model to choose the most appropriate functions based on the context.

PromptExecutionSettings settings = new() 
{ 
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() 
};

var result = await kernel.InvokePromptAsync(
    "Is 171 a prime number?", 
    new KernelArguments(settings)
);

OUTPUT

171 is not a prime number

Required Function Choice Behavior

The Required behavior forces the model to use specific functions. This is useful when you want to ensure certain functions are called in a particular context:

// Set the function choice behavior to Required and specify which function must be called
var settings = new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Required(
        functions: [kernel.Plugins.GetFunction("MathPlugin", "IsPrime")]
    )
};

var result = await kernel.InvokePromptAsync(
    "171", 
    new KernelArguments(settings)
);

OUTPUT

171 is not a prime number 

Here the model calls the IsPrime function even though the prompt doesn’t explicitly ask it to.

None Function Choice Behavior

The None behavior provides the available functions to the model without letting it call them. Every function attached to our kernel is still sent as a function definition along with the prompt, but instead of making a function invocation request like the other modes, the model responds with natural language.

PromptExecutionSettings settings = new() 
{ 
    FunctionChoiceBehavior = FunctionChoiceBehavior.None() 
};

var result = await kernel.InvokePromptAsync(
    "Which provided function can determine if 171 is prime?", 
    new KernelArguments(settings)
);

OUTPUT

The function that can determine if a number is prime is `MathPlugin-IsPrime`.

Function Invocation

Whenever the AI model wants to call a function, it sends a function call request as its response. How that request is handled is up to the developer.

Semantic Kernel provides two different ways to handle function invocation: Auto and Manual.

Auto Function Invocation

In auto function invocation, the AI model selects the necessary functions based on the prompt and the Semantic Kernel invokes them automatically. This mode is fully automated: the kernel handles the function calls and integrates the results into the chat history without any manual intervention.

This is the default behavior for FunctionChoiceBehavior.Auto() and FunctionChoiceBehavior.Required() options. (FunctionChoiceBehavior.None() isn’t included as it will never make any function calls)

PromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() };
// Equivalent, since autoInvoke defaults to true:
// PromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(autoInvoke: true) };

Manual Function Invocation

Manual function invocation provides more control over the function execution process. When this mode is enabled, the Semantic Kernel does not automatically invoke the functions chosen by the AI model. Instead, it returns a list of chosen functions to the caller, who can then decide which functions to invoke, handle exceptions, and manage the order of function calls.

We can use manual function calling by setting autoInvoke to false as follows:

PromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(autoInvoke: false) };

Now the functions won’t be called automatically; instead, we receive a function invocation request that we can handle as per our needs.
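As a rough sketch of what that handling might look like (assuming an OpenAI chat completion service is registered and the MathPlugin from above is added to the kernel; the exact flow may vary with your setup), the function calls can be read from the model’s response, invoked manually, and their results appended back to the chat history:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var chatService = kernel.GetRequiredService<IChatCompletionService>();

var chatHistory = new ChatHistory();
chatHistory.AddUserMessage("Is 171 a prime number?");

// Ask the model; with autoInvoke: false it responds with function call requests
var response = await chatService.GetChatMessageContentAsync(chatHistory, settings, kernel);
chatHistory.Add(response);

// Inspect the requested function calls and invoke them ourselves
foreach (var functionCall in FunctionCallContent.GetFunctionCalls(response))
{
    var functionResult = await functionCall.InvokeAsync(kernel);
    chatHistory.Add(functionResult.ToChatMessage());
}

// Send the function results back so the model can produce the final answer
var finalAnswer = await chatService.GetChatMessageContentAsync(chatHistory, settings, kernel);
Console.WriteLine(finalAnswer);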

Wrapping up

Semantic Kernel provides a unified and powerful approach to function calling, enabling maintainable AI-powered applications while remaining provider-agnostic.

The combination of different behaviors and execution options gives developers the tools they need to build applications that integrate AI capabilities effectively.
