Why Your AI Agent Isn't Calling Your Tools: Fixing Function Invocation Issues in Semantic Kernel

This blog covers why your Semantic Kernel AI agent may ignore plugins and how to fix it with a simple configuration change.

Aditya Uke · Fri Apr 11 2025 · 4 min read

You’re building an AI agent using a powerful framework like Microsoft Semantic Kernel. You’ve carefully crafted a plugin with specific functions to give your agent new skills – maybe fetching weather data, accessing a database, or interacting with an API. You write a prompt that clearly requires this plugin… but the agent just responds with plain text, completely ignoring the tool you gave it. Sound familiar?

If you’ve spent hours scratching your head, debugging, and wondering why your agent isn’t using its tools, you’re not alone. This is a common hurdle, especially when diving into agent development, and the solution is often simpler than you think.

This blog will explain this frequent issue, show you why it happens, and provide a straightforward fix within Semantic Kernel.

Learn more about building an AI agent with the Semantic Kernel Agent Framework.

What Are AI Agents and Plugins?

Think of an AI agent as a smart assistant powered by a large language model (LLM). It can understand your requests, reason about them, and plan steps to achieve a goal. However, an LLM's knowledge is generally limited to the data it was trained on. It can't inherently know today's weather or access your company's private database.

That's where Plugins (sometimes called Tools or Functions) come in. Plugins are pieces of code that extend an agent's capabilities. Like tools in a toolbox, they let the agent interact with the outside world, perform specific calculations, or access real-time information. When you give an agent a plugin, you're essentially teaching it a new skill.

The Problem: The Agent Ignores Your Plugin

Let’s illustrate with a common scenario using Semantic Kernel. Imagine you have a simple plugin to get the current weather:

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class WeatherPlugin
{
    [KernelFunction, Description("Gets the current weather for a specific location")]
    public string GetWeather([Description("The city name")] string location)
    {
        // (Code here to call a weather API and return the weather)
        return $"The weather in {location} is sunny.";
    }
}

Now, you set up your Semantic Kernel agent and add this plugin:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "<Your-DeploymentName>",
    endpoint: "<Your-Endpoint>",
    apiKey: "<Your-ApiKey>");

Kernel kernel = builder.Build();

ChatCompletionAgent agent = new ChatCompletionAgent()
{
    Name = "SK-Agent",
    Instructions = "You are a helpful assistant that can use plugins to perform tasks.",
    Kernel = kernel,
    Arguments = new KernelArguments()
};

KernelPlugin weatherPlugin = KernelPluginFactory.CreateFromType<WeatherPlugin>();
agent.Kernel.Plugins.Add(weatherPlugin);

var message = new ChatMessageContent(AuthorRole.User, "What's the weather like in London today?");

await foreach (StreamingChatMessageContent response in agent.InvokeStreamingAsync(message))
{
    Console.Write(response.Content);
}
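Before blaming the agent, it can help to rule out the plugin itself. A quick sanity check (a sketch, assuming the WeatherPlugin above has already been added to `kernel`) is to invoke the function directly through the kernel, bypassing the LLM entirely:

```csharp
// Sanity check: call the plugin function directly, with no LLM involved.
// If this fails, the problem is in the plugin or its registration,
// not in the agent's tool-selection behavior.
var result = await kernel.InvokeAsync(
    pluginName: "WeatherPlugin",
    functionName: "GetWeather",
    arguments: new KernelArguments { ["location"] = "London" });

Console.WriteLine(result.GetValue<string>()); // "The weather in London is sunny."
```

If this direct call works but the agent still answers in plain text, the issue is in how the agent decides to call functions, which is exactly what the next section addresses.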

You run this, expecting the agent to recognize the need for weather information, call your GetWeather function, and return the result. Instead, it gives a generic LLM response, completely bypassing the plugin. You check your plugin code, the registration, the prompt – everything seems right. Why isn’t it working?

The Solution

The core issue often lies in the execution settings. By default, the agent might not be configured to automatically decide whether to call a function based on the prompt. It needs explicit permission or guidance.

In Semantic Kernel, this guidance is provided through PromptExecutionSettings, specifically the FunctionChoiceBehavior property. To solve our problem, you need to tell the kernel that the agent is allowed to automatically choose and execute a suitable function from its available plugins if the prompt suggests it.

Here’s how you modify the agent invocation call:

// Initialize the PromptExecutionSettings and set FunctionChoiceBehavior to Auto
var executionSettings = new PromptExecutionSettings() 
{
    // This setting allows the agent to auto-select and invoke plugin functions if needed.
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

// Pass these settings when creating the KernelArguments for your agent.
var kernelArguments = new KernelArguments(executionSettings);

// Create the agent with these settings included.
ChatCompletionAgent agent = new ChatCompletionAgent()
{
    Name = "SK-Agent",
    Instructions = "You are a helpful assistant that can use plugins to perform tasks.",
    Kernel = kernel,
    Arguments = kernelArguments
};
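Auto() is the right choice here, but it's worth knowing that FunctionChoiceBehavior exposes other modes too. A quick sketch of the options as they appear in current Semantic Kernel releases:

```csharp
// Let the model decide whether to call a function (the fix shown above).
var auto = new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

// Force the model to call at least one of the available functions.
var required = new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Required()
};

// Advertise the functions to the model but never invoke them
// (useful for testing which calls the model *would* make).
var none = new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.None()
};
```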

With FunctionChoiceBehavior.Auto() set, the underlying LLM (provided it supports function/tool calling) analyzes the prompt alongside the descriptions of the available functions. If it determines that calling one or more functions is the best way to fulfill the request, it outputs the information needed to trigger those calls, and Semantic Kernel invokes them automatically.
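If you are on an older Semantic Kernel release, or working with the OpenAI connector-specific settings, the equivalent switch lives on OpenAIPromptExecutionSettings instead. A sketch, assuming the Microsoft.SemanticKernel.Connectors.OpenAI package:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Older, connector-specific equivalent of FunctionChoiceBehavior.Auto():
// the kernel both selects and invokes matching plugin functions automatically.
var executionSettings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var kernelArguments = new KernelArguments(executionSettings);
```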

Conclusion

Building AI agents that can leverage external tools is incredibly powerful, but small configuration details can sometimes lead to significant debugging headaches. If your Semantic Kernel agent isn’t calling its plugins when you expect it to, one of the very first things to check is your PromptExecutionSettings.

Setting FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() in your execution settings (or ToolCallBehavior.AutoInvokeKernelFunctions on the older connector-specific settings) explicitly grants the agent the autonomy to decide when to use its tools, bridging the gap between understanding a request and taking action.
