
Getting Started with Semantic Kernel Plugins

Building GPT plugins in .NET using Semantic Kernel

Yash Worlikar Mon Jan 29 2024 3 min read

Introduction

While Large Language Models can generate natural-language responses, they lack reliable mathematical capabilities and may often hallucinate knowledge that doesn't exist.

To overcome this, with proper fine-tuning and prompt engineering, we can make the model call an external API to complete a specific task. Rather than trying to teach the model math or other domain-specific workflows, we connect it to external APIs and services.

A plugin in Semantic Kernel is simply a group of such functions exposing external logic, whether prompts or native code.

Working with Plugins

Semantic Kernel comes with various built-in plugins that can be used right out of the box. Since the Semantic Kernel SDK follows the OpenAI plugin specification, we can also seamlessly import or export plugins available in the GPT store.

We can add plugins to a kernel in multiple ways, either before or after building it. For example, the code below imports a plugin before we build the kernel.

var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
    deploymentName: "<your-deployment-name>",
    endpoint: "<your-azure-endpoint>",
    apiKey: "<your-api-key>");

// Import an available plugin
builder.Plugins.AddFromType<MathPlugin>();
var kernel = builder.Build();
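
Plugins can also be attached after the kernel has been built. A minimal sketch, assuming the core TimePlugin from the Microsoft.SemanticKernel.Plugins.Core package is referenced:

// Import another plugin on an already-built kernel
kernel.ImportPluginFromType<TimePlugin>();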

We can now invoke the plugin directly using the following code.

var sum = await kernel.InvokeAsync("MathPlugin", "Add", new()
{
    { "value", "5" },
    { "amount", "3" },
});
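
InvokeAsync returns a FunctionResult rather than the raw number; we can read the typed value out of it (assuming Add returns an integer):

// Extract the typed result from the FunctionResult
Console.WriteLine(sum.GetValue<int>()); // 8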

Creating a plugin

Plugin functions can be of two types:

  • Native functions - functions written in native code (C# in our case) containing custom logic.
// NATIVE FUNCTION
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class CalculatorPlugin
{
    [KernelFunction]
    [Description("Add two numbers")]
    public int Add(
        [Description("First number to add")] int num1,
        [Description("Second number to add")] int num2)
    {
        return num1 + num2;
    }
}
  • Semantic functions - prompt templates designed to work as functions (a usage sketch for both plugin types follows this list).
using System.ComponentModel;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public class WriterPlugin
{
    const string GenerateStoryDefinition =
        @"SUBJECT: {{$input}}
          SUBJECT END
          Write an interesting short story about the topic provided in 'SUBJECT'.
          The story must be purely fiction. Do not incorporate real-life events
          in it.
          BEGIN STORY:";

    private KernelFunction _generateStoryFunction;

    // SEMANTIC FUNCTION
    [KernelFunction, Description("Generate an interesting story")]
    public async Task<string> GenerateStoryAsync(string input, Kernel kernel)
    {
        _generateStoryFunction = KernelFunctionFactory.CreateFromPrompt(
            GenerateStoryDefinition,
            description: "Generate an interesting story");

        var result = (await _generateStoryFunction.InvokeAsync(kernel, new() { ["input"] = input })
                .ConfigureAwait(false))
            .GetValue<string>() ?? string.Empty;

        return result;
    }
}
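
As a quick usage sketch, reusing the builder setup from the first snippet, we can register both plugins and invoke them by name. The story subject below is illustrative; note that Semantic Kernel strips the Async suffix when deriving a function name, so the semantic function is exposed as GenerateStory.

builder.Plugins.AddFromType<CalculatorPlugin>();
builder.Plugins.AddFromType<WriterPlugin>();
var kernel = builder.Build();

// Native function: CalculatorPlugin.Add returns 8
var sum = await kernel.InvokeAsync("CalculatorPlugin", "Add",
    new() { { "num1", 5 }, { "num2", 3 } });

// Semantic function: the "Async" suffix is dropped from the method name
var story = await kernel.InvokeAsync("WriterPlugin", "GenerateStory",
    new() { ["input"] = "a lighthouse at the edge of the world" });
Console.WriteLine(story.GetValue<string>());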

Native functions typically perform tasks with specific algorithms and business logic in mind, while semantic functions are prompt-based and mimic the behavior of traditional functions.

We can also enable automatic function calling by setting ToolCallBehavior to AutoInvokeKernelFunctions in the OpenAIPromptExecutionSettings and passing the settings along when invoking the kernel.

// Enable auto function calling
OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
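
The settings are then passed along with the prompt; a minimal sketch, assuming the MathPlugin from earlier is still registered:

// The model can now call any registered kernel function on its own
var answer = await kernel.InvokePromptAsync(
    "What is 5 plus 3?",
    new KernelArguments(openAIPromptExecutionSettings));
Console.WriteLine(answer.GetValue<string>());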

Limitations

While plugins can address certain challenges, they may also give rise to new issues. One limitation is that their effectiveness is constrained by the quality of the underlying models they rely on. Simply changing the model may improve or break an existing workflow.

Additionally, the integration of too many plugins can lead to complications and cause unintended problems. The model may lose track of provided functions or may even start hallucinating functions that aren’t available or don’t even exist.

Another concern is the potential for unnecessary calls, which may impact the overall efficiency and performance of the system, thus increasing costs.

Conclusion

Using plugins in Semantic Kernel provides a powerful way to extend the capabilities of large language models by leveraging external APIs and native code.

This approach allows for efficient handling of tasks that require specialized knowledge or functionality without burdening the model with unnecessary details, while providing seamless integration with the GPT store.
