
YAML prompts with Semantic Kernel

Working with YAML prompt templates in Semantic Kernel

Yash Worlikar · Thu May 02 2024 · 5 min read

The Semantic Kernel SDK allows us to set up our prompts in multiple ways, including native functions, OpenAI plugin specs, semantic functions, and YAML files.

Today we will be looking at the YAML format for defining our prompt templates. Instead of scattering parts of a prompt template throughout our codebase, we can use the YAML format to keep our prompts organized in one place.

We can define a YAML file for our prompt template and its related metadata and settings.

Let’s start with a simple console app example in .NET. Before getting started, make sure you have the required Microsoft.SemanticKernel and Microsoft.SemanticKernel.Yaml packages installed:

<PackageReference Include="Microsoft.SemanticKernel" Version="1.10.0" />
<PackageReference Include="Microsoft.SemanticKernel.Yaml" Version="1.10.0" />

Once we have the required packages installed let’s define a yaml prompt template named GenerateJoke.yaml. It’s a simple prompt template that will generate a joke for a given subject.

name: GenerateJoke
template: |
  Tell me a joke about {{$subject}}.
template_format: semantic-kernel
description: A function that generates a joke for a given subject.
input_variables:
  - name: subject
    description: The subject of the joke.
    default: cats
    is_required: true
output_variable:
  description: The generated joke.
execution_settings:
  default:
    temperature: 0.8
    max_tokens: 200
  gpt35Service:
    temperature: 0.5
    max_tokens: 100
  gpt45turbo:
    temperature: 0.7
    max_tokens: 50

Let’s look at its elements one by one:

  • name: The function name that will be used for referencing this template in our codebase.

  • template: The prompt template that will be used to generate prompts and send them to the AI services.

  • template_format: Semantic Kernel currently supports two formats: the default semantic-kernel format and handlebars (a handlebars sketch follows this list).

  • description: A concise function description. This will be sent to the AI service when using function calling.

  • input_variables: The inputs that are used to modify the prompt template.

    • name: The variable name that is passed through the kernel arguments
    • description: A concise variable description that will be sent to the AI service when using function calling
    • default: The default value if no argument is provided during execution
    • is_required: Whether the given argument is required or not
  • output_variable:

    • description: A description of the output generated by the AI service

  • execution_settings: The execution settings that will be used by the AI service for this prompt template

    • default: The default execution settings for this prompt template
      • temperature: The randomness/creativity of the model’s output
      • max_tokens: The maximum number of output tokens the AI service may generate
    • gpt35Service: Custom execution settings for a given service ID. We can add multiple execution settings for different service IDs. If the provided service ID is not found, the default settings are used instead.
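As referenced above, here is a minimal sketch of the same template in the handlebars format (note that handlebars variables are written as {{subject}} rather than {{$subject}}):

name: GenerateJoke
template: |
  Tell me a joke about {{subject}}.
template_format: handlebars
description: A function that generates a joke for a given subject.
input_variables:
  - name: subject
    description: The subject of the joke.
    is_required: true

To load a handlebars template, pass a HandlebarsPromptTemplateFactory (from the Microsoft.SemanticKernel.PromptTemplates.Handlebars package) as the second argument to CreateFunctionFromPromptYaml.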

To use this template, let’s build a kernel with an AI service. Here we are using an OpenAI model, but other models work too.

var builder = Kernel.CreateBuilder();
builder
    .AddOpenAIChatCompletion("<Model-Name>", "<API-Key>", serviceId: "gpt35Service");
//or use the Azure OpenAI service
//  .AddAzureOpenAIChatCompletion("<Deployment-Name>", "<EndPoint>", "<API-Key>", serviceId: "gpt35Service");

Kernel kernel = builder.Build();
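If we register multiple services with distinct service IDs, each one picks up its matching entry under execution_settings. A minimal sketch, assuming two OpenAI models (the model names are placeholders):

var builder = Kernel.CreateBuilder();
// Each service ID below matches a key under execution_settings in the YAML
builder.AddOpenAIChatCompletion("gpt-3.5-turbo", "<API-Key>", serviceId: "gpt35Service");
builder.AddOpenAIChatCompletion("gpt-4-turbo", "<API-Key>", serviceId: "gpt45turbo");
Kernel kernel = builder.Build();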

Now we will read the YAML file and convert it into a KernelFunction that can be invoked through our AI services.

string promptYaml = File.ReadAllText("Path To Your prompt folder\\GenerateJoke.yaml");
KernelFunction jokeFunc = kernel.CreateFunctionFromPromptYaml(promptYaml);
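If you would rather not depend on a file path at run time, one alternative is to embed the YAML file in the assembly. A sketch, assuming the file is marked as an embedded resource (the resource name MyApp.Prompts.GenerateJoke.yaml is hypothetical and depends on your project’s root namespace and folder layout):

using System.Reflection;

// Resource names follow the pattern <RootNamespace>.<Folder>.<FileName>
var assembly = Assembly.GetExecutingAssembly();
using Stream stream = assembly.GetManifestResourceStream("MyApp.Prompts.GenerateJoke.yaml")!;
using var reader = new StreamReader(stream);
string promptYaml = reader.ReadToEnd();
KernelFunction jokeFunc = kernel.CreateFunctionFromPromptYaml(promptYaml);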

Finally, we will call our function and print the generated response:

KernelArguments kernelArgs = new KernelArguments()
{
    {"subject","tiger"}
};
//invoke the function with the kernel and provide kernelArguments 
FunctionResult results = await jokeFunc.InvokeAsync(kernel, kernelArgs);
string response = results.GetValue<string>();
Console.WriteLine(response);

Since we have used gpt35Service as the service ID for our AI service, the execution settings assigned to gpt35Service will be used instead of the default ones.

Output

Why don't tigers like fast food?
Because they can never catch it!
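For longer responses, the same function also supports streaming, so tokens can be printed as they arrive. A minimal sketch using the kernel and arguments from above:

// Stream the response chunk by chunk instead of waiting for the full result
await foreach (var chunk in jokeFunc.InvokeStreamingAsync(kernel, kernelArgs))
{
    Console.Write(chunk);
}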

Here’s the entire code in one block.

using Microsoft.SemanticKernel;

internal class Program
{
    static async Task Main(string[] args)
    {
        //Build the kernel with an AI Chat Completion service
        var builder = Kernel.CreateBuilder();
        builder.AddOpenAIChatCompletion("<Model-Name>", "<API-Key>", serviceId: "gpt35Service");
        Kernel kernel = builder.Build();

        //Read the yaml template and convert it into a kernel function
        string promptYaml = File.ReadAllText("Path To Your prompt folder\\GenerateJoke.yaml");
        KernelFunction jokeFunc = kernel.CreateFunctionFromPromptYaml(promptYaml);

        //Pass in the arguments and call the AI service
        KernelArguments kernelArgs = new KernelArguments()
        {
            {"subject", "tiger"}
        };
        FunctionResult results = await jokeFunc.InvokeAsync(kernel, kernelArgs);
        string response = results.GetValue<string>();
        Console.WriteLine(response);
    }
}

Since this produces a regular kernel function, we can use it in more complex workflows like function calling and agents, just like any other native function.
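For instance, here is a hedged sketch of exposing the YAML function to the model for automatic function calling (the plugin name JokePlugin is arbitrary; OpenAIPromptExecutionSettings comes from the Microsoft.SemanticKernel.Connectors.OpenAI namespace):

// Register the YAML-defined function as part of a plugin so the model can call it
kernel.Plugins.AddFromFunctions("JokePlugin", new[] { jokeFunc });

// Let the model decide when to invoke our function
OpenAIPromptExecutionSettings settings = new()
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

FunctionResult result = await kernel.InvokePromptAsync(
    "I could use a laugh about dogs.",
    new KernelArguments(settings));
Console.WriteLine(result);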

We can even nest kernel functions inside our prompt (provided they are registered in the kernel) to handle more complex business logic, as sketched below. This way we can define flows/plans to better orchestrate the AI services in our application.
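As a sketch, assuming a TimePlugin with a Date function is already registered in the kernel (the plugin and function names are illustrative), a template can call it inline using the {{PluginName.FunctionName}} syntax:

name: TopicalJoke
template: |
  Today is {{TimePlugin.Date}}.
  Tell me a joke about {{$subject}} that mentions today's date.
template_format: semantic-kernel
description: A function that generates a date-aware joke for a given subject.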

Using YAML prompts not only streamlines the development process but also improves the maintainability of your applications, making them a viable alternative to prompts defined inline in code.
