Prompt engineering with Semantic Kernel
Using prompts effectively with Semantic Kernel
Yash Worlikar · Mon Jan 22 2024 · 5 min read

Introduction
Large language models are powerful tools that can generate natural responses for multiple domains. However, to get the most out of these models, we need to know how to craft effective prompts that can generate the desired response from the model. This is where prompt engineering comes in.
Prompt engineering is the technique of designing inputs that can guide the LLM to produce high-quality outputs. Prompt engineering is important because prompts can influence the behavior and outputs of the LLM in subtle or significant ways.
For example, changing the wording, tone, format, or length of the prompt can greatly affect the style, content and accuracy of the generated text.
Prompt engineering with Semantic Kernel
Prompt engineering can be challenging and time-consuming, especially when you have to work with multiple LLMs and different parameters. Semantic Kernel provides a common interface, allowing you to quickly compare different prompts across multiple AI services.
As prompts are essential for interacting with the AI models, Semantic Kernel offers a unified approach to prompt engineering, streamlining the process and making it easier for users working with diverse language models and parameters.
Creating and invoking a prompt is as simple as:
// Creating a kernel with the Azure OpenAI service
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion("<DeploymentName>", "<AzureEndpoint>", "<API-KEY>");
var kernel = builder.Build();
// Creating a prompt
string review = "The food was delicious";
string promptTemplate = $"Was the following review positive or negative? {review}";
// Invoking the prompt
Console.WriteLine(await kernel.InvokePromptAsync(promptTemplate));
Output
The review "The food was delicious" is generally positive. The use of the word "delicious" indicates a positive experience with the food.
With just a few lines of code we were able to interact with the AI service, but the generated response is too verbose and inconsistent. Every time we call the AI service we may get a different version of the response.
To enhance the consistency and precision of the output, we can make a few adjustments to our prompt:
string promptTemplate = @$"
Was the following review positive or negative? Only reply with a single word. Reply with `Error` if the statement is not valid.
<Example1>
Review: Would recommend
Intent: Positive
</Example1>
<Example2>
Review: Disappointing quality, won't recommend
Intent: Negative
</Example2>
Review: {review}
Intent:";
This gives us an overall better response for our use case. When writing a prompt, we can use the following techniques to improve it:
- Be more specific in your instructions. Tell the model exactly what you want
- Provide proper context
- Use technical terms wherever possible
- Give instructions for handling invalid cases
- Give one or more examples to the model as a reference if necessary
- Provide a proper structure for the response
- Use delimiters like quotes (`''`) or triple quotes (`'''`) to separate different sections, such as instructions and examples
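As a quick illustration, several of these techniques can be combined when building a prompt template in plain C# (a minimal sketch; the helper name and delimiter tags are our own, not part of Semantic Kernel):

```csharp
using System;

// Building a structured prompt that applies the techniques above:
// a specific single-word instruction, a rule for invalid input,
// few-shot examples, and XML-style delimiters separating each section.
string BuildSentimentPrompt(string review) =>
    $"""
    Was the following review positive or negative? Only reply with a single word.
    Reply with `Error` if the statement is not a review.

    <Example1>
    Review: Would recommend
    Intent: Positive
    </Example1>

    <Example2>
    Review: Disappointing quality, won't recommend
    Intent: Negative
    </Example2>

    Review: {review}
    Intent:
    """;

Console.WriteLine(BuildSentimentPrompt("The food was delicious"));
```

Keeping the instruction, examples, and input visually separated like this makes the prompt easier to maintain and extend as requirements change.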
While we can control our outputs for simpler tasks, we must be careful not to over-engineer prompts or try to handle a complex task in a single prompt. Rather, we should break down complex tasks into multiple smaller, simpler tasks to help maintain consistent results throughout the application life cycle.
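For instance, rather than asking one prompt to both summarize a review and classify its sentiment, we can chain two smaller prompts and feed the first result into the second. The sketch below uses hypothetical template helpers of our own; only the commented-out `InvokePromptAsync` calls are Semantic Kernel API:

```csharp
using System;

// Decomposing one complex task (summarize AND classify a review)
// into two smaller prompts, where the first output feeds the second.
string SummarizeTemplate(string review) =>
    $"Summarize the following review in one sentence:\n{review}";

string ClassifyTemplate(string summary) =>
    $"Was the following review summary positive or negative? Only reply with a single word.\n{summary}";

// With a configured kernel, each step would be its own call, for example:
//   var summary = await kernel.InvokePromptAsync(SummarizeTemplate(review));
//   var intent  = await kernel.InvokePromptAsync(ClassifyTemplate(summary.ToString()));
string review = "Great service and tasty food, but a very long wait for a table.";
Console.WriteLine(SummarizeTemplate(review));
```

Each small prompt is easier to test and tune on its own, and a failure in one step is simpler to diagnose than a failure inside one large prompt.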
Semantic Functions
While we can directly use our prompt to generate an output, Semantic Kernel provides us with a much simpler way to effectively use our prompts throughout the application through semantic functions and plugins.
Semantic functions are a way of defining a prompt and its parameters in Semantic Kernel. They allow us to specify the input parameters, the output format, and the configuration of the prompt, such as the language model to use, the temperature, the max tokens, the number of results, etc.
We can define a semantic function using the YAML format. Here's a simple YAML prompt for generating a story:
name: GenerateStory
template: |
  Tell a story about {{$input}} that is {{$length}} sentences long.
template_format: semantic-kernel
description: A function that generates a story about a topic.
input_variables:
  - name: input
    description: The topic of the story.
    is_required: true
  - name: length
    description: The number of sentences in the story.
    is_required: true
output_variable:
  description: The generated story.
execution_settings:
  service1:
    model_id: gpt-4
    temperature: 0.7
    max_tokens: 200
  service2:
    model_id: gpt-3
    temperature: 0.4
  default:
    temperature: 0.6
We can now import this prompt as a semantic function into the kernel using:
var function = kernel.CreateFunctionFromPromptYaml(generateStoryYaml);
Once imported, we can call it through the kernel:
var output = await kernel.InvokeAsync(function, arguments: new()
{
{ "input", "Minerals" },
{ "length", "5" },
});
Console.WriteLine(output);
Output
In the mystical realm of Crystalonia, an ancient land hidden deep within the Earth's core, lived sentient minerals known as the Gemfolk. These remarkable beings possessed vibrant hues and unique crystalline structures that defined their personalities and abilities. The leader, Diamondia, with her dazzling brilliance, governed over the realm and maintained harmony among the Gemfolk. One day, a rare meteorite crashed, bringing with it a new mineral named Lumina, whose radiant glow captivated the Gemfolk and brought a surge of newfound energy to Crystalonia. Lumina's arrival marked a turning point, as the Gemfolk united their powers to protect their enchanted land from the unforeseen challenges that lay ahead.
Conclusion
Prompt engineering is a core aspect of maximizing the potential of generative AI models. Semantic Kernel’s unified approach enables users to efficiently navigate the complexities of prompt design, ensuring that they can utilize the full capabilities of different language models across various domains.
By providing a comprehensive interface, Semantic Kernel contributes to the ongoing refinement of prompt engineering techniques in the evolving landscape of natural language processing.
For more information, you can check out the following blog.