
Using OpenAI vs Azure OpenAI with Semantic Kernel. What's the difference?

Understanding the differences between OpenAI and Azure OpenAI and how to use them with Semantic Kernel

Yash Worlikar Wed Dec 11 2024 5 min read

Semantic Kernel is an open-source .NET framework for integrating AI models into your applications. It provides a simple, extensible, and easy-to-use API for calling AI models, and it supports multiple AI platforms, including OpenAI and Azure OpenAI, right out of the box. So which one is right for you? In this blog, we'll look at the differences between OpenAI and Azure OpenAI models and how to use them with Semantic Kernel.

OpenAI Overview

OpenAI has earned a reputation for popularizing generative AI through its advanced language models such as GPT-3.5 and GPT-4, which excel at generating human-like text. Its flagship product, ChatGPT, has significantly contributed to this widespread recognition. Beyond that, OpenAI offers an extensive array of models and tools for diverse AI applications, accessible via APIs, making it a go-to choice for developers and researchers.

Setting up an OpenAI account is relatively easy. If you already have an OpenAI account, you can start by creating a project and obtaining an API key. If you don’t have an account, you can sign up for one on the OpenAI website.

By default, new accounts come with $5 of free credit, after which you need to set up a payment method.

What is Azure OpenAI?

Azure OpenAI is a managed service for deploying and managing OpenAI models within Azure's robust cloud infrastructure. It provides a similar set of models and tools as OpenAI, tailored specifically for enterprise use cases. This integration offers enhanced security, compliance, and enterprise-grade features, making it ideal for business applications.

Because of this, setting up an Azure OpenAI resource isn't as simple as getting started with OpenAI. First of all, you need an Azure account. Next, you need to fill out an access request form for Azure OpenAI using your business email. Only after being granted access can you start creating Azure OpenAI resources and deploying OpenAI models on Azure.

Selecting a model

When integrating these services into your application, both OpenAI and Azure OpenAI provide access to the latest cutting-edge AI models. However, they cater to different needs: OpenAI is focused on the research and development of AI models, while Azure OpenAI emphasizes a managed service approach for deploying AI models in production environments.

When choosing a model for your application, there are some key factors to consider depending on which AI API service you're using:

OpenAI

  • OpenAI provides a diverse range of models, including text generation and image generation models, and it is typically where the latest features are released first. For the currently available models and pricing, refer to OpenAI API pricing.
  • OpenAI hosts a single global service without explicit regional deployment options. However, be aware that you will be rate-limited according to your account's usage tier, as per OpenAI API rate limits.

Azure OpenAI

  • As previously mentioned, Azure OpenAI provides enterprise-grade security and compliance features. Although it has limited model availability compared to OpenAI’s offerings, it does offer region-specific model deployment options. Before creating an Azure OpenAI resource, make sure to check the Azure OpenAI Model Availability for the region you’re targeting.
  • In terms of pricing, Azure OpenAI offers a pay-per-use model similar to OpenAI. However, Azure also offers provisioned deployments, which provide dedicated capacity and minimal latency suitable for large-scale applications. These deployments are charged on an hourly basis, and the pricing depends on the selected model. For more details, check out Azure Provisioned throughput.
  • As a managed service on Azure, it offers seamless integration with other Azure services like Azure Cognitive Services, Azure Machine Learning, and Azure Cosmos DB, among others.

Using OpenAI and Azure OpenAI with Semantic Kernel

Setting up these services with Semantic Kernel is relatively straightforward and can be done in a few steps:

For example, let’s add chat completion models to our kernel.

IKernelBuilder builder = Kernel.CreateBuilder();

// Add OpenAI chat completion to our kernel
builder.AddOpenAIChatCompletion(modelId: "<modelId>", apiKey: "<apiKey>", serviceId: "OpenAI");

// Add Azure OpenAI chat completion to our kernel
builder.AddAzureOpenAIChatCompletion(deploymentName: "<deploymentName>", endpoint: "<endpoint>", apiKey: "<apiKey>", serviceId: "AzureOpenAI");

// Build our kernel
Kernel kernel = builder.Build();
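
Both registration methods also support overloads for customizing the behavior of your services, like passing your own HttpClient or a service-specific client. As a minimal sketch (the custom HttpClient and its 120-second timeout are illustrative assumptions, not something the basic setup requires), the OpenAI registration above could also be written like this:

// Illustrative only: supply a custom HttpClient, for example to control the request timeout
var customHttpClient = new HttpClient { Timeout = TimeSpan.FromSeconds(120) };

builder.AddOpenAIChatCompletion(
    modelId: "<modelId>",
    apiKey: "<apiKey>",
    serviceId: "OpenAI",
    httpClient: customHttpClient);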

The serviceId field here is optional. You can use it to retrieve a specific chat completion service registered with your kernel. Similarly, we could also add image generation and audio models to our kernel using the respective methods. We can now call our chat completion services as demonstrated below:

var openAIClient = kernel.GetRequiredService<IChatCompletionService>("OpenAI");
var azureOpenAIClient = kernel.GetRequiredService<IChatCompletionService>("AzureOpenAI");

// Using OpenAI client to get chat message content
var openAIResponse = await openAIClient.GetChatMessageContentAsync("Hi");
Console.WriteLine(openAIResponse.ToString());

// Using Azure OpenAI client to get chat message content
var azureOpenAIResponse = await azureOpenAIClient.GetChatMessageContentAsync("Hello");
Console.WriteLine(azureOpenAIResponse.ToString());

In this example, I am using the non-streaming method, but you can also use GetStreamingChatMessageContentsAsync for streaming responses.
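
For example, a streaming call might look roughly like this (a minimal sketch; the prompt is arbitrary):

// Stream the response chunk by chunk instead of waiting for the full message
await foreach (var chunk in openAIClient.GetStreamingChatMessageContentsAsync("Tell me a short joke"))
{
    Console.Write(chunk.Content);
}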

Additionally, you can further customize the chat completion services with service-specific settings using OpenAIPromptExecutionSettings and AzureOpenAIPromptExecutionSettings for the respective services.
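
Here's a minimal sketch of what that might look like (the temperature and token limit below are arbitrary illustrative values):

// Illustrative settings; adjust the values to suit your use case
var settings = new OpenAIPromptExecutionSettings
{
    Temperature = 0.7,
    MaxTokens = 256
};

var response = await openAIClient.GetChatMessageContentAsync(
    "Summarize Semantic Kernel in one sentence.",
    settings,
    kernel);

Console.WriteLine(response.ToString());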

Conclusion

Choosing between OpenAI and Azure OpenAI depends largely on your application’s specific requirements. If you’re looking for the flexibility of direct access to cutting-edge AI models with a simpler setup, OpenAI might be the right choice. On the other hand, if your application demands enterprise-grade features like compliance, enhanced security, and integration with the Azure ecosystem, Azure OpenAI is a better fit.

Both platforms can be seamlessly integrated into applications using Semantic Kernel, making it easier to leverage the strengths of either service. By utilizing Semantic Kernel’s extensible API, developers can streamline the integration process, enabling the use of advanced AI models for text generation, image generation, and more.
