Getting Started with OpenAI and Semantic Kernel
Introduction
This blog explores OpenAI and Semantic Kernel, guiding you through the setup process and running a sample app. By the end of this getting-started guide, you will be able to experiment with OpenAI's features using Semantic Kernel. Before diving into the hands-on part, let's cover some background.
OpenAI
OpenAI provides tools, APIs, and platforms designed to help developers integrate AI into applications and services. It allows developers to leverage AI capabilities such as natural language processing, image generation, semantic search, and speech recognition.
Some of the key features of OpenAI:
1. OpenAI API: Using the OpenAI API, you can add AI features to websites, apps, and services: GPT models for human-like text and code, and DALL·E for image generation.
2. Fine-tuning: Fine-tune models on your custom datasets to fit your business needs.
3. Codex: Codex allows you to generate code from simple natural-language prompts, helping to speed up coding and assist with debugging. It also powers GitHub Copilot, an AI tool for developers.
4. Platform integrations: OpenAI provides SDKs and libraries for languages such as Python and JavaScript. Microsoft has also announced an official OpenAI library for .NET: https://devblogs.microsoft.com/dotnet/openai-dotnet-library/
ref: https://platform.openai.com/docs/overview
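As a quick illustration of the API point above, here is a minimal sketch using the official OpenAI .NET library (the `OpenAI` NuGet package from the linked announcement). The model name and prompt are placeholders, and the snippet assumes an `OPENAI_API_KEY` environment variable is set:

```csharp
using System;
using OpenAI.Chat;

// Minimal chat call with the official OpenAI .NET library.
// Model name and prompt are placeholders; swap in what you need.
ChatClient client = new(
    model: "gpt-3.5-turbo",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ChatCompletion completion = client.CompleteChat("Write a one-line greeting.");
Console.WriteLine(completion.Content[0].Text);
```

This is the plain-API route; the rest of this post uses Semantic Kernel on top, which adds plugins and orchestration.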
Semantic Kernel
1. Semantic Kernel is an SDK that connects large language models (LLMs) from providers such as OpenAI, Azure OpenAI, and Hugging Face to conventional programming languages such as C#, Python, and Java.
2. It allows you to create and use plugins that easily integrate with your existing code.
3. Semantic Kernel automatically orchestrates these plugins using AI. Its ability to chain multiple plugins together enables AI agents to carry out more complex tasks.
4. It provides a simple interface for using AI services like chatbots, text-to-image generation, audio-to-text conversion, and memory storage.
5. Enterprises prefer Semantic Kernel for its flexibility, modularity, and observability. It offers robust security and telemetry support and allows hooks and filters, making it ideal for building responsible, scalable AI solutions. It is future-proof and easily connects your code to the latest AI models.
ref: https://github.com/microsoft/semantic-kernel
Create / Set Up an OpenAI Account
Following this guide requires an OpenAI account and an API key. If you already have these, you can skip this section. To set up an account, go to https://platform.openai.com and click Sign Up. Once your account is set up, you can access your profile dashboard. From there, navigate to the API Keys section in your account settings to generate an API key. This key will allow you to interact with and integrate OpenAI’s services into your applications.
You need to set your API key as an environment variable on your machine so it can be referenced from your code later. Note that the PowerShell assignment below applies only to the current session:
$Env:OPENAI_API_KEY = "YOUR_KEY"
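Because the sample code later reads the key at user scope (`EnvironmentVariableTarget.User`), you may prefer to persist it there instead of setting it per session. A Windows-only sketch; open a new terminal afterwards so the change is picked up:

```shell
# Persist the key at user scope on Windows; new terminals will see it.
setx OPENAI_API_KEY "YOUR_KEY"
```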
1. In Visual Studio 2022, create a new Console App project (File => New => Project => Console App).
2. Add dependencies
Tools => NuGet Package Manager => Package Manager Console, then run the following:
Install-Package Microsoft.Extensions.DependencyInjection
Install-Package Microsoft.Extensions.Logging
Install-Package Microsoft.Extensions.Logging.Console
Install-Package Microsoft.SemanticKernel
3. In Program.cs, add the OpenAI model and API key
OpenAI offers a wide range of models with varying capabilities and pricing; you can select a model based on your needs and budget. A complete list of models and their prices is available at https://openai.com/api/pricing/. For this demo, I am using gpt-3.5-turbo.
string _openAIModel = "gpt-3.5-turbo";
string _apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY", EnvironmentVariableTarget.User);
4. Create the Semantic Kernel and add OpenAI/enterprise services
In this section, we create the kernel builder and inject the enterprise logging services using dependency injection, which is standard .NET. The logging provides extra detail on how the AI interacts with the code. We also add the OpenAI chat completion service and build the kernel.
var builder = Kernel.CreateBuilder();
builder.Services.AddLogging(b => b.AddConsole().SetMinimumLevel(LogLevel.Trace));
Kernel kernel = builder
    .AddOpenAIChatCompletion(_openAIModel, _apiKey)
    .Build();
5. Add chat completion/history
Since we need to manage conversation prompts, it is important to store the history. This is where ChatHistory becomes helpful in maintaining the interaction state.
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var chat = new ChatHistory();
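Before entering the chat loop, you can optionally seed the history with a system message to steer the assistant's behavior. The wording below is illustrative, not part of the sample repository:

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Hypothetical: seed a ChatHistory with a system message so the model
// answers in a consistent persona before any user turns are added.
var chat = new ChatHistory();
chat.AddSystemMessage(
    "You are a helpful assistant that answers questions about conference speakers.");
chat.AddUserMessage("Who is speaker 2?");
```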
6. Add a plugin
A plugin allows the AI agent to run your code and retrieve information. The execution settings enable automatic function calling, so the model can invoke the plugin's functions on its own.
kernel.ImportPluginFromType<SpeakerSearchPlugin>();
OpenAIPromptExecutionSettings settings = new() { ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions };
while (true)
{
    Console.Write("Q: ");
    chat.AddUserMessage(Console.ReadLine());
    var r = await chatService.GetChatMessageContentAsync(chat, settings, kernel);
    Console.WriteLine(r);
    chat.Add(r);
}
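The SpeakerSearchPlugin itself lives in the linked repository. As a rough idea of its shape, here is a hypothetical sketch; the class members, attribute descriptions, and speaker data are illustrative, not the repository's actual code:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using Microsoft.SemanticKernel;

// Hypothetical sketch of a speaker-search plugin. [KernelFunction] marks the
// methods the AI agent is allowed to call; [Description] tells the model
// what each function and parameter does.
public sealed class SpeakerSearchPlugin
{
    private static readonly List<Speaker> Speakers = new()
    {
        new Speaker(1, "Oprah Winfrey", "television host"),
        new Speaker(2, "Tony Robbins", "motivational speaker"),
    };

    [KernelFunction, Description("Returns all available speakers.")]
    public IEnumerable<Speaker> FindSpeakers() => Speakers;

    [KernelFunction, Description("Finds a speaker by numeric id.")]
    public Speaker? GetSpeakerById([Description("The speaker id")] int id) =>
        Speakers.FirstOrDefault(s => s.Id == id);

    [KernelFunction, Description("Searches speakers by name or description.")]
    public IEnumerable<Speaker> Search([Description("A search term")] string term) =>
        Speakers.Where(s =>
            s.Name.Contains(term, StringComparison.OrdinalIgnoreCase) ||
            s.Description.Contains(term, StringComparison.OrdinalIgnoreCase));
}

public sealed record Speaker(int Id, string Name, string Description);
```

With a plugin of this shape imported, prompts like "id = 2" or "motivational speaker" give the model enough signal to pick and invoke the matching function.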
You can clone the complete code from the repository:
git clone https://github.com/vizzontech/Getting_Started_OpenAI_And_SemanticKernel.git
Once you see the screen with the prompt "Q:", you can start interacting with the speaker data by trying out the following queries:
- Find speakers: This should return a list of all available speakers.
- Tony Robbins: This should return the details for the speaker "Tony Robbins."
- id = 2 or id is 2: This should return the speaker with ID 2, "Tony Robbins."
- Search for "motivational speaker" or "television host": This should return speakers matching those descriptions.
If you set a breakpoint in the "SpeakerSearchPlugin," you will see how Semantic Kernel intelligently executes different methods based on user input through the Semantic Kernel planner.
Core concepts in Semantic Kernel
Three core concepts implemented in the sample app:
- Services: Enterprise services such as telemetry, logging, and custom code.
- Plugins: A piece of code that the AI agent executes, e.g. a search service, ChatGPT, Bing, custom code, or a call to a database service.
- Planner: The AI agent calls/uses plugins through the planner using "function calling."
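A compressed sketch of how the three concepts meet in code. The plugin class is stubbed here so the snippet stands alone; in the sample app it is the full SpeakerSearchPlugin, and the API key is a placeholder:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Stub plugin so the sketch compiles on its own; illustrative only.
public sealed class SpeakerSearchPlugin
{
    [KernelFunction] public string FindSpeakers() => "Tony Robbins";
}

public static class CoreConceptsSketch
{
    public static Kernel Build()
    {
        var builder = Kernel.CreateBuilder();
        builder.Services.AddLogging(b => b.AddConsole());          // Service: logging
        Kernel kernel = builder
            .AddOpenAIChatCompletion("gpt-3.5-turbo", "<api-key>") // Service: AI model
            .Build();

        kernel.ImportPluginFromType<SpeakerSearchPlugin>();        // Plugin: your code

        // Planner: with auto function calling enabled, the model decides
        // which imported plugin functions to invoke to answer a prompt.
        var settings = new OpenAIPromptExecutionSettings
        {
            ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
        };
        return kernel;
    }
}
```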