Welcome to the Soenneker Semantic Kernel Pool for OpenAI repository! This project provides OpenAI-specific registration extensions for `KernelPoolManager`, allowing seamless integration with OpenAI-compatible Large Language Models (LLMs) via Semantic Kernel.
- Introduction
- Features
- Installation
- Usage
- Configuration Options
- Contributing
- License
- Releases
- Contact
The Semantic Kernel is a powerful framework that helps developers build applications using LLMs. This repository extends its capabilities by integrating OpenAI's offerings into the KernelPoolManager. With these extensions, you can manage multiple OpenAI models effectively, optimizing their usage based on your application needs.
- OpenAI Integration: Connect and manage multiple OpenAI models easily.
- Rate Limiting: Control the rate at which requests are sent to OpenAI, ensuring compliance with API usage limits.
- Flexible Options: Configure various settings to suit your specific requirements.
- Semantic Kernel Support: Leverage the full potential of Semantic Kernel while using OpenAI models.
To install the package, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/cuti24/soenneker.semantickernel.pool.openai.git
   ```

2. Navigate to the project directory:

   ```bash
   cd soenneker.semantickernel.pool.openai
   ```

3. Install the necessary dependencies:

   ```bash
   dotnet restore
   ```

4. Build the project:

   ```bash
   dotnet build
   ```
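Alternatively, if the package is published on NuGet, you can likely add it straight to an existing project. The package id below is an assumption based on the namespace used later in this README, so verify it on nuget.org first:

```bash
# Assumed package id (matches the Soenneker.SemanticKernel.Pool.OpenAI namespace)
dotnet add package Soenneker.SemanticKernel.Pool.OpenAI
```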
After installation, you can start using the extensions in your project. Here’s a simple example to get you started:
```csharp
using Soenneker.SemanticKernel.Pool.OpenAI;

// Create the pool and register an OpenAI-backed kernel with your API key
var kernelPoolManager = new KernelPoolManager();
kernelPoolManager.RegisterOpenAI("Your-OpenAI-API-Key");

// Send a prompt through the pool and print the model's reply
var response = await kernelPoolManager.GetResponseAsync("Your prompt here");
Console.WriteLine(response);
```
This example shows how to register your OpenAI API key and get a response from the model. Adjust the prompt as needed for your application.
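Putting the usage and configuration pieces of this README together, a slightly fuller sketch might look like the following. It reuses only the types and members shown elsewhere in this document (`KernelPoolManager`, `KernelPoolOptions`, `Configure`, `RegisterOpenAI`, `GetResponseAsync`), so treat it as illustrative rather than the definitive API surface:

```csharp
using Soenneker.SemanticKernel.Pool.OpenAI;

// Configure the pool before registering a model (see Configuration Options below)
var options = new KernelPoolOptions
{
    ApiKey = "Your-OpenAI-API-Key",
    Model = "text-davinci-003", // example model id from this README
    RateLimit = 60              // requests per minute
};

var kernelPoolManager = new KernelPoolManager();
kernelPoolManager.Configure(options);
kernelPoolManager.RegisterOpenAI(options.ApiKey);

// Reuse the same pooled kernel for several prompts
string[] prompts =
{
    "Summarize Semantic Kernel in one sentence.",
    "List three scenarios where pooling LLM kernels helps."
};

foreach (var prompt in prompts)
{
    var answer = await kernelPoolManager.GetResponseAsync(prompt);
    Console.WriteLine(answer);
}
```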
You can customize the behavior of the `KernelPoolManager` with several options:
- API Key: Your OpenAI API key for authentication.
- Model Selection: Specify which OpenAI model to use.
- Rate Limiting: Set limits on how many requests can be made per minute.
Here’s an example of how to configure these options:
```csharp
var options = new KernelPoolOptions
{
    ApiKey = "Your-OpenAI-API-Key",
    Model = "text-davinci-003", // example model id; substitute a currently supported OpenAI model
    RateLimit = 60              // requests per minute
};

kernelPoolManager.Configure(options);
```
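The `RateLimit` value above is expressed in requests per minute. As a rough illustration of what enforcing such a limit involves (a conceptual sketch only, not `KernelPoolManager`'s actual implementation), a fixed-window limiter could look like this:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Conceptual fixed-window limiter: allows at most `limit` calls per minute.
// Illustrative only; the pool's real rate limiting may be implemented differently.
public sealed class PerMinuteRateLimiter
{
    private readonly int _limit;
    private readonly SemaphoreSlim _gate = new(1, 1);
    private DateTime _windowStart = DateTime.UtcNow;
    private int _count;

    public PerMinuteRateLimiter(int limit) => _limit = limit;

    public async Task WaitAsync(CancellationToken ct = default)
    {
        await _gate.WaitAsync(ct);
        try
        {
            if (DateTime.UtcNow - _windowStart >= TimeSpan.FromMinutes(1))
            {
                // A new one-minute window has started; reset the counter
                _windowStart = DateTime.UtcNow;
                _count = 0;
            }

            if (_count >= _limit)
            {
                // Window is full: wait for it to end, then start a fresh one
                var remaining = TimeSpan.FromMinutes(1) - (DateTime.UtcNow - _windowStart);
                if (remaining > TimeSpan.Zero)
                    await Task.Delay(remaining, ct);

                _windowStart = DateTime.UtcNow;
                _count = 0;
            }

            _count++;
        }
        finally
        {
            _gate.Release();
        }
    }
}
```

A caller would `await limiter.WaitAsync()` before each request so that no more than the configured number of requests is sent in any given minute.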
We welcome contributions! If you want to help improve this project, please follow these steps:
- Fork the repository.
- Create a new branch:

  ```bash
  git checkout -b feature/YourFeature
  ```

- Make your changes and commit them:

  ```bash
  git commit -m "Add your feature"
  ```

- Push to the branch:

  ```bash
  git push origin feature/YourFeature
  ```

- Create a pull request.
Please ensure that your code follows the existing style and includes tests where applicable.
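If the repository includes a test project (an assumption; check the solution layout after cloning), the standard .NET workflow applies:

```bash
# Discovers and runs any test projects in the solution
dotnet test
```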
This project is licensed under the MIT License. See the LICENSE file for details.
For the latest updates, please visit the Releases section, where you can find and download the most recent builds.
If you have any questions or suggestions, feel free to reach out:
- Email: [email protected]
- GitHub: your-github-profile
Thank you for your interest in the Soenneker Semantic Kernel Pool for OpenAI! We look forward to seeing what you build with it.