SoulCalc.OpenAI.API.Moderations 0.1.0

Install via the .NET CLI:

dotnet add package SoulCalc.OpenAI.API.Moderations --version 0.1.0

Or via the Package Manager Console in Visual Studio (uses the NuGet module's version of Install-Package):

NuGet\Install-Package SoulCalc.OpenAI.API.Moderations -Version 0.1.0

For projects that support PackageReference, copy this XML node into the project file:

<PackageReference Include="SoulCalc.OpenAI.API.Moderations" Version="0.1.0" />

With Paket:

paket add SoulCalc.OpenAI.API.Moderations --version 0.1.0

In F# Interactive or Polyglot Notebooks:

#r "nuget: SoulCalc.OpenAI.API.Moderations, 0.1.0"

As a Cake Addin or Cake Tool:

#addin nuget:?package=SoulCalc.OpenAI.API.Moderations&version=0.1.0
#tool nuget:?package=SoulCalc.OpenAI.API.Moderations&version=0.1.0

OpenAI API .NET/C# Integration

This project is a .NET/C# library that provides a lightweight interface for working with OpenAI's API.

The project is organized by API category, mirroring the categories OpenAI provides. It was designed this way so that one can pick the specific endpoints needed and simply discard the rest.

We developed this project as a starting/reference point for other projects that want to use OpenAI's API.

* Please note that this project and its developers are NOT affiliated with OpenAI in any way.


Getting Started

You can find the source code and documentation for this project on GitHub.

Quick Start

Here is a quick look at sending a ChatCompletion request.

// Initialize with the API key before anything else; this only needs to be called once until DeInit().
ApiConfig.Initialize(<api-key>);

// Create a reusable request handler.
ChatCompletionRequestHandler completionRequestHandler = new ChatCompletionRequestHandler();

// Create and configure the request data.
var requestData = CreateChatCompletionRequestData.Create(model, messages);
// ...

// Send the request in async/await fashion...
var response = await completionRequestHandler.CreateChatCompletionAsync(requestData);

// ...or in callback fashion.
completionRequestHandler.CreateChatCompletion(requestData, (response) => { ... });

* Details can be found in the Examples section.

Prerequisites

  • Visual Studio (tested in VS2022, but previous versions should work)
  • .NET Standard 2.0 or later
  • OpenAI API Key

Installation & Getting Started

  1. Clone the repository:

  2. Open the solution file in Visual Studio (or other IDE).

  3. Build the solution. This will restore the required NuGet packages and build all the projects.

If you don't need access to all of OpenAI's API endpoints, building only the desired projects under API/ is enough. The project is structured with this requirement in mind, to keep the footprint as small as possible.
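For example, with the .NET CLI this might look like the following (the project paths are illustrative, not taken from the repository; adjust them to the actual layout under API/):

```shell
# Build only the shared Core project plus the one endpoint project you need.
# Paths are hypothetical examples; substitute the actual .csproj files.
dotnet build API/Core/Core.csproj
dotnet build API/ChatCompletion/ChatCompletion.csproj
```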

For Unit Tests:

Create a new file named testsettings.json in the root directory of the project, and paste in the following JSON code:

{
    "OpenAI": {
        "ApiKey": "<api-key>"
    }
}
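For reference, a test can read the key from this file roughly as shown below, assuming the Microsoft.Extensions.Configuration.Json package is available. This is a sketch of one common approach; the actual test setup in this repository may differ.

```csharp
using System;
using Microsoft.Extensions.Configuration;

// Load testsettings.json from the test output directory and read the nested key.
// "OpenAI:ApiKey" uses the standard colon-delimited section:key syntax.
IConfigurationRoot config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("testsettings.json")
    .Build();

string apiKey = config["OpenAI:ApiKey"];
```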

Note that you should replace <api-key> with your actual OpenAI API key.

Project Structure

The main part of the solution is organized into the following projects:

  • API/Core: Base project for all projects under API/.

  • API/<others>: For requests handling against OpenAI's endpoints.

  • Tests: Contains unit tests.

Documentation

Most classes and methods are documented with XML in the code.

A hosted web version of the doc will be added here when available.

Usage

To use the project, simply reference the appropriate project for the OpenAI endpoints you want to use, and call the relevant methods to interact with the API.

Initialization

Before using anything, make sure to call the static method ApiConfig.Initialize(<openai-api-key>) once. This creates a default ApiConfig for all requests.

Note that you need to replace <openai-api-key> with your actual API key.

General Logic

  • For every project that interacts directly with the API, a RequestHandler is exposed. For example, ChatCompletionRequestHandler is used to interact with OpenAI's "ChatCompletion" endpoints. Simply create an instance of the handler.

  • The handler provides methods for all of OpenAI's endpoints, named exactly after the endpoints themselves (plus descriptive words such as Async), for instance CreateChatCompletionAsync.

  • Each such method comes in two versions for retrieving the response: one uses async/await, the other uses callbacks. Some endpoints also support streamed data; currently, the streamed variants can only be used with callbacks. See the Limitations section.

  • The response data is wrapped in an IApiResponse<T>, where T is the type of the response data, depending on the endpoint. IApiResponse has a few properties for checking whether the result is good to use. Its RawResponse property contains the raw content of the response; it also provides IsSuccess and StatusCode properties, which reflect the raw HTTP status that came back with the API request. The Result property, of type T, is the deserialized response data, provided that the request succeeded; in case of a failed response, Error contains the resulting information.
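Based on the properties described above, the response wrapper can be pictured roughly as follows. This is a sketch inferred from the description, not the library's actual source; member types (e.g., StatusCode) may differ in the real interface.

```csharp
using System.Net;

// Rough sketch of the response wrapper described above (illustrative only).
public interface IApiResponse<T>
{
    bool IsSuccess { get; }            // true when the request succeeded
    HttpStatusCode StatusCode { get; } // raw HTTP status of the response
    string RawResponse { get; }        // raw response body as a string
    T Result { get; }                  // deserialized response data (on success)
    IApiError Error { get; }           // error details (on failure)
}
```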

tl;dr:

Except Core, every project under API in the solution is for interaction with some OpenAI API. They all follow the same pattern:

Create an instance of a reusable request handler → use either the async/await or the callback methods to interact with the API → get a response deserialized into an IApiResponse<T>.

Examples

Create CreateChatCompletion request data

The request data can be created either with new CreateChatCompletionRequestData(), setting each property, or with the Create factory method that comes with all request data classes (except fine-tunes).

The provided Create factory methods make data handling easier, but they are currently intended for creating request data quickly, i.e., not all properties of the data class can be set via the factory method.

Using new:


var requestData = new CreateChatCompletionRequestData()
{
    Model = model, // model is the desired model to use.
    Messages = new List<Message>()
    {
        Message.Create(MessageAuthorRole.User, "Who won the world series in 2020?"),
        Message.Create(MessageAuthorRole.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
        Message.Create(MessageAuthorRole.User, "Where was it played?"),
    },
    N = 1,
    MaxTokens = 100,
}; 

Or the factory method:

var requestData = CreateChatCompletionRequestData.Create(
	model: model,
	messages: new List<Message>() {
		Message.Create(MessageAuthorRole.User, "Who won the world series in 2020?"),
		Message.Create(MessageAuthorRole.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
		Message.Create(MessageAuthorRole.User, "Where was it played?"),
	},
	n: 1,
	maxTokens: 100
);

where model is the desired model to use.

A list of relevant models is defined as constants in each project, for example OpenAIChatCompletionsModel.GPT_3_5_TURBO.

As seen in the source:

public static class OpenAIChatCompletionsModel
{
	public const string GPT_3_5_TURBO = "gpt-3.5-turbo";
	public const string GPT_3_5_TURBO_0301 = "gpt-3.5-turbo-0301";
	public const string GPT_4 = "gpt-4";
	public const string GPT_4_0314 = "gpt-4-0314";

	// ...
}

If you have a fine-tuned model, simply set the model to yours.

Send a CreateChatCompletion request

Here's an example usage of the CreateChatCompletionAsync method to generate chat responses using the OpenAI API:

First, create the request data as in the previous example.

var requestData = CreateChatCompletionRequestData.Create(
    // ... set desired options for the request data ...
);

// ... more request configuration ...

Second, create a request handler ChatCompletionRequestHandler

// Make sure ApiConfig.Initialize(<api-key>) is called before this,
// or have an alternative ApiConfig instance ready for the constructor to take.

ChatCompletionRequestHandler completionRequestHandler = new ChatCompletionRequestHandler();

* The request handler can be reused.

Finally, send the request. This can be done in two fashions, one with the async/await, or callback.

Using the async/await:

IApiResponse<CreateChatCompletionResponseData> response = await completionRequestHandler.CreateChatCompletionAsync(requestData);

Or, using callback:

completionRequestHandler.CreateChatCompletion(
    requestData,
    onResponse: response =>
    {
        // the response is of type IApiResponse<CreateChatCompletionResponseData>.
    });

Check the result:

Regardless of whether you use async/await or a callback, the resulting response is of type IApiResponse<T>, where T is the resulting data type depending on the endpoint used. It can be read as follows:

// check whether the response succeeded via response.IsSuccess
if (response.IsSuccess) {
    // request is a success
    // read the Result property from the IApiResponse<T>.

    CreateChatCompletionResponseData data = response.Result;

    // ... read the response data ...
}
else {
    // request failed.
    // read the Error from the IApiResponse<T>

    IApiError error = response.Error;

    // ... read the error ...

    // The child property, ErrorContent, contains the deserialized errors from the API.
    // If an exception occurred while handling the result,
    // the message will contain the exception message;
    // however, RawResponse would still contain the original API response as a string.

    Console.WriteLine(error.ErrorContent.Message);
}

Some API endpoints allow a streamed option, using two callbacks: one invoked upon receiving each block, and one upon completion, as a signal to stop, regardless of success.

completionRequestHandler.CreateChatCompletionStreamed(
    requestData,
    onBlockResponse: response =>
    {
        // this is called whenever a new line of data comes in.
        // it can be handled the same way as in the examples above
    },
    onComplete: isSuccess => {
        // marks an end of the stream
    }
);

The onComplete callback is called upon completion of the entire data stream, marking its end. The isSuccess parameter indicates whether the stream had a successful stop (true) or ended without a formal completion (false).

It is only considered a success if [DONE] was received from OpenAI's API.
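For instance, the raw blocks could be collected and the completion callback bridged back to async/await with a TaskCompletionSource. This is a sketch built on the handler and request data from the examples above; only IsSuccess and RawResponse from the documented response wrapper are used, since the exact shape of each streamed block depends on the endpoint's response type.

```csharp
// Sketch: collect raw streamed blocks and await the end of the stream.
// Assumes ApiConfig.Initialize(...) was called and requestData is prepared.
var rawBlocks = new List<string>();
var done = new TaskCompletionSource<bool>();

completionRequestHandler.CreateChatCompletionStreamed(
    requestData,
    onBlockResponse: response =>
    {
        // RawResponse holds the raw content of each streamed block.
        if (response.IsSuccess)
            rawBlocks.Add(response.RawResponse);
    },
    onComplete: isSuccess => done.SetResult(isSuccess));

// true only if [DONE] was received from the API
bool streamSucceeded = await done.Task;
```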

Special Handlings

The "string or array" in OpenAI API

A few endpoints in the API accept certain parameters as either a single string or an array of strings.

Namely, stop for CreateChatCompletion and CreateCompletion, and prompt for CreateCompletion.

These are handled using properties of type object rather than strongly typed properties. However, setting them without the factory methods, or changing them after creation, must go through the methods designed for them, as they cannot be set directly from outside, such as:

CreateCompletionRequestData requestData = CreateCompletionRequestData.Create(...);

requestData.WithPrompt(newPrompt);
// and
requestData.WithStop(newStop);

These methods can be chained: requestData.WithPrompt(prompt).WithStop(stops);

These methods provide two overloads, one taking params string[] and one taking List<string>. When used with List<string>, the list passed in as the parameter is assigned by reference, so you can hold on to the original reference and change its contents.

The factory methods are provided in similar fashion.

However, keep in mind that the reference is replaced when these methods are called again.
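To illustrate the by-reference behavior described above (a sketch: WithStop is the documented method, while requestData and the list contents are hypothetical):

```csharp
// Because WithStop(List<string>) keeps the list by reference,
// later edits to the same list are visible to the request data.
var stops = new List<string> { "\n" };
requestData.WithStop(stops);   // the list is held by reference

stops.Add("END");              // mutating the original list also changes
                               // the stop sequences on requestData

requestData.WithStop(new List<string> { "STOP" }); // replaces the reference;
                                                   // 'stops' is no longer linked
```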

Limitations

  • Currently, there is no support for Azure APIs, but it is planned.
  • Currently, there is no request data validation; it will be added gradually.
  • The few endpoints that support a streamed option are not implemented in async fashion; they use callbacks. This is done on purpose, to avoid language features that are somewhat "too new" (such as C# 8.0). However, this may change in the future.
  • Interaction with the Fine-Tunes endpoints is not well tested.

As for Unity projects: although Unity started supporting C# 8.0 at some point, its documentation says that IAsyncEnumerable<T>, the feature commonly used for streamed data from the API, is not supported.


Contributing

We welcome contributions of code to the OpenAI API Integration Project.

Please make sure that your PR includes a clear description of the problem you are trying to solve, and the changes you have made to the code.

You can find the source code and documentation for this project on GitHub (in case you are not already here)

Reporting Issues

If you encounter a bug or have a feature request, please feel free to reach out to us via GitHub Issues.

Please include as much detail as possible about the problem or feature, including any relevant error messages or screenshots.

Helping to Improve the Project

You can also help improve the OpenAI API Integration Project by:

  • Reviewing and testing pull requests from other contributors.
  • Providing feedback and suggestions for improving the project.
  • Writing documentation or examples to help others use the project.

Licensing

The project is licensed under the MIT License. See the LICENSE file for more information.

Disclaimer

  • This project is NOT affiliated with or endorsed by OpenAI.
  • The developers, group, or company that developed and maintain this project are NOT affiliated with OpenAI.
  • In other words: this project is not in any way "official".