HillPhelmuth.SemanticKernel.LlmAsJudgeEvals 1.0.4

.NET CLI
dotnet add package HillPhelmuth.SemanticKernel.LlmAsJudgeEvals --version 1.0.4

Package Manager (Visual Studio Package Manager Console)
NuGet\Install-Package HillPhelmuth.SemanticKernel.LlmAsJudgeEvals -Version 1.0.4

PackageReference (copy this XML node into the project file)
<PackageReference Include="HillPhelmuth.SemanticKernel.LlmAsJudgeEvals" Version="1.0.4" />

Paket CLI
paket add HillPhelmuth.SemanticKernel.LlmAsJudgeEvals --version 1.0.4

Script & Interactive (F# Interactive and Polyglot Notebooks)
#r "nuget: HillPhelmuth.SemanticKernel.LlmAsJudgeEvals, 1.0.4"

Cake
// Install HillPhelmuth.SemanticKernel.LlmAsJudgeEvals as a Cake Addin
#addin nuget:?package=HillPhelmuth.SemanticKernel.LlmAsJudgeEvals&version=1.0.4
// Install HillPhelmuth.SemanticKernel.LlmAsJudgeEvals as a Cake Tool
#tool nuget:?package=HillPhelmuth.SemanticKernel.LlmAsJudgeEvals&version=1.0.4

LlmAsJudgeEvals

This library provides a service for evaluating responses from Large Language Models (LLMs) using the LLM itself as a judge. It leverages Semantic Kernel to define and execute evaluation functions based on prompt templates.

Installation

Install the package via NuGet:

dotnet add package HillPhelmuth.SemanticKernel.LlmAsJudgeEvals

Usage

Built-in Evaluation Functions

// Kernel and the OpenAI connector ship with Microsoft.SemanticKernel;
// EvalService and InputModel come from this package.
using Microsoft.SemanticKernel;

// Initialize the Semantic Kernel
var kernel = Kernel.CreateBuilder().AddOpenAIChatCompletion("openai-model-name", "openai-apiKey").Build();

// Create an instance of the EvalService
var evalService = new EvalService(kernel);

// Create an input model for the built-in evaluation function
var coherenceInput = InputModel.CoherenceModel("This is the answer to evaluate.", "This is the question or prompt that generated the answer");

// Execute the evaluation
var result = await evalService.ExecuteEval(coherenceInput);

Console.WriteLine($"Evaluation score: {result.Score}");

// Execute evaluation with explanation
var resultWithExplanation = await evalService.ExecuteScorePlusEval(coherenceInput);

Console.WriteLine($"Score: {resultWithExplanation.Score}");
Console.WriteLine($"Reasoning: {resultWithExplanation.Reasoning}");
Console.WriteLine($"Chain of Thought: {resultWithExplanation.ChainOfThought}");

Custom Evaluation Functions

// Initialize the Semantic Kernel
var kernel = Kernel.CreateBuilder().AddOpenAIChatCompletion("openai-model-name", "openai-apiKey").Build();

// Create an instance of the EvalService
var evalService = new EvalService(kernel);

// Add an evaluation function (optional)
// Register a custom evaluation function with a name, prompt template, and execution settings
evalService.AddEvalFunction("MyEvalFunction", "This is the prompt for my evaluation function.", new PromptExecutionSettings());

// Create an input model for the evaluation function
var inputModel = new InputModel
{
    FunctionName = "MyEvalFunction", // Replace with the name of your evaluation function
    RequiredInputs = new Dictionary<string, string>
    {
        { "input", "This is the text to evaluate." }
    }
};

// Execute the evaluation
var result = await evalService.ExecuteEval(inputModel);

Console.WriteLine($"Evaluation score: {result.Score}");
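
The keys in RequiredInputs correspond to the variables the evaluation prompt expects. As a sketch only: assuming the package binds those keys through Semantic Kernel's standard {{$variable}} prompt-template syntax, a fuller custom evaluator might look like the following (the "Conciseness" name, prompt text, and scoring scale are illustrative, not part of the package):

// Illustrative custom rubric: score conciseness on a 1-5 scale.
// The {{$input}} placeholder is standard Semantic Kernel prompt-template syntax;
// that it maps to the "input" key in RequiredInputs is an assumption here.
var concisenessPrompt = """
    Rate the conciseness of the following text on a scale of 1 (very verbose) to 5 (very concise).
    Respond with only the number.

    Text: {{$input}}
    """;

evalService.AddEvalFunction("Conciseness", concisenessPrompt, new PromptExecutionSettings());

var concisenessInput = new InputModel
{
    FunctionName = "Conciseness",
    RequiredInputs = new Dictionary<string, string>
    {
        { "input", "A long-winded answer whose brevity we want to judge." }
    }
};

var concisenessResult = await evalService.ExecuteEval(concisenessInput);
Console.WriteLine($"Conciseness score: {concisenessResult.Score}");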

Features

  • Define evaluation functions using prompt templates: You can define evaluation functions using prompt templates written in YAML.
  • Execute evaluations: The EvalService provides methods for executing evaluations on input data.
  • Score Plus Explanation: Use ExecuteScorePlusEval to get detailed explanations and chain-of-thought reasoning along with scores.
  • Aggregate results: The EvalService can aggregate evaluation scores across multiple inputs; a manual aggregation sketch follows this list.

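No aggregation example appears in the snippets above, so here is a minimal sketch that combines scores across multiple inputs by hand, using only the ExecuteEval call and InputModel.CoherenceModel factory shown earlier plus LINQ. Whether EvalService also exposes a dedicated aggregation helper is not shown on this page, so none is assumed.

// Run the same built-in eval over several answers and average the scores manually.
// Only APIs demonstrated elsewhere on this page are used here.
var answersToJudge = new[]
{
    ("First answer to evaluate.", "The originating question"),
    ("Second answer to evaluate.", "The originating question")
};

var coherenceScores = new List<double>();
foreach (var (answer, question) in answersToJudge)
{
    var evalResult = await evalService.ExecuteEval(InputModel.CoherenceModel(answer, question));
    coherenceScores.Add(evalResult.Score);
}

Console.WriteLine($"Average coherence score: {coherenceScores.Average():F2}");
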
Built-in evaluation functions: the package includes pre-defined functions for:

  • Groundedness (1-5): Evaluates factual accuracy and support in context
  • Groundedness2 (1-10): Alternative groundedness evaluation with finer granularity
  • Similarity: Measures response similarity to reference text
  • Relevance: Assesses response relevance to prompt/question
  • Coherence: Evaluates logical flow and consistency
  • Perceived Intelligence: Rates apparent knowledge and reasoning (with/without RAG)
  • Fluency: Measures natural language quality
  • Empathy: Assesses emotional understanding
  • Helpfulness: Evaluates practical value of response
  • Retrieval: Evaluates the retrieved content based on the query

Each evaluation function has a corresponding "Explain" version that provides detailed explanations and chain-of-thought reasoning along with the score. For example:

  • GroundednessExplain
  • CoherenceExplain
  • SimilarityExplain, and so on for each built-in evaluation function

These evaluation functions can be easily accessed using the InputModel factory methods:

var coherenceInput = InputModel.CoherenceModel(answer, question);
var groundednessInput = InputModel.GroundednessModel(answer, question, context);
var coherenceWithExplanationInput = InputModel.CoherenceExplainModel(answer, question);
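
The output example below names "CoherenceExplain", so the sketch here assumes an Explain input model is what gets passed to ExecuteScorePlusEval; the EvalName and ProbScore property names are taken from the fields shown in that output, not from a documented API surface.

// Pair an "Explain" input model with ExecuteScorePlusEval and read back the
// detailed result fields illustrated in the JSON example below.
var explainInput = InputModel.CoherenceExplainModel(
    "This is the answer to evaluate.",
    "This is the question or prompt that generated the answer");

var explainResult = await evalService.ExecuteScorePlusEval(explainInput);

Console.WriteLine($"{explainResult.EvalName}: {explainResult.Score} (probability-weighted {explainResult.ProbScore})");
Console.WriteLine(explainResult.Reasoning);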

Example of Score Plus Explanation Output

{
    "EvalName": "CoherenceExplain",
    "Score": 4,
    "Reasoning": "The answer is mostly coherent with good flow and clear organization. It addresses the question directly and maintains logical connections between ideas.",
    "ChainOfThought": "1. First, I examined how the sentences connect\n2. Checked if ideas flow naturally\n3. Verified if the response stays focused on the question\n4. Assessed overall clarity and organization\n5. Considered natural language use",
    "ProbScore": 3.92
}
Compatible and additional computed target framework versions

.NET: net8.0 is compatible. net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows, net9.0, net9.0-android, net9.0-browser, net9.0-ios, net9.0-maccatalyst, net9.0-macos, net9.0-tvos, and net9.0-windows were computed.

Version Downloads Last updated
1.0.4 24 1/8/2025
1.0.3 57 1/3/2025
1.0.2 90 12/22/2024
1.0.1 84 12/10/2024
1.0.0-preview 101 10/22/2024
0.1.0-beta 89 10/22/2024
0.0.3-beta 88 10/21/2024
0.0.2-beta 87 9/7/2024
0.0.1-beta 77 9/7/2024