Akka.Discovery.KubernetesApi 1.5.19

Installation options:

.NET CLI:
dotnet add package Akka.Discovery.KubernetesApi --version 1.5.19

Package Manager Console (Visual Studio):
NuGet\Install-Package Akka.Discovery.KubernetesApi -Version 1.5.19

PackageReference (for projects that support it, add this node to the project file):
<PackageReference Include="Akka.Discovery.KubernetesApi" Version="1.5.19" />

Paket CLI:
paket add Akka.Discovery.KubernetesApi --version 1.5.19

F# Interactive / Polyglot Notebooks:
#r "nuget: Akka.Discovery.KubernetesApi, 1.5.19"

Cake:
// Install Akka.Discovery.KubernetesApi as a Cake Addin
#addin nuget:?package=Akka.Discovery.KubernetesApi&version=1.5.19

// Install Akka.Discovery.KubernetesApi as a Cake Tool
#tool nuget:?package=Akka.Discovery.KubernetesApi&version=1.5.19

Kubernetes API Discovery

The Kubernetes API can be used to discover peers and form an Akka Cluster. The KubernetesApi mechanism queries the Kubernetes API server to find pods with a given label. A Kubernetes service isn’t required for the cluster bootstrap but may be used for external access to the application.

To find other pods, this discovery method needs to know how they are labeled, what the name of the target port is, and what namespace they reside in.
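
Once enabled, lookups flow through the standard Akka.Discovery API; Cluster.Bootstrap (covered below) drives these lookups for you, but you can also query the mechanism directly. A minimal sketch, assuming the stock Akka.Discovery surface (`Discovery.Get(system).Default` and the `Lookup` class) and a pod port named "management":

using System;
using Akka.Actor;
using Akka.Discovery;

// Sketch: ask the configured discovery mechanism (kubernetes-api) for the
// pods backing "testService". Assumes akka.discovery.method = kubernetes-api
// is already set in the system's configuration.
var system = ActorSystem.Create("testService");
var discovery = Discovery.Get(system).Default;

// "management" must match the named container port on the pods.
var resolved = await discovery.Lookup(
    new Lookup("testService", "management"),
    TimeSpan.FromSeconds(3));

foreach (var target in resolved.Addresses)
    Console.WriteLine($"Discovered pod: {target.Host}:{target.Port}");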

Enabling Kubernetes Discovery Using Akka.Hosting

To enable Kubernetes discovery using Akka.Hosting, you can use one of the following extension methods (a combined sketch follows the list):

  • Simple extension with optional label selector

    builder.WithKubernetesDiscovery("app=myKubernetesAppName");
    
  • Delegate function to modify a KubernetesDiscoverySetup instance

    builder.WithKubernetesDiscovery(setup =>
    {
      setup.PodLabelSelector = "app=myKubernetesAppName";
    });
    
  • Provide an instance of KubernetesDiscoverySetup directly:

    var setup = new KubernetesDiscoverySetup {
      PodLabelSelector = "app=myKubernetesAppName"
    };
    builder.WithKubernetesDiscovery(setup);
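
The setup-based overloads can configure more than the label selector. In the sketch below, the property names other than `PodLabelSelector` are assumptions taken to mirror the HOCON keys shown in the next section; verify them against `KubernetesDiscoverySetup` in your installed version.

// Sketch: configure several discovery options at once via the setup delegate.
builder.WithKubernetesDiscovery(setup =>
{
    setup.PodLabelSelector = "app=myKubernetesAppName"; // pod-label-selector
    setup.PodNamespace = "my-namespace";                // pod-namespace (assumed name)
    setup.PodDomain = "cluster.local";                  // pod-domain (assumed name)
    setup.RawIp = true;                                 // use-raw-ip (assumed name)
});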
    

Enabling Kubernetes Discovery Using HOCON configuration

To enable Kubernetes discovery via HOCON, you will need to modify your HOCON configuration:

akka.discovery.method = kubernetes-api

Below, you’ll find the default configuration. It can be customized by changing these values in your HOCON configuration.

akka.discovery.kubernetes-api {
    class = "Akka.Discovery.KubernetesApi.KubernetesApiServiceDiscovery, Akka.Discovery.KubernetesApi"

    # API server, cert and token information. Currently these are present on K8s versions: 1.6, 1.7, 1.8, and perhaps more
    api-ca-path = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    api-token-path = "/var/run/secrets/kubernetes.io/serviceaccount/token"
    api-service-host-env-name = "KUBERNETES_SERVICE_HOST"
    api-service-port-env-name = "KUBERNETES_SERVICE_PORT"

    # Namespace discovery path
    #
    # If this path doesn't exist, the namespace will default to "default".
    pod-namespace-path = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"

    # Namespace to query for pods.
    #
    # Set this value to a specific string to override discovering the namespace using pod-namespace-path.
    pod-namespace = "<pod-namespace>"

    # Enable to query pods in all namespaces
    #
    # If this is set to true, the pod-namespace configuration is ignored.
    all-namespaces = true

    # Domain of the k8s cluster
    pod-domain = "cluster.local"

    # Selector value to query pod API with.
    # `{0}` will be replaced with the configured effective name, which defaults to the actor system name
    pod-label-selector = "app={0}"

    # Enables the usage of the raw IP instead of the composed value for the resolved target host
    use-raw-ip = true

    # When set, validate the container is not in 'waiting' state
    container-name = ""
}
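
These defaults can also be overridden programmatically by layering a HOCON fragment over your base configuration. A minimal sketch, using only keys from the block above:

using System.IO;
using Akka.Actor;
using Akka.Configuration;

// Sketch: override two kubernetes-api settings, then fall back to the rest
// of the configuration loaded from application.conf.
var config = ConfigurationFactory.ParseString(@"
        akka.discovery.method = kubernetes-api
        akka.discovery.kubernetes-api {
            pod-namespace = my-namespace
            all-namespaces = false
        }")
    .WithFallback(ConfigurationFactory.ParseString(File.ReadAllText("application.conf")));

var system = ActorSystem.Create("testService", config);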

Using Discovery Together with Akka.Management and Cluster.Bootstrap

All discovery plugins are designed to work with Cluster.Bootstrap to provide an automated way to form a cluster without relying on a hard-wired seed node configuration.

Configuring with Akka.Hosting

Auto-starting Akka Management, Cluster Bootstrap, and Kubernetes discovery

using var host = new HostBuilder()
    .ConfigureServices((context, services) =>
    {
        services.AddAkka("k8sBootstrapDemo", (builder, provider) =>
        {
            builder
                .WithRemoting("", 4053)
                .WithClustering()
                .WithClusterBootstrap(serviceName: "testService")
                .WithKubernetesDiscovery();
        });
    }).Build();

await host.RunAsync();

Manually starting Akka Management, Cluster Bootstrap, and Kubernetes discovery

using var host = new HostBuilder()
    .ConfigureServices((context, services) =>
    {
        services.AddAkka("k8sBootstrapDemo", (builder, provider) =>
        {
            builder
                .WithRemoting("", 4053)
                .WithClustering()
                .WithAkkaManagement()
                .WithClusterBootstrap(
                    serviceName: "testService",
                    autoStart: false)
                .WithKubernetesDiscovery();
            
            builder.AddStartup(async (system, registry) =>
            {
                await AkkaManagement.Get(system).Start();
                await ClusterBootstrap.Get(system).Start();
            });
        });
    }).Build();

await host.RunAsync();

In both samples above, the effective Kubernetes label selector would be "app=testService", because the default pod-label-selector is the format string "app={0}", where "{0}" is replaced with the service name taken from Cluster Bootstrap.

NOTE

In order for Kubernetes Discovery to work, you also need to open the Akka.Management port (8558 by default) on your Kubernetes pods.
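
With Akka.Hosting, the management port can be changed when starting Akka Management. A hedged sketch using the setup delegate overload of WithAkkaManagement; the `Http` property names below are assumptions based on the `akka.management.http.*` HOCON keys:

// Sketch: bind Akka.Management on all pod interfaces on the default port.
builder.WithAkkaManagement(setup =>
{
    // Assumed property names mirroring the akka.management.http.* HOCON keys.
    setup.Http.Port = 8558;              // advertised port
    setup.Http.BindHostName = "0.0.0.0"; // listen on all pod interfaces
    setup.Http.BindPort = 8558;          // must match the pod's containerPort
});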

Configuring with HOCON Configuration

Some HOCON configuration is needed to make discovery work with Cluster.Bootstrap:

akka.discovery.method = kubernetes-api
akka.management.http.routes = {
    cluster-bootstrap = "Akka.Management.Cluster.Bootstrap.ClusterBootstrapProvider, Akka.Management.Cluster.Bootstrap"
}

You then start the cluster bootstrapping process by calling:

await AkkaManagement.Get(system).Start();
await ClusterBootstrap.Get(system).Start();

A more complete example:

var config = ConfigurationFactory
    .ParseString(File.ReadAllText("application.conf"))
    .WithFallback(ClusterBootstrap.DefaultConfiguration())
    .WithFallback(AkkaManagementProvider.DefaultConfiguration());

var system = ActorSystem.Create("my-system", config);
var log = Logging.GetLogger(system, "bootstrap"); // any log source will do here

await AkkaManagement.Get(system).Start();
await ClusterBootstrap.Get(system).Start();

var cluster = Cluster.Get(system);
cluster.RegisterOnMemberUp(() =>
{
    var upMembers = cluster.State.Members
        .Where(m => m.Status == MemberStatus.Up)
        .Select(m => m.Address.ToString());

    log.Info($"Current up members: [{string.Join(", ", upMembers)}]");
});

NOTE

In order for Kubernetes Discovery to work, you also need to open the Akka.Management port (8558 by default) on your Kubernetes pods.

Role-Based Access Control

If your Kubernetes cluster has Role-Based Access Control (RBAC) enabled, you’ll also have to grant the Service Account that your pods run under access to list pods. The following configuration can be used as a starting point. It creates a Role, pod-reader, which grants access to query pod information. It then binds the default Service Account to the Role by creating a RoleBinding. Adjust as necessary.

#
# Create a role, `pod-reader`, that can list pods, and
# bind the default service account in the namespace where
# the binding is deployed to that role.
#

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
subjects:
  # Uses the default service account.
  # Consider creating a dedicated service account to run your
  # Akka Cluster services and binding the role to that one.
- kind: ServiceAccount
  name: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Configuration

Kubernetes YAML Configuration

Below is an example YAML manifest taken from our integration sample.

apiVersion: v1
kind: Namespace
metadata:
  name: clusterbootstrap
---
apiVersion: v1
kind: Service
metadata:
  name: clusterbootstrap
  namespace: clusterbootstrap
  labels:
    app: clusterbootstrap
spec:
  clusterIP: None
  ports:
  - port: 4053
    name: akka-remote
  - port: 8558 
    name: management
  selector:
    app: clusterbootstrap
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: clusterbootstrap
  name: clusterbootstrap
  labels:
    app: clusterbootstrap
spec:
  serviceName: clusterbootstrap
  replicas: 10
  selector:
    matchLabels:
      app: clusterbootstrap
  template:
    metadata:
      labels:
        app: clusterbootstrap
    spec:
      terminationGracePeriodSeconds: 35
      containers:
      - name: clusterbootstrap
        image: akka.cluster.bootstrap:0.1.0
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_IP
          value: "$(POD_NAME).clusterbootstrap"
        livenessProbe:
          tcpSocket:
            port: akka-remote
        ports:
        - containerPort: 8558
          protocol: TCP
          # When akka.management.cluster.bootstrap.contact-point-discovery.port-name
          # is defined, it must correspond to this name:
          name: management
        - containerPort: 4053
          protocol: TCP
          name: akka-remote
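
Note how the manifest wires POD_NAME into CLUSTER_IP, giving each pod a stable DNS name (e.g. "clusterbootstrap-0.clusterbootstrap") through the headless service. A minimal sketch of how the application side might consume it, reusing only the Akka.Hosting methods shown earlier; this fragment belongs inside the AddAkka block from the samples above, and the service name and label are taken from the manifest:

// Sketch: advertise the pod's stable DNS name (from the StatefulSet env block
// above) as the remoting hostname so peers can dial this pod directly.
var clusterIp = Environment.GetEnvironmentVariable("CLUSTER_IP");

builder
    .WithRemoting(clusterIp, 4053)
    .WithClustering()
    .WithClusterBootstrap(serviceName: "clusterbootstrap")
    .WithKubernetesDiscovery("app=clusterbootstrap");
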
Compatible and computed target framework versions

.NET: net6.0 is compatible. net5.0, net5.0-windows, net7.0, and net8.0 were computed, along with the platform-specific variants of net6.0–net8.0 (android, ios, maccatalyst, macos, tvos, windows, and net8.0-browser).
.NET Core: netcoreapp2.0 through netcoreapp3.1 were computed.
.NET Standard: netstandard2.0 is compatible; netstandard2.1 was computed.
.NET Framework: net461 through net481 were computed.
Mono/Xamarin: monoandroid, monomac, monotouch, xamarinios, xamarinmac, xamarintvos, and xamarinwatchos were computed.
Tizen: tizen40 and tizen60 were computed.


Version Downloads Last updated
1.5.19 473 4/17/2024
1.5.18-beta2 482 3/27/2024
1.5.18-beta1 117 3/20/2024
1.5.17.1 3,232 3/4/2024
1.5.15 4,574 1/12/2024
1.5.7 24,909 5/23/2023
1.5.5 1,679 5/8/2023
1.5.0 3,134 3/2/2023
1.5.0-beta6 108 3/1/2023
1.5.0-alpha4 114 2/17/2023
1.0.3 910 2/13/2023
1.0.2 403 2/8/2023
1.0.1 719 1/31/2023
1.0.0 527 1/18/2023
1.0.0-beta2 183 1/7/2023
1.0.0-beta1 124 1/6/2023
0.3.0-beta4 145 12/2/2022
0.3.0-beta3 155 11/7/2022
0.3.0-beta2 238 10/20/2022
0.3.0-beta1 130 10/6/2022
0.2.5-beta4 170 8/31/2022
0.2.5-beta3 404 8/16/2022
0.2.5-beta2 141 8/8/2022
0.2.5-beta1 154 8/1/2022
0.2.4-beta3 675 5/5/2022
0.2.4-beta2 213 4/14/2022
0.2.4-beta1 10,488 12/2/2021
0.2.3-beta2 804 10/5/2021
0.2.3-beta1 218 10/4/2021
0.2.2-beta1 237 9/29/2021

Release Notes

- Update to [Akka.NET v1.5.19](https://github.com/akkadotnet/akka.net/releases/tag/1.5.19)
- [Discovery.KubernetesApi: Add option to query pods in all namespaces](https://github.com/akkadotnet/Akka.Management/pull/2421)
- [Coordination.KubernetesApi: Change lease expiration calculation to be based on DateTime.Ticks instead of DateTime.TimeOfDay.TotalMilliseconds](https://github.com/akkadotnet/Akka.Management/pull/2474)
- [Coordination.KubernetesApi: Fix KubernetesSettings configuration bug](https://github.com/akkadotnet/Akka.Management/pull/2475)
- [Management: Fix host name IPV6 detection](https://github.com/akkadotnet/Akka.Management/pull/2476)
- Update dependency NuGet package versions to the latest versions:
  - [Bump Akka.Hosting to 1.5.19](https://github.com/akkadotnet/Akka.Management/pull/2478)
  - [Bump Google.Protobuf to 3.26.1](https://github.com/akkadotnet/Akka.Management/pull/2436)
  - [Bump KubernetesClient to 13.0.26](https://github.com/akkadotnet/Akka.Management/pull/2405)
  - [Bump Petabridge.Cmd to 1.4.1](https://github.com/akkadotnet/Akka.Management/pull/2418)
  - [Bump AWSSDK.S3 to 3.7.307](https://github.com/akkadotnet/Akka.Management/pull/2412)
  - [Bump AWSSDK.CloudFormation to 3.7.305.4](https://github.com/akkadotnet/Akka.Management/pull/2430)
  - [Bump AWSSDK.ECS to 3.7.305.21](https://github.com/akkadotnet/Akka.Management/pull/2414)
  - [Bump AWSSDK.EC2 to 3.7.318](https://github.com/akkadotnet/Akka.Management/pull/2417)

**Breaking Change Warning**

This release introduces a breaking change in how `Akka.Coordination.KubernetesApi` calculates lease expiration. If you're upgrading `Akka.Coordination.KubernetesApi` from v1.5.18-beta2 or lower to 1.5.19, do not attempt a rolling update of your Kubernetes cluster. Instead, you will have to down the whole Akka.NET cluster (or scale everything to 0) first, then deploy the newly upgraded nodes.