Kubernetes 1.31, also referred to as 'Elli', is the latest release from the Kubernetes project. It introduces several significant updates that enhance the platform's orchestration capabilities and continues the trend of evolving Kubernetes into a more robust, scalable, and secure system for managing containerized applications across diverse environments. Below, we delve into the key updates in Kubernetes 1.31.

Kubernetes v1.31 will be available on Civo shortly. Follow us on LinkedIn or check out our website for more updates.

Networking Enhancements

1. Improved Ingress Connectivity Reliability for Kube-Proxy

In Kubernetes 1.31, kube-proxy’s reliability in handling ingress connectivity, particularly for load balancers, has been enhanced. This feature ensures better synchronization between components, reducing the chances of traffic drops during node termination. It’s enabled by default, so your services should benefit from this improvement without additional configuration.

Here’s an example of a service configuration that takes advantage of this improvement:

apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376

This service setup doesn’t require any specific changes to benefit from the improved ingress connectivity reliability. The enhancement is automatically applied for services using these configurations.

2. nftables Backend for Kube-Proxy (Beta)

The transition from iptables to nftables as the backend for kube-proxy is a significant enhancement in Kubernetes 1.31. The nftables backend offers better performance and scalability, particularly in large clusters with thousands of services. The NFTablesProxyMode feature gate is enabled by default, allowing you to start using nftables provided your nodes meet the requirements (Linux kernel 5.13 or later).

To configure kube-proxy to use nftables, you can use the following configuration:

apiVersion: kubeproxy.config.k8s.io/v1beta1
kind: KubeProxyConfiguration
mode: "nftables"
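If you run kube-proxy with command-line flags rather than a configuration file, the equivalent setting is the --proxy-mode flag:

kube-proxy --proxy-mode=nftables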

While NFTablesProxyMode is enabled by default, nftables is still relatively new and might not be fully compatible with all network plugins. It is crucial to consult the documentation for your specific network plugin before migrating. Additionally, some features, particularly around NodePort services, may behave differently in nftables mode compared to iptables mode. Ensure you check the migration guide to understand any necessary configuration overrides.

3. Multiple Service CIDRs (Beta)

Addressing the problem of IP exhaustion in large clusters, Kubernetes 1.31 introduces support for multiple Service CIDRs. This beta feature, which is disabled by default, allows administrators to dynamically modify Service CIDR ranges without causing downtime, making IP management more flexible and resilient over time.

To add an extra Service CIDR to a cluster with this feature enabled, you create a ServiceCIDR object alongside the cluster's default range:

apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
  - 10.100.0.0/16

When you enable the MultiCIDRServiceAllocator feature gate and the networking.k8s.io/v1beta1 API group in Kubernetes 1.31, the control plane transitions from using an internal global allocation map to managing Service IPs via IPAddress and ServiceCIDR objects. This change lifts previous size limitations on IP address ranges for Services, allowing unrestricted IPv4 ranges and broader IPv6 ranges (down to /64 netmasks). The new allocator also provides an API for inspecting assigned IP addresses, which can be leveraged by Kubernetes extensions like the Gateway API to enhance networking capabilities.
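Because allocations are now ordinary API objects, you can inspect them directly (the output will vary by cluster):

kubectl get servicecidrs
kubectl get ipaddresses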

4. Traffic Distribution for Services (Beta)

The trafficDistribution field in the Service specification gives you a way to express a preference for how traffic is routed to a Service's endpoints. In Kubernetes 1.31 the supported value is PreferClose, which tells the dataplane to favor endpoints that are topologically close to the client (for example, in the same zone), falling back to the remaining endpoints when no close ones are available. This is particularly useful for reducing cross-zone latency and data transfer costs.

Here’s an example of a Service configured to prefer topologically close endpoints:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  trafficDistribution: PreferClose

When PreferClose is set, the EndpointSlice controller records topology hints for each endpoint, and kube-proxy uses them to keep traffic within the client's zone whenever endpoints are available there.

Security Improvements

1. AppArmor Support Reaches GA

With Kubernetes 1.31, AppArmor support is now generally available (GA), moving from annotations to the securityContext field in the pod specification. This change makes it easier to apply and manage AppArmor profiles consistently.

Here’s how you can apply an AppArmor profile to a pod:

apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: k8s-apparmor-example-deny-write
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]

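Once the pod is running, you can verify that the profile was applied by checking the AppArmor attributes of the container's root process (this assumes the k8s-apparmor-example-deny-write profile has been loaded on the node):

kubectl exec hello-apparmor -- cat /proc/1/attr/current

If the profile is enforced, the output should look like k8s-apparmor-example-deny-write (enforce).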

2. Bound Service Account Token Improvements (Beta)

In Kubernetes 1.31, the ServiceAccountTokenNodeBinding feature is promoted to beta, allowing tokens to be bound specifically to nodes. This improves security by ensuring that tokens include node-specific information, which reduces the risk of unauthorized access.

Requesting a node-bound token can be done with the following YAML:

apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences:
  - kubelet
  boundObjectRef:
    kind: Node
    apiVersion: v1
    name: node-name
  expirationSeconds: 3600

In this example, the token is bound to the node specified by name: node-name, and it is valid for 3600 seconds (1 hour). This token can be used to authenticate requests to the kubelet or other components that require node-specific security contexts.
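Note that a TokenRequest is submitted to a ServiceAccount's token subresource rather than created as a standalone object. Assuming the request above is saved as token-request.json and a ServiceAccount named my-sa exists in the default namespace, it could be submitted like this:

kubectl create --raw /api/v1/namespaces/default/serviceaccounts/my-sa/token -f token-request.json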

3. Finer-Grained Authorization Based on Selectors (Alpha)

This alpha feature allows for more granular control over authorization by enabling webhook authorizers to use label and field selectors. This can help tighten security by restricting access based on specific resource attributes.

For instance, with this feature, an authorizer could be configured to allow a user to list only the pods that are scheduled on a specific node (.spec.nodeName), or to watch only the Secrets in a namespace that do not have the label confidential: true. This level of control ensures that users can only access the resources they are explicitly permitted to, enhancing the overall security posture of the Kubernetes cluster.
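From the client's perspective, such a policy pairs with correspondingly scoped requests. For example, a user who is only authorized to see pods on a particular node would issue a request like the following (node-1 is illustrative):

kubectl get pods --all-namespaces --field-selector spec.nodeName=node-1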

4. Restrictions on Anonymous API Access (Alpha)

Kubernetes 1.31 introduces the ability to restrict anonymous API access, helping to protect against potential RBAC misconfigurations. This feature is enabled by the AnonymousAuthConfigurableEndpoints feature gate.

To enable this feature, update your API server configuration:

--feature-gates=AnonymousAuthConfigurableEndpoints=true
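With the gate enabled, you can then declare which endpoints remain open to anonymous requests in an authentication configuration file passed to the API server via the --authentication-config flag. A minimal sketch, following the upstream example:

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:
  - path: /livez
  - path: /readyz
  - path: /healthz

With this configuration, anonymous requests are accepted only for the listed health-check paths and rejected everywhere else.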

Storage Innovations

1. Persistent Volume Last Phase Transition Time (GA)

The addition of the .status.lastPhaseTransitionTime field in Persistent Volumes (PV) is now GA in Kubernetes 1.31. This field records the last time a PV transitioned between phases, providing useful data for monitoring and troubleshooting.

Here's an example of how this field appears in a PV status, automatically populated by Kubernetes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
status:
  phase: Bound
  lastPhaseTransitionTime: "2024-08-12T10:00:00Z"
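You can read the field directly with a JSONPath query:

kubectl get pv example-pv -o jsonpath='{.status.lastPhaseTransitionTime}'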

2. Changes to Reclaim Policy for PersistentVolumes (Beta)

The Always Honor PersistentVolume Reclaim Policy feature ensures that the configured reclaim policy is respected even when a PV is deleted before its bound PVC. This beta feature addresses a long-standing inconsistency where a Delete reclaim policy could be skipped in that deletion order, leaving the backing storage behind; with it enabled, storage resources are cleaned up as expected.

Here’s a sample configuration for a PV with a delete reclaim policy:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: example-storage-class
  hostPath:
    path: "/mnt/data"
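Under the hood, this guarantee is implemented with finalizers that the control plane adds to the PV (kubernetes.io/pv-controller for in-tree volumes, external-provisioner.volume.kubernetes.io/finalizer for CSI volumes). You can confirm they are present with:

kubectl get pv example-pv -o jsonpath='{.metadata.finalizers}'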

3. Kubernetes VolumeAttributesClass ModifyVolume (Beta)

The VolumeAttributesClass API, which allows for dynamic modification of volume parameters such as provisioned IOPS and throughput, is now in beta. Note that the VolumeAttributesClass feature gate and the storage.k8s.io/v1beta1 API remain disabled by default in 1.31 and must be turned on explicitly. This feature is useful for workloads that need to adjust volume IO dynamically.

Example of a VolumeAttributesClass definition (the required driverName field names the CSI driver that handles the volume; ebs.csi.aws.com and the parameter keys are illustrative, as parameters are interpreted by the driver):

apiVersion: storage.k8s.io/v1beta1
kind: VolumeAttributesClass
metadata:
  name: fast-io
driverName: ebs.csi.aws.com
parameters:
  provisionedIO: "1000"
  volumeType: "io1"
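A workload opts in by referencing the class from a PersistentVolumeClaim; updating volumeAttributesClassName on an existing PVC is what triggers the volume modification:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeAttributesClassName: fast-io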

AI/ML and Hardware Management

1. New DRA APIs for Better Hardware Management (Alpha)

Kubernetes 1.31 introduces updates to the dynamic resource allocation (DRA) API, focusing on structured parameters for better hardware management and enabling features like cluster autoscaling.

Here’s how you might define a resource claim using the updated DRA API (a minimal sketch: the device class name and the attribute used in the CEL selector are published by your DRA driver, so gpu.example.com is illustrative):

apiVersion: resource.k8s.io/v1alpha3
kind: ResourceClaim
metadata:
  name: ai-workload
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com
      selectors:
      - cel:
          expression: 'device.attributes["gpu.example.com"].model == "A100"'
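A Pod then consumes the claim by referencing it by name; a sketch of the wiring (the image name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: ai-pod
spec:
  containers:
  - name: trainer
    image: ai-gpu-workload:latest
    resources:
      claims:
      - name: gpu
  resourceClaims:
  - name: gpu
    resourceClaimName: ai-workload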

2. Support for OCI Images as Volumes (Alpha)

The alpha support for using OCI images as native volumes in pods is a step forward for AI/ML use cases. This feature allows for more flexible storage solutions, particularly for applications that need to handle large datasets, and requires enabling the ImageVolume feature gate.
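Here's a minimal sketch of a pod mounting an OCI image as a read-only volume (the image reference is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: image-volume-pod
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: [ "sleep", "infinity" ]
    volumeMounts:
    - name: dataset
      mountPath: /data
  volumes:
  - name: dataset
    image:
      reference: quay.io/example/dataset:latest
      pullPolicy: IfNotPresent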

3. Exposing Device Health Information Through Pod Status (Alpha)

This alpha feature allows Kubernetes to expose device health information through the Pod status, which is crucial for managing hardware failures in AI/ML environments.

When this feature is enabled via the ResourceHealthStatus feature gate, Kubernetes will automatically add an allocatedResourcesStatus field to the status of each container within a Pod. This field provides real-time health information about the devices allocated to the container, allowing for more effective monitoring and management of hardware resources.

While you don’t manually set this field in your Pod specification, here’s an example of what the Pod status might look like after enabling this feature:

apiVersion: v1
kind: Pod
metadata:
  name: device-health-pod
spec:
  containers:
  - name: gpu-container
    image: ai-gpu-workload:latest
    resources:
      requests:
        nvidia.com/gpu: 1
status:
  containerStatuses:
  - name: gpu-container
    state:
      running: {}
    allocatedResourcesStatus:
    - name: nvidia.com/gpu
      resources:
      - resourceID: gpu-0
        health: "Healthy"
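A quick JSONPath query surfaces this field for monitoring scripts:

kubectl get pod device-health-pod -o jsonpath='{.status.containerStatuses[0].allocatedResourcesStatus}'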

Summary

With the release of Kubernetes v1.31, users will see some significant enhancements surrounding networking, security, storage, and hardware management. While we can expect these updates to improve performance and reliability within workflows, I am especially excited about the security updates, which will continue to make Kubernetes more secure and user-friendly.

In this blog, we have covered only a handful of the enhancements made with the release of Kubernetes v1.31. For more announcements about Kubernetes v1.31, check these official resources: