July 29, 2024

Kubernetes v1.30: What's New and Improved?

Tania Duggal
Technical Writer

Kubernetes 1.30, "Uwubernetes," marks the first release of 2024. This version brings 45 enhancements to the table, including 10 new or improved Alpha features, 18 Beta features enabled by default, and 17 features graduating to Stable status. The nickname "Uwubernetes" is a fusion of Kubernetes and "Uwu," an emoticon symbolizing happiness and cuteness, celebrating the community's efforts.

This article highlights key enhancements and updates introduced in Kubernetes v1.30. Let's explore:

Stable Features

1. Container Resource-Based Pod Autoscaling

Feature-group: sig-autoscaling #1610

Kubernetes Horizontal Pod Autoscaler (HPA) is a controller that automatically adjusts the number of pods based on observed CPU utilization or other metrics. The goal of the HPA is to ensure that applications handle varying loads efficiently by scaling out (increasing the number of pods) during high demand and scaling in (decreasing the number of pods) during low demand. Previously, the HPA could scale only on pod-level CPU, memory, and custom metrics, which did not provide fine-grained control in complex applications.
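For context, the HPA's scaling decision follows a documented formula: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal Python sketch of that arithmetic (an illustration, not the controller's actual code):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    # HPA scaling rule:
    #   desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods
print(desired_replicas(4, 90, 60))
```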

In Kubernetes v1.30, the Horizontal Pod Autoscaler now supports container resource metrics. This means you can scale your applications based on the resource usage of individual containers within your pods, rather than the pod as a whole. This feature improves resource management by enabling better scaling decisions, especially in sidecar-heavy architectures and multi-container pods with varying resource patterns.

This is how you can define container resource metrics:

type: ContainerResource
containerResource:
  name: cpu
  container: application
  target:
    type: Utilization
    averageUtilization: 60

In the above configuration, the K8s HPA will scale the pods to maintain an average CPU utilization of 60% for the application container.
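For completeness, the metric above sits inside a full HorizontalPodAutoscaler object; a sketch is below (the names web-hpa and web-deployment are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: application
      target:
        type: Utilization
        averageUtilization: 60
```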

2. Robust VolumeManager reconstruction after kubelet restart

Feature-group: sig-storage #3756

The Kubernetes VolumeManager is a component responsible for managing the lifecycle of volumes attached to pods. It ensures proper mounting, unmounting, and cleanup of volumes as pods are created, destroyed, or moved between nodes. Previously, the VolumeManager had limitations in handling volume states during kubelet restarts or node reboots leading to orphaned volumes or inconsistent states.

In Kubernetes v1.30, the Robust VolumeManager reconstruction feature graduates to stable. This feature allows the kubelet to capture and preserve detailed information about mounted volumes during its startup process. Doing so ensures a more accurate reconstruction of volume states after unexpected system events, such as kubelet crashes or node reboots.

The goal of this enhancement is to improve the overall stability and dependability of storage operations in Kubernetes clusters. It addresses scenarios where volumes might be left in an inconsistent state due to unexpected terminations, reducing the risk of data corruption. The feature was developed using a gated approach, allowing for easy rollback if needed. Now that it's stable, the feature gate (NewVolumeManagerReconstruction) is locked and enabled by default.

3. Pod Scheduling Readiness

Feature-group: sig-scheduling #3521

Pod scheduling in Kubernetes has traditionally been an immediate process: as soon as a Pod was created, the scheduler would attempt to find a suitable node for it. This approach could lead to inefficiencies in certain scenarios. For example, Pods that required resources not yet available in the cluster would continuously churn the scheduler, hurting performance and causing unnecessary scaling events.

Kubernetes v1.30 introduces Pod Scheduling Readiness as a stable feature. This enhancement allows users to control when a Pod is considered ready for scheduling by using scheduling gates. By specifying .spec.schedulingGates, you can effectively put a Pod into a "holding pattern" until certain conditions are met. This feature provides better resource management and allows for more controlled pod deployment strategies, reducing unnecessary scheduler load and improving cluster performance.

Here's an example of using the scheduling gates:

apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  - name: resources.com/foo
  - name: network.com/bar
  containers:
  - name: main-app
    image: myapp:v1

In this configuration, the Pod won't be considered for scheduling until both scheduling gates are removed.
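When the conditions are met, a controller (or an operator) removes the gates to release the Pod to the scheduler. One way to do this is with a JSON patch removing all gates at once (gates can only be removed after Pod creation, never added):

```shell
kubectl patch pod gated-pod --type=json \
  -p='[{"op": "remove", "path": "/spec/schedulingGates"}]'
```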

4. Add Interactive flag for kubectl delete Command

Feature-group: sig-cli #3896

The kubectl delete command is simple yet powerful, and its ability to permanently erase resources can be dangerous: an unintended removal can trigger severe operational disruption.

Kubernetes v1.30 adds an interactive mode to the kubectl delete command. This enhancement introduces a crucial safety net by requiring user confirmation before any deletion is executed.

kubectl delete pod mainpod -n ns --interactive

The command will request explicit confirmation from the user before proceeding with the removal of the specified pod.

Alpha Features

5. Job Success/Completion Policy 

Feature-group: sig-apps #3998

Kubernetes Jobs provide a way to run batch processes or one-off tasks. Previously, a Job was marked complete only when all of its pods succeeded, which wasn't always necessary for certain use cases.

Kubernetes Job Success Policy is a new feature introduced in v1.30. You can now specify a success policy for Indexed Jobs using the .spec.successPolicy field. This policy allows you to declare a Job successful based on specific conditions, such as the success of particular indexed pods or a certain number of succeeded pods. Users can define custom success criteria, gaining more flexibility and improving resource management, because lingering Pods are terminated once the criteria are met. This is useful for leader-worker patterns, simulations run with different parameters, or any scenario where the success of specific pods matters more than that of others.

This is how you can define the Job Success Policy:

apiVersion: batch/v1
kind: Job
metadata:
  name: data-processing-job
spec:
  completions: 6
  parallelism: 3
  completionMode: Indexed
  successPolicy:
    rules:
      - succeededIndexes: "0-2,4,5"
        succeededCount: 3
  template:
    spec:
      containers:
      - name: data-processor
        image: data-processor:v2
        command: ["process-data"]

In the above configuration, the Job will be considered successful when at least 3 pods from the indexes 0, 1, 2, 4, and 5 are completed successfully.
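To make the rule's semantics concrete, here is a toy Python model of the evaluation logic (an illustration only, not the Job controller's actual code):

```python
def parse_indexes(spec: str) -> set[int]:
    # Expand an index list like "0-2,4,5" into {0, 1, 2, 4, 5}.
    indexes: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            indexes.update(range(lo, hi + 1))
        else:
            indexes.add(int(part))
    return indexes

def rule_satisfied(succeeded: set[int], succeeded_indexes: str = "0-2,4,5",
                   succeeded_count: int = 3) -> bool:
    # The rule matches once at least `succeeded_count` of the pods whose
    # indexes appear in `succeeded_indexes` have completed successfully.
    return len(parse_indexes(succeeded_indexes) & succeeded) >= succeeded_count

print(rule_satisfied({0, 1, 4}))  # 3 matching indexes -> True
print(rule_satisfied({0, 3}))     # only index 0 matches -> False
```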

6. Traffic distribution for Services

Feature-group: sig-network #4444

Previously, Kubernetes Services distributed traffic evenly across all available endpoints, regardless of their location relative to the client. This approach could lead to suboptimal routing in geographically distributed clusters.

Traffic Distribution for Services is a new feature introduced in v1.30 as an alpha release. It gives more granular control over how traffic is routed to Service endpoints, particularly in multi-zone clusters. The new spec.trafficDistribution field in the Service specification expresses preferences for traffic routing. The primary option introduced is PreferClose, which instructs the system to prioritize sending traffic to endpoints within the same zone as the client. This optimizes network performance, reduces latency, and lowers costs by directing traffic to topologically closer endpoints.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    App: main-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8988
  trafficDistribution: PreferClose

In this configuration, the Service attempts to route traffic to endpoints in the same zone as the client. If no endpoints are available in the client's zone, traffic will be routed cluster-wide.
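The routing preference can be pictured with a small Python sketch (a simplified model of PreferClose semantics, not kube-proxy's implementation):

```python
def select_endpoints(client_zone: str, endpoints: list[dict]) -> list[dict]:
    # PreferClose (simplified): use endpoints in the client's zone when any
    # exist; otherwise fall back to all endpoints cluster-wide.
    same_zone = [e for e in endpoints if e["zone"] == client_zone]
    return same_zone or endpoints

eps = [{"ip": "10.0.1.5", "zone": "us-east-1a"},
       {"ip": "10.0.2.7", "zone": "us-east-1b"}]
print(select_endpoints("us-east-1a", eps))  # only the us-east-1a endpoint
print(select_endpoints("us-east-1c", eps))  # no local endpoint -> all endpoints
```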

7. Recursive Read-only (RRO) mounts 

Feature-group: sig-node #3857

Kubernetes recursive read-only mounts provide true read-only protection for volumes and their submounts. Previously, while a volume could be mounted as read-only, its submounts might still allow write access, creating security risks.

In Kubernetes v1.30, a new alpha feature introduces the recursiveReadOnly option for volume mounts. It ensures that all submounts within a volume are also read-only, enhancing data protection and security. This gives better control over volume permissions, which is useful in multi-tenant clusters or when working with shared storage solutions.

To use this feature, you can define your Pod spec like this:

volumeMounts:
  - name: data
    mountPath: /data
    readOnly: true
    recursiveReadOnly: Enabled

This configuration makes the /data mount and all its submounts truly read-only.
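In context, the mount belongs to a full Pod spec; here is a sketch (the pod name, image, and hostPath are illustrative, and the alpha RecursiveReadOnlyMounts feature gate must be enabled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rro-pod
spec:
  containers:
  - name: app
    image: myapp:v1
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
      recursiveReadOnly: Enabled
  volumes:
  - name: data
    hostPath:
      path: /mnt/data
```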

8. SELinux Label Optimization (Alpha)

Feature-group: sig-storage, sig-node  #1710

SELinux (Security-Enhanced Linux) is a security module for Linux that provides access control for various subsystems of the kernel. In Kubernetes, SELinux labels are used to enforce security policies on volumes and containers. Previously, applying SELinux labels to volumes was a time-consuming process, especially for volumes with numerous files and directories.

In Kubernetes v1.30, the SELinux labeling process for volumes has been improved. The optimization applies the label with a mount option instead of recursively relabeling every file and directory. It extends support for the SELinux mount option to all volume types (alpha stage) behind a new feature gate, SELinuxMount.

apiVersion: v1
kind: Pod
metadata:
  name: mysecure-pod
spec:
  securityContext:
    seLinuxOptions:
      level: s0:c123,c456
  containers:
    - name: nginx
      image: nginx
      securityContext:
        seLinuxOptions:
          level: s0:c123,c456
      volumeMounts:
        - mountPath: "/path"
          name: mysecure-vol
  volumes:
    - name: mysecure-vol
      persistentVolumeClaim:
        claimName: mysecure-pvc

This configuration applies specific SELinux settings for enhanced security to the pod and container.
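Because the broader optimization is alpha, the SELinuxMount gate has to be switched on explicitly; for example, in the kubelet configuration (a sketch, assuming the gate is being enabled node-side):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SELinuxMount: true
```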

Beta Features

9. Node log query 

Feature-group: sig-windows #2258

Previously, accessing the node-level logs required manual SSH access to each node.

Kubernetes Node Log Query, introduced in v1.27, allows users to view logs of services running on nodes. In Kubernetes v1.30, this feature entered beta status, providing more robust functionality for log retrieval. It gives direct access to node-level logs without requiring SSH access to individual nodes: users can retrieve logs through the Kubernetes API, improving the debugging process.

To use Node Log Query, the following conditions must be met:

a. The NodeLogQuery feature gate is enabled for the target node.

b. Kubelet configuration options enableSystemLogHandler and enableSystemLogQuery are set to true.

c. The user has proper authorization to interact with node objects.

This is how to use Node Log Query:

Fetch kubelet logs from a node named node-4.example:

kubectl get --raw "/api/v1/nodes/node-4.example/proxy/logs/?query=kubelet"
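The endpoint also accepts additional query parameters such as tailLines and pattern to narrow the output; for example (the node name is illustrative):

```shell
# Fetch only the last 100 kubelet log lines from the node
kubectl get --raw "/api/v1/nodes/node-4.example/proxy/logs/?query=kubelet&tailLines=100"
```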

10. Node Memory Swap Support

Feature-group: sig-node #2400

Swap memory is a portion of hard drive space used as virtual memory when the physical RAM is full, allowing the system to temporarily offload less frequently used data from RAM to disk. Previously, Kubernetes had limited support for swap memory, with the NodeSwap feature gate disabled by default.

Kubernetes v1.30 standardizes on the LimitedSwap mode, which allows pods controlled use of swap space, capped in proportion to each container's memory request. This approach enhances memory utilization without compromising node stability.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap

In the above configuration, the NodeSwap feature is enabled, and the swap behavior is set to LimitedSwap, allowing for controlled swap usage by pods within their memory limits.
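Under LimitedSwap on cgroup v2, a container's swap allowance is proportional to its share of node memory: swapLimit = (memoryRequest / nodeMemoryCapacity) × nodeSwapCapacity. A quick Python illustration of that arithmetic:

```python
def container_swap_limit(mem_request: int, node_memory: int, node_swap: int) -> int:
    # LimitedSwap (cgroup v2): swap is granted in proportion to the
    # container's share of the node's memory capacity.
    return int(mem_request / node_memory * node_swap)

GiB = 1024 ** 3
# A container requesting 2 GiB on a 16 GiB node with 4 GiB of swap
# gets a 0.5 GiB swap limit.
print(container_swap_limit(2 * GiB, 16 * GiB, 4 * GiB))
```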

Kubernetes 1.30 implements a total of 45 Kubernetes Enhancement Proposals (KEPs). These enhancements span core Kubernetes functionality, storage improvements, networking advancements, security measures, and more.

Beyond the major changes we've discussed, there are other features added by the Kubernetes team. We encourage you to have a look at the Kubernetes v1.30 release notes for more details.

Ready to take your Kubernetes cluster management to the next level? With PerfectScale, you can enhance efficiency and scalability by intelligently managing your Kubernetes resources. Our advanced algorithms and machine learning techniques ensure your workloads are optimally scaled, reducing waste and cutting costs without compromising performance. Join forward-thinking companies who have already streamlined their Kubernetes environments with PerfectScale. Sign up and book a demo to experience the immediate benefits of automated Kubernetes cost optimization and resource management.
