Kubernetes v1.33, codenamed "Octarine," marks a vibrant leap into the platform’s second decade, introducing 47 enhancements: 14 features graduating to stable, 17 moving to beta, and 16 new alpha features. The name "Octarine," inspired by Terry Pratchett’s Discworld series, symbolizes the magical essence of this release, reflecting the community's dedication to continuous innovation and improvement.
These are the features we are most excited about in Kubernetes v1.33: in-place resource resize, the nftables backend for kube-proxy, DRA enhancements, and configurable HPA tolerance. Together they significantly improve Kubernetes' capabilities in performance optimization, efficient scaling, and resource management.
Let's discuss the major enhancements in Kubernetes v1.33:
Kubernetes v1.33 Stable Features
1. Sidecar Containers
Feature Group: SIG Node | KEP: #753
Kubernetes v1.33 graduates native support for sidecar containers to stable. Previously, implementing sidecar containers required workarounds, such as using regular containers with complex lifecycle management or relying on external tools for injection.
With v1.33, Kubernetes introduces a native approach by allowing initContainers to have a restartPolicy set to Always. This change enables these containers to start before the main application containers and continue running alongside them throughout the Pod's lifecycle. They also support probes (startup, readiness, liveness) to signal their operational state, and their Out-Of-Memory (OOM) score adjustments are aligned with primary containers to prevent premature termination under memory pressure. It simplifies the implementation of auxiliary tasks like logging, monitoring, and proxying within Pods.
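For instance, a minimal sketch of a Pod running a log-shipping sidecar alongside the main application could look like this (image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper
      image: example.com/log-shipper:1.0   # illustrative image
      restartPolicy: Always                # marks this init container as a sidecar
  containers:
    - name: app
      image: example.com/app:1.0           # illustrative image
```

Because the sidecar is declared as an init container with restartPolicy: Always, it starts before the app container and keeps running (restarting if needed) for the life of the Pod.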
2. Backoff Limits Per Index for Indexed Jobs
Feature Group: SIG Apps | KEP: #3850
Kubernetes v1.33 introduces an enhancement to job execution with the graduation of per-index backoff limits for Indexed Jobs. Previously, the backoffLimit parameter applied globally to the entire job, meaning that if the cumulative number of pod failures exceeded this limit, the entire job would be marked as failed, even if some individual tasks (indexes) were still viable.
With the new backoffLimitPerIndex feature, each index within an Indexed Job can have its own retry limit. This control ensures that the failure of specific indexes does not prematurely terminate the entire job, allowing other indexes to continue processing independently. Additionally, the maxFailedIndexes parameter allows users to specify the maximum number of failed indexes before the entire job is considered failed, providing further control over job execution.
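Here is a minimal sketch of an Indexed Job using these fields (the image is illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: per-index-backoff-job
spec:
  completions: 10
  parallelism: 3
  completionMode: Indexed
  backoffLimitPerIndex: 2   # each index gets its own retry budget
  maxFailedIndexes: 4       # fail the whole Job once more than 4 indexes fail
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: example.com/worker:1.0   # illustrative image
```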
In this configuration, the job is set to complete 10 tasks (completions: 10) with a maximum of 3 running in parallel (parallelism: 3). Each index is allowed up to 2 retries (backoffLimitPerIndex: 2). If more than 4 indexes fail (maxFailedIndexes: 4), the entire job will be marked as failed.
3. Job Success Policy
Feature Group: SIG Apps | KEP: #3998
In Kubernetes v1.33, the Job Success Policy feature has graduated to stable. Previously, a Job was marked as complete only when all Pods succeeded, which was limiting for scenarios like simulations or leader-worker patterns where partial success is acceptable.
With the introduction of .spec.successPolicy, users can define custom success criteria using succeededIndexes, succeededCount, or a combination of both. Once the specified conditions are met, the Job is marked as complete and any remaining Pods are terminated, optimizing resource utilization.
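A minimal sketch of a Job with a success policy (the image is illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-success-policy
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed
  successPolicy:
    rules:
      - succeededIndexes: "0,2-4"   # only these indexes count toward success
        succeededCount: 2           # at least two of them must succeed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: simulation
          image: example.com/simulation:1.0   # illustrative image
```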
In this configuration, the Job will be considered successful when at least two of the Pods with indexes 0, 2, 3, or 4 complete successfully. This flexibility allows for more nuanced success criteria, accommodating diverse workload requirements.
4. Subresource Support in kubectl
Feature Group: SIG CLI | KEP: #2590
Kubernetes v1.33 introduces stable support for the --subresource flag in kubectl commands, enhancing the ability to interact with subresources like status, scale, and finalize across various resources. Previously, managing subresources required complex workarounds or direct API interactions. With this enhancement, users can now seamlessly perform operations on subresources using familiar kubectl commands.
For example, you can read or update a specific subresource directly with familiar kubectl commands:
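```bash
# Read only the status subresource of a Deployment (resource names are illustrative)
kubectl get deployment nginx-deployment --subresource=status

# Read and update the scale subresource
kubectl get deployment nginx-deployment --subresource=scale
kubectl patch deployment nginx-deployment --subresource=scale --type=merge -p '{"spec":{"replicas":3}}'
```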
5. Bound ServiceAccount Token Security Improvements
Feature Group: SIG Auth | KEP: #4193
Kubernetes v1.33 graduates its ServiceAccount token security improvements to stable. Historically, ServiceAccount tokens were long-lived and not tightly bound to specific workloads or nodes, posing potential security risks if compromised. With the latest update, tokens now include a unique identifier (the JWT ID claim, or JTI) and embed node information, enabling more precise validation and auditing.
Previously, tokens could be used across different nodes, increasing the risk of misuse. The new node-specific restrictions ensure that tokens are only valid on designated nodes, reducing the attack surface. Additionally, the inclusion of the JTI allows for better traceability, linking token usage back to its origin, which is invaluable for auditing purposes.
These improvements also support the creation of time-limited tokens bound to specific nodes. For example, using the kubectl create token command, administrators can generate a token for a ServiceAccount that is valid only on a particular node:
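```bash
# ServiceAccount and node names are illustrative
kubectl create token my-serviceaccount \
  --bound-object-kind Node \
  --bound-object-name worker-node-1 \
  --duration 1h
```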
This token remains valid until it expires or until the associated Node or ServiceAccount is deleted.
6. Multiple Service CIDRs
Feature Group: SIG Network | KEP: #1880
Kubernetes v1.33 introduces the general availability of Multiple Service CIDRs. Previously, clusters were constrained to a single, static CIDR range for allocating ClusterIP addresses to Services. This limitation posed challenges in large-scale or dual-stack environments, where the predefined range could be exhausted or insufficient.
With this enhancement, Kubernetes now supports the dynamic addition of multiple CIDR ranges for service IP allocation through the introduction of two stable API resources: ServiceCIDR and IPAddress. The ServiceCIDR object allows administrators to define additional CIDR blocks from which ClusterIP addresses can be allocated, while the IPAddress resource tracks individual IP allocations, ensuring uniqueness and preventing conflicts. This new mechanism enables clusters to expand their service IP pools without downtime or complex reconfiguration.
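For example, an administrator could add an extra range with a ServiceCIDR object like this (the CIDR value is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
    - 10.100.0.0/16   # additional range for ClusterIP allocation
```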
7. Topology-Aware Routing with trafficDistribution: PreferClose
Feature Group: SIG Network | KEPs: #2433, #4444
Kubernetes v1.33 marks the general availability (GA) of topology-aware routing, enhancing traffic distribution in multi-zone clusters. This feature introduces the trafficDistribution field in the Service specification, allowing services to prefer routing traffic to endpoints that are topologically closer to the client, thereby reducing latency and cross-zone data transfer costs.
Previously, Kubernetes relied on annotations like service.kubernetes.io/topology-mode: Auto to influence traffic routing based on topology. However, this approach lacked flexibility and clarity. With the introduction of the trafficDistribution field, administrators can now explicitly define routing preferences. The PreferClose option within this field directs traffic to the nearest available endpoints based on network topology, typically favoring endpoints within the same zone as the client.
This enhancement builds upon the EndpointSlice API, where the controller populates hints indicating the preferred zones for each endpoint. The components like kube-proxy utilize these hints to make informed routing decisions, ensuring traffic is directed to the most appropriate endpoints.
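A minimal sketch of a Service opting into this behavior (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  trafficDistribution: PreferClose   # prefer topologically closer endpoints
```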
8. nftables backend for kube-proxy
Feature Group: SIG Network | KEP: #3866
Kubernetes v1.33 introduces the nftables backend for kube-proxy as a stable feature. While iptables remains the default on Linux nodes for compatibility reasons, administrators can opt into nftables mode for better efficiency.
Previously, kube-proxy primarily used iptables or ipvs backends, which had limitations in terms of performance and scalability, especially in large clusters. The nftables backend addresses these issues by providing faster processing of service endpoint changes and more efficient packet handling in the kernel.
To enable nftables mode, ensure your Linux nodes are running kernel version 5.13 or later. You can configure kube-proxy to use nftables by setting the --proxy-mode flag to nftables or by specifying mode: nftables in the kube-proxy configuration file. This new backend offers improved performance and scalability for service implementations within Kubernetes clusters.
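A minimal sketch of the corresponding kube-proxy configuration:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables
```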
Note: Some features, such as NodePort interfaces and localhost access, behave differently in nftables mode compared to iptables mode.
Kubernetes v1.33 Beta Features
9. In-place resource resize for vertical scaling of Pods
Feature Group: SIG Node, SIG Autoscaling | KEP: #1287
Kubernetes v1.33 introduces an enhancement with the beta release of in-place resource resizing for Pods. This feature allows you to adjust CPU and memory allocations for running Pods without restarting them.
Previously, modifying a Pod's resource requests or limits necessitated deleting and recreating the Pod, resulting in disruptions, especially for stateful applications. With the in-place resize capability, you can now update these resources dynamically, providing greater flexibility and efficiency in managing workloads.
This enhancement is beneficial for scenarios where workloads experience fluctuating demands. For example, you can allocate higher resources during peak times and scale down during off-peak periods without affecting the application's availability.
To utilize this feature, you can patch the Pod's resource specifications using the /resize subresource.
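For example, a sketch of such a patch (the CPU values are illustrative; the Pod and container names match the description below):

```bash
kubectl patch pod my-pod --subresource resize --patch \
  '{"spec":{"containers":[{"name":"my-container","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'
```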
This command updates the CPU requests and limits for the container named my-container within the Pod my-pod. The changes are applied in-place, avoiding the need to restart the Pod.
>> To dive deeper into the benefits and limitations of in-place resource resizing, check out Ant Weiss’ walkthrough “We Can Resize Pods without Restarts! Or Can't We?”
10. Dynamic Resource Allocation (DRA) for Network Interfaces
Feature Group: SIG Network, SIG Node, WG Device Management | KEP: #4817
In Kubernetes v1.33, the Dynamic Resource Allocation (DRA) framework has been enhanced to include standardized support for network interfaces, progressing to beta status.
Previously, DRA was primarily utilized for managing resources like GPUs, with limited support for networking devices. The lack of standardized reporting for network interfaces meant that integrating and managing these resources required custom solutions.
With the introduction of standardized fields in ResourceClaim.Status, DRA can now handle network interfaces more effectively. This includes reporting specific device attributes such as interface names, MAC addresses, and IP configurations. These enhancements provide better observability, debugging, and integration with network services. For example, administrators can define DeviceClass objects to specify selection criteria for network devices, and ResourceClaimTemplate objects to request devices matching certain attributes. Pods can then reference these claims to allocate the necessary network interfaces.
This feature is part of the ongoing efforts to make Kubernetes more adaptable to diverse workloads and hardware configurations.
11. Structured Parameter Support in Dynamic Resource Allocation (DRA)
Feature Group: SIG Node, SIG Scheduling, SIG Autoscaling | KEP: #4381
Kubernetes v1.33 brings enhancements to the Dynamic Resource Allocation (DRA) framework by improving structured parameter support. This update introduces a new v1beta2 version of the resource.k8s.io API, simplifying the process of defining and managing resource claims with structured parameters. Notably, regular users with the namespaced cluster edit role can now utilize DRA, expanding its accessibility beyond cluster administrators.
In earlier versions, DRA's structured parameter support was limited, and managing resource claims often required complex configurations. Additionally, upgrading the kubelet and associated drivers could lead to the deletion and re-creation of ResourceSlices, causing potential disruptions.
With the v1beta2 API, defining resource claims has become more straightforward, allowing for clearer specification of resource requirements. The kubelet now supports seamless upgrades, enabling drivers deployed as DaemonSets to use rolling updates without requiring the deletion of ResourceSlices. Furthermore, a 30-second grace period has been introduced before the kubelet cleans up after unregistering a driver, providing better support for drivers that do not use rolling updates.
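Below is a rough sketch of what such objects might look like. The driver and attribute names are illustrative and depend entirely on your DRA driver, and the layout assumes the v1beta2 resource.k8s.io API:

```yaml
apiVersion: resource.k8s.io/v1beta2
kind: DeviceClass
metadata:
  name: custom-device.example.com
spec:
  selectors:
    - cel:
        expression: device.driver == "example-vendor.com"   # illustrative driver name
---
apiVersion: resource.k8s.io/v1beta2
kind: ResourceClaimTemplate
metadata:
  name: custom-device-claim-template
spec:
  spec:
    devices:
      requests:
        - name: custom-device
          exactly:
            deviceClassName: custom-device.example.com
            selectors:
              - cel:
                  expression: device.attributes["example-vendor.com"].model == "X1000"   # illustrative attribute
```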
In this example, a DeviceClass named custom-device.example.com is defined with a selector for devices from "example-vendor". A ResourceClaimTemplate named custom-device-claim-template then requests a device of model "X1000" from this class. This setup allows for precise allocation of resources based on structured parameters.
12. Handle Unscheduled Pods Early When Active Queue Is Empty
Feature Group: SIG Scheduling | KEP: #5142
Kubernetes v1.33 enhances the scheduler's efficiency by introducing a mechanism to handle unscheduled pods more promptly. Previously, when the scheduler's active queue (activeQ) was empty, it would become idle, even if there were pods in the backoff queue (backoffQ) that were not in a backoff state due to errors. This behavior led to unnecessary delays in pod scheduling.
With the new update, the scheduler now checks the backoffQ when the activeQ is empty and pops pods that are ready to be scheduled. This proactive approach ensures that the scheduler remains active and continues to schedule pods without unnecessary idle periods, improving overall scheduling efficiency.
13. Asynchronous Preemption in the Kubernetes Scheduler
Feature Group: SIG Scheduling | KEP: #4832
Kubernetes v1.33 elevates Asynchronous Preemption to beta status, enhancing the scheduler's efficiency in handling high-priority pods. Traditionally, when a high-priority pod needed to preempt lower-priority ones, the scheduler would synchronously delete the lower-priority pods before proceeding, potentially causing delays. With Asynchronous Preemption, these deletions occur in parallel, allowing the scheduler to continue scheduling other pods without waiting.
Previously, the scheduler's throughput could be hindered during preemption events, especially in clusters with high pod churn. Now, by decoupling the preemption process from the main scheduling cycle, the scheduler maintains higher throughput and responsiveness.
>> Take a look at Preemptible Pods: Optimize Kubernetes Node Utilization
14. ClusterTrustBundles
Feature Group: SIG Auth | KEP: #3257
A ClusterTrustBundle is a Kubernetes resource that contains a set of X.509 trust anchors. These bundles can be associated with specific signers, allowing for organized and controlled distribution of certificates.
Kubernetes v1.33 promotes the ClusterTrustBundle feature to beta. This feature introduces a cluster-scoped resource that allows in-cluster certificate signers to publish and share trust anchors with workloads more efficiently.
Before this update, distributing root certificates within a cluster often required manual configurations or external tools, leading to potential inconsistencies and increased complexity. With ClusterTrustBundles, Kubernetes provides a native mechanism to store and manage these certificates, ensuring that workloads can access the necessary trust anchors securely and consistently.
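A minimal sketch of a ClusterTrustBundle (the signer name and PEM payload are placeholders):

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: ClusterTrustBundle
metadata:
  name: example.com:my-signer:ca-bundle   # signer-linked bundles are named with the signer as a prefix
spec:
  signerName: example.com/my-signer       # optional; associates the bundle with an in-cluster signer
  trustBundle: |
    -----BEGIN CERTIFICATE-----
    ...placeholder PEM data...
    -----END CERTIFICATE-----
```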
15. SupplementalGroups control
Feature Group: SIG Node | KEP: #3619
Kubernetes v1.33 brings the SupplementalGroups control feature to beta. This feature, initially introduced in v1.31, allows administrators to specify how supplemental groups are applied, offering better security and control over group permissions.
Previously, Kubernetes would automatically merge supplemental groups defined in a Pod's securityContext with those present in the container image's /etc/group file. This behavior could unintentionally grant containers access to unintended file permissions, posing potential security risks.
With the new enhancement, a supplementalGroupsPolicy field is introduced in the Pod's securityContext, supporting two policies:
Merge (default): It maintains the existing behavior by combining specified groups with those from the container image.
Strict: It applies only the explicitly defined groups, ignoring any from the container image.
By setting the policy to Strict, administrators can ensure that only the intended supplemental groups are applied, mitigating the risk of unintended file access.
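For example (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: strict-supplemental-groups
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [1001]
    supplementalGroupsPolicy: Strict   # only the groups listed here are applied
  containers:
    - name: app
      image: example.com/app:1.0       # illustrative image
```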
This configuration ensures that the container runs only with the supplemental group 1001, without inheriting any additional groups from the container image.
16. Support for mounting images as volumes
Feature Group: SIG Node and SIG Storage | KEP: #4639
Kubernetes v1.33 introduces a beta feature that allows mounting Open Container Initiative (OCI) images directly as volumes within Pods. This enhancement enables users to specify an image reference as a volume, facilitating the reuse of image content across containers in a Pod without embedding it into the main application image. Such an approach simplifies image creation, reduces potential vulnerabilities, and promotes better separation of concerns.
Previously, incorporating shared data across containers often required embedding the data into the application image or using external storage solutions, which could complicate image management and increase the attack surface. With the new image volume feature, data can be packaged separately and mounted as a read-only volume, ensuring consistency and reducing redundancy.
To utilize this feature, ensure that the ImageVolume feature gate is enabled in your cluster.
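A sketch of such a Pod (the application image is illustrative; the data image matches the description below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-pod
spec:
  containers:
    - name: app
      image: example.com/app:1.0                   # illustrative application image
      volumeMounts:
        - name: shared-data
          mountPath: /app/data
          readOnly: true
  volumes:
    - name: shared-data
      image:
        reference: my-registry.io/data-image:1.0   # OCI image mounted as a volume
        pullPolicy: IfNotPresent
```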
This YAML defines a Pod named image-volume-pod that mounts an OCI image (my-registry.io/data-image:1.0) as a read-only volume (shared-data) at the path /app/data inside the container. This utilizes Kubernetes' image volume feature, allowing the Pod to access data packaged in a container image without including it in the main application image.
17. Support for Direct Server Return (DSR) in Windows kube-proxy
Feature Group: SIG Windows | KEP: #5100
Kubernetes v1.33 promotes support for Direct Server Return (DSR) in Windows kube-proxy to beta status. DSR is a networking technique that allows return traffic from a service to bypass the load balancer and go directly back to the client, reducing latency and load on the load balancer.
In traditional load-balanced scenarios, both incoming and outgoing traffic pass through the load balancer, which can become a bottleneck. With DSR, only incoming traffic goes through the load balancer, while the response takes a direct path back to the client.
This feature is beneficial for high-throughput applications on Windows nodes, as it enhances performance and scalability. To utilize DSR, ensure that your environment supports it and that the necessary configurations are applied to both the load balancer and the Windows nodes.
18. Support for user namespaces within Linux Pods
Feature Group: SIG Node | KEP: #127
Kubernetes v1.33 introduces default support for Linux user namespaces in Pods. This feature allows containers to run as root within their namespace while being mapped to non-root users on the host, reducing the risk of privilege escalation.
Previously, using user namespaces required enabling the UserNamespacesSupport feature gate in addition to setting hostUsers: false in the Pod specification. With v1.33, the feature is enabled by default, though Pods still opt in explicitly by setting hostUsers: false, and existing Pods remain unaffected unless configured to do so.
How It Works:
By setting hostUsers: false, Kubernetes assigns a unique UID/GID range to the Pod, ensuring that even if a container process escapes, it lacks elevated privileges on the host. This isolation mitigates vulnerabilities and enhances overall cluster security.
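A minimal sketch of opting a Pod into a user namespace (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-pod
spec:
  hostUsers: false   # run the Pod in its own user namespace
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
```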
19. Zero value for Sleep Action of PreStop Hook
Feature Group: SIG Node | KEP: #4818
In Kubernetes v1.33, the PreStop lifecycle hook's Sleep action has been enhanced to accept a zero-second duration. This update allows users to define a no-operation (no-op) PreStop hook, which can be useful in scenarios where a PreStop hook is required by policy or tooling but no actual delay is desired.
Previously, specifying a zero-second duration in the Sleep action would result in a validation error, as the system required a positive integer value. With the introduction of the PodLifecycleSleepActionAllowZero feature gate, this restriction has been lifted, permitting zero as a valid value. When enabled, this feature allows the Sleep action to execute immediately without any delay.
This enhancement is beneficial for users who utilize admission webhooks or other automated systems that inject PreStop hooks into Pods.
To utilize this feature, ensure that the PodLifecycleSleepActionAllowZero feature gate is enabled on the kube-apiserver. Once enabled, you can define a PreStop hook with a zero-second Sleep action.
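For example (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-op-prestop
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
      lifecycle:
        preStop:
          sleep:
            seconds: 0   # no-op PreStop hook
```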
20. Declarative Validation with validation-gen
Feature Group: SIG API Machinery | KEP: #5073
Kubernetes v1.33 introduces validation-gen, an internal tool that enhances the robustness and maintainability of API validations by enabling developers to specify validation constraints declaratively.
Before this enhancement, Kubernetes relied on manual coding for validation rules, which could lead to inconsistencies and errors. Validation logic was embedded within the application code, making it challenging to maintain and prone to human error.
With the introduction of validation-gen, Kubernetes now allows for declarative specification of validation rules. This tool generates validation code based on declarative constraints. This shift improves the maintainability and reliability of the validation logic within Kubernetes.
21. ProcMount Option for Improved Pod Isolation
Feature Group: SIG Node | KEP: #4265
Kubernetes v1.33 introduces an improvement to Pod isolation with the procMount option. Previously, Kubernetes used strict /proc mount settings, which could complicate scenarios where nested containers or specific workloads needed more flexibility in terms of /proc access.
The procMount option, first introduced in Kubernetes v1.12 as an alpha feature, became an off-by-default beta in v1.31. Now, with v1.33, it has become an on-by-default beta feature. The primary benefit of this feature is that it lets users specify how the /proc filesystem is mounted within the Pod's container environment, including options to mask or mark certain paths as read-only. This is especially useful for unprivileged containers or when containers are running inside user namespaces, which may need more relaxed access to /proc paths compared to the default setup.
This feature allows users to better control access to the /proc filesystem inside Pods, which is important for enhancing security and supporting unprivileged containers running in user namespaces.
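For example, a sketch of a Pod requesting an unmasked /proc from inside a user namespace (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: unmasked-proc-pod
spec:
  hostUsers: false                 # Unmasked procMount requires a user namespace
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
      securityContext:
        procMount: Unmasked        # default is "Default", which masks certain /proc paths
```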
Kubernetes 1.33 Alpha Features
22. Configurable Tolerance for HorizontalPodAutoscalers
Feature Group: SIG Autoscaling | KEP: #4951
Kubernetes v1.33 introduces an alpha feature that allows setting custom tolerance values for HorizontalPodAutoscalers (HPAs). Previously, the HPA controller used a fixed global tolerance of 0.1 (10%) to determine when to scale pods. This meant that minor fluctuations in metrics wouldn't trigger scaling, which could be limiting for applications requiring more responsive scaling behavior.
With this new feature, you can specify custom tolerance values directly within the HPA configuration, providing finer control over scaling sensitivity. This is useful for applications with unique scaling requirements, such as those with long startup times or those that are sensitive to load variations.
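A rough sketch of what this could look like, assuming the HPAConfigurableTolerance feature gate is enabled (target and tolerance values are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
  behavior:
    scaleUp:
      tolerance: 0.05   # react to deviations above 5% instead of the 10% default
    scaleDown:
      tolerance: 0.2    # be more conservative when scaling down
```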
>> Take a look at our guide to the Horizontal Pod Autoscaler (HPA)
23. Container Restart Delays in Kubernetes
Feature Group: SIG Node | KEP: #4603
Kubernetes v1.33 introduces an alpha feature that allows administrators to configure the delay between container restart attempts. Traditionally, when a container failed and entered a CrashLoopBackOff state, Kubernetes would implement a backoff strategy starting with a 10-second delay, doubling each time up to a maximum of 5 minutes (300 seconds). This approach aimed to prevent rapid, repeated restarts that could strain system resources.
With the introduction of the KubeletCrashLoopBackOffMax feature gate, administrators can now customize the maximum backoff delay per node. By adjusting the maxContainerRestartPeriod in the kubelet configuration, the maximum delay can be set anywhere between 1 second and 300 seconds. This flexibility allows for tailored restart behaviors based on specific application needs or operational requirements.
Additionally, the ReduceDefaultCrashLoopBackOffDecay feature gate modifies the default backoff strategy, starting with a 1-second delay and doubling up to a maximum of 60 seconds. When both feature gates are enabled, the per-node configuration takes precedence, allowing for even more granular control.
To set a maximum container restart delay of 50 seconds on a node, the kubelet configuration would include:
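```yaml
# Sketch of a kubelet configuration; field names follow the alpha KubeletCrashLoopBackOffMax design
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletCrashLoopBackOffMax: true
crashLoopBackOff:
  maxContainerRestartPeriod: "50s"
```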
24. Custom Container Stop Signals
Feature Group: SIG Node | KEP: #4960
Prior to Kubernetes v1.33, customizing the stop signal sent to containers during termination required modifying the container image itself, which wasn't always feasible, especially when using third-party images.
Kubernetes v1.33 addresses this limitation by introducing the ContainerStopSignals feature gate, allowing users to specify custom stop signals directly in the Pod specification.
By adding the stopSignal field under the container's lifecycle, and specifying the operating system in spec.os.name, users can define which signal should be sent to the container process upon termination. This is useful for applications that need to perform specific cleanup tasks when receiving certain signals.
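A minimal sketch (the image is illustrative; the ContainerStopSignals feature gate must be enabled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-stop-signal
spec:
  os:
    name: linux                    # a custom stop signal must be paired with an explicit OS
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
      lifecycle:
        stopSignal: SIGUSR1
```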
In this example, when the container is terminated, it will receive the SIGUSR1 signal instead of the default SIGTERM.
25. Enhanced Image Pull Authentication for 'IfNotPresent' and 'Never' Policies
Feature Group: SIG Auth | KEP: #2535
In Kubernetes v1.33, a new alpha feature enhances the security of image pulls by ensuring authentication checks are performed even when the image pull policy is set to IfNotPresent or Never. Traditionally, Kubernetes would skip authentication if the image was already present on the node, potentially allowing unauthorized access if credentials changed.
With this feature enabled, Kubernetes verifies the provided credentials against the image registry regardless of the image's presence on the node. This ensures that any updates to credentials are respected, and unauthorized access is prevented.
To utilize this feature, enable the EnsureImagePullCredentialVerification feature gate on the kubelet.
26. New Configuration Option for kubectl with .kuberc for User Preferences
Feature Group: SIG CLI | KEP: #3104
In Kubernetes v1.33, a new alpha feature introduces the .kuberc configuration file, allowing users to define personal preferences for kubectl commands separately from cluster-specific settings.
Previously, customizing kubectl behavior required altering the kubeconfig file or creating custom shell scripts. This approach was cumbersome and not easily portable across different environments. With the introduction of the .kuberc file, users can now specify aliases, default flags (such as enabling server-side apply), and other preferences in a dedicated configuration file, thereby keeping cluster credentials and host information separate.
To enable this feature, set the environment variable KUBECTL_KUBERC=true and create a .kuberc file, typically located at ~/.kube/kuberc. Alternatively, specify a custom location using the --kuberc flag:
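```bash
KUBECTL_KUBERC=true kubectl get pods --kuberc /var/kube/rc
```

A rough sketch of the preference file itself, assuming the alpha kubectl.config.k8s.io/v1alpha1 schema (the alias and default shown are illustrative, and the format may change while the feature is alpha):

```yaml
apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preference
defaults:
  - command: apply
    options:
      - name: server-side
        default: "true"
aliases:
  - name: getn
    command: get
    appendArgs:
      - namespaces
```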
27. Secret-less Image Pulls with Kubelet
Feature Group: SIG Auth | KEP: #4412
Kubernetes v1.33 introduces an enhancement to image pulling mechanisms by enabling secret-less image pulls through the kubelet.
Before this enhancement, Kubernetes relied on static image pull secrets defined in the imagePullSecrets field of pod specifications. These secrets were long-lived and persisted in the Kubernetes API, posing potential security risks and complicating credential rotation. Additionally, the kubelet's on-disk credential provider did not support fetching Kubernetes ServiceAccount (KSA) tokens, limiting the ability to authenticate image pulls using pod-specific identities.
With the introduction of this feature, the kubelet's on-disk credential provider now supports optional KSA token fetching. When configured, the kubelet provisions a KSA token bound to the current pod and its service account. This token, along with required annotations on the KSA, is sent to the credential provider plugin, enabling image pulls to be authenticated using the pod's own identity. This approach reduces the reliance on static secrets and enhances security by ensuring that image pull credentials are tightly scoped and ephemeral.
28. PSI (Pressure Stall Information) Metrics for Scheduling Improvements
Feature Group: SIG Node | KEP: #4205
Kubernetes v1.33 introduces support for Pressure Stall Information (PSI) metrics on Linux nodes using cgroupv2. PSI provides real-time data on resource contention, specifically CPU, memory, and I/O, which is important for understanding and managing system performance under load.
Before this update, Kubernetes lacked native support for PSI metrics. While PSI was available at the kernel level, Kubernetes did not utilize this data, leading to less precise resource management and potential performance issues under heavy load.
With the introduction of PSI metrics in v1.33, Kubernetes now exposes these metrics through cgroupv2. This integration allows for monitoring and responding to resource pressures in a more granular and timely manner.
29. Node Topology Labels via Downward API
Feature Group: SIG Node | KEP: #4742
Kubernetes v1.33 introduces an alpha feature that allows pods to access node topology labels directly through the Downward API.
Before this update, applications that needed node topology information had to implement custom solutions, such as using init containers with elevated permissions to query the Kubernetes API for node details. These methods were not only cumbersome but also posed potential security risks.
With the new feature, Kubernetes introduces the PodTopologyLabels admission plugin. When enabled, this plugin automatically copies specific standard node labels like topology.kubernetes.io/zone, topology.kubernetes.io/region, and kubernetes.io/hostname to the pod's metadata upon scheduling. These labels can then be accessed within the pod using the Downward API, either as environment variables or through mounted volumes.
This enhancement simplifies the process for applications to obtain information about the node's topology, such as its zone or region, without requiring additional privileges or complex workarounds.
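For example, assuming the PodTopologyLabels admission plugin is enabled, a container could read its zone from the copied label via the Downward API (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: topology-aware-pod
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
      env:
        - name: NODE_ZONE
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['topology.kubernetes.io/zone']
```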
Kubernetes v1.33 implements a total of 47 Kubernetes Enhancement Proposals (KEPs), spanning core functionality, storage improvements, networking advancements, security measures, and more.
Beyond the major changes we've discussed, there are many other features added by the Kubernetes team. We encourage you to have a look at the Kubernetes v1.33 release notes and the full changelog for more details.
