August 14, 2024

Preemptible Pods: Optimizing Kubernetes Node Utilization

Adeolu Oyinlola
Technical Writer

Have you ever faced the challenge of managing resource contention in a Kubernetes cluster? Critical applications slowing down as less important workloads consume valuable CPU and memory is a nightmare scenario for any DevOps engineer. Fortunately, there's a way to ensure your essential services always get the resources they need.

In this blog, we'll explore preemptible pods – a powerful feature in Kubernetes that allows you to optimize node utilization by prioritizing critical workloads and preempting less important ones. We'll delve into the basics of pod priority and preemption, provide step-by-step instructions on setting up PriorityClasses, and walk through a practical example illustrating how to implement and observe pod preemption in action.

By the end of this post, you’ll understand how leveraging pod priority and preemption can enhance your cluster's efficiency and reliability, ensuring your critical applications always come first. Let’s dive in and discover how to keep your Kubernetes cluster running smoothly, even under heavy load.

What Pod Priority and Preemption Are and Why They Matter

Pod priority is a mechanism that lets you define the importance of a pod relative to others. By assigning priority classes, you can control which pods are scheduled first when resources are limited. Preemption, in turn, is the process by which higher-priority pods can evict lower-priority pods to obtain the resources they need, ensuring that the most important workloads are always running.
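
Kubernetes also ships with two built-in PriorityClasses, system-cluster-critical and system-node-critical, which are reserved for essential system components and carry very large priority values. Listing the priority classes in a cluster shows them alongside any classes you define yourself:

kubectl get priorityclass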

But why Priority and Preemption?

Priority and preemption are chosen over other Kubernetes node-utilization strategies in scenarios such as:

  1. Critical Workloads:
    • Ensuring that high-priority workloads always have the necessary resources, even at the expense of less critical ones.
    • Vital for applications with strict SLAs or those that are mission-critical.
  2. Resource Scarcity:
    • Effective in environments with resource constraints where optimal resource utilization is crucial.
    • Prevents resource starvation for important services by evicting lower-priority pods.
  3. Dynamic Workload Management:
    • Ideal for clusters with highly dynamic workloads where the priority of applications can change frequently.
    • Allows for a more responsive and adaptable resource allocation strategy.

While priority and preemption are powerful tools for managing node utilization and ensuring that critical workloads get the resources they need, there are several other strategies for optimizing Kubernetes node utilization, including Horizontal Pod Autoscaling (HPA), Vertical Pod Autoscaling (VPA), Cluster Autoscaling, and Resource Quotas and Limits. Each approach has its own benefits and use cases, and choosing the right one depends on your specific requirements and environment.
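
For contrast with priority-based scheduling, here is what a Resource Quota looks like: it caps aggregate consumption per namespace rather than ranking individual pods. This is a minimal sketch, and the namespace team-a and the numbers are purely illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi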

Walkthrough of a Practical Example

Step 1. Setting Up PriorityClasses

Let's create two PriorityClasses: one for high-priority pods and one for low-priority pods.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
globalDefault: false
description: "This priority class is for high-priority pods."

---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 100
globalDefault: false
description: "This priority class is for low-priority pods."
  • high-priority: Used for critical pods.
  • low-priority: Used for less critical pods.

Apply these PriorityClasses:

kubectl apply -f priority-classes.yaml
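
You can confirm that both classes were created with the expected values:

kubectl get priorityclass high-priority low-priority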

Step 2. Creating Pods with PriorityClasses

Now, let's create two pods, one with each PriorityClass.

High-Priority Pod: 

apiVersion: v1
kind: Pod
metadata:
  name: high-priority-pod
spec:
  priorityClassName: high-priority
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'sleep 3600']

Low-Priority Pod:

apiVersion: v1
kind: Pod
metadata:
  name: low-priority-pod
spec:
  priorityClassName: low-priority
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'sleep 3600']

Apply these pods:

kubectl apply -f high-priority-pod.yaml
kubectl apply -f low-priority-pod.yaml
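
To confirm that Kubernetes resolved each priorityClassName into a numeric priority, print the pods' spec.priority field, which is populated automatically at admission time:

kubectl get pods -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priority,STATUS:.status.phase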

Step 3. Simulating Preemption

To see preemption in action, we need to simulate a scenario where the cluster is under resource pressure.

In this setup, we will use a Kubernetes cluster with one control plane node and one worker node, which is ideal for this exercise. Ensure your system has enough resources to run both nodes; typically that means at least 2 GB of RAM and 2 CPU cores.

Alternatively, you can follow along using the free Kubernetes playground provided by Killercoda.
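
If you prefer to run everything locally, a tool such as kind can create this one-control-plane, one-worker topology. A minimal sketch follows; the file name kind-config.yaml is simply our choice:

# kind-config.yaml: one control plane node and one worker node
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker

kind create cluster --config kind-config.yaml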

  1. Create Resource Constraints:
    • Deploy a workload that consumes significant resources.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-hog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: resource-hog
  template:
    metadata:
      labels:
        app: resource-hog
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'while true; do echo "Resource Hog"; sleep 10; done']
        resources:
          requests:
            memory: "1Gi"
            cpu: "1000m"

Apply this deployment:

kubectl apply -f resource-hog.yaml
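
Before moving on, it helps to confirm that the worker node is close to its capacity; replace <worker-node-name> with the name shown by kubectl get nodes. If the node still has plenty of spare capacity, you can scale the deployment up:

kubectl describe node <worker-node-name> | grep -A 8 "Allocated resources"
kubectl scale deployment resource-hog --replicas=2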

2. Deploy another High-Priority Pod:

  • This pod requests more memory and CPU than remain free on the worker node, so the scheduler must preempt lower-priority pods to place it.

apiVersion: v1
kind: Pod
metadata:
  name: new-high-priority-pod
spec:
  priorityClassName: high-priority
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        # Sized so the pod cannot fit alongside the resource hog
        # without preempting lower-priority pods.
        memory: "1Gi"
        cpu: "1000m"

Apply this pod:

kubectl apply -f new-high-priority-pod.yaml

Step 4. Observing Preemption and Pod Eviction

To observe the preemption:

kubectl get pods -o wide

You should see lower-priority pods evicted to make room for new-high-priority-pod. The scheduler chooses victims among the lowest-priority pods whose removal frees enough requested resources; in this setup the candidates are the low-priority-pod and the resource-hog pod, which runs at the cluster default priority of 0.
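
For more detail on what the scheduler decided, the cluster events and the pending pod's status are worth inspecting; once the scheduler picks a node through preemption, it records that choice in the pod's nominatedNodeName field:

kubectl get events --sort-by=.lastTimestamp
kubectl describe pod new-high-priority-pod
kubectl get pod new-high-priority-pod -o jsonpath='{.status.nominatedNodeName}'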

How Candidates for Eviction Are Chosen

Kubernetes uses several factors to determine which pods to evict:

  1. Priority: Pods with lower priority are considered first.
  2. Quality of Service (QoS):
    • Guaranteed: every container has CPU and memory requests and limits set, and the requests equal the limits.
    • Burstable: at least one container has a resource request or limit, but the pod does not meet the Guaranteed criteria.
    • BestEffort: no container has any resource requests or limits.

QoS primarily governs kubelet node-pressure eviction: Guaranteed pods are the most stable and are evicted last, while BestEffort pods are the least stable and are evicted first. Scheduler preemption, as demonstrated here, is driven by priority: both high-priority-pod and low-priority-pod are BestEffort, so the low-priority-pod is preempted because of its lower priority value.
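
You can check which QoS class Kubernetes assigned to any pod that is still running by reading it from the pod status, for example:

kubectl get pod high-priority-pod -o jsonpath='{.status.qosClass}'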

Impact of Pod Eviction on Kubernetes Node Utilization

Pod eviction plays a crucial role in optimizing Kubernetes node utilization. By preempting lower-priority pods, Kubernetes ensures that high-priority applications have the necessary resources to run smoothly. This mechanism prevents critical services from being starved of resources, thus maintaining the overall performance and reliability of the cluster.

Moreover, pod eviction helps in efficient resource management by reclaiming resources from non-essential or less important workloads. This proactive approach maximizes the use of available resources, reduces wastage, and maintains a balanced load across the nodes. Consequently, Kubernetes can handle varying workloads more effectively, ensuring that the most important applications remain operational even during peak demand.

Conclusion and Best Practices

Implementing pod priority and preemption in Kubernetes is a powerful way to ensure that your critical applications always receive the resources they need, enhancing the overall efficiency and reliability of your cluster. By prioritizing workloads effectively, you can maintain optimal performance and prevent resource contention issues that could disrupt essential services.

To make the most out of these features, regularly review and adjust your PriorityClasses based on changing workload demands. Monitor your cluster's performance to identify and resolve any potential bottlenecks. By following these best practices, you can leverage Kubernetes' advanced scheduling capabilities to keep your infrastructure resilient and responsive, even under heavy load.
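
One practical guardrail worth knowing: a PriorityClass can set preemptionPolicy: Never, so its pods are scheduled ahead of lower-priority pods but never evict running workloads. The sketch below is illustrative, and the class name important-no-preempt is our own:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: important-no-preempt
value: 900
preemptionPolicy: Never
globalDefault: false
description: "High scheduling priority without preemption."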

Ready to optimize your Kubernetes cluster and ensure your critical applications always have the resources they need? With PerfectScale, you can intelligently manage your Kubernetes resources, prioritizing essential workloads and minimizing resource contention. Our advanced algorithms and machine learning techniques help you achieve optimal node utilization, reducing waste and cutting costs without compromising performance. Join forward-thinking companies who have already enhanced their Kubernetes environments with PerfectScale. Sign up and Book a demo to experience the immediate benefits of automated Kubernetes resource management and optimization. Keep your critical applications running smoothly and efficiently, even under heavy load.
