September 25, 2024

Taints and Tolerations in Kubernetes

Adeolu Oyinlola
Technical Writer

Did you know that using taints and tolerations well in Kubernetes can boost your resource utilization by as much as 30% and cut infrastructure costs by 20%? Numbers like these show how much it pays to understand and use these features properly. As Kubernetes continues to dominate the container orchestration landscape, fine-grained control over pod scheduling becomes ever more critical. In this article, we'll dive into how taints and tolerations help you schedule pods and manage resources in your containerized applications.

Key Takeaways

  • Understand the limitations of Kubernetes' default scheduling process and how taints and tolerations address them
  • Learn the purpose and mechanics of taints and tolerations in Kubernetes
  • See how to apply taints and tolerations in your Kubernetes cluster
  • Explore practical use cases for taints and tolerations

Understanding Kubernetes Scheduling

To fully appreciate the significance of taints and tolerations in Kubernetes, you first need to understand the default scheduling process and its inherent limitations. The Kubernetes scheduler is tasked with the critical function of placing pods onto suitable nodes within the cluster, ensuring optimal resource allocation and efficient distribution of workloads.

The default Kubernetes scheduling process encompasses several pivotal steps:

  1. Resource requirements: The scheduler assesses the CPU, memory, and other resource requests declared in the pod's manifest (see the example manifest after this list).
  2. Node selection: The scheduler identifies the set of nodes that can satisfy the pod's resource requirements.
  3. Node scoring: The scheduler applies a series of predefined scoring algorithms to rank the eligible nodes, weighing factors such as available resources and node affinity.
  4. Pod assignment: The scheduler selects the node with the highest score and assigns the pod to it.
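
As a minimal sketch of step 1, the manifest below declares the resource requests the scheduler evaluates when filtering and scoring nodes (the pod name, image, and values here are illustrative, not prescriptive):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"        # the scheduler only considers nodes with this much allocatable CPU free
        memory: "256Mi"    # likewise for memory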

Limitations of Default Scheduling

While the default Kubernetes scheduling process serves many workloads well, it encounters challenges in more sophisticated scenarios. The primary limitations include:

  • Lack of node specialization: On its own, the default scheduler does not account for node specialization, such as hardware configuration or software environment, when assigning pods.
  • Limited node isolation: The default scheduler provides no mechanism to reserve specific nodes for certain workloads, leading to potential resource contention and performance issues.
  • Insufficient workload segregation: The default scheduler offers no method to segregate different types of workloads (e.g., production vs. development) onto separate node pools.

To overcome these limitations and facilitate more sophisticated scheduling capabilities, Kubernetes introduces the concepts of taints and tolerations. 

How the scheduler uses a set of rules to determine eligible nodes

Understanding Kubernetes Taints and Tolerations

Kubernetes taints and tolerations are two complementary mechanisms that ensure pods are scheduled onto appropriate nodes within a cluster. A taint is applied to a node and prevents pods from being scheduled on that node unless they have a corresponding toleration. This mechanism allows you to isolate workloads, reserve nodes for specific purposes, and control how resources are allocated across your cluster.

What are Taints?

Taints are key-value properties applied to Kubernetes nodes to keep certain pods away. If a node is tainted, pods without a matching toleration won't be scheduled there. This is great for setting aside nodes for tasks that need special resources or hardware. Kubernetes defines three taint effects, each influencing pod scheduling differently:

  • NoSchedule: Pods without a matching toleration will not be scheduled on the tainted node.
  • PreferNoSchedule: The scheduler tries to avoid placing pods without a matching toleration on the tainted node, but cannot guarantee their complete exclusion.
  • NoExecute: Pods without a matching toleration are not scheduled on the tainted node, and any already running there are evicted.
K8s Taints

Example of a taint:


kubectl taint nodes node1 key=value:NoSchedule

In this example, the node node1 is tainted with key=value:NoSchedule, meaning no pods will be scheduled on node1 unless they have a matching toleration.

What are Tolerations?

Tolerations are the counterpart to taints. Applied to pods, they allow (but do not require) the scheduler to place a pod on nodes carrying matching taints. This flexibility helps control where pods go, making sure they land on the nodes best suited to the job, as shown in the sketch below.

K8s Tolerations
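
For example, a pod carrying the following toleration (a minimal sketch matching the key=value:NoSchedule taint applied earlier) becomes eligible to run on node1:

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"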

Why use Taints and Tolerations in Kubernetes

Taints and tolerations are an important safeguard for improving pod scheduling and resource allocation. They give us control over where pods land: by setting taints on nodes, we can keep certain pods away, which is ideal for reserving special hardware such as GPU or high-memory nodes for the tasks that actually need it. Among other things, they let us:

  • Appropriate placement: Ensure pods land on nodes whose capabilities match their resource requirements.
  • Maintenance management: Handle node maintenance operations, such as reboots or upgrades, without disrupting running workloads.
  • Workload isolation: Keep specific workloads separate from others, such as running production and development environments on different nodes.
  • Resource reservation: Reserve nodes for critical applications so they are not overwhelmed by less critical workloads.
  • Fault tolerance: Taint nodes that are experiencing issues to prevent new pods from being scheduled there, maintaining the health of your applications.

Used well, taints and tolerations make our containerized applications run more predictably, improving overall performance and reliability.

Implementing Taints and Tolerations in Your Kubernetes Cluster

To implement taints and tolerations, follow these steps:

  1. Identify the nodes that need to be tainted in your Kubernetes cluster and apply the taints.
  2. Determine which pods need to be scheduled on tainted nodes.
  3. Add the appropriate tolerations to the pod specifications.
  4. Deploy the pods with the configured tolerations.
  5. Monitor the pod scheduling and adjust tolerations as needed.

Applying a Taint to a Node:

To taint a node, use the kubectl taint command as shown:


kubectl taint nodes <node-name> <key>=<value>:<taint-effect>
Taint to a Node

kubectl taint nodes node01 app=ssd:NoSchedule

This command prevents pods from being scheduled on node01 unless they tolerate the app=ssd taint.
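
To verify that the taint was applied, you can inspect the node:

kubectl describe node node01 | grep Taints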

Adding a Toleration to a Pod:

To allow a pod to be scheduled on a tainted node, you must define a corresponding toleration in the pod's YAML configuration:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "ssd"
    effect: "NoSchedule"
Toleration to a Pod

This configuration allows mypod to be scheduled on any node with the app=ssd:NoSchedule taint.
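
Once the pod is deployed, you can confirm which node it was scheduled on:

kubectl get pod mypod -o wide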

Advanced Taint and Toleration Strategies

As your cluster matures, you will frequently want to refine your scheduling strategies. Let's look at advanced techniques that combine taints with node affinity to elevate your Kubernetes scheduling.

Combining Taints and Node Affinity

Taints and tolerations offer a robust framework for governing pod placement, yet they become significantly more powerful when combined with node affinity. This synergy enables scheduling rules that are precisely aligned with your infrastructure and workload demands.

Consider a scenario where certain nodes are earmarked for a specific workload, such as GPU-accelerated machine learning tasks. By tainting these nodes with a custom taint, like gpu=true:NoSchedule, and crafting a pod specification that tolerates this taint, while also adhering to a node affinity rule, you can ensure the pod's deployment on nodes with the requisite GPU capabilities.

  1. Taint the nodes with the required hardware: kubectl taint nodes node1 gpu=true:NoSchedule
  2. In the pod specification, include a toleration for the custom taint:

tolerations:
- key: "gpu"
  operator: "Equal"
  value: "true" 
  effect: "NoSchedule"

  3. Finally, embed a node affinity rule within the pod specification to ensure the pod is scheduled on the appropriate nodes:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: gpu
          operator: Exists

This strategic integration of taints and node affinity enables a highly refined scheduling approach. It ensures that specialized workloads are deployed on the most suitable nodes, thereby preventing any potential conflicts with other applications within your cluster.
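
Putting the pieces together, a complete pod specification combining the toleration and the affinity rule might look like the following minimal sketch (the pod name, container name, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload       # hypothetical name
spec:
  containers:
  - name: trainer          # hypothetical container
    image: nginx
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu       # matches a node label, not the taint
            operator: Exists

Note that node affinity matches node labels, not taints, so the GPU nodes must also carry a matching label (for example, kubectl label nodes node1 gpu=true) for the affinity rule to select them.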

Monitoring and Troubleshooting Tainted Nodes

Ensuring the health and performance of your Kubernetes cluster is paramount. Below, we look at how to monitor tainted nodes and troubleshoot any issues that emerge.

Identifying Issues with Tainted Nodes

It is imperative to monitor the status of tainted nodes to maintain the seamless operation of your Kubernetes deployment. Observing these nodes closely enables swift identification and resolution of any arising problems.

Effective monitoring of tainted nodes involves the following strategies:

  • Regularly check the node status using the Kubernetes command-line interface (CLI) or dashboard to identify any nodes that have been tainted. You can use the following command:

kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, taints: .spec.taints}'

Alternatively, if you just want a simple list of nodes and whether they have taints, you can use:


kubectl describe nodes | grep -E "Name:|Taints:"
  • Set up alerts and notifications to receive timely updates on the status of tainted nodes, allowing you to respond promptly to any issues that may arise.
  • Review the event logs for your Kubernetes cluster to gain insights into the reasons behind the tainting of nodes, such as resource constraints or specific workload requirements.

When troubleshooting tainted nodes, consider the following steps:

  1. Analyze the taint effects and understand how they are impacting the scheduling and deployment of your Kubernetes workloads.
  2. Investigate the root causes of the tainting, such as resource depletion, hardware failures, or changes in node configurations.
  3. Determine the appropriate actions to resolve the underlying issues, which may involve scaling resources, applying additional tolerations, or even removing the taint from the affected nodes (see the example below).
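
For instance, a taint can be removed with the same kubectl taint command plus a trailing hyphen, shown here for the taint applied earlier:

kubectl taint nodes node01 app=ssd:NoSchedule-
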
Some useful warning thresholds:

Metric                              | Threshold                    | Implication
Tainted Nodes                       | More than 10% of total nodes | Potential scheduling bottleneck or resource constraints
CPU Utilization on Tainted Nodes    | Above 80%                    | Insufficient resources for workloads, leading to potential issues
Memory Utilization on Tainted Nodes | Above 80%                    | Insufficient resources for workloads, leading to potential issues


By diligently monitoring and troubleshooting tainted Kubernetes nodes, you can ensure the optimal performance and reliability of your containerized applications. This approach addresses any resource or scheduling challenges that may emerge.

Practical Use Cases for Taints and Tolerations

It’s important to note that taints aren’t only used by engineers for smarter pod placement. They are also used under the hood by the Kubernetes control plane to manage cluster configuration and node state transitions.

The Kubernetes Node Controller uses taints to manage the state of nodes by controlling which pods can or cannot be scheduled on specific nodes, especially during node failures or maintenance. Here are real-life examples of how taints are used for node state management:

1. Node Readiness and Eviction (Unreachable Node)

When a node becomes unreachable or fails a health check, the Node Controller taints the node with node.kubernetes.io/unreachable or node.kubernetes.io/not-ready. This prevents new pods from being scheduled on the node while tolerating pods that can handle short-term unavailability.

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: container-example
    image: nginx
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300

In this case, the pod tolerates an unreachable node for 300 seconds, giving it time to recover before eviction.

2. Node Undergoing Maintenance

Before performing node maintenance (such as upgrading or restarting), an administrator can taint the node, for example with node.kubernetes.io/maintenance, ensuring that no new pods are scheduled there while existing workloads are handled gracefully.

Example:


kubectl taint nodes <node-name> node.kubernetes.io/maintenance:NoSchedule

This ensures that the node won’t receive new workloads, allowing admins to drain and prepare it for maintenance without interruptions.

In the node draining process, Kubernetes automatically adds the taint node.kubernetes.io/unschedulable:NoSchedule to prevent new pods from being placed on the node while existing ones are safely evicted.

When you initiate the drain command (kubectl drain), the node is marked as unschedulable. This prevents the Kubernetes scheduler from placing any new pods on the node.


kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
  • The node.kubernetes.io/unschedulable taint is automatically added.
  • The node evicts non-DaemonSet pods, and the scheduler moves them to other nodes in the cluster that do not have the taint applied.
  • Pods that tolerate the taint or are part of a DaemonSet remain running on the node.

After the node is drained, it remains unschedulable, and new pods won’t be placed on it until the node is untainted or made schedulable again with the command:


kubectl uncordon <node-name>

This ensures that workloads are safely relocated while allowing maintenance on the node without disrupting the cluster's operations.

3. Eviction for Insufficient Resources

If a node runs low on disk space or other critical resources, the Node Controller taints the node with a condition taint such as node.kubernetes.io/disk-pressure (older Kubernetes releases used node.kubernetes.io/out-of-disk). This prevents any new pods from being scheduled there while giving existing pods time to be evicted.

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: container-example
    image: nginx
  tolerations:
  - key: "node.kubernetes.io/disk-pressure"
    operator: "Exists"
    effect: "NoSchedule"

In this example, pods without the toleration are steered away from the node, while this pod's toleration keeps it eligible to be scheduled there despite the disk-pressure taint.

These examples demonstrate how the Node Controller dynamically uses taints to manage node availability and scheduling, ensuring reliable operation of Kubernetes clusters by coordinating node failures, maintenance, or resource exhaustion.

Recap

Our review of Kubernetes' default scheduling mechanism and its constraints has highlighted the indispensable role of taints and tolerations. By applying these configurations, we can establish specialized node pools, dedicate specific hardware resources, and tailor our cluster's behavior to the intricate needs of our applications.

Looking ahead, taints and tolerations will remain pivotal to Kubernetes scheduling and optimization. By adopting these techniques, we can fully harness the capabilities of our Kubernetes deployments, ensuring superior performance, dependability, and scalability for our critical workloads.

Want to make sure your clusters aren’t just reliable but also cost-effective? Start with PerfectScale for free today or Book a demo and save up to 50% of your K8S costs while improving reliability. 
