
Handling retriable and non-retriable pod failures with Pod failure policy

FEATURE STATE: Kubernetes v1.25 [alpha]

This document shows you how to use the Pod failure policy, in combination with the default Pod backoff failure policy, to improve control over the handling of container- or Pod-level failures within a Job.

The definition of Pod failure policy may help you to:

  • better utilize the computational resources by avoiding unnecessary Pod retries.
  • avoid Job failures due to Pod disruptions (such as preemption, API-initiated eviction, or taint-based eviction).

Before you begin

You should already be familiar with the basic use of Job.

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of these Kubernetes playgrounds: Killercoda or Play with Kubernetes.

Your Kubernetes server must be at or later than version v1.25. To check the version, enter kubectl version. As the feature is alpha, make sure that the JobPodFailurePolicy and PodDisruptionConditions feature gates are enabled in your cluster.
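
To confirm the prerequisites, you can run the commands below. The kube-apiserver manifest path is an assumption based on a kubeadm-style setup; your cluster may configure feature gates differently.

# Print the client and server versions; the server should report v1.25 or later
kubectl version

# kubeadm-style clusters only (path is an assumption): check that the
# JobPodFailurePolicy and PodDisruptionConditions feature gates are enabled
grep feature-gates /etc/kubernetes/manifests/kube-apiserver.yaml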

Using Pod failure policy to avoid unnecessary Pod retries

With the following example, you can learn how to use Pod failure policy to avoid unnecessary Pod restarts when a Pod failure indicates a non-retriable software bug.

First, create a Job based on the config:

apiVersion: batch/v1
kind: Job
metadata:
  name: job-pod-failure-policy-failjob
spec:
  completions: 8
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash"]
        args:
        - -c
        - echo "Hello world! I'm going to exit with 42 to simulate a software bug." && sleep 30 && exit 42
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob
      onExitCodes:
        containerName: main
        operator: In
        values: [42]

by running:

kubectl create -f job-pod-failure-policy-failjob.yaml
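
Optionally, while the Job is running, you can watch the Pods it creates. This command is an addition to the walkthrough; it uses the job-name label that the Job controller adds to the Pods:

# Watch the Pods created by the Job; within about 30s they should fail with exit code 42
kubectl get pods -l job-name=job-pod-failure-policy-failjob --watch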

After around 30s the entire Job should be terminated. Inspect the status of the Job by running:

kubectl get jobs -l job-name=job-pod-failure-policy-failjob -o yaml

In the Job status, you can see a Job Failed condition with the reason field equal to PodFailurePolicy. Additionally, the message field contains more detailed information about the Job termination, such as: Container main for pod default/job-pod-failure-policy-failjob-8ckj8 failed with exit code 42 matching FailJob rule at index 0.
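
If you prefer to inspect only the conditions rather than the full YAML, a jsonpath query such as the following (an addition to the original walkthrough) shows them directly:

kubectl get jobs/job-pod-failure-policy-failjob -o jsonpath='{.status.conditions}'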

For comparison, if the Pod failure policy were disabled, it would take 6 retries of the Pod, taking at least 2 minutes.

Clean up

Delete the Job you created:

kubectl delete jobs/job-pod-failure-policy-failjob

The cluster automatically cleans up the Pods.
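
If you want to verify the cleanup, you can list the Pods with the same label selector; once the cascading deletion has finished, the command should report that no resources were found:

kubectl get pods -l job-name=job-pod-failure-policy-failjob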

Using Pod failure policy to ignore Pod disruptions

With the following example, you can learn how to use Pod failure policy to ignore Pod disruptions from incrementing the Pod retry counter towards the .spec.backoffLimit limit.

  1. Create a Job based on the config:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: job-pod-failure-policy-ignore
    spec:
      completions: 4
      parallelism: 2
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: main
            image: docker.io/library/bash:5
            command: ["bash"]
            args:
            - -c
            - echo "Hello world! I'm going to exit with 0 (success)." && sleep 90 && exit 0
      backoffLimit: 0
      podFailurePolicy:
        rules:
        - action: Ignore
          onPodConditions:
          - type: DisruptionTarget

    by running:

    kubectl create -f job-pod-failure-policy-ignore.yaml
    
  2. Run this command to check the nodeName the Pod is scheduled to:

    nodeName=$(kubectl get pods -l job-name=job-pod-failure-policy-ignore -o jsonpath='{.items[0].spec.nodeName}')
    
  3. Drain the node to evict the Pod before it completes (within 90s):

    kubectl drain nodes/$nodeName --ignore-daemonsets --grace-period=0
    
  4. Inspect the Job's .status.failed to check that the counter of failed Pods is not incremented (see also the jsonpath check after these steps):

    kubectl get jobs -l job-name=job-pod-failure-policy-ignore -o yaml
    
  5. Uncordon the node:

    kubectl uncordon nodes/$nodeName
    

The Job resumes and succeeds.
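
As an additional check (not part of the original steps), you can read the relevant counters directly with jsonpath; the failed counter may print nothing while it is zero, and the succeeded counter should reach 4 once the Job completes:

# Failed Pod counter for the Job (may print nothing while it is zero)
kubectl get jobs/job-pod-failure-policy-ignore -o jsonpath='{.status.failed}'

# Succeeded Pod counter; expect 4 once the Job completes
kubectl get jobs/job-pod-failure-policy-ignore -o jsonpath='{.status.succeeded}'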

For comparison, if the Pod failure policy were disabled, the Pod disruption would result in terminating the entire Job (as the .spec.backoffLimit is set to 0).

Cleaning up

Delete the Job you created:

kubectl delete jobs/job-pod-failure-policy-ignore

The cluster automatically cleans up the Pods.

Alternatives

You could rely solely on the Pod backoff failure policy by specifying the Job's .spec.backoffLimit field. However, in many situations it is difficult to strike a balance: .spec.backoffLimit needs to be low enough to avoid unnecessary Pod retries, yet high enough to make sure the Job is not terminated by Pod disruptions.
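
As a minimal sketch of this alternative (the Job name and values are illustrative and not part of the original page), such a Job relies only on .spec.backoffLimit and has no podFailurePolicy:

apiVersion: batch/v1
kind: Job
metadata:
  name: job-backoff-limit-only   # illustrative name
spec:
  completions: 4
  parallelism: 2
  backoffLimit: 6                # the only retry-handling knob in this alternative
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash"]
        args:
        - -c
        - echo "Simulated workload." && sleep 30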
