How to automatically remove completed Kubernetes Jobs created by a CronJob?

Is there a way to automatically remove completed Jobs besides making a CronJob to clean up completed Jobs?

The K8s Job Documentation states that the intended behavior of completed Jobs is for them to remain in a completed state until manually deleted. I am running thousands of Jobs a day via CronJobs, and I don't want to keep completed Jobs around.


You can now set history limits, or disable history altogether, so that failed or successful Jobs are not kept around indefinitely. See the CronJob documentation under "Jobs history limits".

To set the history limits:

The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish.

The config with 0 limits would look like:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
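
If the CronJob already exists, the same limits can be applied in place with kubectl patch instead of re-applying the full manifest; a minimal sketch, assuming the CronJob above is named hello:

kubectl patch cronjob hello \
  -p '{"spec":{"successfulJobsHistoryLimit":0,"failedJobsHistoryLimit":0}}'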

This is possible since version 1.12 (alpha) with ttlSecondsAfterFinished; the feature graduated to stable in 1.23. An example from Clean Up Finished Jobs Automatically:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
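
Since the question is about Jobs created by a CronJob, the same field can be set on the CronJob's jobTemplate so that every Job it spawns cleans itself up. A minimal sketch, reusing the names from the examples above and assuming a cluster recent enough to serve CronJob as batch/v1 (1.21+):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 100  # each finished Job is deleted 100 seconds later
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure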

I've found the following to work:

To remove failed jobs:

kubectl delete job $(kubectl get jobs | awk '$3 ~ 0' | awk '{print $1}')

To remove completed jobs:

kubectl delete job $(kubectl get jobs | awk '$3 ~ 1' | awk '{print $1}')
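
Note that these awk patterns assume the older kubectl get jobs column layout (NAME, DESIRED, SUCCESSFUL, AGE); newer kubectl versions print NAME, COMPLETIONS, DURATION, AGE, so the column indices need adjusting. A sketch of a variant for failed Jobs that reads the status fields directly instead of screen-scraping columns (it selects any Job that has recorded at least one failed pod, which can include Jobs that later succeeded after retries, and assumes GNU xargs for the -r flag):

kubectl get jobs -o jsonpath='{range .items[?(@.status.failed)]}{.metadata.name}{"\n"}{end}' \
  | xargs -r kubectl delete job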

Another way using a field-selector:

kubectl delete jobs --field-selector status.successful=1 

That could be executed in a CronJob, similar to the other answers.

  1. Create a service account, something like my-sa-name
  2. Create a role with list and delete permissions for the jobs resource
  3. Attach the role to the service account (RoleBinding)
  4. Create a CronJob that uses the service account to check for completed jobs and delete them

# 1. Create a service account

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa-name
  namespace: default

---

# 2. Create a role

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: my-completed-jobs-cleaner-role
rules:
- apiGroups: [""]
  resources: ["jobs"]
  verbs: ["list", "delete"]

---

# 3. Attach the role to the service account

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-completed-jobs-cleaner-rolebinding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-completed-jobs-cleaner-role
subjects:
- kind: ServiceAccount
  name: my-sa-name
  namespace: default

---

# 4. Create a cronjob (with a crontab schedule) using the service account to check for completed jobs

apiVersion: batch/v1
kind: CronJob
metadata:
  name: jobs-cleanup
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: my-sa-name
          containers:
          - name: kubectl-container
            image: bitnami/kubectl:latest
            # Using bitnami/kubectl because the previously suggested kubectl image didn't have the --field-selector option
            command: ["sh", "-c", "kubectl delete jobs --field-selector status.successful=1"]
          restartPolicy: Never
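
After applying these manifests, you can verify that the service account actually has the permissions the cleanup job needs by impersonating it (assuming everything was created in the default namespace as above):

kubectl auth can-i list jobs -n default --as=system:serviceaccount:default:my-sa-name
kubectl auth can-i delete jobs -n default --as=system:serviceaccount:default:my-sa-name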


I'm using the wernight/kubectl image.

I scheduled a cron that deletes anything that is

  • completed
  • 2-9 days old (so I have 2 days to review any failed jobs)

It runs every 30 minutes, so I'm not accounting for jobs that are 10+ days old.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: kubectl-runner
            image: wernight/kubectl
            command: ["sh", "-c", "kubectl get jobs | awk '$4 ~ /[2-9]d$/ || $3 ~ 1' | awk '{print $1}' | xargs kubectl delete job"]
          restartPolicy: Never
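
Before scheduling this, you can preview which Jobs the filter would select by running the same pipeline without the final delete. Note also that this CronJob doesn't set serviceAccountName, so the pod runs as the namespace's default service account, which needs list/delete permissions on jobs (see the RBAC answer above).

# Dry run: list the Jobs the cleanup filter would delete, without deleting anything
kubectl get jobs | awk '$4 ~ /[2-9]d$/ || $3 ~ 1' | awk '{print $1}'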