
Kubernetes CronJob — Schedule Recurring Tasks in Kubernetes

Kubernetes CronJob runs containers on a cron schedule inside your cluster. Learn the CronJob spec, concurrency policy, job history limits, timezone support, and debugging workflows.

Mian Ali Khalid · 5 min read

Kubernetes CronJob runs batch workloads on a cron schedule inside your cluster. Use it for database backups, report generation, cache warming, and cleanup tasks.

Build the cron schedule with the Cron Builder.
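
CronJob uses the standard five-field cron syntax. Read left to right, the schedule used in the spec below breaks down like this:

# ┌───────────── minute (0-59)
# │ ┌─────────── hour (0-23)
# │ │ ┌───────── day of month (1-31)
# │ │ │ ┌─────── month (1-12)
# │ │ │ │ ┌───── day of week (0-6, Sunday = 0)
# │ │ │ │ │
  0 2 * * *      # 02:00 every day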

Basic CronJob spec

apiVersion: batch/v1
kind: CronJob
metadata:
  name: database-backup
  namespace: production
spec:
  schedule: "0 2 * * *"     # 2 AM UTC daily
  timeZone: "America/New_York"  # Requires k8s 1.27+
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # Never or OnFailure (Always is not allowed for Jobs)
          containers:
            - name: backup
              image: postgres:16
              command:
                - /bin/sh
                - -c
                - pg_dump $DATABASE_URL | gzip > /backup/db-$(date +%Y%m%d).sql.gz
              env:
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: db-secret
                      key: url
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc

Concurrency policy options

spec:
  schedule: "*/5 * * * *"
  # concurrencyPolicy takes exactly one of the following values:

  # Allow (default): run multiple Jobs simultaneously if the previous one hasn't finished
  concurrencyPolicy: Allow

  # Forbid: skip the new Job if the previous one is still running
  concurrencyPolicy: Forbid

  # Replace: cancel the running Job and start a new one
  concurrencyPolicy: Replace

Best practice: Use Forbid for jobs that should not overlap (backups, migrations). Use Allow only if the job is truly idempotent and stateless.
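
If you are unsure whether runs actually overlap, the CronJob's status lists the Jobs that are still running. A quick check, using the backup CronJob from the spec above:

# Prints references to Jobs that are still running; empty output means nothing overlaps:
kubectl get cronjob database-backup -n production -o jsonpath='{.status.active}{"\n"}'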

Job history and cleanup

spec:
  schedule: "0 * * * *"
  successfulJobsHistoryLimit: 3   # Keep last 3 successful jobs (default: 3)
  failedJobsHistoryLimit: 1       # Keep last 1 failed job (default: 1)
  startingDeadlineSeconds: 300    # Job must start within 5 minutes or skip
  jobTemplate:
    spec:
      activeDeadlineSeconds: 3600 # Kill job after 1 hour
      backoffLimit: 3             # Retry 3 times before marking failed
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: worker
              image: myapp:latest
              command: ["node", "scripts/cleanup.js"]

Timezone support (Kubernetes 1.27+)

spec:
  schedule: "0 9 * * 1-5"        # 9 AM in specified timezone
  timeZone: "Europe/London"       # IANA timezone name
  # Without timeZone, the schedule is interpreted in the kube-controller-manager's local time zone (usually UTC)

For clusters < 1.27, handle timezones in your application code or use a UTC offset.
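
One common workaround for older clusters is to schedule the Job every hour in UTC and let the container decide whether it is the target local hour. A minimal sketch, assuming an Alpine image and a 9 AM Europe/London target (the image, hour, and actual work are placeholders):

spec:
  schedule: "0 * * * *"               # check every hour, UTC
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: alpine:3.19
              env:
                - name: TZ
                  value: "Europe/London"   # target local time zone
              command:
                - /bin/sh
                - -c
                - |
                  apk add --no-cache tzdata >/dev/null   # zone data so TZ takes effect
                  [ "$(date +%H)" = "09" ] || exit 0     # skip unless it's 9 AM local time
                  echo "run the real work here"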

Apply and manage CronJobs

# Apply:
kubectl apply -f cronjob.yaml

# List CronJobs:
kubectl get cronjob -n production

# Check status:
kubectl describe cronjob database-backup -n production

# List jobs created by the CronJob (Jobs are named <cronjob-name>-<timestamp>):
kubectl get jobs -n production | grep database-backup

# Manually trigger a job (create from CronJob template):
kubectl create job --from=cronjob/database-backup manual-backup-$(date +%Y%m%d) -n production

# Suspend a CronJob (pause scheduling):
kubectl patch cronjob database-backup -p '{"spec":{"suspend":true}}' -n production

# Resume:
kubectl patch cronjob database-backup -p '{"spec":{"suspend":false}}' -n production
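
The same switch can be set declaratively in the manifest, which is useful when the CronJob is managed through Git:

spec:
  suspend: true   # no new Jobs are created while true; Jobs already running are left to finish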

Debug failed CronJobs

# List recent jobs:
kubectl get jobs -n production --sort-by=.metadata.creationTimestamp

# Get pod from failed job:
kubectl get pods --selector=job-name=database-backup-12345 -n production

# View logs:
kubectl logs <pod-name> -n production

# Describe the pod for events:
kubectl describe pod <pod-name> -n production

# Check job conditions:
kubectl describe job database-backup-12345 -n production
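
kubectl can also pull logs straight from a Job without looking up the pod name first:

# Picks one pod belonging to the Job and shows its logs:
kubectl logs job/database-backup-12345 -n production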

Alerting on failed CronJobs

# Prometheus alert rule for failed CronJobs (uses Job metrics from kube-state-metrics):
groups:
  - name: cronjob_alerts
    rules:
      - alert: CronJobFailed
        expr: kube_job_status_failed > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Job {{ $labels.job_name }} failed"
          description: "Job {{ $labels.job_name }} in namespace {{ $labels.namespace }} has failed."

Complete example: cache warming CronJob

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cache-warmer
spec:
  schedule: "*/30 * * * *"      # Every 30 minutes
  concurrencyPolicy: Forbid     # Don't overlap
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          restartPolicy: OnFailure
          serviceAccountName: cache-warmer
          containers:
            - name: warmer
              image: myapp:latest
              command: ["node", "scripts/warm-cache.js"]
              resources:
                requests:
                  cpu: 100m
                  memory: 128Mi
                limits:
                  cpu: 500m
                  memory: 256Mi
              env:
                - name: REDIS_URL
                  valueFrom:
                    secretKeyRef:
                      name: redis-secret
                      key: url
                - name: APP_URL
                  value: "http://app-service"


Written by Mian Ali Khalid. Part of the Dev Productivity pillar.