We will implement a robust, atomic backup strategy that protects your n8n workflows, credentials, Redis queues, and custom nodes.
Stack:
- Target: PostgreSQL (n8n data), Redis (Queue state), N8N (Custom nodes & config)
- Storage: Central Backup PVC (PG/Redis) + Direct Upload (N8N)
- Cloud: Mega.nz (Off-site with 15-day retention)
- Encryption: Optional GPG encryption
📋 Overview
- Architecture: Central Jobs (DB) + Direct Job (N8N).
- Atomic Staging: Jobs write to a `staging` folder first, then move to `ready` only when complete.
- Destination: All backups go to `k8s-backups` on Mega (N8N goes into `k8s-backups/n8n`).
- Encryption: Backups are encrypted before syncing to the cloud.
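The staging/ready split works because a `mv` within one filesystem is an atomic rename, so the sync job can never pick up a half-written dump. A minimal local sketch of the pattern (paths are illustrative):

```shell
# Scratch demo of the staging -> ready pattern used by every job below.
BASE=/tmp/atomic-demo
STAGING="$BASE/staging/postgres"
READY="$BASE/ready/postgres"
mkdir -p "$STAGING" "$READY"

# The (possibly slow) write happens in staging/ ...
echo "pretend dump data" > "$STAGING/n8n_db_demo.dump"

# ... and only a completed file is published, via an atomic rename.
mv "$STAGING/n8n_db_demo.dump" "$READY/"
ls "$READY"
```

Anything that crashes mid-write leaves its partial file in `staging/`, where the sync job never looks.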
🧩 Step 1: Prepare Secrets
We need secrets in BOTH backup and prod namespaces.
1. Configure Rclone
On your local machine (or server), generate the config.
```shell
rclone config
# Name: mega
# Type: mega
# Account: Enter your Mega email/pass
```
2. Create Rclone Secrets
Create the secret in both namespaces.
```shell
# In Prod (for N8N Backup)
kubectl create secret generic rclone-secret \
  --from-file=rclone.conf=$HOME/.config/rclone/rclone.conf \
  -n prod

# In Backup (for Sync Job)
kubectl create secret generic rclone-secret \
  --from-file=rclone.conf=$HOME/.config/rclone/rclone.conf \
  -n backup
```
3. Create Encryption Secret (Backup NS)
Generate a key to encrypt the database dumps.
```shell
openssl rand -base64 32 > backup-passphrase.txt
kubectl create secret generic backup-encryption-secret \
  --from-file=passphrase=backup-passphrase.txt \
  -n backup
# Save this passphrase in a password manager!
rm backup-passphrase.txt
```
4. Copy Database Password
The database secret is in prod, but the backup job runs in backup. We copy it while stripping internal IDs to prevent conflicts.
```shell
kubectl get secret postgres-secret -n prod -o yaml | \
  sed -e 's/namespace: prod/namespace: backup/' \
      -e '/uid:/d' \
      -e '/resourceVersion:/d' \
      -e '/creationTimestamp:/d' | \
  kubectl apply -f -
```
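The sed pipeline only retargets the namespace and drops server-managed metadata; here is its effect on a toy manifest standing in for the real secret:

```shell
# Toy metadata block standing in for the real secret manifest.
cat > /tmp/sed-demo.yaml <<'EOF'
metadata:
  name: postgres-secret
  namespace: prod
  uid: 1234-abcd
  resourceVersion: "99"
  creationTimestamp: "2024-01-01T00:00:00Z"
EOF

# Same filters as above: rewrite the namespace, strip server-owned fields.
sed -e 's/namespace: prod/namespace: backup/' \
    -e '/uid:/d' \
    -e '/resourceVersion:/d' \
    -e '/creationTimestamp:/d' /tmp/sed-demo.yaml
```

Only `name` and the rewritten `namespace` survive, which is exactly what `kubectl apply` needs to create a fresh copy without conflicts.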
🧩 Step 2: Infrastructure
File: backups/01-infrastructure.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: backup
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
  namespace: backup
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```
Apply:
```shell
kubectl apply -f backups/01-infrastructure.yaml
```
🧩 Step 3: Job 1 – PostgreSQL Backup
Dumps the database to `staging`, optionally encrypts it, then moves it to `ready`.
File: backups/02-postgres-backup.yaml
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: backup
spec:
  schedule: "0 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup-client
              image: postgres:17-alpine
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
                  STAGING_DIR="/backup/staging/postgres"
                  READY_DIR="/backup/ready/postgres"
                  mkdir -p $STAGING_DIR $READY_DIR
                  FILENAME="n8n_db_${TIMESTAMP}.dump"
                  STAGING_FILE="${STAGING_DIR}/${FILENAME}"
                  PGPASSWORD=$POSTGRES_PASSWORD pg_dump \
                    -h postgres.prod.svc.cluster.local -U n8n -d n8n \
                    -F c -b -f $STAGING_FILE
                  if [ ! -s $STAGING_FILE ]; then exit 1; fi
                  # Encrypt if key exists
                  if [ -f /etc/backup/passphrase ]; then
                    apk add --no-cache gnupg > /dev/null 2>&1
                    cat $STAGING_FILE | gpg --batch --yes \
                      --passphrase-file /etc/backup/passphrase \
                      --symmetric --cipher-algo AES256 \
                      --output ${STAGING_FILE}.gpg
                    mv ${STAGING_FILE}.gpg ${READY_DIR}/
                    rm $STAGING_FILE
                  else
                    mv $STAGING_FILE ${READY_DIR}/
                  fi
                  find $READY_DIR -name "*.dump*" -mtime +7 -delete
              env:
                - name: POSTGRES_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secret
                      key: password
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
                - name: encryption-key
                  mountPath: /etc/backup
                  readOnly: true
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc
            - name: encryption-key
              secret:
                secretName: backup-encryption-secret
                optional: true
          restartPolicy: OnFailure
```
Apply:
```shell
kubectl apply -f backups/02-postgres-backup.yaml
```
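The `find ... -mtime +7 -delete` line at the end of the job script is the local retention policy. Its behavior, sketched in a scratch directory with a backdated file:

```shell
# Local retention: anything matching *.dump* older than 7 days is pruned.
READY=/tmp/retention-demo
mkdir -p "$READY"
echo old > "$READY/n8n_db_old.dump.gpg"
echo new > "$READY/n8n_db_new.dump.gpg"
touch -t 202001010000 "$READY/n8n_db_old.dump.gpg"  # backdate well past 7 days

find "$READY" -name "*.dump*" -mtime +7 -delete
ls "$READY"   # only the fresh dump survives
```

Note that `-mtime +7` matches on modification time, so a restored or re-touched file resets its own retention clock.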
🧩 Step 4: Job 2 – Redis Backup
Non-blocking backup (BGSAVE).
File: backups/03-redis-backup.yaml
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: redis-backup
  namespace: backup
spec:
  schedule: "0 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: redis-client
              image: redis:8.0-alpine
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
                  STAGING_DIR="/backup/staging/redis"
                  READY_DIR="/backup/ready/redis"
                  mkdir -p $STAGING_DIR $READY_DIR
                  # Snapshot LASTSAVE *before* triggering BGSAVE; a fast save
                  # could otherwise finish first and the loop would never exit.
                  LAST_SAVE=$(redis-cli -h redis.prod.svc.cluster.local LASTSAVE)
                  redis-cli -h redis.prod.svc.cluster.local BGSAVE
                  while [ "$(redis-cli -h redis.prod.svc.cluster.local LASTSAVE)" -le "$LAST_SAVE" ]; do sleep 2; done
                  redis-cli -h redis.prod.svc.cluster.local --rdb - > ${STAGING_DIR}/redis_${TIMESTAMP}.rdb
                  if [ -s ${STAGING_DIR}/redis_${TIMESTAMP}.rdb ]; then
                    mv ${STAGING_DIR}/redis_${TIMESTAMP}.rdb ${READY_DIR}/
                    find $READY_DIR -name "*.rdb" -mtime +7 -delete
                  fi
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc
          restartPolicy: OnFailure
```
Apply:
```shell
kubectl apply -f backups/03-redis-backup.yaml
```
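A subtlety in the wait loop: the `LASTSAVE` counter must be snapshotted before `BGSAVE` is triggered, otherwise a fast save can complete first and the loop polls forever. The same pattern, with a plain file standing in for `redis-cli` (no Redis required):

```shell
# lastsave() simulates `redis-cli ... LASTSAVE`: it returns a counter
# that the background "save" bumps when it finishes.
lastsave() { cat /tmp/lastsave-demo; }
echo 100 > /tmp/lastsave-demo

LAST=$(lastsave)                               # snapshot BEFORE the save
( sleep 1; echo 101 > /tmp/lastsave-demo ) &   # background save finishes later

# Poll until the counter moves past the snapshot.
while [ "$(lastsave)" -le "$LAST" ]; do sleep 1; done
echo "background save finished"
```

Since `LASTSAVE` has one-second resolution, the `-le` comparison (rather than `-lt`) is what makes the loop wait for a genuinely newer save.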
🧩 Step 5: Job 3 – N8N Backup (Direct Upload)
Runs in prod, uploads to mega:k8s-backups/n8n.
File: backups/04-n8n-backup.yaml
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: n8n-backup
  namespace: prod
spec:
  schedule: "15 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: n8n-client
              image: rclone/rclone:latest
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
                  # Copy config to temp to prevent read-only errors
                  cat /config/rclone.conf > /tmp/rclone.conf
                  STAGING_DIR="/tmp/staging/n8n"
                  READY_DIR="/tmp/ready/n8n"
                  mkdir -p $STAGING_DIR $READY_DIR
                  FILENAME="n8n_data_${TIMESTAMP}.tar.gz"
                  tar -czf ${STAGING_DIR}/${FILENAME} -C /n8n_data .
                  if [ -s ${STAGING_DIR}/${FILENAME} ]; then
                    mv ${STAGING_DIR}/${FILENAME} $READY_DIR
                    rclone copy $READY_DIR mega:k8s-backups/n8n \
                      --config=/tmp/rclone.conf
                    rclone delete mega:k8s-backups/n8n --min-age 15d \
                      --config=/tmp/rclone.conf
                  fi
              volumeMounts:
                - name: n8n-pvc
                  mountPath: /n8n_data
                - name: rclone-config
                  mountPath: /config
                  readOnly: true
          volumes:
            - name: n8n-pvc
              persistentVolumeClaim:
                claimName: n8n-pvc
            - name: rclone-config
              secret:
                secretName: rclone-secret
          restartPolicy: OnFailure
```
Apply:
```shell
kubectl apply -f backups/04-n8n-backup.yaml
```
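Since this archive is the only off-site copy of your workflows, it is worth knowing how to verify one. Listing with `tar -tzf` decompresses the whole gzip stream, so a truncated or corrupt archive fails loudly before you depend on it at restore time. A local sketch, with a scratch directory standing in for `/n8n_data`:

```shell
# Build a throwaway archive the same way the job does (-C dir .) ...
SRC=/tmp/n8n-demo-src
mkdir -p "$SRC/.n8n"
echo '{}' > "$SRC/.n8n/config"
tar -czf /tmp/n8n_data_demo.tar.gz -C "$SRC" .

# ... then verify: a clean listing means the stream decompresses end to end.
tar -tzf /tmp/n8n_data_demo.tar.gz
```

The `-C "$SRC" .` form is also why the restore later extracts a top-level `.n8n` directory rather than an absolute path.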
🧩 Step 6: Job 4 – Mega.nz Sync (Central)
Uploads postgres and redis from local PVC to mega:k8s-backups.
File: backups/05-mega-sync.yaml
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mega-sync
  namespace: backup
spec:
  schedule: "30 2 * * *"
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: rclone-sync
              image: rclone/rclone:latest
              command:
                - /bin/sh
                - -c
                - |
                  set -e
                  # Copy config to temp to prevent read-only errors
                  cat /secret/rclone.conf > /tmp/rclone.conf
                  # Use copy, not sync: local retention is 7 days, and a sync
                  # would also prune the cloud copies before the 15-day
                  # deletion window below.
                  rclone copy /backup/ready mega:k8s-backups \
                    --config=/tmp/rclone.conf \
                    --transfers 4
                  rclone delete mega:k8s-backups --min-age 15d \
                    --config=/tmp/rclone.conf
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
                - name: rclone-secret
                  mountPath: /secret
                  readOnly: true
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc
            - name: rclone-secret
              secret:
                secretName: rclone-secret
          restartPolicy: OnFailure
```
Apply:
```shell
kubectl apply -f backups/05-mega-sync.yaml
```
🧩 Step 7: Kubernetes Dashboard
1. Enable & Expose Service
```shell
microk8s enable dashboard
microk8s kubectl expose deployment kubernetes-dashboard -n kube-system \
  --name=kubernetes-dashboard --port=443 --target-port=8443
```
2. Ingress (Valid SSL)
Access at https://kube.kamleshmerugu.me.
File: k8s-stack/ingress/dashboard-ingress.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kube-system
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
spec:
  ingressClassName: public
  tls:
    - hosts: [kube.kamleshmerugu.me]
      secretName: dashboard-tls-secret
  rules:
    - host: kube.kamleshmerugu.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
```
Apply:
```shell
kubectl apply -f k8s-stack/ingress/dashboard-ingress.yaml
```
3. Get Token
```shell
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token
```
🧩 Step 8: Verification
Test the jobs immediately.
```shell
# 1. Run backups
kubectl create job --from=cronjob/postgres-backup manual-pg -n backup
kubectl create job --from=cronjob/redis-backup manual-redis -n backup
kubectl create job --from=cronjob/n8n-backup manual-n8n -n prod
kubectl create job --from=cronjob/mega-sync manual-sync -n backup

# 2. Check Mega.nz
# You should see folder: k8s-backups
# Inside: postgres/, redis/, n8n/
```
🧩 Step 9: Restore Procedures
Restore PostgreSQL
```shell
# 1. Download
kubectl run pg-download --image=rclone/rclone:latest --rm -it -n backup --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"d","image":"rclone/rclone:latest","command":["sh","-c","sleep 3600"],"volumeMounts":[{"name":"v","mountPath":"/b"},{"name":"c","mountPath":"/s","readOnly":true}]}],"volumes":[{"name":"v","persistentVolumeClaim":{"claimName":"backup-pvc"}},{"name":"c","secret":{"secretName":"rclone-secret"}}]}}' -- sh

# Inside shell:
rclone copy mega:k8s-backups /b/ready --config=/s/rclone.conf
exit

# 2. Restore
kubectl run pg-restore --image=postgres:17-alpine --rm -it -n backup --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"r","image":"postgres:17-alpine","command":["sh"],"env":[{"name":"PGPASSWORD","valueFrom":{"secretKeyRef":{"name":"postgres-secret","key":"password"}}}],"volumeMounts":[{"name":"v","mountPath":"/d"}]}],"volumes":[{"name":"v","persistentVolumeClaim":{"claimName":"backup-pvc"}}]}}' -- sh

# Inside shell:
gpg --batch --passphrase "YOUR_PASS" \
  --decrypt /d/ready/postgres/n8n_db_DATE.dump.gpg \
  --output /d/ready/restore.dump
# Stop n8n (or terminate its DB sessions) first: DROP DATABASE fails
# while clients are still connected.
psql -h postgres.prod.svc.cluster.local -U n8n -d postgres -c "DROP DATABASE n8n;"
psql -h postgres.prod.svc.cluster.local -U n8n -d postgres -c "CREATE DATABASE n8n;"
pg_restore -h postgres.prod.svc.cluster.local -U n8n -d n8n /d/ready/restore.dump
```
Restore Redis
```shell
# 1. Pull to local from backup PVC (the helper pod needs the PVC mounted
# to see /backup, so pass an override instead of a bare busybox)
kubectl run redis-helper -n backup --image=busybox --rm -i --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"redis-helper","image":"busybox","command":["cat","/backup/ready/redis/redis_DATE.rdb"],"stdin":true,"volumeMounts":[{"name":"v","mountPath":"/backup"}]}],"volumes":[{"name":"v","persistentVolumeClaim":{"claimName":"backup-pvc"}}]}}' \
  > /tmp/redis.rdb

# 2. Push to Redis Pod
REDIS_POD=$(kubectl get pod -n prod -l app=redis -o jsonpath='{.items[0].metadata.name}')
kubectl cp /tmp/redis.rdb ${REDIS_POD}:/data/dump.rdb -n prod

# 3. Restart
kubectl delete pod -n prod -l app=redis
```
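If Redis comes back empty, sanity-check the pulled file first: every valid RDB dump begins with the ASCII magic `REDIS` followed by a version number. A local sketch of the check (the sample file is a hand-written stand-in, not a real dump):

```shell
# Write a stand-in file carrying the RDB magic so the check has input.
printf 'REDIS0011...' > /tmp/redis-demo.rdb

# Real check: the first 5 bytes of a healthy dump are the string "REDIS".
if [ "$(head -c 5 /tmp/redis-demo.rdb)" = "REDIS" ]; then
  echo "rdb header ok"
else
  echo "not an RDB file" >&2
  exit 1
fi
```

A file that fails this check usually means the helper pod's log noise leaked into the redirect, or the download was truncated.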
Restore N8N
```shell
kubectl run n8n-restore --image=rclone/rclone:latest --rm -it -n prod --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"r","image":"rclone/rclone:latest","command":["sh","-c","sleep 3600"],"volumeMounts":[{"name":"n","mountPath":"/n8n_data"},{"name":"c","mountPath":"/s","readOnly":true}]}],"volumes":[{"name":"n","persistentVolumeClaim":{"claimName":"n8n-pvc"}},{"name":"c","secret":{"secretName":"rclone-secret"}}]}}' -- sh

# Inside shell:
rclone copy mega:k8s-backups/n8n /tmp/r --config=/s/rclone.conf
tar -xzf /tmp/r/n8n_data_DATE.tar.gz -C /tmp/r
mv /n8n_data/.n8n /n8n_data/.n8n_old
cp -r /tmp/r/.n8n /n8n_data/
exit

kubectl delete pod -n prod -l app=n8n
```
📋 Checklist
| Component | Strategy |
|---|---|
| Postgres | Staging -> Encrypt -> Ready -> Sync |
| Redis | BGSAVE -> Ready -> Sync |
| N8N | Direct Upload to `k8s-backups/n8n` |
| Retention | 7 Days Local, 15 Days Cloud |
| Dashboard | Accessible at https://kube.kamleshmerugu.me |
You now have a bulletproof, production-ready backup system that is Simple, Modular, and RBAC-Free! 🛡️