This guide provides detailed procedures for backing up and restoring CouchDB in on-premises Kubernetes Provider Connectivity Assurance deployments. By following these automated and manual procedures, you can ensure high availability and data persistence for your Provider Connectivity Assurance solution.
Prerequisites
kubectl access to the Provider Connectivity Assurance cluster
SSH access to cluster nodes (for restoration)
Familiarity with OpenEBS local storage paths
Backup Overview
CouchDB backups run automatically via a Kubernetes CronJob at 01:00 UTC daily. Backups are stored in MinIO at:
/couchDB/v2/couchDB-Backup/
Verify the backup job exists:
kubectl get cronjobs -n pca | grep couchdb
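The nightly job can be pictured with a minimal manifest sketch. The name, image, and spec below are assumptions for illustration only (the CronJob shipped with the product will differ); the sketch simply shows the 01:00 UTC schedule and the same /backup endpoint used for manual triggers:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: couchdb-backup        # illustrative name, not the actual job name
  namespace: pca
spec:
  schedule: "0 1 * * *"       # 01:00 UTC daily
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trigger
              image: curlimages/curl   # assumption: any image providing curl
              args: ["-sf", "couchdb:10003/backup"]
```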
Manual Backup Trigger
From any pod with curl (such as airflow):
kubectl exec -it -n pca airflow-0 -- sh
curl -vvv -f couchdb:10003/backup
Expected output: Successfully ran Backup on couchDB
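For repeated use, the trigger and its success check can be wrapped in small shell helpers. This is a sketch, not part of the product: the function names are hypothetical, and trigger_backup assumes the airflow pod and couchdb service names used above.

```shell
#!/bin/sh
# Hypothetical wrapper around the manual backup trigger.
trigger_backup() {
  # -f makes curl exit non-zero on HTTP errors, -s suppresses progress output.
  kubectl exec -n pca airflow-0 -- curl -sf couchdb:10003/backup
}

check_backup_response() {
  # Succeeds only if the service reported a successful backup.
  echo "$1" | grep -q "Successfully ran Backup on couchDB"
}
```

Example: `check_backup_response "$(trigger_backup)" || echo "backup failed"`.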
Access Backups in MinIO
Connect to the MinIO pod:
kubectl exec -it -n pca pca-minio-pool-0-0 -- sh
. /tmp/minio/config.env

Configure the MinIO client:

mc alias set --insecure pca https://localhost:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"

List available backups:

mc --insecure ls pca//couchDB/v2/couchDB-Backup/

Copy a backup to the pod filesystem:

mc --insecure cp -r \
  pca//couchDB/v2/couchDB-Backup/<backup-file>.tar \
  /tmp/
Export Backup to Admin VM
The MinIO image does not include tar, so use cat to stream files:
Set the backup filename:
export CDB_BKUP="<backup-file>.tar"
mkdir COUCH_RESTORE

Copy the file:

kubectl exec -n pca pca-minio-pool-0-0 -- cat /tmp/$CDB_BKUP > COUCH_RESTORE/$CDB_BKUP

Verify the checksum:

kubectl exec -n pca pca-minio-pool-0-0 -- cksum /tmp/$CDB_BKUP
cksum COUCH_RESTORE/$CDB_BKUP

Checksums must match exactly.
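When both copies are locally accessible, the comparison can be scripted. The helper below is a hypothetical sketch: it compares the CRC and byte count reported by cksum for two files (checking against the in-pod copy still requires kubectl exec as shown above).

```shell
#!/bin/sh
# Hypothetical helper: succeed only when cksum's CRC and size match for two files.
cksum_match() {
  a=$(cksum "$1" | awk '{print $1, $2}')
  b=$(cksum "$2" | awk '{print $1, $2}')
  [ "$a" = "$b" ]
}
```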
Identify Storage Location
Embedded and air-gapped deployments use OpenEBS local storage. Persistent Volume Claim (PVC) data resides on the node filesystem.
Identify the CouchDB pod's node and PVC:
kubectl get pods -n pca -o wide | grep couchdb-0
kubectl get pvc -n pca | grep couchdb-data-couchdb-0

Set environment variables:

export CDB_HOST=<couchdb-node-hostname>
export CDB_PVC=<couchdb-pvc-volume-name>

Copy the backup to the target node:

scp -r COUCH_RESTORE/$CDB_BKUP \
  ${CDB_HOST}:/var/lib/embedded-cluster/openebs-local/$CDB_PVC/

Note: If CouchDB runs on the admin node, use mv instead of scp.
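The two values can also be derived directly with kubectl's jsonpath output instead of copying them by hand. These helper names are hypothetical; note that with OpenEBS local storage the on-disk directory is typically named after the bound PersistentVolume (the VOLUME column of kubectl get pvc), which is what volumeName resolves to here.

```shell
#!/bin/sh
# Hypothetical helpers for the exports above.
couchdb_node() {
  # Node currently hosting the CouchDB pod.
  kubectl get pod -n pca couchdb-0 -o jsonpath='{.spec.nodeName}'
}
couchdb_pvc_volume() {
  # PersistentVolume name bound to the CouchDB data claim; the directory
  # under openebs-local is typically named after this volume.
  kubectl get pvc -n pca couchdb-data-couchdb-0 -o jsonpath='{.spec.volumeName}'
}
```

Usage: `export CDB_HOST=$(couchdb_node); export CDB_PVC=$(couchdb_pvc_volume)`.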
Restore Procedure
Warning: This procedure replaces existing data. Ensure you have a valid backup before proceeding.
Extract the backup on the target node:
cd /var/lib/embedded-cluster/openebs-local/
tar xvf <backup-file>.tar

Scale down CouchDB:

kubectl scale statefulset -n pca couchdb --replicas=0

Back up and replace the data directory:

cd /var/lib/embedded-cluster/openebs-local/
mv "$CDB_PVC" "${CDB_PVC}_$(date +%Y-%m-%d_%H-%M-%S)"
mv data "$CDB_PVC"

Scale up CouchDB:

kubectl scale statefulset -n pca couchdb --replicas=1

Monitor startup:

kubectl logs -f -n pca couchdb-0
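The directory swap is the riskiest step, so it helps to guard it so that a missing directory aborts before anything is moved. A minimal sketch, assuming the backup tarball extracts to a directory named data alongside the PVC directory (the function name is hypothetical):

```shell
#!/bin/sh
# Sketch of a guarded data-directory swap for the restore step above.
swap_data_dir() {
  pvc_dir=$1                              # current PVC data directory
  stamp=$(date +%Y-%m-%d_%H-%M-%S)
  [ -d "$pvc_dir" ] || return 1           # abort: PVC directory not found
  [ -d data ] || return 1                 # abort: extracted backup not found
  mv "$pvc_dir" "${pvc_dir}_${stamp}" || return 1   # keep the old data aside
  mv data "$pvc_dir"                      # promote the restored data
}
```

Run it from /var/lib/embedded-cluster/openebs-local/ as `swap_data_dir "$CDB_PVC"` while CouchDB is scaled down.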
Post-Restore Validation
Access the CouchDB UI to verify connectivity
Confirm data is present and processing correctly
Check application logs for errors
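Connectivity can also be confirmed from the command line using CouchDB's standard /_up endpoint, which returns {"status":"ok"} once the node is ready. The port below is CouchDB's default (5984) and may differ in your deployment; the helper name is hypothetical.

```shell
#!/bin/sh
# Hypothetical post-restore readiness check.
# Fetch the body with, e.g.:
#   kubectl exec -n pca couchdb-0 -- curl -sf localhost:5984/_up
couchdb_is_up() {
  # Succeeds only if the /_up response reports status "ok".
  echo "$1" | grep -q '"status" *: *"ok"'
}
```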
© 2026 Cisco and/or its affiliates. All rights reserved.
For more information about trademarks, please visit: Cisco trademarks
For more information about legal terms, please visit: Cisco legal terms
For legal information about Accedian Skylight products, please visit: Accedian legal terms and trademarks