
CouchDB Backup and Restore


This guide provides detailed procedures for backing up and restoring CouchDB in on-premises Kubernetes Provider Connectivity Assurance deployments. By following the automated and manual procedures below, you can ensure high availability and data persistence for your Provider Connectivity Assurance solution.

Prerequisites

  • kubectl access to the Provider Connectivity Assurance cluster

  • SSH access to cluster nodes (for restoration)

  • Familiarity with OpenEBS local storage paths

Backup Overview

CouchDB backups run automatically via a Kubernetes CronJob at 01:00 UTC daily. Backups are stored in MinIO at:

/couchDB/v2/couchDB-Backup/

Verify the backup job exists:

kubectl get cronjobs -n pca | grep couchdb
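Beyond confirming the CronJob exists, it can help to check when it last ran and whether those runs completed. A minimal sketch, assuming the spawned Job names also contain "couchdb" (adjust the grep pattern to your deployment):

```shell
# Hedged sketch: show the backup CronJob's schedule and its most recent
# Jobs. The "couchdb" name pattern mirrors the grep used above and is an
# assumption about your deployment's naming.
if command -v kubectl >/dev/null 2>&1; then
  # Schedule column should reflect the 01:00 UTC daily run
  kubectl get cronjobs -n pca -o wide | grep -i couchdb
  # Jobs spawned by the CronJob, newest last; COMPLETIONS shows success
  kubectl get jobs -n pca --sort-by=.status.startTime | grep -i couchdb | tail -3
else
  echo "kubectl not found; run this from a host with cluster access"
fi
```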

Manual Backup Trigger

From any pod with curl (such as airflow):

kubectl exec -it -n pca airflow-0 -- sh
curl -vvv -f couchdb:10003/backup

Expected output: Successfully ran Backup on couchDB
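The same trigger can be issued as a single non-interactive command instead of opening a shell in the pod. A sketch, reusing the pod name, service name, and port from the steps above; `-sf` makes curl silent but return a non-zero exit code on an HTTP error, so failures are visible in scripts:

```shell
# Hedged sketch: one-shot backup trigger without an interactive shell.
# Pod "airflow-0", service "couchdb", and port 10003 come from the
# procedure above.
if command -v kubectl >/dev/null 2>&1; then
  kubectl exec -n pca airflow-0 -- curl -sf couchdb:10003/backup \
    && echo "backup triggered" \
    || echo "backup request failed" >&2
else
  echo "kubectl not found; run this from a host with cluster access"
fi
```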

Access Backups in MinIO

  1. Connect to the MinIO pod:

    kubectl exec -it -n pca pca-minio-pool-0-0 -- sh
    . /tmp/minio/config.env
    
  2. Configure the MinIO client:

    mc alias set --insecure pca https://localhost:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"
    
  3. List available backups:

    mc --insecure ls pca//couchDB/v2/couchDB-Backup/
    
  4. Copy a backup to the pod filesystem:

    mc --insecure cp -r \
    pca//couchDB/v2/couchDB-Backup/.tar \
    /tmp/
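When several backups exist, the newest one can be selected automatically. A sketch, assuming the archive names sort lexicographically by date (common for timestamped archives — verify the naming convention in your environment first):

```shell
# Hedged sketch: pick the newest backup from "mc ls" output.
# pick_latest reads listing lines and prints the last filename after
# sorting; mc ls prints the object name as the final field.
pick_latest() {
  awk '{print $NF}' | sort | tail -1
}

if command -v mc >/dev/null 2>&1; then
  LATEST=$(mc --insecure ls pca//couchDB/v2/couchDB-Backup/ | pick_latest)
  echo "newest backup: $LATEST"
  mc --insecure cp "pca//couchDB/v2/couchDB-Backup/$LATEST" /tmp/
else
  echo "mc not found; run this inside the MinIO pod"
fi
```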
    

Export Backup to Admin VM

The MinIO image does not include tar, so use cat to stream files:

  1. Set the backup filename:

    export CDB_BKUP=".tar"
    mkdir COUCH_RESTORE
    
  2. Copy the file:

    kubectl exec -n pca pca-minio-pool-0-0 -- cat /tmp/$CDB_BKUP > COUCH_RESTORE/$CDB_BKUP
    
  3. Verify the checksum:

    kubectl exec -n pca pca-minio-pool-0-0 -- cksum /tmp/$CDB_BKUP
    cksum COUCH_RESTORE/$CDB_BKUP
    

    Checksums must match exactly.
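The comparison in step 3 can be scripted so a mismatch cannot be overlooked. A sketch: `cksum` prints `<CRC> <bytes> <name>`, and only the first two fields must match, since the paths differ between the pod and the admin VM.

```shell
# Hedged sketch: compare two cksum output lines on CRC and byte count
# only, ignoring the differing file paths.
same_cksum() {
  [ "$(echo "$1" | awk '{print $1, $2}')" = "$(echo "$2" | awk '{print $1, $2}')" ]
}

if command -v kubectl >/dev/null 2>&1; then
  POD_SUM=$(kubectl exec -n pca pca-minio-pool-0-0 -- cksum "/tmp/$CDB_BKUP")
  LOCAL_SUM=$(cksum "COUCH_RESTORE/$CDB_BKUP")
  if same_cksum "$POD_SUM" "$LOCAL_SUM"; then
    echo "checksums match"
  else
    echo "checksum MISMATCH -- do not restore this file" >&2
  fi
else
  echo "kubectl not found; run this on the admin VM"
fi
```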

Identify Storage Location

Embedded and air-gapped deployments use OpenEBS local storage. Persistent Volume Claim (PVC) data resides on the node filesystem.

  1. Identify the CouchDB pod's node and PVC:

    kubectl get pods -n pca -o wide | grep couchdb-0
    kubectl get pvc -n pca | grep couchdb-data-couchdb-0
    
  2. Set environment variables:

    export CDB_HOST=
    export CDB_PVC=
    
  3. Copy the backup to the target node:

    scp -r COUCH_RESTORE/$CDB_BKUP \
    ${CDB_HOST}:/var/lib/embedded-cluster/openebs-local/$CDB_PVC/
    

    Note: If CouchDB runs on the admin node, use mv instead of scp.
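The two variables can also be filled from the cluster rather than by hand. A sketch, assuming OpenEBS local PV hostpath directories are named after the bound PersistentVolume (typically `pvc-<uid>`) — confirm the value matches a directory under /var/lib/embedded-cluster/openebs-local/ before relying on it:

```shell
# Hedged sketch: derive the node and volume directory for couchdb-0.
# .spec.nodeName is the node hosting the pod; .spec.volumeName on the
# PVC is the bound PV's name, which OpenEBS local PV commonly uses as
# the on-disk directory name (an assumption -- verify on your node).
if command -v kubectl >/dev/null 2>&1; then
  export CDB_HOST=$(kubectl get pod -n pca couchdb-0 \
    -o jsonpath='{.spec.nodeName}')
  export CDB_PVC=$(kubectl get pvc -n pca couchdb-data-couchdb-0 \
    -o jsonpath='{.spec.volumeName}')
  echo "node: $CDB_HOST  volume dir: $CDB_PVC"
else
  echo "kubectl not found; run this from a host with cluster access"
fi
```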

Restore Procedure

Warning: This procedure replaces existing data. Ensure you have a valid backup before proceeding.

  1. Extract the backup on the target node:

    cd /var/lib/embedded-cluster/openebs-local/
    tar xvf .tar
    
  2. Scale down CouchDB:

    kubectl scale statefulset -n pca couchdb --replicas=0
    
  3. Back up and replace the data directory:

    cd /var/lib/embedded-cluster/openebs-local/
    cp -r "$CDB_PVC" "${CDB_PVC}_$(date +%Y-%m-%d_%H-%M-%S)"
    mv data "$CDB_PVC"
    
  4. Scale up CouchDB:

    kubectl scale statefulset -n pca couchdb --replicas=1
    
  5. Monitor startup:

    kubectl logs -f -n pca couchdb-0
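As an alternative to tailing logs, the restore can block until the pod reports Ready, so a failed startup surfaces quickly. A sketch with a 5-minute timeout (tune to your environment):

```shell
# Hedged sketch: wait for couchdb-0 to pass its readiness check after
# scaling back up, instead of watching logs by eye.
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=ready pod/couchdb-0 -n pca --timeout=300s \
    && echo "couchdb-0 is ready" \
    || echo "couchdb-0 did not become ready within 5 minutes" >&2
else
  echo "kubectl not found; run this from a host with cluster access"
fi
```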
    

Post-Restore Validation

  1. Access the CouchDB UI to verify connectivity

  2. Confirm data is present and processing correctly

  3. Check application logs for errors
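A quick connectivity check can be run from inside the cluster before opening the UI. A sketch using CouchDB's standard `_up` endpoint, which returns `{"status":"ok"}` when the node is serving requests; port 5984 is CouchDB's default and is an assumption here — the 10003 port used earlier belongs to the backup endpoint, not the database itself:

```shell
# Hedged sketch: probe CouchDB's /_up liveness endpoint from a pod with
# curl. Port 5984 (CouchDB default) is assumed; adjust if your service
# exposes a different port.
if command -v kubectl >/dev/null 2>&1; then
  kubectl exec -n pca airflow-0 -- curl -sf couchdb:5984/_up \
    && { echo; echo "couchdb is up"; } \
    || echo "couchdb did not respond on :5984" >&2
else
  echo "kubectl not found; run this from a host with cluster access"
fi
```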


© 2026 Cisco and/or its affiliates. All rights reserved.

For more information about trademarks, please visit:
Cisco trademarks 
For more information about legal terms, please visit:
Cisco legal terms
For legal information about Accedian Skylight products, please visit:  Accedian legal terms and trademarks