
Prerequisites


This section outlines the prerequisites for running the On-Premise Self Service installer, which installs and upgrades the Provider Connectivity Assurance solution in a private cloud environment.

While the overall deployment process is similar across Non-Airgap and Airgap on-premise environments, preparation requirements differ. The following details will assist with the setup of Provider Connectivity Assurance for both scenarios.

1. Determine Environment Sizing

Your Provider Connectivity Assurance contact will provide the necessary environment sizing details.

2. Verify Requirements

Ensure that virtual machine and network requirements are met:

Virtual Machine Requirements

OS image
  • Debian 11 or Red Hat 9 recommended, with cgroupsv2 disabled
  • Other flavors are acceptable but should be agreed upon in advance, and they must support Docker and Docker Swarm
IOPS
  • HDD persistent disks must provide a minimum of:
    • read of 250 IOPS
    • write of 50 IOPS
  • SSD persistent disks must provide a minimum of:
    • read of 10000 IOPS
    • write of 1000 IOPS
  • Disk performance is tested during execution of the cisco-preflight-check.sh script.
Disk partitions
  • Minimum Disk Partition Sizes - these may be larger for your specific deployment:
    • boot/root disk 200G
    • swap 20G
    • log 15G
    • /var/lib/docker ~1.5T (this holds containers and their volumes, and is engineered for each deployment)
Users
  • Admin user with password-less sudo access and Docker access
Docker versions
  • Docker Server 24.0.4 + Docker Compose 1.26.0
Access method
  • Password-less access (ssh keys from deployer host to all deployment VMs)
Time sync
  • Synchronize all nodes with a central Network Time Protocol (NTP) source.
  • The time zone should be set to Etc/UTC.
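The IOPS minimums above can be spot-checked ahead of time with fio. This is a sketch only, and cisco-preflight-check.sh remains the authoritative check; the test file path, job size, and helper function are illustrative, and the threshold is the HDD read minimum from the table:

```shell
#!/usr/bin/env bash
# Illustrative read-IOPS spot-check; cisco-preflight-check.sh is authoritative.
# Assumes fio and jq are installed and /var/lib/docker has ~1 GiB free.

MIN_READ_IOPS=250                 # HDD read minimum from the table above
TEST_FILE=/var/lib/docker/fio.test

# Return success when the measured value meets the minimum.
meets_minimum() {
  local measured=$1 minimum=$2
  [ "${measured%.*}" -ge "$minimum" ] 2>/dev/null
}

if command -v fio >/dev/null 2>&1; then
  # 30-second 4k random-read job, JSON output parsed with jq.
  iops=$(fio --name=readtest --filename="$TEST_FILE" --rw=randread \
             --bs=4k --size=1G --runtime=30 --time_based \
             --output-format=json 2>/dev/null | jq '.jobs[0].read.iops')
  rm -f "$TEST_FILE"
  if meets_minimum "$iops" "$MIN_READ_IOPS"; then
    echo "OK: read IOPS $iops >= $MIN_READ_IOPS"
  else
    echo "FAIL: read IOPS ${iops:-unknown} < $MIN_READ_IOPS"
  fi
else
  echo "fio not installed; skipping IOPS measurement"
fi
```

An equivalent job with --rw=randwrite and the write minimums checks the write side.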

Network Requirements

Internal connectivity (required)
  • All VMs must have connectivity over a minimum 1 Gbps LAN - all ports open.
  • The leader VM requires one IP address for connectivity to roadrunners and UI/API access.
Inbound connectivity (required for non-airgap)
  • Cisco requires 24/7 connectivity to the environment to maintain the service, either via a customer-supplied VPN to the private LAN of the solution VMs or via Cisco RADKit (preferred; https://radkit.cisco.com/).
  • This can be a generic login or via named accounts for the following Cisco groups:
    • DevOps Engineering
    • Customer Success Managers (CSMs)
    • Site Reliability Engineers (SREs)
  • NOTE: Cisco cannot work through customer laptop or shared desktop sessions to perform the deployment or maintenance.
Outbound connectivity (required for non-airgap)
  • Cisco requires that the VMs have 24/7 outbound connectivity to the following for non-airgap environments:
    • https://prometheus-prox.npavlabs.accedian.net:443 (for monitoring)
    • https://gcr.io:443 (for container images)
    • https://storage.googleapis.com (for container images)
    • https://api.sendgrid.com (for reports)
    • https://www.workato.com/ (for integration)
    • https://dns.google (for reports)
    • https://mapbox.com (for reports)
    • Debian aptitude or other OS repositories
    • https://download.geonames.org (for location data)
    • https://skylight-watcher.firebaseapp.com (for monitoring)
    • https://firestore.googleapis.com (for monitoring)
    • http://prod.radkit-cloud.cisco.com (for remote management)
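For non-airgap environments, the outbound endpoints above can be spot-checked from each VM before installation. A minimal sketch: check_url is a hypothetical helper, and it only verifies that a connection can be opened, not that authentication succeeds:

```shell
#!/usr/bin/env bash
# Spot-check reachability of the required outbound endpoints.

check_url() {
  # HEAD request with a short timeout; any HTTP response counts as reachable.
  if curl -sS --connect-timeout 5 -o /dev/null --head "$1" 2>/dev/null; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

for url in \
  https://prometheus-prox.npavlabs.accedian.net:443 \
  https://gcr.io:443 \
  https://storage.googleapis.com \
  https://api.sendgrid.com \
  https://www.workato.com/ \
  https://dns.google \
  https://mapbox.com \
  https://download.geonames.org \
  https://skylight-watcher.firebaseapp.com \
  https://firestore.googleapis.com \
  http://prod.radkit-cloud.cisco.com; do
  check_url "$url"
done
```

A FAIL line usually indicates a firewall or proxy rule that needs to be opened before installation.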

3. Install Package Dependencies

Ensure that all virtual machines have the required OS and PIP packages installed before proceeding.

Required OS Packages

The following list outlines the necessary dependencies for both Debian-based and Red Hat-based systems.

Common OS Packages (Debian and RHEL):

    Docker
    docker-ce
    docker-ce-cli
    containerd.io
    docker-buildx-plugin
    docker-compose-plugin
    bash-completion
    jq
    curl
    ca-certificates
    chrony
    gnupg2
    s3cmd
    tmux

Debian-Specific OS Packages:

    prometheus-node-exporter
    software-properties-common
    dirmngr
    apt-transport-https

RHEL-Specific OS Packages:

    golang-github-prometheus-node-exporter

Required PIP Packages:

    docker==7.1.0
    docker-compose
    pyopenssl
    hvac
    pyyaml
    requests
    wheel

Follow the installation instructions according to your distribution type (Debian or Red Hat):

Debian Package Installation

For Docker Engine installation instructions, see https://docs.docker.com/engine/install/.

Install OS Packages

$ sudo apt install bash-completion curl ca-certificates gnupg2 jq s3cmd prometheus-node-exporter apt-transport-https dirmngr software-properties-common tmux python3-pip -y

Install PIP Packages

$ sudo sh -c 'umask 0022 && pip3 install docker==7.1.0 pyopenssl hvac'
$ sudo sh -c 'umask 0022 && pip3 install docker-compose --no-build-isolation'

Red Hat 9 Package Installation

For Docker Engine installation instructions, see https://docs.docker.com/engine/install/.

Install OS Packages

$ sudo dnf install bash-completion curl ca-certificates gnupg2 jq s3cmd golang-github-prometheus-node-exporter tmux python3-pip -y

Install PIP Packages

$ sudo pip3 install docker==7.1.0 pyopenssl hvac --break-system-packages
$ sudo pip3 install docker-compose --break-system-packages --no-build-isolation
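On either distribution, a quick sanity check that the packages installed above are present. This is a sketch that reports rather than fails, so it can run before every gap is closed; module names map to the PIP packages (pyopenssl imports as OpenSSL, pyyaml as yaml):

```shell
#!/usr/bin/env bash
# Report installed Docker CLI and Python dependency versions.

for cmd in docker docker-compose; do
  if command -v "$cmd" >/dev/null 2>&1; then
    "$cmd" --version
  else
    echo "$cmd not found"
  fi
done

python3 - <<'EOF'
# Try to import each required PIP package from the list above.
import importlib
for mod in ("docker", "OpenSSL", "hvac", "yaml", "requests", "wheel"):
    try:
        m = importlib.import_module(mod)
        print("OK  ", mod, getattr(m, "__version__", ""))
    except ImportError:
        print("MISS", mod)
EOF
```

Any MISS line means the corresponding pip3 install step above did not complete.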

4. Set Up Users and Access

Create the admin user on all nodes of your installation and set up password-less SSH access between them.

  1. On all nodes, create a user named admin.

    Execute the following commands to create an admin user with sudo access:

    $ sudo useradd -d /home/admin -m -s /usr/bin/bash admin
    

    For Debian, add the user to required groups:

    $ sudo usermod -a -G sudo admin
    $ sudo usermod -a -G docker admin
    $ echo 'admin ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/admin >/dev/null && sudo chmod 0440 /etc/sudoers.d/admin
    $ mkdir -p ~/.docker
    

    For RHEL, add the user to required groups:

    $ sudo usermod -a -G wheel admin
    $ sudo usermod -a -G docker admin
    $ mkdir -p ~/.docker
    
  2. On the leader node, configure ssh.

    Generate an SSH key and authorize access:

    $ sudo su - admin
    $ ssh-keygen -t rsa -b 4096 -N "" -f /home/admin/.ssh/id_rsa
    $ cat /home/admin/.ssh/id_rsa.pub >> /home/admin/.ssh/authorized_keys
    
  3. For multi-node installations, copy the leader node public key to all other nodes:

    On the leader node:

    $ sudo su - admin
    $ cat /home/admin/.ssh/authorized_keys
    

    On all other nodes:

    $ sudo su - admin
    $ vi /home/admin/.ssh/authorized_keys  # Paste the key contents from the leader node
    
  4. Prepare ssh on the leader node for installation and deployment:

    # Note: The ssh-agent session is temporary and must be started before installation.
    $ sudo su - admin
    $ eval $(ssh-agent); ssh-add /home/admin/.ssh/id_rsa
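After distributing keys, it is worth confirming from the leader node that password-less ssh actually works to every VM. A sketch with placeholder hostnames; BatchMode makes the check fail fast instead of prompting for a password:

```shell
#!/usr/bin/env bash
# Placeholder node list; replace with your deployment hostnames or IPs.
NODES="node2.example.com node3.example.com"

for node in $NODES; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 \
         -o StrictHostKeyChecking=accept-new \
         "admin@$node" true 2>/dev/null; then
    echo "OK   $node"
  else
    echo "FAIL $node"
  fi
done
```

A FAIL here means the leader's public key did not reach that node's /home/admin/.ssh/authorized_keys.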
    

5. Additional Prerequisites for Airgap Installation

These steps will be executed by an operations agent.

  1. Obtain the airgap archive.
  2. Transfer the archive to the leader node.

Download and Extract Airgap Archive from Cloud Storage

  1. Request a read-only key for the airgap bucket from your CloudOps contact, along with the archive path.

    This step requires a node with internet access and the Google Cloud SDK installed. For more information, see: https://cloud.google.com/sdk/docs/install

  2. On the leader node, save the key to a file named airgap_bucket-key.json.

    $ vi airgap_bucket-key.json
    # If the file doesn't exist, vi creates it.
    # Press i to enter insert mode, then paste the key contents
    # (right-click or Shift + Insert in most terminals).
    # Save and exit: press ESC, type :wq, then press Enter.
    
  3. Activate the service account key.

    $ gcloud auth activate-service-account --key-file=./PATH_TO_SERVICE_ACCOUNT_FILE.json
    
    # Example
    $ gcloud auth activate-service-account --key-file=./airgap_bucket-key.json
    
  4. Download the archive (confirm the exact location with your Provider Connectivity Assurance contact if unsure):

    $ gcloud storage cp gs://airgap-archives/##VERSION##/##OS##_##MAJOR VERSION##/airgap_skylight-##VERSION##-init.tar.gz .
    
    # or
    
    $ gsutil cp gs://airgap-archives/##VERSION##/##OS##_##MAJOR VERSION##/airgap_skylight-##VERSION##-init.tar.gz .
    
    # For example
    
    $ gsutil cp gs://airgap-archives/15.111.95/ubuntu_22.04/airgap_skylight-15.111.95-init.tar.gz .
    
  5. Extract the archive.

    # Switch to the admin user
    $ sudo su - admin
    
    # Navigate to the admin's home directory:
    $ cd /home/admin/
    
    # Define the SKYLIGHT_VERSION to use. For example:
    $ export SKYLIGHT_VERSION="16.156.0"
    
    # Extract the skylight-installer directory from the archive:
    $ tar -zxvf airgap_skylight-$SKYLIGHT_VERSION-init.tar.gz skylight-installer/
    
    ## This step extracts the skylight-installer directory from the tar file
    ## At this point, the installer scripts and the airgap archive should be present in this directory
    
    # Verify extraction. For example:
    $ ls -l /home/admin
    -rw-r--r-- 1 admin admin  18G Jan 28 20:11 airgap_skylight-16.156.0-init.tar.gz
    drwxr-xr-x 6 admin admin  165 Jan 28 23:12 skylight-installer
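Before (or after) extracting, you can confirm the archive actually contains the skylight-installer directory without unpacking everything. list_archive_top is a hypothetical helper, not part of the installer:

```shell
#!/usr/bin/env bash
# Print the unique top-level entries of a .tar.gz archive.
list_archive_top() {
  tar -tzf "$1" | cut -d/ -f1 | sort -u
}

# Example (assumes the archive from the previous step is present):
# list_archive_top airgap_skylight-$SKYLIGHT_VERSION-init.tar.gz
# The output should include: skylight-installer
```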
    

6. Run Preflight Checks

Before proceeding with installation, extract and execute the preflight checks to validate that all required software prerequisites are met.

Note: Run this check individually on each node in the cluster:

  1. Navigate to the preflight check directory.

    $ cd /home/admin/skylight-installer/bin/  
    
  2. Execute the preflight check.

    $ sudo ./cisco-preflight-check.sh [airgap|noairgap]  
    
    # Select the appropriate option:
    # noairgap = Internet access is available  
    # airgap = No access to the public internet  
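Since the check must run on every node, multi-node clusters can drive it from the leader over ssh. A sketch: NODES is a placeholder (left empty here so the loop is a no-op until filled in), and validate_mode is a hypothetical guard against typos in the mode argument:

```shell
#!/usr/bin/env bash
MODE=airgap          # or noairgap (internet access available)
NODES=""             # e.g. NODES="node2 node3"; empty skips the loop

# Reject anything other than the two supported modes.
validate_mode() { case "$1" in airgap|noairgap) return 0 ;; *) return 1 ;; esac; }
validate_mode "$MODE" || { echo "mode must be airgap or noairgap"; exit 1; }

for node in $NODES; do
  echo "=== $node ==="
  ssh "admin@$node" \
    "sudo /home/admin/skylight-installer/bin/cisco-preflight-check.sh $MODE"
done
```

This assumes the skylight-installer directory was extracted to /home/admin on every node.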
    

Pre-Installation Checklist

Before proceeding with installation, verify that you have completed the following:

    Airgap Archive Bundle – You have the necessary software versions for the next steps.
    Preflight Checks – You have run the cisco-preflight-check.sh script on all nodes and resolved any failures.
    Skylight Version – You have noted the $SKYLIGHT_VERSION value to be installed.
    Deployment Hostnames & IPs – You have all required system details.
    Proxy Servers & TLS Certificates – For non-airgap environments, you have identified your proxy servers and, if supplying your own, have the required TLS server certificates ready. (Otherwise, a self-signed certificate is provided and can be replaced later.)
    Domain & Deployment Configuration – You have your domain details and have decided on:
    • Deployment name (default: performance)
    • Tenant name (default: analytics)
    • If DNS is available, create an entry pointing <tenant_name>.<deployment_name>.<domain_name> to the leader node's IP.
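If you create that DNS entry, a quick way to confirm it resolves to the leader node; the FQDN and IP below are placeholders built from the default deployment and tenant names:

```shell
#!/usr/bin/env bash
FQDN="analytics.performance.example.com"   # <tenant_name>.<deployment_name>.<domain_name>
LEADER_IP="203.0.113.10"                   # placeholder leader node IP

# getent consults the same resolver order the OS uses (hosts file, then DNS).
resolved=$(getent hosts "$FQDN" | awk '{print $1; exit}')
if [ "$resolved" = "$LEADER_IP" ]; then
  echo "OK: $FQDN -> $resolved"
else
  echo "Check DNS: $FQDN resolves to '${resolved:-nothing}', expected $LEADER_IP"
fi
```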