
Skylight Private Cloud Installation BETA


Installing the Skylight Solution

This guide provides installation and upgrade instructions for deploying a Skylight solution in a private cloud environment. Although Skylight Analytics can be deployed anywhere, there are three general types of deployments:

  • GCP Cloud deployments; other cloud offerings are treated as OnPrem with Internet Connectivity
  • OnPrem deployments with prescribed ingress and egress connectivity
  • OnPrem deployments that have no (restricted) ingress and egress connectivity (AIRGAP)

Included in the above deployment situations are subtypes such as Lite, Session only, and Session + Capture; each of these subtypes may have service scaling differences depending on the session counts and flows.

Although the deployment across all three environment types is generally the same, preparation for deployment is radically different. This document details how to deploy in the two OnPrem types listed above.

Prerequisites

Before proceeding with the installation, ensure the following prerequisites are met:

  1. Read through the Private Cloud Considerations documentation.
    Note: Permissions are required for this document, so you may have to request access.

  2. All machines required for your deployment have been set up according to the Virtual machines section in the above document, including but not limited to OS, filesystem, disk types, IOPS, cgroupsv1, filesystem layout, packages, users, and ingress and egress access.

  3. All VMs must be time-synchronized with a central source using NTP (not chrony).
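  As a quick sanity check, you can verify synchronization on each VM. This is a hedged sketch assuming a systemd-based OS such as Debian 11, where timedatectl is available:

  ```shell
  # Check whether the system clock reports as NTP-synchronized
  if timedatectl show -p NTPSynchronized --value 2>/dev/null | grep -q yes; then
    echo "clock is NTP-synchronized"
  else
    echo "WARNING: clock is not synchronized; fix NTP before installing" >&2
  fi
  ```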

  4. A GCR.json file containing keys for a GCP service account with access to Accedian's private Docker repository. Contact technical support for assistance.

  5. The make package is installed (sudo apt install make).

  6. A user named admin with sudo access.

  useradd -d /home/admin -m -s /usr/bin/bash admin
  usermod -a -G sudo admin
  eval $(ssh-agent)
  ssh-add ~/.ssh/id_rsa
  7. Update the authorized_keys file on all hosts in the deployment.
cat id_rsa.pub >> /home/$USER/.ssh/authorized_keys
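If you have multiple hosts, the same key can be pushed to each of them in one pass. A minimal sketch, assuming password authentication is still enabled and that HOSTS is replaced with your own list of deployment hostnames:

```shell
# Hypothetical host list -- replace with the hostnames from your solution design
HOSTS="leader worker-1 worker-2"
for h in $HOSTS; do
  # ssh-copy-id appends ~/.ssh/id_rsa.pub to admin's authorized_keys on each host
  ssh-copy-id -i ~/.ssh/id_rsa.pub admin@"$h"
done
```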
  8. Obtain the Skylight Installer from here.

  9. AIRGAP ONLY:
    The following activity will be performed by your CloudOps/DevOps/CSM team, and the software will be made available via cisco.com.
    If you are deploying Skylight Analytics into an AIRGAP environment (an environment that has been isolated and cannot establish any of the external connections outlined in bullet #2 above), you must perform the following actions on a separate non-air-gapped system:

a. Generate an airgap archive using the GitHub (login required for this link to work) archive generator on an instance with the same OS image (i.e. Debian 11) as the deployment.

Export the GITHUB_PERSONAL_ACCESS_TOKEN and GCR_TOKEN environment variables; typically:

 export GITHUB_PERSONAL_ACCESS_TOKEN=<<YOUR GITHUB PERSONAL ACCESS TOKEN>>
 export GCR_TOKEN=$(cat GCR.json)

Run the command to create all external artifacts required:

./deployment_version_upgrade.py -v1 ##AOD-DEPLOYER VERSION## --airgap --group_vars ##PATH TO GROUP_VARS/ALL.YML##

where:

  • AOD-DEPLOYER VERSION is a branch or tag of the aod-deployer github repo
  • PATH TO GROUP_VARS/ALL.YML is the path to the group_vars of the aod-deployer ansible project. Located at aod-deployer/onprem/ansible/group_vars/all.yml.
    Note: This will take a very long time and generate up to 50GB of data. Ensure you have ample disk space where you are running this.

b. Transfer the file named airgap_archive.tar.gz generated from the script to your leader VM.
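Since the archive can be tens of gigabytes, it is worth verifying the transfer. A hedged sketch (the filenames match the archive above; adjust paths to wherever you copied it):

```shell
# On the build machine, record the checksum before transferring the archive
sha256sum airgap_archive.tar.gz > airgap_archive.tar.gz.sha256

# On the leader VM, after copying both files over, verify the archive is intact
sha256sum -c airgap_archive.tar.gz.sha256
```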


Getting Started

Various implementations of the Skylight solution use the same underlying architecture and deployment strategy. Depending on the services required, one or multiple servers might be necessary to achieve the desired performance.

A Skylight Deployment overview

The required number of servers in each category is specified by your Accedian CSM as part of the solution design.

In a multi-server scenario, one server is elected as leader. The leader runs some of the core services and acts as the Docker swarm manager. All other servers join as workers.

On large deployments (more than 10 VMs), it is recommended to add two swarmmgr nodes. Unlike the leader, these smaller servers do not run services; they are Docker swarm managers so that they can participate in swarm leader election.

Please follow the recommendation from Accedian for your specific setup.

More about the leader node:

One server in the analytics server group has a special role: the leader node runs the main reverse proxy to the application (NGINX) and is responsible for terminating inbound connections (HTTPS server). In a deployment, it is the only server that will be accessed by external clients (such as users of the system, API users, roadrunners, etc.).

All other servers in a deployment only need internal communication with one another.

Unless otherwise specified, all management operations are run by the leader.

More about the installation process

The installation relies heavily on Ansible and Docker. It can be executed on any Linux or macOS box, but requires SSH access to all servers.

We make heavy use of HashiCorp Vault’s excellent SSH capabilities to enable secure passwordless connectivity between hosts. This will be configured as part of this setup.

During installation, you must follow the steps below on a server with internet access and SSH access to all servers in the deployment. This may be one of the servers in the deployment itself or a separate server dedicated to deployment that can later be discarded. Note that some parts of the installation differ based on where it is run.


Part 1: Host Pre-Setup and Getting the Installer Package

This section addresses:

  • Configuring each host
  • Downloading the installation script
  • Preparing the hosts to receive the Skylight application

The Skylight Installer Package

The Skylight installer (typically named skylight-installer-VERSION_INFO.tar.gz) contains a series of helper scripts to:

  1. Use Ansible to configure the hosts, set up the swarm, and create a local vault.
    • Vault is also used for passwordless remote connectivity between hosts.
  2. Use Ansible to deploy the Skylight Solution.

Except for a few helper shell scripts, all of the mechanics are housed in a specialized Docker container named aod-deployer:VERSION_INFO.

Uncompressing the Package

Note: You may elect to run the installation from another Linux or macOS machine. The first step (setting up your local Vault) will set up Vault on the machine where you run this step.

On the installation machine, untar skylight-installer-VERSION_INFO.tar.gz in your user's home directory; for example: /home/admin

From here, retrieve the latest version of the package to install.

Specify the SKYLIGHT_VERSION to Use

For instance:

export SKYLIGHT_VERSION="la1-23.07.7158"

cd /home/$USER
wget https://software.accedian.io/skylight/installer/release/la1-23.07/skylight-installer-$SKYLIGHT_VERSION.tar.gz
tar zxvf skylight-installer-$SKYLIGHT_VERSION.tar.gz
Executing ls -la skylight-installer/, you should see the following output:
total 76
drwxr-xr-x 6 admin admin  4096 Feb 13 07:01 .
drwxr-xr-x 7 admin admin  4096 Feb 13 07:01 ..
-rw-r--r-- 1 admin admin   180 Feb 13 07:01 Makefile
-rw-r--r-- 1 admin admin 36114 Feb 13 07:01 README.md
drwxr-xr-x 2 admin admin  4096 Feb 13 07:01 bin
drwxr-xr-x 2 admin admin  4096 Feb 13 07:01 config
drwxr-xr-x 2 admin admin  4096 Feb 13 07:01 config.templates
-rwxr-xr-x 1 admin admin  5641 Feb 13 07:01 install.py
-rwxr-xr-x 1 admin admin  2109 Feb 13 07:01 local_package_setup.sh
drwxr-xr-x 2 admin admin  4096 Feb 13 07:01 tmp
  1. The bin/ folder contains helper and initialization scripts.

  2. The config.templates folder includes sample configuration templates that can be used.

  3. The config/ folder includes configuration information that will need to be filled before executing the installation. Specific files from config.templates/ folder will be copied here and edited to fit your deployment’s needs.

  4. The tmp folder is a temporary placeholder. The intent is to place, into this folder, security artifacts (TLS keys, service accounts, etc.) that will be securely loaded into vault. At the end of a successful installation, the content of that directory should be emptied by the administrator to ensure that these keys do not live in plain text on that server.

  5. Makefile, install.py and local_package_setup.sh scripts all work in tandem to make installation simpler.

(Non-Airgapped) Configuring the Servers for gcr.io Access

On the server running the installation, place the GCR.json received from Accedian at this location:

# Example gcr.key initial location
skylight-installer/tmp/GCR.json
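Before running the installer, you can sanity-check the key. A sketch, assuming python3 is available on the server; logging in to gcr.io with the _json_key user is the standard GCR key-file pattern:

```shell
# Confirm the key is well-formed JSON
python3 -m json.tool skylight-installer/tmp/GCR.json > /dev/null \
  && echo "GCR.json parses" \
  || echo "GCR.json is malformed" >&2

# Optionally confirm the key actually grants registry access
cat skylight-installer/tmp/GCR.json | docker login -u _json_key --password-stdin https://gcr.io
```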

Part 2: Populating Configuration Files

The two main configuration files required for a deployment are:

  1. inventory: An Ansible Inventory in YAML format.
  2. config/variables.lite.env: A list of Skylight-specific environment variables containing information about the deployment, which features must be enabled, etc.

Additionally, if the RESTConf interface served by the Skylight-Gateway is enabled (a.k.a “NSO-GW”), a third file must be provided. Information on how to provide this file is specified in the following sections.

  1. /tmp/nso-gw-secrets.json : Configuration items required to enable the Skylight-Gateway.

See examples of these files below.

1. inventory - host info

The inventory file is a YAML document used by Ansible to identify which servers are to be included in this installation. It also contains host configuration information, such as Docker labels (to instruct the Docker daemon where to schedule a service), and other host information.

Examples are provided in the config.templates directory. This guide will make use of the config.templates/inventory example, which can be used for a single server deployment.

First, copy the template to the directory with the Makefile:

# files in config.templates are overwritten in subsequent patches
# The installer expects all configuration files to be in the config folder
cp config.templates/inventory inventory

Then, fill in the inventory file. An example with the required placeholders is provided below for reference.

all:
  hosts:
    ## Please replace the following label "leader" with the hostname (or IP address) of your leader
    ## We recommend using hostnames for easier operational maintenance
    [[LEADER_MACHINE_HOSTNAME]]:
      # The host address used by ansible to ssh to a host. Typically, this is the address
      # used by a client to access this host. If running this
      # on the leader, ansible_host and inventory_host are typically the same.
      # Note that this must be an IP address
      ansible_host: [[ANSIBLE_HOST_IP]]
    # for additional hosts in the deployment, add them like so:
    #swarmmgr-2:
      #  ansible_host: xxx.xxx.xxx.xxx
    #swarmmgr-1:
      #  ansible_host: xxx.xxx.xxx.xxx
    #worker-2:
      #  ansible_host: xxx.xxx.xxx.xxx
    #worker-3:
      #  ansible_host: xxx.xxx.xxx.xxx
    #worker-1:
      #  ansible_host: xxx.xxx.xxx.xxx
  vars:
    # Replace these if required
    ## the ansible_user is the user used by ansible for configuration operation. 
    ## it needs to have sudo access. Change it here if you are using a different user
    ansible_user: admin

    ## The ansible_password is not currently used. It could be set here if required,
    # but since we are using Vault ssh to perform connectivity, a password 
    # is _not_ required. We recommend leaving this option commented out
    # ansible_password: <secret> 
    
    ## Deployment specifications
    # Note that both deployment_name.deployment_name.basename.basedomain and 
    # tenant_name.deployment_name.basename.basedomain should be FQDN and routable.
    # They will be the addresses to use to access your deployments, and should have
    # proper TLS certificates for HTTPs server
    
    # In this example, our deployment URL is performance.performance.example.com 
    # and our tenant URL is analytics.performance.example.com

    # deployment_name: performance
    deployment_name: [[DEPLOYMENT_NAME]]

    # tenant_name: analytics
    tenant_name: [[TENANT_NAME]]

    # basedomain: example.com
    basedomain: [[BASEDOMAIN]]

    # Proxy ip and port that would be used for outbound internet connection
    # proxy: http://<< proxy IP >>:<< port >>

    # Boolean with options (yes/no) that determines whether this deployment is
    # airgapped. If no, the OS packages will download from the internet and gcr.io
    # will be used to get container images. If yes, OS packages are assumed to be installed
    # already (use download_packages.yml otherwise), and a container will be launched to host
    # a docker registry instead of gcr.io. This means "registry_archive" is required
    airgap: no

    # This is a tar.gz archive that will be extracted to / and contains all the necessary container images
    # and packages for the deployment. This is required if airgap == yes
    #airgap_archive: /path/to/airgap/archive/airgap_archive.tar.gz

    # This is the Sendgrid API key. It is added directly to the vault like the previous options.
    # stored at "deployer/email-sender/onprem-deployments". This option is required for airgap == no
    sendgrid_api_key: "12341234"

    # This option is available for airgap == no when this installation is performed by Cisco's authorized CloudOps team.
    # A url for a public vault can be used here.
    #public_vault: "https://vault_url"

    # This section is dedicated to files used in the deployment. gcr_key_path is required for airgap == no
    files:
      gcr_key_path: [[ PATH TO GCR KEY ]]
      ca_key_path: [[ (OPTIONAL) PATH TO CA KEY ]]
      ca_csr_path: [[ (OPTIONAL) PATH TO CA CSR ]]
      ca_crt_path: [[ (OPTIONAL) PATH TO CA CRT ]]
      nso_gw_path: [[ (OPTIONAL) PATH TO NSO GW JSON ]]
      tls_key_path: [[ (OPTIONAL) PATH TO TLS KEY ]]
      tls_cert_path: [[ (OPTIONAL) PATH TO TLS CRT ]]

docker_swarm_manager:
  hosts:
    [[LEADER_MACHINE_HOSTNAME]]:
      docker_labels:
        # The label preceding the digits in the tags below MUST be the same value as all.vars.deployment_name value 
        # (in this case, this would be performance-1 and performance-2)
        - [[DEPLOYMENT_NAME]]-1
        - [[DEPLOYMENT_NAME]]-2

# Labeling for additional hosts may also be necessary, so here is an example of additional hosts
#docker_secondary_swarm_manager:
#  hosts:
#    swarmmgr-1:
#    swarmmgr-2:
#
#docker_swarm_worker:
#  hosts:
#    worker-1:
#      docker_labels:
#        - historical
#        - deployment-name-2
#        - sparkworker
#        - hdfsdata1
#    worker-2:
#      docker_labels:
#        - broker
#        - hdfsdata2
#        - historical
#    worker-3:
#      docker_labels:
#        - historical

In this example, replace:

  • all.hosts.leader: change [[LEADER_MACHINE_HOSTNAME]] to the hostname (or IP address) of your leader. We recommend using hostnames for easier operational maintenance.
  • docker_swarm_manager.hosts.leader: same value as all.hosts.leader
  • [[ANSIBLE_HOST_IP]] : The host IP address of the leader VM.
  • [[INVENTORY_HOST_IP]]: The host IP address used by this host to communicate with other hosts of the deployment.
  • [[LEADER_MACHINE_HOSTNAME]]: The hostname of the leader server where Skylight is to be installed.
  • Repeat for other hosts in the deployment.
  • If you desire, you may replace [[DEPLOYMENT_NAME]], [[TENANT_NAME]], and [[BASEDOMAIN]] with values that fit your organization. Please follow the instructions in the comments.
  • If you change [[DEPLOYMENT_NAME]], you must update the docker_labels with matching values.

2. config/variables.lite.env - Skylight Deployment information

The variables.lite.env file contains a complete list of features that can be turned on or off. The example below has been trimmed down to only include features required for the Skylight platform to run with sensor: agents and gNMI data streaming.

Examples are provided in the config.templates directory. This guide will make use of the config.templates/variables.lite.env example, which can be used for a single server deployment. This is the default setup for a Skylight platform basic installation.

First, copy the template to the config directory:

# files in config.templates are overwritten on subsequent patches
# The installer expects all configuration files to be in the config folder
cp config.templates/variables.lite.env config/variables.lite.env

Then, fill in the config/variables.lite.env file. An example with the required placeholders between [[ ]] is provided below for reference.

Ensure that deployment_name, tenant_name, and basedomain in the inventory file above match the equivalent values in the variable file below.

##################################################
###                Deployment Specs            ###                     
##################################################

# Set the following value with the name of your deployment.
## A TENANT_NAME must be provided to activate the Analytics portal
## The combination of DEPLOYMENT_NAME.BASENAME.BASEDOMAIN as well as
## TENANT_NAME.BASENAME.BASEDOMAIN must both be FQDNs.

## An HTTPS TLS certificate and TLS key to encrypt the traffic must typically be provided to support these two entries.
# In the example below, the master tenant https://performance.performance.example.com and
# the tenant https://analytics.performance.example.com will be created for this deployment

# DEPLOYMENT_NAME=performance
DEPLOYMENT_NAME=[[DEPLOYMENT_NAME]]

# TENANT_NAME=analytics
TENANT_NAME=[[TENANT_NAME]]
# BASEDOMAIN=example.com

BASEDOMAIN=[[BASEDOMAIN]]

# Please do not change this line
BASENAME=${DEPLOYMENT_NAME}


## Swarm Leader Details
### Set the following variables to your leader hostname and IP address
### This should match the values put in ansible's inventory file, under 
### all.hosts.leader.inventory_host 
YOUR_LEADER_HOSTNAME=[[LEADER_HOSTNAME]]
YOUR_LEADER_HOSTNAME_IP=[[LEADER_IP]]

##################################################
###         TLS material (optional)            ###
##################################################
### Set to false to have this deployment generate a new CA (this is the default). If set to true, a CA needs to be provisioned in Vault
USE_ACCEDIAN_CA=false
### When USE_ACCEDIAN_CA is set to true populate the following:
#CA_KEY_PATH=/path/to/key
#CA_CSR_PATH=/path/to/csr
#CA_CRT_PATH=/path/to/crt

### TLS CERT: Populate the following if you wish to provide your own TLS certificate. Otherwise, a self-signed certificate will be created
#TLS_CERT=/path/to/crt
#TLS_KEY=/path/to/pem/key


##################################################
###                Path lists                  ###
##################################################
# the following paths are reasonable defaults. Adapt only if required

export CONF_DIR="$( cd "$( dirname "$0" )" && pwd )"
export TMP_DIR=${CONF_DIR}/../tmp

### (Non-airgapped only) JSON formatted key generated from GCP that contains the credentials for a GCP service account
###    Note: This service account must have access to GCR service (Container registry)
## If the key is different from what is specified in the configuration guide, please specify the full path here
GCP_JSON_KEY=${TMP_DIR}/gcr.key

## the TMP installation files containing secrets
SKYLIGHT_INSTALLER_TMP=${TMP_DIR}

##################################################
###                Feature lists               ###                     
##################################################
### The following values are reasonable defaults for 
### an Analytics Lite deployment

ANALYTICS_LITE=true
NUM_ANALYTICS_STREAMER_NODES=1
CONNECTOR_INCLUDECUSTOMCONFIGTEMPLATES=true

### Set this value to true ONLY if this deployment requires NSO-GW support.
# Otherwise, leave it unset (default is false). Setting this to true also
# implies WITH_SENSOR_ORCH=true. - uncomment only 1
#WITH_NSO_GW=false
WITH_NSO_GW=true

# Set the following to true if no DNS resolution exists for this environment.
# It will use the machine IP address instead of the hostname when generating the connector
# configs for RoadRunner. Note that a valid TLS certificate will be required for this IP address
# and it will need to be routable from where the RoadRunner is deployed
# CONNECTOR_CONFIG_USE_IP_ONLY=true

###Set this value to true ONLY if this deployment requires agent orchestration.  Otherwise leave it unset (default is false) - uncomment only 1
#WITH_SENSOR_ORCH=true
WITH_SENSOR_ORCH=true

###Set this value to true ONLY if this deployment requires a standalone DGraph instance.  Otherwise leave it unset (default is true for analytics-lite)
WITH_DGRAPH_DEV=true

### Replica values and their defaults - Uncomment only if necessary (defaults listed) and modify to suit your deployment
NUM_FEDEX=1

### Deployment Details
ANSIBLE_SSH_USER_WITH_SUDO_ACCESS=admin

### Set to true to enable AIRGAP deployment (no access to GCR)
## Note: Setting this to true is not currently supported
AIRGAP=false

### When AIRGAP is true this archive is required. This is a tar.gz archive that will be extracted
### to /var/lib/docker/volumes/**volume**/_data/ and contains all the necessary container images for the deployment
#REGISTRY_ARCHIVE=/path/to/tar/gz

CONNECTOR_CONFIG_USE_IP_ONLY=true

### Monitoring URL for Prometheus Proxy. Used by Accedian SREs to get health metrics from that deployment (leave commented out to disable)
# MONITORING_PROXY_URL=https://pushprox.colt-prod-mon.analytics.accedian.io

### Set this value to true ONLY if this deployment requires the use of an http or https proxy server
#WITH_ONPREM_PROXY=
### Provide the http and https proxies in http(s)://proxysvr:proxy_port.  Examples below
#ONPREM_HTTPS_PROXY=http://somelocalsquidsvr:3128
#ONPREM_HTTP_PROXY=http://somelocalsquidsvr:3128
#ONPREM_FIREBASE_HOSTNAME=skylight-watcher.firebaseapp.com
#ONPREM_FIREBASE_HOSTNAME_IP=199.36.158.100
#ONPREM_FIRESTORE_HOSTNAME=firestore.googleapis.com
#ONPREM_FIRESTORE_HOSTNAME_IP=172.217.13.202
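Once both files are filled in, a quick cross-check helps catch mismatched names before deployment. A minimal sketch, run from the skylight-installer directory; it simply prints the values from each file for visual comparison:

```shell
# Names as the ansible inventory sees them
grep -E '^[[:space:]]*(deployment_name|tenant_name|basedomain):' inventory

# Names as the Skylight variables file sees them -- these must match the inventory
grep -E '^(DEPLOYMENT_NAME|TENANT_NAME|BASEDOMAIN)=' config/variables.lite.env
```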

(Optional) Providing Your TLS and CA Certificates and Keys

If you wish to utilize your own CA certificates or TLS certificates, set the following variables with your associated files:

CA_KEY_PATH
CA_CSR_PATH
CA_CRT_PATH
TLS_CERT
TLS_KEY

3. tmp/nso-gw-secrets.json


Note: If the above variables.lite.env file has WITH_NSO_GW=true, this file must be provided.

The nso-gw-secrets file contains additional credential and configuration material. Since it contains credentials (as opposed to strictly configuration items), first copy the template from config.templates/nso-gw-secrets.json to your tmp/ directory:

cp config.templates/nso-gw-secrets.json tmp/nso-gw-secrets.json 

Edit the following fields:

  1. sensorcontrol.login.password: The password for sensor: control. If no sensor: controls are required, set to nopass.
  2. sensorcontrol.login.username: The username for sensor: control. If no sensor: controls are required, set to nouser.
  3. sensorcontrol.url: URL of the sensor: control to contact. If no sensor: controls are required, set to https://localhost:9081.

You may leave the truststoreb64 and truststorepass as is. Truststores are only required when NSO-GW runs in standalone mode, which is not the case here.

Here is a sample of this file, which should be located at tmp/nso-gw-secrets.json.

{
   "sensorcontrol.login.password": "SENSOR_CONTROL_PASSWORD",
   "sensorcontrol.login.username": "SENSOR_CONTROL_USER",
   "sensorcontrol.url": "https://localhost:80",
   "truststoreb64":   "MIIGfgIBAzCCBjcGCSqGSIb3DQEHAaCCBigEggYkMIIGIDCCBhwGCSqGSIb3DQEHBqCCBg0wggYJAgEAMIIGAgYJKoZIhvcNAQcBMCkGCiqGSIb3DQEMAQYwGwQUQSb7BE5iPjl77FQCnvi/nWde[...]Bbt4osRxtkN87Zd4Knl46VDNwp8CAwGGoA==",
   "truststorepass": "Nsogw@2022",
   "spring.datasource.username": "postgres",
   "spring.datasource.password": "admin@123"
}
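A quick sketch to confirm the required fields are present before deploying (field names taken from the sample above):

```shell
# Warn about any required field missing from the secrets file
for key in sensorcontrol.login.password sensorcontrol.login.username sensorcontrol.url; do
  grep -q "\"$key\"" tmp/nso-gw-secrets.json || echo "missing field: $key" >&2
done
```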

Once this is complete, it is time to set up some services!

Part 3: Setting up the Swarm

The next step will use Ansible to provision HashiCorp Vault as well as configure each server in your inventory file and label them for the Skylight solution.

Once this step is completed, the swarm will be successfully configured, and the final solution can be installed.

To do this, execute the following command:

build_version=##SKYLIGHT_VERSION## make install

By default, build_version is latest, which isn't always desired, so it is recommended to specify the version of aod-deployer/skylight-installer in place of ##SKYLIGHT_VERSION## above.

The script will ask for the Ansible vault password and then, depending on the state of the deployment, you may also be prompted after running this installer for such information as:

  • A vault unseal key
  • A vault root key

CAUTION: When the installer first runs, it outputs the following message prompting you to save the unseal keys and root token:

TASK [skylight-ssh-prep : Add trusted CA to sshd config] ****************************************************************************************************
ok: [your-hostname]

TASK [skylight-ssh-prep : Restart service sshd] *************************************************************************************************************
skipping: [your-hostname]

TASK [Share important information] **************************************************************************************************************************
Pausing for 1 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[Share important information]
Change: Vault initialize!

###########IMPORTANT#############IMPORTANT#############:
Save the following information:

{"keys": ["2c279"], "keys_base64": ["LCfnk="], "root_token": "s.bs***0"}

Change: Vault Unsealed!
Change: Secrets engine "secret" enabled!
Change: Secrets engine "ssh-client-signer" enabled!
Change: admin-ssh-users policy created!
Change: deployer-secrets-read policy created!
Change: SSH key generated
Change: deployer/nso-gw/kanatagrid.io/performance/secrets.yaml secret created!
Change: landlord/datahub-creds/default secret created!
Change: datahub-admin/default secret created!
Change: secret/data/deployer/ramen/server_private_key secret created!
Change: deployer/jwt/performance-analytics secret created!
Change: deployer/jwt_report_svc/performance-analytics secret created!
Change: deployer/onprem/ca secret created!
Change: deployer/gcs/ro-key secret created!

ATTENTION!!!: Save the keys listed above
:
ok: [your-hostname]

PLAY RECAP **************************************************************************************************************************************************
your-hostname : ok=34 changed=2 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0

You must save the Unseal key in a secure location. It cannot be recovered. It is the master key to everything that is secret and should not be stored in cleartext anywhere.


The initial Root token can be regenerated (with the unseal key). It is advisable to keep it visible for the course of this install.

Note: The Unseal key is used to unseal Vault when it is restarted. It is also used to generate root tokens.

Below is the output of a successfully run command:


PLAY [docker_swarm_manager] **********************************************************************************************

# ....

PLAY RECAP **************************************************************************************************************************************************
your-hostname        : ok=34   changed=2    unreachable=0    failed=0    skipped=15   rescued=0    ignored=0

There should be no failed hosts on the first run.

You can check the state of the swarm by performing the following command:

docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
ze9jdsrhmbfvfcuyt6g5zxkgb *   leader     Ready     Active         Leader           20.10.13
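On multi-node deployments, a short awk filter can flag nodes that are not yet healthy. A sketch using the standard docker node ls template fields:

```shell
# Print any node that is not Ready/Active and exit non-zero if one is found
docker node ls --format '{{.Hostname}} {{.Status}} {{.Availability}}' \
  | awk '$2 != "Ready" || $3 != "Active" { print "not ready:", $1; bad = 1 } END { exit bad }'
```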

Setting up Additional Hosts

If you have more than one VM in this deployment, perform the following operation on each server.

The script has created a unique Vault CA certificate. When this CA is installed on a remote server, a user that is authenticated by Vault with the proper policy may SSH to that server without needing additional key material.

More details can be found here.

On every server used as part of this Docker swarm:

+************************************************************************+
*  Setting SSH CA. IMPORTANT: Please manually run this command on all    *
*  servers for this deployment (it will be run automatically for this    *
*  server, but needs to be run manually on the workers)                  *
+************************************************************************+

## Set the VAULT_ADDR variable
source bin/skylight.env

# Get the CA file
sudo curl -o /etc/ssh/trusted-user-ca-keys.pem http://$VAULT_ADDR:8200/v1/ssh-client-signer/public_key

echo "TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem" | sudo tee -a /etc/ssh/sshd_config
sudo service sshd restart
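To confirm the CA was installed correctly on a worker, a couple of hedged checks:

```shell
# The CA entry should now be present in the sshd configuration
grep TrustedUserCAKeys /etc/ssh/sshd_config

# sshd -t validates the configuration; it prints nothing when the config is OK
sudo sshd -t
```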

Part 4: Deploy the Skylight Solution

The next step will deploy the Skylight solution following the options in your config/variables.lite.env file. Using the secrets stored in your Vault (TLS keys, GCR keys, etc.), the next step runs the aod-deployer to configure all the services of the swarm, and deploy them according to the labels set in your inventory file.

The following arguments are required:

  • VERSION: The Skylight version to deploy.

  • config/variables.lite.env: The variable file, specifying which options to deploy, filled above.

  • inventory : The inventory file, specifying which hosts to use, filled above.

    # Deploy the specific version. The VERSION tag would be the same that you have used in the previous commands, and
    # points to the software version that is to be deployed.
    
    bin/onprem-deploy.sh $SKYLIGHT_VERSION config/variables.lite.env inventory
    

A successful installation will indicate that no hosts failed to be configured. The output should be similar to this:

PLAY [Deploying nats default config] **********************************************************
skipping: no hosts matched

PLAY RECAP ************************************************************************************
leader   : ok=111  changed=86   unreachable=0    failed=0    skipped=22   rescued=0    ignored=0

On the leader node, you can execute docker service ls to get the list of all running services:

docker service ls

ID             NAME                        MODE         REPLICAS   IMAGE                                                                     PORTS
5yayc5r4c2yv   aod_analytics-streamer1     replicated   1/1        gcr.io/npav-172917/analytics-streamer:1.21.0                              *:30003->8017/tcp
s8b5oad6i4me   aod_blackbox-exporter       replicated   0/1        gcr.io/npav-172917/3rdparty/quay.io/prometheus/blackbox-exporter:latest   *:30012->9115/tcp

#... 

Part 4.1: Waiting for the Solution to Come Online

Before proceeding to the next step, wait for the solution to come up.

First, list all of the services to ensure that they are all online. Looking at the REPLICAS column, no service should show 0/1:

docker service ls
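Rather than re-running docker service ls by hand, a small polling loop can wait until every service reports at least one replica. A sketch; the ' 0/' match assumes the standard NAME REPLICAS output format:

```shell
# Poll until no service shows 0 running replicas
while docker service ls --format '{{.Name}} {{.Replicas}}' | grep -q ' 0/'; do
  echo "waiting for services to come online..."
  sleep 30
done
echo "all services report at least one running replica"
```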

You can then attempt to access the admin portal on the deployed solution. In a browser, navigate to https://<LEADER_HOSTNAME>/. You should be welcomed by the Skylight splash screen.

You can also try to access the login via cURL:

export TENANTHOST=<host_ip>
export DATAHUBUSER=admin@datahub.com
export DATAHUBPASS=<DefaultPassword>
curl -k -X POST https://${TENANTHOST}:2443/api/v1/auth/login -H 'Content-Type: application/x-www-form-urlencoded' -d "username=${DATAHUBUSER}&password=${DATAHUBPASS}"

Note: Depending on the Docker container download speed, it may take from 5 to 60 minutes before all of the services are online and started properly. If you see a white screen or a connection error message in your browser, ensure that all services are started and running properly.

Next Steps

The installation is complete. You may now configure a tenant, create a connector, and start receiving metrics. Follow the guide below:

Following the configuration for first use, you will be able to configure a connector and start sending data to the Skylight platform.

Patching and Upgrading Skylight

Patching and upgrading the Skylight solution follows a similar workflow. A patch includes application fixes within a single release (23.04.1 → 23.04.5), but does not generally include additional features. An upgrade occurs when moving between major releases (for instance, from 23.04.x to 23.07.x).

Upgrades may require changes to the inventory file or variables.env file, as well as other migration steps. These steps are captured in Method of Procedures (MOP) and communicated at time of release.

Patching a Skylight Solution

A patch is a minor software update to an already configured Skylight Solution. Patches are usually self-contained and do not require extra manipulation.

  1. Download the appropriate release from our software download repository.
  2. Unarchive the package at the top of the installation folder, as outlined in this section: Uncompressing the package
  3. Re-execute Part 4: Deploy the Skylight Solution

The existing inventory and variables.env will be reused.

Upgrading a Skylight Solution

Upgrading an already deployed solution follows a workflow similar to a patch, but may require changes to the inventory or variables.env files. It may also require pre-upgrade or post-upgrade operations. A Method of Procedure will document these operations.

© 2026 Cisco and/or its affiliates. All rights reserved.

For more information about trademarks, please visit:
Cisco trademarks 
For more information about legal terms, please visit:
Cisco legal terms