Installation on Kubernetes


This article will help you install Provider Connectivity Assurance on your pre-existing Kubernetes cluster.

Deployment Prerequisites

Computing Resources

| Profile | CPU Cores | Memory | Disk | Minimum # Nodes | # of Sessions | Read IOPS | Write IOPS | Retention |
|---------|-----------|--------|------|-----------------|---------------|-----------|------------|-----------|
| Lab | 32 | 64GB | 750GB | 1 | Up to 100 | 10000 | 1000 | N/A |
| Small | 72 | 128GB | 2.5TB | 1 | Up to 10000 | 10000 | 1000 | 1 minute data for 3 months |

Kubernetes Version

1.31, 1.32, or 1.33

Available StorageClass

The cluster must have an existing StorageClass available. Provider Connectivity Assurance creates the required stateful components using the default StorageClass in the cluster.
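As a quick pre-check, you can confirm that a default StorageClass is marked before installing. The sketch below is read-only and assumes kubectl is configured against the target cluster.

```shell
#!/bin/bash
# Print the name of the default StorageClass, or warn if none is marked.
default_sc=$(kubectl get storageclass \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}' \
  | awk -F'\t' '$2 == "true" { print $1 }')

if [ -n "$default_sc" ]; then
  echo "Default StorageClass: $default_sc"
else
  echo "WARNING: no default StorageClass is set; mark one before installing"
fi
```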

Port Forwarding

To support port forwarding, Kubernetes clusters require that the SOcket CAT (socat) package is installed on each node.

If the package is not installed on each node in the cluster, you see the following error message when the installation script attempts to connect to the Admin Console: unable to do port forwarding: socat not found.

To check whether the package that provides socat is installed, run which socat. If the package is installed, the command prints the full path to the socat executable, for example /usr/bin/socat.

If the output of the which socat command is socat not found, then you must install the package that provides the socat command. The name of this package can vary depending on the node's operating system.
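A minimal check you can run on each node is sketched below; the package names shown are the common ones, and the exact name depends on the node's distribution.

```shell
#!/bin/bash
# Check for socat on this node and suggest an install command if missing.
if command -v socat >/dev/null 2>&1; then
  echo "socat found at $(command -v socat)"
else
  echo "socat not found; install it, for example:"
  echo "  Debian/Ubuntu: sudo apt-get install -y socat"
  echo "  RHEL/CentOS:   sudo yum install -y socat"
fi
```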

RBAC Requirements

The user that runs the installation command must have cluster-scoped access. With cluster-scoped access, a Kubernetes ClusterRole and ClusterRoleBinding are created that grant Provider Connectivity Assurance access to all resources across all namespaces in the cluster.

To install Provider Connectivity Assurance with cluster-scoped access, the user must meet the following RBAC requirements:

  • The user must be able to create workloads, ClusterRoles, and ClusterRoleBindings.
  • The user must have cluster-admin permissions to create namespaces and assign RBAC roles across the cluster.

External URL Access

The following URLs need to be accessible for installing Provider Connectivity Assurance:

  • proxy-registry.pca.cisco.com
  • update.pca.cisco.com
  • index.docker.io
  • cdn.auth0.com
  • *.docker.io
  • *.docker.com
  • replicated.app
  • kots.io
  • github.com
  • charts.jetstack.io
  • operator.min.io
  • prometheus-community.github.io
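Reachability can be sanity-checked from the installation host with a loop like the one below. This is a sketch: registry-1.docker.io stands in for the wildcard entries, and an HTTP error response (for example 403) still counts as reachable, since only connectivity is being tested.

```shell
#!/bin/bash
# Probe each required endpoint over HTTPS; connection-level success is enough,
# so any HTTP response (including 4xx) counts as reachable.
hosts=(
  proxy-registry.pca.cisco.com
  update.pca.cisco.com
  index.docker.io
  cdn.auth0.com
  registry-1.docker.io   # stands in for *.docker.io / *.docker.com
  replicated.app
  kots.io
  github.com
  charts.jetstack.io
  operator.min.io
  prometheus-community.github.io
)
for host in "${hosts[@]}"; do
  if curl -sS --max-time 10 -o /dev/null "https://$host"; then
    echo "reachable:   $host"
  else
    echo "unreachable: $host"
  fi
done
```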

DNS Configuration

If you are planning on using domain names for your deployment, you must have the following entries configured:

| Application | Domain Name | Example |
|-------------|-------------|---------|
| Identity and Access Management | auth.{domain} | auth.mydomain.com |
| Tenant | {tenant-name}.{domain} | pca.mydomain.com |
| Deployment | {deployment-name}.{domain} | performance.mydomain.com |

When using DNS, only port 443/TCP needs to be externally exposed.
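Before installing, you can confirm the records resolve from the installation host. The sketch below uses getent and placeholder names (mydomain.com, pca, performance); substitute your own domain, tenant, and deployment names.

```shell
#!/bin/bash
# Verify that the three required DNS records resolve.
# mydomain.com, pca, and performance are placeholders.
domain="mydomain.com"
for host in "auth.$domain" "pca.$domain" "performance.$domain"; do
  if getent hosts "$host" >/dev/null; then
    echo "resolves: $host"
  else
    echo "MISSING:  $host"
  fi
done
```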

No-DNS Configuration

If you are not using DNS, applications are accessed through different ports. Note that No-DNS mode only supports IPv4.

| Application | Port |
|-------------|------|
| Identity and Access Management | {ip}:3443 |
| Tenant | {ip}:443 |
| Deployment | {ip}:2443 |

Getting Started

Pre-installing Dependencies

Provider Connectivity Assurance requires MinIO, cert-manager, and Prometheus to be installed in your Kubernetes cluster. The following script can be used to install these dependencies.

preinstall.sh
#!/bin/bash
set -euo pipefail

# This script installs or upgrades the following Helm charts:
# - cert-manager
# - minio-operator
# - prometheus-operator
# It creates the necessary namespaces if they don't exist and sets the charts'
# security contexts from the namespace UID. The UID is read from an
# OpenShift-specific annotation and is empty on non-OpenShift clusters.

# Install or upgrade the cert-manager helm chart 
# ADD repo if it doesn't exist
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace \
  --set crds.enabled=true


# Create the minio-operator namespace if it doesn't exist
kubectl get namespace minio-operator >/dev/null 2>&1 || kubectl create namespace minio-operator
# Get the UID of the namespace to set the security context for the minio-operator
NAMESPACE_UID=$(kubectl get namespace minio-operator -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | cut -d'/' -f1)

# Install or upgrade the minio-operator helm chart with the security context set to the namespace UID
# ADD repo if it doesn't exist
helm repo add minio https://operator.min.io/
helm repo update
helm upgrade --install minio-operator minio/operator --namespace minio-operator \
  --set operator.securityContext.runAsUser=$NAMESPACE_UID \
  --set operator.securityContext.runAsGroup=$NAMESPACE_UID \
  --set operator.securityContext.fsGroup=$NAMESPACE_UID \
  --set operator.containerSecurityContext.runAsUser=$NAMESPACE_UID \
  --set operator.containerSecurityContext.runAsGroup=$NAMESPACE_UID 


# Create the monitoring namespace if it doesn't exist
kubectl get namespace monitoring >/dev/null 2>&1 || kubectl create namespace monitoring
# Get the UID of the namespace to set the security context for prometheus-operator
NAMESPACE_UID=$(kubectl get namespace monitoring -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | cut -d'/' -f1)
# Install or upgrade the prometheus-operator helm chart
# ADD repo if it doesn't exist
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace \
  --set prometheus.prometheusSpec.securityContext.runAsNonRoot=true \
  --set alertmanager.alertmanagerSpec.securityContext.runAsNonRoot=true \
  --set grafana.securityContext.runAsNonRoot=true \
  --set global.securityContext.runAsUser=$NAMESPACE_UID \
  --set global.securityContext.runAsGroup=$NAMESPACE_UID \
  --set global.securityContext.fsGroup=$NAMESPACE_UID \
  --set prometheusOperator.securityContext.runAsUser=$NAMESPACE_UID \
  --set prometheusOperator.securityContext.runAsGroup=$NAMESPACE_UID \
  --set prometheusOperator.securityContext.fsGroup=$NAMESPACE_UID \
  --set prometheusOperator.containerSecurityContext.runAsUser=$NAMESPACE_UID \
  --set prometheusOperator.containerSecurityContext.runAsGroup=$NAMESPACE_UID \
  --set prometheusOperator.admissionWebhooks.patch.securityContext.runAsUser=$NAMESPACE_UID \
  --set prometheusOperator.admissionWebhooks.patch.securityContext.runAsGroup=$NAMESPACE_UID \
  --set prometheusOperator.admissionWebhooks.patch.securityContext.fsGroup=$NAMESPACE_UID \
  --set prometheusOperator.admissionWebhooks.config.securityContext.runAsUser=$NAMESPACE_UID \
  --set prometheusOperator.admissionWebhooks.config.securityContext.runAsGroup=$NAMESPACE_UID \
  --set prometheusOperator.admissionWebhooks.config.securityContext.fsGroup=$NAMESPACE_UID \
  --set prometheus.prometheusSpec.securityContext.runAsUser=$NAMESPACE_UID \
  --set prometheus.prometheusSpec.securityContext.runAsGroup=$NAMESPACE_UID \
  --set prometheus.prometheusSpec.securityContext.fsGroup=$NAMESPACE_UID \
  --set alertmanager.alertmanagerSpec.securityContext.runAsUser=$NAMESPACE_UID \
  --set alertmanager.alertmanagerSpec.securityContext.runAsGroup=$NAMESPACE_UID \
  --set alertmanager.alertmanagerSpec.securityContext.fsGroup=$NAMESPACE_UID \
  --set grafana.securityContext.runAsUser=$NAMESPACE_UID \
  --set grafana.securityContext.runAsGroup=$NAMESPACE_UID \
  --set grafana.securityContext.fsGroup=$NAMESPACE_UID \
  --set grafana.containerSecurityContext.runAsUser=$NAMESPACE_UID \
  --set grafana.containerSecurityContext.runAsGroup=$NAMESPACE_UID \
  --set nodeExporter.securityContext.runAsUser=$NAMESPACE_UID \
  --set nodeExporter.securityContext.runAsGroup=$NAMESPACE_UID \
  --set nodeExporter.securityContext.fsGroup=$NAMESPACE_UID \
  --set kube-state-metrics.securityContext.runAsUser=$NAMESPACE_UID \
  --set kube-state-metrics.securityContext.runAsGroup=$NAMESPACE_UID \
  --set kube-state-metrics.securityContext.fsGroup=$NAMESPACE_UID

# Create PCA namespace if it doesn't exist
kubectl get namespace pca >/dev/null 2>&1 || kubectl create namespace pca
# Get the UID of the pca namespace to feed into the Helm chart values
NAMESPACE_UID=$(kubectl get namespace pca -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | cut -d'/' -f1)


# Output the namespace UID to feed into the Helm chart values
echo "PCA NAMESPACE_UID is $NAMESPACE_UID"


echo ""
echo "╔════════════════════════════════════════════════════════════════════════════════╗"
echo "║                               IMPORTANT INFORMATION                            ║"
echo "╠════════════════════════════════════════════════════════════════════════════════╣"
echo "║ PCA Namespace UID: $NAMESPACE_UID                                                  ║"
echo "║                                                                                ║"
echo "║ This UID must be used in your Helm chart values.yaml file for proper           ║"
echo "║ security context configuration in OpenShift environments.                      ║"
echo "║                                                                                ║"
echo "║ Example usage in values.yaml:                                                  ║"
echo "║   global:                                                                      ║"
echo "║     securityContext:                                                           ║"
echo "║       runAsUser: $NAMESPACE_UID                                                    ║"
echo "║       runAsGroup: $NAMESPACE_UID                                                   ║"
echo "║       fsGroup: $NAMESPACE_UID                                                      ║"
echo "╚════════════════════════════════════════════════════════════════════════════════╝"
echo ""

Note that this script creates the following namespaces in your Kubernetes cluster:

  • cert-manager
  • minio-operator
  • monitoring
  • pca

The script's final output is the pca namespace UID. Capture this value, as it is required in the Configuring Provider Connectivity Assurance step.
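If you lose the value, you can re-read it at any time with the same command the script uses. The annotation is OpenShift-specific; on vanilla Kubernetes the output is empty and defaults apply.

```shell
# Re-print the pca namespace UID without re-running the preinstall script.
kubectl get namespace pca \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' \
  | cut -d'/' -f1
```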

Installing KOTS CLI

Provider Connectivity Assurance requires the KOTS CLI. KOTS is a kubectl plugin and admin console that helps manage Kubernetes Off-The-Shelf software. To install the KOTS CLI to /usr/local/bin, run:

curl https://kots.io/install | bash

To install to a directory other than /usr/local/bin, run:

curl https://kots.io/install | REPL_INSTALL_PATH=/path/to/cli bash

To install using sudo, you can set the REPL_USE_SUDO environment variable:

curl -L https://kots.io/install | REPL_USE_SUDO=y bash

For even more options for installing KOTS CLI, visit the KOTS CLI documentation.

Install the Provider Connectivity Assurance Application

Once KOTS CLI is available, the Provider Connectivity Assurance application can be installed. Run the command:

kubectl kots install provider-connectivity-assurance/stable

Note: This step can take several minutes.

Example Output

Enter the namespace to deploy to: kotsadm
  • Deploying Admin Console
    • Creating namespace ✓  
    • Creating namespace monitoring ✓  
    • Creating namespace pca ✓  
    • Creating namespace cert-manager ✓  
    • Creating namespace minio-operator ✓  
    • Creating namespace preinstall-extensions ✓  
    • Waiting for datastore to be ready ✓  
Enter a new password for the admin console (6+ characters): ••••••••••••
  • Waiting for Admin Console to be ready ✓  
    • Waiting for Admin Console to be ready ⠸
  • Press Ctrl+C to exit
  • Go to http://localhost:8800 to access the Admin Console

Access the Provider Connectivity Assurance Admin Console

Once the provider-connectivity-assurance installation is complete, you can access the Admin Console. You will use it to configure your deployment, apply future software upgrades, and generate support bundles for troubleshooting.

The Admin Console is available at the address shown in the installation output; open it in your web browser. It only needs to be accessible internally, to users who manage the cluster's deployment and maintenance.

To relaunch the admin console, run the command:

kubectl kots admin-console -n NAMESPACE

Where NAMESPACE is the original namespace entered during installation (kotsadm in the example above).

Upon installation, you were asked to configure a password for the Admin Console. Provide this password to access the Admin Console.


License Upload

To proceed in the Admin Console, you must upload the license provided to you by the Provider Connectivity Assurance team.

Configuring Provider Connectivity Assurance

Follow the steps below to configure your Provider Connectivity Assurance deployment. Each configuration option is documented with details on its purpose. Most options have default values, and it is recommended to keep the defaults when possible.


Once the deployment begins, a spinner will indicate that it is in progress.

Note: This process takes approximately 15 minutes for online installations.

SMTP Support

SMTP is configured in one of two ways:

  1. Sendgrid
  2. Generic SMTP

Sendgrid

Sendgrid is an online SMTP service. Provider Connectivity Assurance can work with Sendgrid by inputting your Sendgrid API token in this option.

Generic SMTP

Configure Provider Connectivity Assurance to use SMTP by entering your SMTP host, TLS settings, username and password.

Updating SMTP Settings

If you need to update your SMTP settings, you cannot do this in the admin console. Instead, you must log in to the Provider Connectivity Assurance authentication service and update the settings directly.

  1. Log in to the authentication service UI (see First Time Log in).
  2. Go to the Default Settings.
  3. From the menu on the left, select SMTP Provider.
  4. Select the SMTP configuration you want to modify.
  5. Follow the configuration wizard.

Pre-Flight Checks

Upon deploying your software, your system undergoes a series of pre-flight checks to determine compatibility. Do not ignore these checks: they verify the minimum requirements needed to run Provider Connectivity Assurance in your environment.

If you deploy using an IPv6 address for the host machine but do not use a deployer version whose name carries the dual-stack suffix, the pre-flight checks will fail, indicating that you must use a dual-stack version of the deployer.

First Time Log in

Once the deployment from the Admin Console is complete, you can log in for the first time using the default admin user for the deployment.

Note: The default admin user's username is derived from the deployment name and domain name entered in the Admin Console configuration.

| Default Admin Username | Password |
|------------------------|----------|
| {deployment-name}-admin@auth.{domain-name} | Listed in the Admin Console |

For example: performance-admin@auth.onprem.cisco.internal

As the default admin user, you can log in to the web UI hosted at:

If your installation uses DNS

Web UI
https://{tenant-name}.{domain-name}

If your installation does not use DNS

Web UI
https://{external-ip}

Note: On first log in, you will be asked to change the default admin user's password.

Accessing the Identity and Access Management UI

Identity and access management in Provider Connectivity Assurance is implemented via Zitadel. The Zitadel UI can be used to create service users for API integration as well as for configuring SSO for the deployment.

It is hosted at:

If your installation uses DNS

Zitadel UI
https://auth.{domain-name}

If your installation does not use DNS

Zitadel UI
https://{external-ip}:3443

Configuring TLS Clients with Provider Connectivity Assurance

Multiple clients that connect to Provider Connectivity Assurance require a TLS connection. For example:

  • Sensor Collectors
  • MQTT Clients
  • Web browser

If you are using the self-signed certificate option provided in the Admin Console installation steps, then you can output the CA to a file to add to your trust stores via the following command:

kubectl get secret -n pca nginx-tls-secret-cm -o jsonpath='{.data.ca\.crt}' \
| base64 -d > ca.crt

Note that this command and some additional details are also listed in the Admin Console under the Certificates section when the generate option is selected.
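Before distributing ca.crt to clients, you can sanity-check that the extracted file is a valid PEM certificate and view its subject and expiry. This assumes openssl is available on the client host; adding the CA to a system trust store is distribution-specific (for example, /usr/local/share/ca-certificates plus update-ca-certificates on Debian/Ubuntu).

```shell
# Inspect the extracted CA certificate; a parse error here means the
# extraction did not produce valid PEM.
openssl x509 -in ca.crt -noout -subject -issuer -enddate
```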

Upgrading to a new Deployer Version

Provider Connectivity Assurance can be upgraded to a new deployer version to pick up patch and major releases of the software. Upgrades are performed through the Admin Console.

See Access the Provider Connectivity Assurance Admin Console for more details.

When there is a new version available, you will see it in the admin console Dashboard under the Latest Available Update section.


Clicking the Go to Version history button takes you to the Version History tab, where you can deploy an updated version by pressing the Deploy button next to the version you want.


Follow the steps in the Deployment view to deploy the software. The steps include updating the configuration, running the pre-flight checks, and confirming the deployment.
