Deploying Sensor Collector in Kubernetes


This article explains how to deploy Sensor Collectors in a Kubernetes environment. It assumes that you have already configured your Sensor Collector and that your environment meets the system requirements.

Note: The steps in the procedure differ slightly depending on whether you are deploying a Gateway Sensor Collector (which receives time-series data in "push" mode) or a CSV Sensor Collector (which collects time-series data from CSV files).

Gateway mode requires a server certificate so that the Gateway Sensor Collector can communicate securely with a Telemetry Collector or Assurance Sensor. You have two options for certificate generation: generate a self-signed certificate (instructions are provided in this article) or use a certificate issued by a trusted Certificate Authority (CA).

To deploy a Sensor Collector in Kubernetes, you will need to perform the following three steps:

Step 1: Download the Helm helper artifacts from Provider Connectivity Assurance.

Step 2: Set up environment variables, secrets and certificates (Installation Prerequisites).

Step 3: Install the Sensor Collector.

Step 1: Download Helm Helper Artifacts

Cisco Provider Connectivity Assurance provides an endpoint for downloading the Helm helper bundle, which contains pre-populated artifacts used during the Sensor Collector Helm install. You can download the bundle either through the UI or through the API.

Helm Helper Artifacts (Download via UI)

To download the Helm helper bundle via the UI:

  1. In the UI, navigate to Sensors > Collectors > Sensor Collectors.
  2. Select your Sensor Collector from the list.
  3. Click the ellipsis (three dots) on the right.
  4. Select Download (Kubernetes) from the menu, as shown below.

[Figure: the Download (Kubernetes) menu option]

Helm Helper Artifacts (Download via API)

After authenticating, you can download the artifacts by submitting a GET REST API request to Cisco Provider Connectivity Assurance as follows:

GET https://{{tenant url}}/api/v2/distribution/download-roadrunner-helm?zone={{your zone}}

Where:
- tenant url = The tenant URL for your Provider Connectivity Assurance deployment
- zone = The zone for the Sensor Collector instance you wish to deploy 
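For example, using curl with a placeholder tenant URL and zone (a minimal sketch; the bearer-token Authorization header is an assumption, so use whatever authentication mechanism applies to your deployment):

> curl -sSL -H "Authorization: Bearer ${API_TOKEN}" \
    "https://acme.pca.example.com/api/v2/distribution/download-roadrunner-helm?zone=DataGateway" \
    -o DataGateway-helm.tar.gz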

A .tar file following the naming convention {connector name}-{date}-helm.tar.gz will be downloaded to your machine.

Copy the file to the working directory where you will initiate your Sensor Collector installation, then extract its contents.

Example:

> cd /opt/data-collector
> cp ~/Downloads/DataGateway-2025-08-27-helm.tar.gz .
> tar -xzvf DataGateway-2025-08-27-helm.tar.gz

You should now have the following files in your working directory:

 ├── .env
 ├── pcaAuthArtifact
 ├── sensor-collector-install.sh
 └── values.yaml

Step 2: Installation Prerequisites

2.1. Selecting a Namespace and Instance Name

The .env file extracted from your .tar file includes (among others) the following environment variables:

KUBERNETES_INSTANCE_NAME="sensor-collector"
KUBERNETES_NAMESPACE="cisco-sensor-collectors"

These dictate the Kubernetes namespace into which you will deploy your Sensor Collector, as well as the instance name for your Helm deployment. You may override either of these defaults by modifying the associated value in the .env file.
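For example, to deploy into a custom namespace, edit those values in the .env file and, if the namespace does not already exist, create it before installing (the names below are examples only; whether the install script creates the namespace for you may depend on your environment):

KUBERNETES_INSTANCE_NAME="sensor-collector-lab"
KUBERNETES_NAMESPACE="my-sensor-collectors"

> kubectl create namespace my-sensor-collectors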

2.2. Setting Up Your Pull Secret

For your Kubernetes cluster to pull the Sensor Collector image when starting the pod, a pull secret pointing to the appropriate repository must be installed in the cluster.

If this has not already been done, it can be accomplished with the following steps:

  1. Retrieve the pull secret from the Provider Connectivity Assurance deployment:

On your Provider Connectivity Assurance Kubernetes cluster, run the following command:

> kubectl -n pca get secret pca-dev-registry -o yaml > pca-dev-registry.yaml

Note: In the preceding example, it is assumed that your Provider Connectivity Assurance deployment was installed under the namespace "pca". If this is not the case, then replace "pca" in the example with your actual namespace. The YAML output from this command will also include a namespace: pca entry, which you will need to remove in the next step.

  2. Remove the namespace reference from your secret.
    The secret file that you have just produced (pca-dev-registry.yaml) includes a reference to the Provider Connectivity Assurance namespace from which you exported it (typically namespace: pca).
    IMPORTANT: Unless you are installing your Sensor Collector into an identically named namespace, which is unlikely, this will cause a problem during installation. To fix this, open the pca-dev-registry.yaml file. Its contents will look similar to the following:
apiVersion: v1
data:
  .dockerconfigjson: {{redacted base64 content}}
kind: Secret
metadata:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-weight: "-9999"
    kots.io/creation-phase: "-9999"
  creationTimestamp: "2025-10-03T13:38:13Z"
  labels:
    replicated.com/disaster-recovery: infra
    replicated.com/disaster-recovery-chart: admin-console
  name: pca-dev-registry
  namespace: pca    <---- REMOVE THIS LINE
  resourceVersion: "14729"
  uid: ba8f3f9b-b9c9-4247-ab7f-d5e272caeca4
type: kubernetes.io/dockerconfigjson

IMPORTANT: Note the namespace: pca entry. You must remove this entire line and save the file.
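If you prefer to strip the line from the command line rather than in an editor, a one-liner such as the following works (a sketch assuming GNU sed; adjust the namespace value if your deployment used a different one):

> sed -i '/^[[:space:]]*namespace: pca$/d' pca-dev-registry.yaml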

  3. Install the secret in your Kubernetes cluster:
    a. Copy the modified pca-dev-registry.yaml file onto a machine with access to the Kubernetes cluster where you will be installing the Sensor Collector.
    b. Install the secret into the namespace to which you will be installing your Sensor Collector:
> kubectl -n cisco-sensor-collectors create -f ./pca-dev-registry.yaml

Note: In the preceding example, it is assumed that you will be installing your Sensor Collector into the cisco-sensor-collectors namespace. If this is not the case, then replace that value with the actual namespace you will be using.
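To confirm that the secret is now present in the target namespace (again, substitute your own namespace if it differs):

> kubectl -n cisco-sensor-collectors get secret pca-dev-registry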

2.3. TLS Setup (Gateway Sensor Collectors only)

Note: This step is not required for CSV Sensor Collectors.
A server certificate is required for the Gateway Sensor Collector to communicate securely with a Telemetry Collector or Assurance Sensor.

You have the following two options:

  • Generate a self-signed certificate (instructions in this article)
  • Utilize a certificate issued by a trusted Certificate Authority (CA)

Whether to use a self-signed certificate or one signed by a trusted Certificate Authority is your choice. In either case, the appropriate files must be available at the time of the Sensor Collector installation.

Within the .env file extracted from your .tar file, you will find the following environment variables:

AGENT_SERVER_CERTIFICATE_FILE="./tls.crt"
AGENT_SERVER_KEY_FILE="./tls.key"

These inform the install process where to find the certificate and its associated private key file, which are used to establish the WebSocket server for the sensor agents and collectors.

You must either provide these files using the default names (that is, copy a tls.crt and tls.key file into your working directory) or modify the values of the above environment variables to point to your files.
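If you choose the self-signed option, one way to generate a matching certificate and key with the default file names is shown below (a minimal sketch requiring OpenSSL 1.1.1 or later; the hostname is a placeholder for the name your sensor agents will use to reach the Gateway):

> openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout tls.key -out tls.crt \
    -subj "/CN=sensor-collector.example.com" \
    -addext "subjectAltName=DNS:sensor-collector.example.com"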

Step 3: Install Sensor Collector

The .tar file you downloaded from Provider Connectivity Assurance contains a script named sensor-collector-install.sh. This script runs several pre-flight checks to verify that all prerequisites for the install are met and then performs a Helm installation of the Sensor Collector.

Note: If you do not have access to the registry over the internet (for example, in an air-gapped environment), follow the Additional Notes section below to download the Sensor Collector Helm package before proceeding.

Assuming all prerequisites have been met, running the installer script should produce output similar to the following:

> ./sensor-collector-install.sh
Perform 'helm install' of sensor collector instance to Kubernetes cluster

    Performing preflight checks...
        helm installed ------------------------------ PASS

        confirm required environment variables set in local .env file:
            KUBERNETES_INSTANCE_NAME set ------------ PASS
            KUBERNETES_NAMESPACE set ---------------- PASS
            SENSOR_COLLECTOR_TYPE set --------------- PASS
            AGENT_SERVER_CERTIFICATE_FILE set ------- PASS
            AGENT_SERVER_KEY_FILE set --------------- PASS

        Verify required support files present:
                values.yaml - PRESENT
                ./pcaAuthArtifact - PRESENT
                ./tls.crt - PRESENT
                ./tls.key - PRESENT

    Preflight checks passed

Installing data-collector instance of type dataGateway with name sensor-collector to namespace cisco-sensor-collectors
- Server certificate for sensor agent authentication will be imported from ./tls.crt
- Server certificate key for sensor agent authentication will be imported from ./tls.key

Results of helm install:

NAME: sensor-collector
LAST DEPLOYED: Wed Aug 27 14:44:40 2025
NAMESPACE: cisco-sensor-collectors
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Roadrunner application version gfraboni-dev installed for zone DataGateway.

General Notes:
--------------
Analytics Endpoints:
   - Deployment: gdf-rep.gdf-rep.npav.accedian.net:443
   - Tenant    : gino.gdf-rep.npav.accedian.net:443

Agent Proxy Node Port: 30001
Metrics Gateway Port : 30000

To get the node IP address(es) for your Kubernetes cluster, run the following command at the command line:
    kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}{"\n"}'

If your output resembles the example above, then you have completed your install of the Sensor Collector.

If any of the tested prerequisites are not met, the install aborts; correct the deficiency and run the script again.
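To verify the deployment after a successful install, you can check the Helm release and the pod status (substitute your own namespace and instance name if you overrode the defaults in the .env file):

> helm -n cisco-sensor-collectors status sensor-collector
> kubectl -n cisco-sensor-collectors get pods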

Additional Notes

Accessing the Published Roadrunner Helm Charts for Air-Gapped Environment Installations

If your deployment location has direct access to the Accedian Artifact Registry, the instructions above are sufficient. You can reference the Sensor Collector Helm package directly during installation.

If your deployment location does not have direct access to the registry, you will need to complete the following step.

From a location with registry access, download the Sensor Collector Helm package:

> helm pull oci://us-docker.pkg.dev/npav-172917/helm-package/sensor-collector --version $VERSION

The Roadrunner version for your Provider Connectivity Assurance deployment is included in the .env file, one of the helper artifacts you downloaded earlier. Set the $VERSION variable to this value.

The above example downloads a file named roadrunner-0.626.0.tgz to your working directory.

Transfer this file via SFTP to the deployment location, placing it in the same working directory where you will perform the installation and where the helper artifacts were downloaded in the previous step.
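For example, from a machine with registry access (a sketch; the version shown is taken from the example file name above, and the destination host and path are placeholders):

> VERSION=0.626.0    # replace with the Roadrunner version listed in your .env file
> helm pull oci://us-docker.pkg.dev/npav-172917/helm-package/sensor-collector --version "$VERSION"
> sftp user@deployment-host
sftp> put roadrunner-0.626.0.tgz /opt/data-collector/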

Node Ports

Both the CSV and Gateway Sensor Collector variants must expose ports to allow external access to the Sensor Collector. For the CSV variant, this is the port on which the SFTP server (used for ingesting .csv files into the system) is exposed. For the Data Gateway variant, these are the ports for the Agent Proxy and the Metrics Gateway.

In both cases, static node port assignments are determined by the values in the values.yaml file used during the Helm installation.

For the CSV Sensor Collector, the SFTP server port is assigned with the filewatcher.nodePort parameter.

For the Data Gateway variant, the ports for the Agent Proxy and the Metrics Gateway are determined by the openMetricsGateway.agentProxyPort and openMetricsGateway.metricsGatewayPort parameters, respectively.

These values are automatically populated when you download the values.yaml file from Provider Connectivity Assurance. For a CSV Sensor Collector, filewatcher.nodePort defaults to 3000. For a Data Gateway Sensor Collector, the port assignments are based on the connector configuration you defined in the Provider Connectivity Assurance user interface when you created the Sensor Collector.

Important: As mentioned in the Configuration article, the default Gateway ports (55777 and 55888) fall outside the valid node port range for most Kubernetes clusters (typically 30000-32767). If you are deploying a Gateway to a Kubernetes cluster, you will most likely need to select different values for these ports; all of these port assignments must fall within the valid node port range for your cluster.
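For example, a values.yaml excerpt that keeps the Data Gateway ports within the typical node port range (the nesting shown is inferred from the parameter names above; confirm it against the values.yaml downloaded for your own deployment):

openMetricsGateway:
  agentProxyPort: 30001
  metricsGatewayPort: 30000

A CSV Sensor Collector would override filewatcher.nodePort in the same way.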
