Deploying Sensor Collector in Kubernetes


Provider Connectivity Assurance includes support for Sensor Collector deployment in Kubernetes environments. This article provides installation instructions. The high-level steps are:

  1. Download Helm helper artifacts from Provider Connectivity Assurance
  2. Installation prerequisites: set up environment variables, secrets, and certificates
  3. Install the Sensor Collector

1 Download Helm Helper Artifacts

Cisco Provider Connectivity Assurance provides an endpoint for downloading a Helm helper bundle with pre-populated artifacts that can be used during the Sensor Collector Helm install. These artifacts can be accessed via the UI or via the API.

Helm helper artifacts - download via UI

From the Sensors -> Collectors -> Sensor Collectors page in the UI, select your Sensor Collector from the list, then from the ellipsis (three dots) menu on the right choose ‘Download (Kubernetes)’, as shown below.

[Screenshot: the Download (Kubernetes) option in the Sensor Collector ellipsis menu]

Helm helper artifacts - download via API

After authenticating, you can download the artifacts by submitting a GET REST API request to Cisco Provider Connectivity Assurance as follows:

GET https://{{tenant url}}/api/v2/distribution/download-roadrunner-helm?zone={{your zone}}

Where:
- tenant url = The tenant URL for your Provider Connectivity Assurance deployment
- zone = The zone for the Sensor Collector instance you wish to deploy 
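As an illustration, the request can be issued with curl. The tenant URL, zone, output filename, and bearer-token header below are placeholders only; use whatever authentication mechanism your PCA deployment requires:

> curl -L -H "Authorization: Bearer $API_TOKEN" \
    -o DataGateway-helm.tar.gz \
    "https://acme.pca.example.com/api/v2/distribution/download-roadrunner-helm?zone=DataGateway"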

A .tar.gz file following the naming convention {connector name}-{date}-helm.tar.gz will be downloaded to your machine.

Copy the file to the working directory from which you will initiate your Sensor Collector install and uncompress it.

Example:

> cd /opt/data-collector
> cp ~/Downloads/DataGateway-2025-08-27-helm.tar.gz .
> tar -xzvf DataGateway-2025-08-27-helm.tar.gz

You should now have the following files in your working directory:

 ├── .env
 ├── pcaAuthArtifact
 ├── sensor-collector-install.sh
 └── values.yaml

2 Installation Prerequisites

2.1 Selecting A Namespace And Instance Name

The .env file extracted from your .tar file includes (among others) the following environment variables:

KUBERNETES_INSTANCE_NAME="sensor-collector"
KUBERNETES_NAMESPACE="cisco-sensor-collectors"

These dictate the Kubernetes namespace into which you will deploy your Sensor Collector, as well as the instance name for your Helm deployment. You may override either of these defaults by modifying the associated value in the .env file.
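For example, to deploy into a different namespace you could edit the .env file directly, or update it from the shell on a Linux host with GNU sed (the namespace name below is purely illustrative):

> sed -i 's/^KUBERNETES_NAMESPACE=.*/KUBERNETES_NAMESPACE="acme-collectors"/' .env
> grep '^KUBERNETES_' .env
KUBERNETES_INSTANCE_NAME="sensor-collector"
KUBERNETES_NAMESPACE="acme-collectors"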

2.2 Setting Up Your Pull Secret

In order for your Kubernetes cluster to pull the Sensor Collector image when starting the pod, a pull secret pointing to the appropriate repository must be installed in the cluster.

If this has not already been done, it can be accomplished as follows:

Retrieve the pull secret from the PCA deployment:

On your PCA Kubernetes cluster, run the following command:

> kubectl -n cisco-sensor-collectors get secret provider-connectivity-assurance-registry -o yaml > pca-reg-secret.yaml

Note - in the preceding example, it is assumed that your PCA deployment was installed under the namespace cisco-sensor-collectors - if this is not the case, then replace cisco-sensor-collectors in the example with your actual namespace.

Install the secret in your Kubernetes Cluster:

Copy the file produced in the previous step onto a machine with access to the Kubernetes cluster that you will be installing the Sensor Collector to.

Install the secret into the namespace to which you will be installing your Sensor Collector:

> kubectl -n cisco-sensor-collectors create -f ./pca-reg-secret.yaml

Note - in the preceding example, it is assumed that you will be installing your Sensor Collector into the cisco-sensor-collectors namespace. If this is not the case then replace that value with the actual namespace you will be using
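To confirm that the secret was created, you can list it in the target namespace. The output below is a sketch; the TYPE column assumes a standard registry pull secret:

> kubectl -n cisco-sensor-collectors get secret provider-connectivity-assurance-registry
NAME                                        TYPE                             DATA   AGE
provider-connectivity-assurance-registry    kubernetes.io/dockerconfigjson   1      10s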

2.3 TLS Setup For The Telemetry Collector or Sensor Agent Connections

If the Sensor Collector you are using is of the csv variety, this section does not apply and you can skip ahead to section 3.

If, however, the Sensor Collector you are using is of type DataGateway (which is always the case for Telemetry Collector ingestion pipelines), then you will need to perform the following additional steps:

Your Sensor Collector requires a valid server certificate, which it uses when instantiating the web socket server that allows the sensor agents and collectors to connect to the Sensor Collector.

It is at the customer's discretion whether to use self-signed certificates for this purpose or certificates signed by a trusted Certificate Authority. In either case, the appropriate files must be made available at Sensor Collector installation time.

Within the .env file extracted from your .tar file, you will find the following environment variables:

AGENT_SERVER_CERTIFICATE_FILE="./tls.crt"
AGENT_SERVER_KEY_FILE="./tls.key"

These tell the install process where to find the certificate and associated private key file for use in establishing the web socket server for the sensor agents/collectors.

You must either provide these files using the default naming (i.e., copy a tls.crt and tls.key file into your working directory) or modify the values of the above environment variables to point to your files.
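If you opt for a self-signed certificate (for example, in a lab environment), one way to generate the tls.crt and tls.key pair is with openssl (1.1.1 or later); the hostname below is a placeholder for the address your sensor agents and collectors will use to reach the Sensor Collector:

> openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout tls.key -out tls.crt \
    -subj "/CN=sensor-collector.example.com" \
    -addext "subjectAltName=DNS:sensor-collector.example.com"

Keep in mind that agents connecting over TLS will typically need to trust this certificate (or its issuing CA); for production deployments a CA-signed certificate is generally preferable.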

3 Install Sensor Collector

The .tar file which you downloaded from Provider Connectivity Assurance includes a script named sensor-collector-install.sh. This script performs a few pre-flight checks to ensure that the necessary prerequisites for the install have been met, then performs a ‘helm install’ of the Sensor Collector.

Note that if you do NOT have access to the registry over the internet (for example, in an air-gapped environment), follow the Additional Notes below to download the Sensor Collector Helm package before proceeding.

Assuming all prerequisites have been met, running the installer script should produce output similar to the following:

> ./sensor-collector-install.sh
Perform 'helm install' of sensor collector instance to Kubernetes cluster

    Performing preflight checks...
        helm installed ------------------------------ PASS

        confirm required environment variables set in local .env file:
            KUBERNETES_INSTANCE_NAME set ------------ PASS
            KUBERNETES_NAMESPACE set ---------------- PASS
            SENSOR_COLLECTOR_TYPE set --------------- PASS
            AGENT_SERVER_CERTIFICATE_FILE set ------- PASS
            AGENT_SERVER_KEY_FILE set --------------- PASS

        Verify required support files present:
                values.yaml - PRESENT
                ./pcaAuthArtifact - PRESENT
                ./tls.crt - PRESENT
                ./tls.key - PRESENT

    Preflight checks passed

Installing data-collector instance of type dataGateway with name sensor-collector to namespace cisco-sensor-collectors
- Server certificate for sensor agent authentication will be imported from ./tls.crt
- Server certificate key for sensor agent authentication will be imported from ./tls.key

Results of helm install:

NAME: sensor-collector
LAST DEPLOYED: Wed Aug 27 14:44:40 2025
NAMESPACE: cisco-sensor-collectors
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Roadrunner application version gfraboni-dev installed for zone DataGateway.

General Notes:
--------------
Analytics Endpoints:
   - Deployment: gdf-rep.gdf-rep.npav.accedian.net:443
   - Tenant    : gino.gdf-rep.npav.accedian.net:443

Agent Proxy Node Port: 30001
Metrics Gateway Port : 30000

To get the node IP address(es) for your Kubernetes cluster, run the following command at the command line:
    kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}{"\n"}'

If all went well and your output resembles the example above, then you have completed your install of the Sensor Collector.

If any of the tested prerequisites were not met, the install will abort; correct the deficiency before trying again.
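After a successful install, you can double-check the deployment directly with Helm and kubectl (adjust the namespace and release name if you overrode the defaults in the .env file):

> helm status sensor-collector -n cisco-sensor-collectors
> kubectl -n cisco-sensor-collectors get pods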

Additional Notes: Accessing the Published Roadrunner Helm Charts for air-gapped environment installations

If the location you will be running your deployment from has direct access to the Accedian Artifact Registry, then the instructions above are sufficient. You will be able to reference the Sensor Collector helm package directly when kicking off the install.

However, if the location from which the deployment will be run does NOT have direct access to the registry, you will need to perform the following additional step.

From a location which IS able to access the registry, download the Sensor Collector helm package:

> helm pull oci://us-docker.pkg.dev/npav-172917/helm-package/sensor-collector --version $VERSION

The Roadrunner version is provided in the .env file, one of the helper artifacts you downloaded from PCA. Substitute that value for the $VERSION variable.

For example, with version 0.626.0 the above command results in a file named roadrunner-0.626.0.tgz being downloaded into your working directory.

SFTP this file to the working directory on the deployment host from which you will perform the install (this should be the same directory into which the helper artifacts from the previous step were extracted).
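For example, assuming a deployment host named deploy-host (a placeholder) and the working directory used earlier:

> sftp admin@deploy-host
sftp> cd /opt/data-collector
sftp> put roadrunner-0.626.0.tgz
sftp> quit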

Node Ports

Both the csv and Data Gateway Sensor Collector variants must expose ports to allow external access to the Sensor Collector. For the csv variant, this is the port on which the SFTP server used for ingesting .csv files into the system is exposed; for the Data Gateway variant, these are the ports for the Agent Proxy and the Metrics Gateway.

In both cases, static node port assignments are determined by values in the values.yaml file referenced during the helm install:

For the csv Sensor Collector, the SFTP server port is assigned with the filewatcher.nodePort parameter.

For the Data Gateway variant, the ports for the Agent Proxy and Metrics Gateway are determined by the openMetricsGateway.agentProxyPort and openMetricsGateway.metricsGatewayPort parameters, respectively.

Note that these values will be automatically populated when the values.yaml is downloaded from PCA - for a csv Sensor Collector, the filewatcher.nodePort is defaulted to 3000, and for the Data Gateway Sensor Collector, the port assignments are determined by the connector configuration you created in PCA.
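As an illustration, the Data Gateway port assignments can be inspected with a quick grep. The exact layout of your downloaded values.yaml may differ; the values shown here correspond to the example install output above:

> grep -A 2 'openMetricsGateway' values.yaml
openMetricsGateway:
  agentProxyPort: 30001
  metricsGatewayPort: 30000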

Note that the port assignments configured for any of these ports must fall within the valid node port range for your Kubernetes cluster.