This article explains how to deploy Sensor Collectors in a Kubernetes environment. It assumes that you have already configured your Sensor Collector and that your environment meets the system requirements.
Note: The steps in the procedure differ slightly depending on whether you are deploying a Gateway Sensor Collector (which receives time-series data in a "push" mode), or a CSV Sensor Collector (which collects time-series data from CSV files).
Gateway mode requires a server certificate so that the Gateway Sensor Collector can communicate securely with a Telemetry Collector or Assurance Sensor. You have two options for certificate generation: generate a self-signed certificate (instructions provided in this article) or use a certificate issued by a trusted Certificate Authority (CA).
To deploy Sensor Collector in Kubernetes, you will need to perform the following steps:
Step 1: Download the Helm helper artifacts from Provider Connectivity Assurance.
Step 2: Set up environment variables and certificates (Installation Prerequisites).
Step 3: Install the Sensor Collector.
Step 1: Download Helm Helper Artifacts
Cisco Provider Connectivity Assurance provides an endpoint for downloading the Helm helper bundle, a set of pre-populated artifacts used during the Sensor Collector Helm install. These artifacts can be accessed via either the UI or the API.
Helm Helper Artifacts (Download via UI)
To download the Helm helper bundle via the UI:
- In the UI, navigate to Sensors > Collectors > Sensor Collectors.
- Select your Sensor Collector from the list.
- Click the ellipsis (three dots) on the right.
- Select Download (Kubernetes) from the menu, as shown below.

Helm Helper Artifacts (Download via API)
After authenticating, you can download the artifacts by submitting a GET REST API request to Cisco Provider Connectivity Assurance as follows:
GET https://{{tenant url}}/api/v2/distribution/download-roadrunner-helm?zone={{your zone}}
Where:
- tenant url = The tenant URL for your Provider Connectivity Assurance deployment
- zone = The zone for the Sensor Collector instance you wish to deploy
A .tar file following the naming convention {connector name}-{date}-helm.tar.gz will be downloaded to your machine.
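For example, using curl (a sketch only: the Authorization header and token variable shown here are assumptions, so substitute whatever authentication mechanism your tenant uses, and replace the placeholders and output file name with your own values):
> curl -sS -H "Authorization: Bearer $PCA_API_TOKEN" "https://{{tenant url}}/api/v2/distribution/download-roadrunner-helm?zone={{your zone}}" -o sensor-collector-helm.tar.gz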
Copy the file to the working directory where you will initiate your Sensor Collector installation, then extract its contents.
Example:
> cd /opt/data-collector
> cp ~/Downloads/DataGateway-2025-08-27-helm.tar.gz .
> tar -xzvf DataGateway-2025-08-27-helm.tar.gz
You should now have the following files in your working directory:
├── .env
├── pcaAuthArtifact
├── sensor-collector-install.sh
└── values.yaml
Step 2: Installation Prerequisites
2.1. Selecting a Namespace and Instance Name
The .env file extracted from your .tar file includes (among others) the following environment variables:
KUBERNETES_INSTANCE_NAME="sensor-collector"
KUBERNETES_NAMESPACE="cisco-sensor-collectors"
These dictate the Kubernetes namespace into which you will deploy your Sensor Collector, as well as the instance name for your Helm deployment. You may override either of these defaults by modifying the associated value in the .env file.
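For example, to deploy under a different instance name and namespace, you might edit the .env file as follows (the names shown are illustrative):
KUBERNETES_INSTANCE_NAME="edge-gateway-01"
KUBERNETES_NAMESPACE="assurance-collectors"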
2.2. TLS Setup (Gateway Sensor Collectors only)
Note: This step is not required for CSV Sensor Collectors.
A server certificate is essential for the Gateway Sensor Collector to securely communicate with a Telemetry Collector or Assurance Sensor.
You have the following two options:
- Generate a self-signed certificate (instructions in this article)
- Utilize a certificate issued by a trusted Certificate Authority (CA)
Whether you use self-signed certificates or certificates signed by a trusted Certificate Authority is your choice. In either case, the appropriate files must be available at the time of the Sensor Collector installation.
Within the .env file extracted from your .tar file, you will find the following environment variables:
AGENT_SERVER_CERTIFICATE_FILE="./tls.crt"
AGENT_SERVER_KEY_FILE="./tls.key"
These tell the install process where to find the certificate and its associated private key file, which are used to establish the WebSocket server for the sensor agents and collectors.
You must either provide these files using the default naming (i.e., copy a tls.crt and tls.key file into your working directory) or modify the values for the above environment variables to point to your files.
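If you opt for a self-signed certificate, the following openssl command is a minimal sketch (it requires OpenSSL 1.1.1 or later for the -addext option; the hostname is an assumption, so use the address your sensor agents and collectors will actually connect to, and choose a validity period that suits your environment):
> openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes -keyout tls.key -out tls.crt -subj "/CN=gateway.example.com" -addext "subjectAltName=DNS:gateway.example.com"
Running this command in your working directory produces tls.crt and tls.key with the default names expected by the installer.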
Step 3: Install Sensor Collector
The .tar file you downloaded from Provider Connectivity Assurance contains a script named sensor-collector-install.sh. This script runs several pre-flight checks to verify that all prerequisites for the install are met and then performs a Helm installation of the Sensor Collector.
Note: If you do not have access to the registry over the internet (for example, an air-gapped environment), follow the Additional Notes section below to download the Sensor Collector Helm package before proceeding.
Assuming all prerequisites have been met, running the installer script should produce output similar to the following:
╰─ ./sensor-collector-install.sh
Perform 'helm install' of sensor collector instance to Kubernetes cluster
Performing preflight checks...
helm installed ------------------------------ PASS
confirm required environment variables set in local .env file:
KUBERNETES_INSTANCE_NAME set ------------ PASS
KUBERNETES_NAMESPACE set ---------------- PASS
SENSOR_COLLECTOR_TYPE set --------------- PASS
AGENT_SERVER_CERTIFICATE_FILE set ------- PASS
AGENT_SERVER_KEY_FILE set --------------- PASS
Verify required support files present:
values.yaml - PRESENT
./pcaAuthArtifact - PRESENT
./tls.crt - PRESENT
./tls.key - PRESENT
Preflight checks passed
Installing data-collector instance of type dataGateway with name sensor-collector to namespace cisco-sensor-collectors
- Server certificate for sensor agent authentication will be imported from ./tls.crt
- Server certificate key for sensor agent authentication will be imported from ./tls.key
Results of helm install:
NAME: sensor-collector
LAST DEPLOYED: Wed Aug 27 14:44:40 2025
NAMESPACE: cisco-sensor-collectors
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Roadrunner application version gfraboni-dev installed for zone DataGateway.
General Notes:
--------------
Analytics Endpoints:
- Deployment: gdf-rep.gdf-rep.npav.accedian.net:443
- Tenant : gino.gdf-rep.npav.accedian.net:443
Agent Proxy Node Port: 30001
Metrics Gateway Port : 30000
To get the node IP address(es) for your Kubernetes cluster, run the following command at the command line:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}{"\n"}'
If your output resembles the example above, then you have completed your install of the Sensor Collector.
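To confirm that the release is running, you can check the Helm release and its pods (the names below assume the default instance name and namespace; adjust them if you overrode the .env values):
> helm status sensor-collector -n cisco-sensor-collectors
> kubectl get pods -n cisco-sensor-collectors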
If any of the tested prerequisites are not met, the install aborts; correct the deficiency and run the script again.
Additional Notes
Accessing the Published Roadrunner Helm Charts for Air-Gapped Environment Installations
If your deployment location has direct access to the Accedian Artifact Registry, the instructions above are sufficient. You can reference the Sensor Collector Helm package directly during installation.
If your deployment location does not have direct access to the registry, you will need to complete the following step.
From a location with registry access, download the Sensor Collector Helm package. The Roadrunner version is included in the .env file, one of the helper artifacts you downloaded from Provider Connectivity Assurance; set the $VERSION variable to this value before running the command:
> helm pull oci://us-docker.pkg.dev/npav-172917/helm-package/sensor-collector --version $VERSION
The above example downloads a file named roadrunner-0.626.0.tgz to your working directory.
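As an optional sanity check before transferring the file, you can inspect the chart metadata locally (the file name shown matches the example above; yours will reflect the version you pulled):
> helm show chart ./roadrunner-0.626.0.tgz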
Transfer this file via SFTP to the deployment location, placing it in the same working directory where you will perform the installation and where the helper artifacts were downloaded in the previous step.
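For example, using scp (the hostname is illustrative, and the destination path matches the working directory used earlier in this article):
> scp roadrunner-0.626.0.tgz user@deployment-host:/opt/data-collector/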
Node Ports
Both the CSV and Gateway Sensor Collector variants must expose ports to allow external access to the Sensor Collector. For the CSV variant, this is the port where the SFTP server (used for ingesting .csv files into the system) is exposed. For the Data Gateway variant, these are the ports for the Agent Proxy and the Metrics Gateway.
In both cases, static node port assignments are determined by the values in the values.yaml file used during the Helm installation.
For the CSV Sensor Collector, the SFTP server port is assigned with the filewatcher.nodePort parameter.
For the Data Gateway variant, the ports for the Agent Proxy and the Metrics Gateway are determined by the openMetricsGateway.agentProxyPort and openMetricsGateway.metricsGatewayPort parameters, respectively.
These values are automatically populated when you download the values.yaml file from Provider Connectivity Assurance. For a CSV Sensor Collector, filewatcher.nodePort defaults to 30000. For a Data Gateway Sensor Collector, the port assignments are based on the connector configuration you created in the Provider Connectivity Assurance user interface when you created the Sensor Collector.
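As a rough sketch, the corresponding entries in values.yaml might look like the following (the exact structure of your downloaded file may differ, and the port numbers are illustrative; they must fall within your cluster's node port range):
# CSV variant: SFTP server node port (illustrative)
filewatcher:
  nodePort: 30000
# Data Gateway variant: Agent Proxy and Metrics Gateway node ports (illustrative)
openMetricsGateway:
  agentProxyPort: 30001
  metricsGatewayPort: 30000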
Important: As mentioned in the Configuration article, the default Gateway ports (55777 and 55888) fall outside the valid node port range for most Kubernetes clusters (typically 30000-32767). If you are deploying a Gateway to a Kubernetes cluster, you will most likely need to select different values for these ports. All of the port assignments described above must fall within the valid node port range for your cluster.
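If you are unsure of the node port range configured for your cluster, one way to check it on clusters where the kube-apiserver runs as a pod (for example, kubeadm-based clusters; managed clusters may not expose this) is:
> kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep service-node-port-range
If the flag is not set, the Kubernetes default range of 30000-32767 applies.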