This article explains how to deploy Sensor Collectors (formerly Roadrunner) in a Kubernetes environment. It assumes that you have already configured your Sensor Collector, that your system meets the stated requirements, and that you have a Kubernetes cluster available that can be configured via Helm.
Step 1: Retrieve Sensor Collector Helm Charts (Required Only for Air-Gapped Deployments)
Since air-gapped deployments cannot access the internet to retrieve Helm charts, this step must be performed manually. For environments with internet access, proceed to Step 2.
Download the Sensor Collector Helm chart archive to a local file and transfer it to the working directory on the machine where you configure your Kubernetes cluster via Helm. For example:
sensor-collector-0.690.0-fips.tgz
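A transfer into the working directory might look like the following sketch; the host name and destination path are illustrative, not prescribed:

```shell
# Stage the chart archive in the working directory on the install host.
# "install-host" and /opt/data-collector are example values only.
CHART="sensor-collector-0.690.0-fips.tgz"
# scp "./${CHART}" admin@install-host:/opt/data-collector/
echo "Staged ${CHART}"
```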
Step 2: Download Helm Helper Artifacts
Cisco Provider Connectivity Assurance provides an endpoint for downloading the Helm helper bundle with some pre-populated artifacts which can be used during the Sensor Collector Helm install. These artifacts can be accessed either via the UI or API.
Helm Helper Artifacts (Download via UI)
To download the Helm helper bundle via the UI:
- In the UI, navigate to Sensors > Collectors > Sensor Collectors.
- Select your Sensor Collector from the list.
- Click the ellipsis (three dots) on the right.
- Select Download (Kubernetes) from the menu.

Helm Helper Artifacts (Download via API)
After authenticating, you can download the artifacts by submitting a GET REST API request to Cisco Provider Connectivity Assurance as follows:
GET https://{{tenant url}}/api/v2/distribution/download-roadrunner-helm?zone={{your zone}}
Where:
- tenant url = The tenant URL for your PCA deployment
- zone = The zone for the Sensor Collector instance you wish to deploy
A .tar.gz file following the naming convention {connector name}-{date}-helm.tar.gz will be downloaded to your machine.
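The request can be scripted, for example with curl. The tenant URL, zone, and the bearer-token Authorization header below are placeholder assumptions; substitute the authentication mechanism your deployment actually uses:

```shell
# Build the download URL from the tenant URL and zone (example values).
TENANT_URL="tenant.example.com"
ZONE="DataGateway"
URL="https://${TENANT_URL}/api/v2/distribution/download-roadrunner-helm?zone=${ZONE}"
echo "$URL"
# Then fetch the bundle after authenticating, e.g.:
# curl -fSL -H "Authorization: Bearer ${TOKEN}" -o "${ZONE}-helm.tar.gz" "$URL"
```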
Copy the file to the working directory on the machine where you configure your Kubernetes cluster via helm.
Example:
> cd /opt/data-collector
> cp ~/Downloads/DataGateway-2025-08-27-helm.tar.gz .
> tar -xzvf DataGateway-2025-08-27-helm.tar.gz
You should now have the following files in your working directory:
├── .env
├── sensor-collector-install.sh
└── values.yaml
Step 3: Installation Prerequisites
Selecting a Namespace and Instance Name
The .env file extracted from your .tar file includes (among others) the following environment variables:
KUBERNETES_INSTANCE_NAME="sensor-collector"
KUBERNETES_NAMESPACE="cisco-sensor-collectors"
These variables determine the Kubernetes namespace into which your Sensor Collector is deployed, as well as the instance name for your Helm release. You may override either default by editing the corresponding value in the .env file.
Step 4: Install Sensor Collector
The .tar file you downloaded from Provider Connectivity Assurance contains a script named sensor-collector-install.sh. This script runs several pre-flight checks to verify that all prerequisites for the install are met and then performs a Helm installation of the Sensor Collector.
In air-gapped environments
In an air-gapped environment, you must first obtain the Sensor Collector Helm chart archive as explained in Step 1. Then run the install script, pointing it at the locally downloaded chart archive:
> ./sensor-collector-install.sh --local-chart ./sensor-collector-0.690.0-fips.tgz
In non-air-gapped environments
In a non-air-gapped environment, run the install script directly:
> ./sensor-collector-install.sh
The expected output:
Performing preflight checks...
helm installed ------------------------------ PASS
confirm required environment variables set in local .env file:
KUBERNETES_INSTANCE_NAME set ------------ PASS
KUBERNETES_NAMESPACE set ---------------- PASS
SENSOR_COLLECTOR_TYPE set --------------- PASS
Verify required support files present:
values.yaml - PRESENT
Preflight checks passed
Installing data-collector instance of type dataGateway with name sensor-collector to namespace cisco-sensor-collectors
- Server certificate for sensor agent authentication will be imported from ./tls.crt
- Server certificate key for sensor agent authentication will be imported from ./tls.key
Results of helm install:
NAME: sensor-collector
LAST DEPLOYED: Wed Aug 27 14:44:40 2025
NAMESPACE: cisco-sensor-collectors
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Roadrunner application version gfraboni-dev installed for zone DataGateway.
General Notes:
--------------
Analytics Endpoints:
- Deployment: gdf-rep.gdf-rep.npav.accedian.net:443
- Tenant : gino.gdf-rep.npav.accedian.net:443
Agent Proxy Node Port: 30001
Metrics Gateway Port : 30000
To get the node IP address(es) for your Kubernetes cluster, run the following command at the command line:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}{"\n"}'
If your output resembles the example above, then you have completed your install of the Sensor Collector.
If any of the tested prerequisites are not met, the install aborts; correct the deficiency and run the script again.
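After the script completes, you can confirm the workload from the command line. The namespace below is the default from the .env file; match it to your KUBERNETES_NAMESPACE value:

```shell
# List the Sensor Collector pods and services in the install namespace;
# node port assignments appear in the services' PORT(S) column.
NAMESPACE="cisco-sensor-collectors"   # match KUBERNETES_NAMESPACE in .env
kubectl get pods -n "${NAMESPACE}" || true
kubectl get svc -n "${NAMESPACE}" || true
```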
Additional Notes
Node Ports
Both the csv and Gateway Sensor Collector variants must expose ports to allow external access to the Sensor Collector. For the csv variant, this is the port where the SFTP server (used for ingesting .csv files into the system) is exposed. For the Data Gateway variant, these are the ports for the Agent Proxy and the Metrics Gateway.
In both cases, static node port assignments are determined by the values in the values.yaml file used during the Helm installation.
For the csv Sensor Collector, the SFTP server port is assigned with the filewatcher.nodePort parameter.
For the Data Gateway variant, the ports for the Agent Proxy and the Metrics Gateway are determined by the openMetricsGateway.agentProxyPort and openMetricsGateway.metricsGatewayPort parameters, respectively.
These values are automatically populated when you download the values.yaml file from Provider Connectivity Assurance. For a csv Sensor Collector, the filewatcher.nodePort defaults to 3000. For a Data Gateway Sensor Collector, the port assignments are based on the connector configuration you created in the Provider Connectivity Assurance user interface when you created the Sensor Collector.
Important: As mentioned in the Configuration article, the default Gateway ports (55777 and 55888) fall outside the valid node port range of most Kubernetes clusters (typically 30000-32767). If you are deploying a Gateway to a Kubernetes cluster, you will most likely need to select different values for these ports. Whichever ports you assign must fall within the valid node port range for your cluster.
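A quick sanity check along these lines can catch an out-of-range port before the install runs; this is an illustrative helper, not part of the install script, and it assumes the typical 30000-32767 range:

```shell
# Report whether a chosen node port falls inside the typical Kubernetes
# node port range (30000-32767).
check_node_port() {
  if [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; then
    echo "$1 OK"
  else
    echo "$1 outside typical node port range"
  fi
}
check_node_port 30001   # e.g. an Agent Proxy node port
check_node_port 55777   # default Gateway port: must be changed for Kubernetes
```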
© 2026 Cisco and/or its affiliates. All rights reserved.
For more information about trademarks, please visit: Cisco trademarks
For more information about legal terms, please visit: Cisco legal terms