
Assurance Sensor Agent Installation Guidelines


Installation and Deployment Guidelines

This guide covers the requirements, configuration, and deployment of Cisco® Provider Connectivity Assurance Sensor Agents. The agents are containerized measurement functions delivered as Docker container images.

Agent types: actuate, throughput, trace, transfer

Note:

Using a self-signed CA?
If Sensor Agents will connect to Provider Connectivity Assurance through a Sensor Collector that uses a self-signed certificate authority (CA), copy the root certificate file ca.pem from the Sensor Collector host to the host where the Sensor Agent will be deployed, and make it visible to the agent through a volume bind in the Docker Compose file. See Custom CA Certificates (TLS) for Docker mount examples, and see this document for examples of Sensor Collector CA signing.

Prerequisites and Host Requirements


Container Runtime

The Cisco® Provider Connectivity Assurance Sensor Agents require a container runtime on a Linux host.

Requirement Detail
Runtime Docker Engine (dockerd / containerd) or any OCI-compliant container runtime
Minimum version Docker Engine 23.0.6 or later
Architectures AMD64 (x86-64) and ARM64
Orchestration Docker CLI, Docker Compose, or Kubernetes with Helm charts

Standard daemon configuration (/etc/docker/daemon.json) is sufficient. No special kernel modules or sysctl tuning is required — standard Linux distributions include all necessary kernel networking subsystems (netfilter, bridge interfaces, IP routing) by default. Both bridge networking (Docker default) and host networking are supported. IPv6 is detected automatically.

Linux Capabilities

Important

The NET_ADMIN capability is mandatory for all Sensor Agents. Without it, the agent cannot manage network interfaces and routing for measurements and will not operate as expected.

Grant the following Linux capabilities to Sensor Agent containers:

Capability Required? Purpose
NET_ADMIN Mandatory (all agents) Network interface and routing management for measurements
NET_RAW Recommended (actuate, trace, throughput) Raw socket access for active measurements
SYS_ADMIN Only if using VRF Network namespace access for VRF-aware deployments

In Docker Compose, set capabilities with cap_add. In Docker CLI, use --cap-add. For Kubernetes, use securityContext.capabilities.add.
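For Kubernetes, a minimal pod spec sketch granting the capabilities above might look like the following (the pod and container names are illustrative; adjust the image and tag to your deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: actuate-agent            # illustrative name
spec:
  containers:
    - name: actuate-agent
      image: gcr.io/sky-agents/agent-actuate-amd64:r25.07.1
      securityContext:
        capabilities:
          add:
            - NET_ADMIN          # mandatory for all agents
            - NET_RAW            # recommended for active measurements
```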

Network Services

The following host services are not strictly required but are recommended for reliable operation:

DNS resolver — Required if Sensor Collector endpoints are configured using FQDNs. If only IP addresses are used, DNS is not needed. Ensure /etc/resolv.conf on the host points to a functional DNS server. Docker passes this configuration into containers automatically.

NTP client — Sensor Agents read the host system clock for timestamping performance measurements. Clock accuracy directly affects the validity of one-way delay, jitter, and latency measurements. Use any standard NTP client (chrony recommended).

HTTP/HTTPS proxy — In environments without direct internet access, pass proxy settings to the container via environment variables: http_proxy, https_proxy, no_proxy. Both lowercase and uppercase variants are supported. See HTTP/HTTPS Proxy for details.
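As a sketch, the proxy variables might be passed in Docker Compose like this (the proxy URL and bypass list are placeholders for your environment):

```yaml
services:
  sensor-agent:
    environment:
      http_proxy: "http://proxy.example.com:3128"    # placeholder proxy URL
      https_proxy: "http://proxy.example.com:3128"
      no_proxy: "localhost,127.0.0.1,.internal.example.com"
```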

Firewall Rules

If a firewall is active on the host or in the network path, ensure the following traffic is permitted:

Direction Protocol Port Destination Purpose Required?
Outbound TCP 55777 Sensor Collector Agent management (WebSocket) Yes
Outbound TCP 55888 Sensor Collector Performance data (WebSocket) Yes
Outbound TCP 443 Provider Connectivity Assurance Sensor Collector to cloud/on-prem Yes (from Sensor Collector host)
Outbound UDP/TCP 53 DNS server Name resolution If using FQDNs
Outbound UDP 123 NTP server Time synchronization Recommended
Inbound UDP 862 Agent host TWAMP reflector Actuate only, if reflector enabled
Inbound UDP 7 Agent host UDP Echo reflector Actuate only, if reflector enabled
Inbound TCP/UDP 5201 Agent host iPerf3 throughput reflector Throughput only, if reflector enabled
Outbound TCP/UDP Ephemeral Test targets Active measurements During active tests
Note:

Reflector ports (862, 7, 5201) are defaults. Different ports can be specified via the orchestration API and Docker port mapping. All connections from agent to Sensor Collector, and from Sensor Collector to Provider Connectivity Assurance, are initiated outbound. NAT/PAT firewalls are supported between all components.
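For example, the actuate agent's TWAMP reflector can be exposed on a non-default host port with a Docker port mapping (host port 4000 here is an arbitrary choice):

```yaml
services:
  actuate-agent:
    ports:
      - "4000:862/udp"   # host port 4000 forwards to the TWAMP reflector on 862/udp
```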

Agent Resource Consumption


For containers, the amount of RAM, CPU, and disk is rarely strictly provisioned by the container environment. Instead, usage is typically monitored, and the environment can be configured to alert on overutilization.

As a guideline, the Sensor Agents under typical usage will consume up to:

Agent vCPU* RAM Disk
actuate 0.05 - 1 0.2 - 0.5 GB 200 MB
throughput 1 0.1 GB 200 MB
transfer 0.1 0.1 GB 200 MB
trace 0.1 0.1 GB 200 MB

A vCPU is a thread on a standard x86 CPU, typically running at 2.2 to 2.4 GHz.
CPU usage for actuate is shown as a range depending on load; the highest load is 5000 pps (for example, 500 sessions of 10 pps each).

Agent logging is through the standard container stdout mechanism. It is up to the container environment to apply restrictions on log volume and retention. Keeping large amounts of logs will increase disk space usage. See Logging Configuration for recommended settings.

Agent Connectivity Requirements


The agent establishes two connections to the Sensor Collector:

  • Management proxy (TCP port 55777, default) — used for agent management and configuration. Authenticated using a secret provided to the agent upon start.
  • Data gateway (TCP port 55888, default) — used to send measurement data and metrics.

Both connections are directed to the same Sensor Collector instance.

If you do not have a Sensor Collector instance up and running for agents to connect to, you must set one up first. For information, see Sensor Collector.

Lastly, the agent needs connectivity to the test destination. This may be a reflector, responder, service, or another agent — all depending on the test type and configuration. Each agent type has different connectivity requirements for their test traffic. The port table in Firewall Rules summarizes all connections that must be permitted.

Downloading Agent Images


The Sensor Agents are provided as Docker container images. To load an agent into your Docker environment, use the docker pull command for the appropriate agent.

Container registry (docker pull)

x86/AMD versions (amd64):

docker pull gcr.io/sky-agents/agent-actuate-amd64:r25.07.1
docker pull gcr.io/sky-agents/agent-throughput-amd64:r25.07.1
docker pull gcr.io/sky-agents/agent-trace-amd64:r25.07.1
docker pull gcr.io/sky-agents/agent-transfer-amd64:r25.07.1

ARM 64-bit versions (arm64):

docker pull gcr.io/sky-agents/agent-actuate-arm64:r25.07.1
docker pull gcr.io/sky-agents/agent-throughput-arm64:r25.07.1
docker pull gcr.io/sky-agents/agent-trace-arm64:r25.07.1
docker pull gcr.io/sky-agents/agent-transfer-arm64:r25.07.1

Offline installation (docker save / load)

If a Sensor Agent needs to be installed on a host that has no internet access, all agents can be downloaded from software.cisco.com under the Provider Connectivity Assurance category.

Alternatively, an image fetched with docker pull can be exported using docker save to a tar.gz file:

docker save gcr.io/sky-agents/agent-actuate-amd64:r25.07.1 | gzip > agent-actuate-amd64.r25.07.1.tar.gz

This file can then be loaded on the target host using docker load:

docker load -i agent-actuate-amd64.r25.07.1.tar.gz

Once loaded, verify the image is present:

docker images

The output should show the loaded image:

REPOSITORY                              TAG        IMAGE ID       CREATED        SIZE
gcr.io/sky-agents/agent-actuate-amd64   r25.07.1   4fab523db717   1 week ago     24MB

Agent Configuration


For an agent to be started and ready to perform tests, it needs several configuration elements:

  • An authentication secret (secrets file or environment variable)
  • The IP address or FQDN of a Sensor Collector management service
  • The NET_ADMIN capability enabled
  • Port mappings for the TWAMP reflector (actuate agent only, if reflection is required)
  • Port mappings for incoming throughput test requests (throughput agent only)
  • ca.pem file if using a self-signed CA on the Sensor Collector host (see Custom CA Certificates)

These parameters can be supplied in a Docker Compose file or specified as environment variables on the command line when starting the container with docker run.

Define agent in orchestration API

Important

When querying and changing configuration through the orchestration API, it is highly recommended to use a dedicated service account and not an individual user account. Agents use authentication tokens that will expire if the user account is deleted.

Each agent needs a definition in the orchestration system to be accepted and controllable. To add a new agent, four configuration attributes need to be set:

  • agentId — a unique UUID identifying this agent
  • agentName — a descriptive name string
  • data gateway server — IP address or FQDN of the Sensor Collector (same host as the management proxy)
  • data gateway port — port used for metrics data streaming (default 55888)

To generate a UUID for your agent, use the built-in generator in Linux:

uuidgen

Example output: ae311d23-5ca7-fake-8317-25example54a

Note:

The agentId field can also be left blank when creating the agent configuration — the orchestration system will generate a UUID automatically. If the agent is already deployed with a specific UUID, that UUID must be used.

Create a configuration object like the one below:

{
  "data": {
    "type": "agentConfigs",
    "attributes": {
      "agentId": "ae311d23-5ca7-fake-8317-25example54a",
      "dataGateway": {
        "server": "192.168.0.4",
        "port": 55888
      },
      "identification": {
        "agentName": "demo-agent-12345"
      }
    }
  }
}

Then, POST it to the orchestration service to register this new agent:

POST {{your-tenant-name}}/api/orchestrate/v3/agents/configuration

Secrets and authentication

Each agent needs an authentication token to register with the orchestration system. There are multiple ways to provide this secret to the agent.

Retrieve the secret

Use the agent control UI or the orchestration API to generate the initial authentication secret. The API call requires only the agent UUID:

POST {{your-tenant-name}}/api/orchestrate/v3/agents/ae311d23-5ca7-fake-8317-25example54a/secrets

The output contains the initial secret for this agent in YAML format:

agentConfig:
    identification:
        agentId: ae311d23-5ca7-fake-8317-25example54a
        authenticationToken: eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhY2NlZGlhbi5jb20i...

Method 1: Secrets file (file on host)

Save the secret output to a YAML file on the host. It is good practice to use the agent UUID as part of the file name when deploying multiple agents on the same host:

/home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml

Mount this file into the container at /run/secrets/secrets.yaml (the default path):

--mount type=bind,source=/home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml,target=/run/secrets/secrets.yaml

In Docker Compose:

volumes:
  - type: bind
    source: /home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml
    target: /run/secrets/secrets.yaml
Important

The secrets file must be writable by the agent. The agent periodically updates the authentication token and writes it back to the secrets file. If the file is read-only, the agent cannot persist renewed tokens.
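A quick way to confirm the secrets file will be writable before mounting it (the path below is illustrative; substitute your own secrets file):

```shell
# Illustrative path; substitute your own secrets file.
SECRETS=/tmp/ae311d23-example.yaml
touch "$SECRETS"
chmod 600 "$SECRETS"        # owner read/write only, so the agent can renew its token
[ -w "$SECRETS" ] && echo "secrets file is writable"
```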

Method 2: Named volumes (Docker)

Named volumes let Docker manage the storage. This is useful when the host filesystem layout should not be exposed to containers.

Create the volume and populate it:

docker volume create agent-secrets
docker container create --name temp-init -v agent-secrets:/run/secrets busybox
docker cp /home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml temp-init:/run/secrets/secrets.yaml
docker rm temp-init

Then reference the volume in your Docker Compose file:

volumes:
  - agent-secrets:/run/secrets

Or on the command line:

--mount source=agent-secrets,target=/run/secrets

Method 3: Environment variables

In some cases it may be more suitable to provide the initial token as environment variables when launching the agent. Set both AGENT_ID and AGENT_AUTHENTICATION_TOKEN:

Important

When using environment variables for authentication, the agent writes renewed tokens to its internal secrets file. If the container is destroyed, the renewed token is lost and the original environment variable token may have expired. To preserve renewed tokens across container recreation, mount a named volume on the secrets path. The volume is created automatically by Docker if it does not already exist.

Docker CLI:

docker run -d \
  --env AGENT_ID=ae311d23-5ca7-fake-8317-25example54a \
  --env AGENT_AUTHENTICATION_TOKEN=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhY2NlZGlhbi5jb20i... \
  --mount source=agent-secrets,target=/run/secrets \
  --env AGENT_MANAGEMENT_PROXY="10.11.12.13" \
  --cap-add=NET_ADMIN \
  gcr.io/sky-agents/agent-actuate-amd64:r25.07.1

Docker Compose:

services:
  actuate-agent:
    image: gcr.io/sky-agents/agent-actuate-amd64:r25.07.1
    cap_add:
      - NET_ADMIN
    environment:
      AGENT_ID: "ae311d23-5ca7-fake-8317-25example54a"
      AGENT_AUTHENTICATION_TOKEN: "eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhY2NlZGlhbi5jb20i..."
      AGENT_MANAGEMENT_PROXY: "10.11.12.13"
    volumes:
      - agent-secrets:/run/secrets

volumes:
  agent-secrets:
Note:

If a secrets.yaml file exists and is mounted, it will have precedence over the environment variables.

Token lifecycle and renewal

Token expiration

Initial authentication tokens expire after 14 days. Running agents automatically re-negotiate tokens before expiry, so as long as the container is not destroyed, the agent maintains a valid token.

Important: docker compose down destroys containers and any updated tokens stored within them. Use docker compose stop for graceful stops that preserve the container and its updated token. If a named volume is mounted on the secrets path (as recommended in Method 3), renewed tokens are persisted in the volume and survive container recreation. Without a named volume or a bind-mounted secrets file, the original token may have expired and a new one must be generated from the orchestration API.

The default path where the agent looks for its secrets file inside the container is /run/secrets/secrets.yaml. To use a different path, set the AGENT_SECRETS_PATH environment variable.
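For example, a Compose sketch overriding the secrets path (the container path /etc/agent/secrets.yaml and the host path are arbitrary choices for illustration):

```yaml
services:
  sensor-agent:
    environment:
      AGENT_SECRETS_PATH: "/etc/agent/secrets.yaml"   # custom in-container path
    volumes:
      - type: bind
        source: /home/agentuser/secrets/agent.yaml    # illustrative host path
        target: /etc/agent/secrets.yaml
```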

See Agent secrets file options for additional methods of providing authentication secrets to the agent.

CA trust bundle selection

At startup the agent selects a CA trust bundle based on the AGENT_CA_TRUST_BUNDLE environment variable:

Mode Description
core Contains only Cisco Core CAs. For communication to and from Cisco
intersect Contains the Cisco Core Bundle and the CAs trusted by all major root store programs. This is the default
union Favors connectivity at the cost of security. Includes the Cisco Core Bundle and every trusted public CA
fedramp Use for US federal operations. Contains Cisco Core Bundle plus US CAs from the Intersect Bundle and the US DoD External PKI Trust Chains
mozilla Use the Alpine Linux (Mozilla) distribution trusted root store

If no option is provided, the default mode is intersect.

Custom CA certificates (TLS)

When the Sensor Collector uses a TLS certificate signed by a private Certificate Authority (CA), the CA certificate must be added to the agent's trusted certificate store. This allows the agent to validate the Sensor Collector's TLS certificate.

There are two common approaches to TLS certificates:

  • Self-signed TLS certificate — The certificate signs itself. Simpler to generate (single command). Best for testing and development.
  • CA-signed TLS certificate — A separate CA certificate signs the TLS certificate. A single CA certificate can sign multiple TLS certificates for different servers. Best for production environments.

For instructions on generating certificates, see Generate a Self-Signed Certificate.

Note:

The CA certificate provided to Sensor Agents must be in PEM format. A PEM file is a text file that begins with -----BEGIN CERTIFICATE-----. A DER file is binary and not human-readable.

To check the format of a certificate file:

head -1 myCA.crt

If the output is -----BEGIN CERTIFICATE-----, the file is already in PEM format. If the output is binary (garbled characters), it is in DER format and must be converted:

openssl x509 -in myCA.crt -inform DER -out myCA.pem -outform PEM
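The round trip can be verified end to end with a throwaway certificate (all file names here are temporary examples, not the real CA files):

```shell
# Create a throwaway self-signed certificate, export it as DER,
# convert it back to PEM, and confirm the PEM header.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example-test-ca" \
  -keyout /tmp/test-ca-key.pem -out /tmp/test-ca.pem -days 1 2>/dev/null
openssl x509 -in /tmp/test-ca.pem -outform DER -out /tmp/test-ca.der
openssl x509 -in /tmp/test-ca.der -inform DER -out /tmp/test-ca-converted.pem -outform PEM
head -1 /tmp/test-ca-converted.pem
```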

Loading CA certificates into agents

Create a directory on the host with the CA certificate(s) in PEM format:

mkdir myCaCertsToImport
cp myCA.pem myCaCertsToImport/

Mount this directory into the container at /usr/local/share/ca-certificates/:

Docker CLI:

docker run -d \
  --env AGENT_MANAGEMENT_PROXY=192.168.1.1 \
  --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 \
  --mount type=bind,source=./secrets.yaml,target=/run/secrets/secrets.yaml \
  --mount type=bind,source=./myCaCertsToImport,target=/usr/local/share/ca-certificates/myCaCertsToImport \
  --cap-add=NET_ADMIN \
  gcr.io/sky-agents/agent-actuate-amd64:r25.07.1

Docker Compose:

services:
  sensor-agent:
    image: gcr.io/sky-agents/agent-actuate-amd64:r25.07.1
    cap_add:
      - NET_ADMIN
    environment:
      AGENT_MANAGEMENT_PROXY: "192.168.1.1"
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    volumes:
      - type: bind
        source: ./secrets.yaml
        target: /run/secrets/secrets.yaml
      - type: bind
        source: ./myCaCertsToImport
        target: /usr/local/share/ca-certificates/myCaCertsToImport

The agent automatically detects and imports PEM certificates placed in subdirectories of /usr/local/share/ca-certificates/ at startup.

Deploying Agents with Docker


Docker Compose

To use Docker Compose, place the configuration parameters in a YAML file.

Docker Compose versions

Depending on your Docker installation, you may need to use docker compose (V2, no hyphen) instead of docker-compose (V1, with hyphen). Docker Compose V2 is the current standard. If one form does not work, try the other.

Example — actuate agent:

services:
  actuate-agent:
    container_name: "my-actuate-agent"
    image: "gcr.io/sky-agents/agent-actuate-amd64:r25.07.1"
    cap_add:
      - NET_ADMIN
      - NET_RAW
    ports:
      - "4000:862/udp"
    restart: always
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    environment:
      AGENT_MANAGEMENT_PROXY: "10.11.12.13"
      AGENT_MANAGEMENT_PROXY_PORT: "55777"
    volumes:
      - type: bind
        source: /home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml
        target: /run/secrets/secrets.yaml
    # Optional: mount CA certificate if using a self-signed CA on the Sensor Collector
    # - type: bind
    #   source: ./secrets/ca.pem
    #   target: /usr/local/share/ca-certificates/ca.pem

Example — throughput agent:

services:
  throughput-agent:
    container_name: "my-throughput-agent"
    image: "gcr.io/sky-agents/agent-throughput-amd64:r25.07.1"
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: always
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    environment:
      AGENT_MANAGEMENT_PROXY: "10.11.12.13"
      AGENT_MANAGEMENT_PROXY_PORT: "55777"
    volumes:
      - type: bind
        source: /home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml
        target: /run/secrets/secrets.yaml
    # Optional: mount CA certificate if using a self-signed CA on the Sensor Collector
    # - type: bind
    #   source: ./secrets/ca.pem
    #   target: /usr/local/share/ca-certificates/ca.pem

Launch the agent:

docker compose up -d

If the compose file is not named docker-compose.yml, specify it:

docker compose -f mycomposefile.yml up -d
Recommendations
  • Use type: bind instead of -v shorthand for volume mappings
  • Set a restart policy (restart: always) so the agent restarts automatically after host reboots — see Docker restart policies
  • Configure log rotation to prevent disk space exhaustion — see Logging Configuration
  • Use docker compose stop instead of docker compose down when you want to preserve the container and its updated authentication token

Docker CLI

The same configuration can be provided on the command line using docker run.

Example — actuate agent with secrets file:

docker run -d \
  --name my-actuate-agent \
  -p 4000:862/udp \
  --env AGENT_MANAGEMENT_PROXY="10.11.12.13" \
  --env AGENT_MANAGEMENT_PROXY_PORT="55777" \
  --cap-add=NET_ADMIN \
  --cap-add=NET_RAW \
  --restart=always \
  --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 \
  --mount type=bind,source=/home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml,target=/run/secrets/secrets.yaml \
  gcr.io/sky-agents/agent-actuate-amd64:r25.07.1

Example — actuate agent without secrets file (environment variables):

docker run -d \
  --name my-actuate-agent \
  -p 4000:862/udp \
  --env AGENT_ID="ae311d23-5ca7-fake-8317-25example54a" \
  --env AGENT_AUTHENTICATION_TOKEN="eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhY2NlZGlhbi5jb20i..." \
  --env AGENT_MANAGEMENT_PROXY="10.11.12.13" \
  --env AGENT_MANAGEMENT_PROXY_PORT="55777" \
  --cap-add=NET_ADMIN \
  --cap-add=NET_RAW \
  --restart=always \
  --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 \
  gcr.io/sky-agents/agent-actuate-amd64:r25.07.1

The -d option launches the container in the background (detached mode).

Verifying agent status

To check that the agent has started, use docker ps:

docker ps

Expected output:

CONTAINER ID   IMAGE                                             COMMAND          CREATED       STATUS                 PORTS                   NAMES
437d53381d83   gcr.io/sky-agents/agent-actuate-amd64:r25.07.1   "/sbin/monitor"  5 weeks ago   Up 5 weeks (healthy)   0.0.0.0:4000->862/udp   my-actuate-agent

The output shows how long the agent has been running, the health status, and the port mappings. A status of (healthy) confirms the agent is operating correctly.

Logging Configuration


Important

By default, Docker has no log rotation. If Sensor Agents run for long periods of time, you must configure log rotation to prevent the file system from filling up.

Docker log rotation

Configure the json-file log driver with size limits. This can be done per-container or globally.

Per-container (Docker Compose):

services:
  sensor-agent:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

Per-container (Docker CLI):

--log-driver json-file --log-opt max-size=10m --log-opt max-file=3

Global default (in /etc/docker/daemon.json):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

For more details, see the Docker documentation: Configure logging drivers

Centralized logging

Sensor Agents emit structured RFC 5424 syslog to stdout, which the Docker logging driver captures. For centralized log management, configure Docker to forward logs using the syslog, fluentd, or other supported log drivers. Logs are always available locally via docker logs <container-name>.
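As a sketch, forwarding agent logs to a remote syslog server might look like this in Compose (the server address is a placeholder for your log collector):

```yaml
services:
  sensor-agent:
    logging:
      driver: syslog
      options:
        syslog-address: "udp://192.0.2.10:514"   # placeholder syslog server
```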

Advanced Configuration


Multiple interfaces / VRF / namespaces

All agents support attaching to multiple interfaces by configuration in the Docker Compose file or via Docker command line options. The agent can either be provided with a specific mount point for a VRF / namespace or be given access to the full set of namespaces available in the /var/run/netns directory.

To switch namespaces, the agent must be started with the SYS_ADMIN capability enabled.

Interfaces vs namespaces:
Linux systems natively support multiple physical and virtual (VLAN) interfaces. On a standard Linux system, these interfaces cannot have overlapping IP subnets or routes: each interface has its own IP subnet, and there is only one default route.

With namespaces / VRF, the Linux kernel provides separate routing domains. Each namespace can have its own interfaces, IP addresses, and default routes, even if they overlap with other namespaces.

When an interface is reassigned to a different namespace, it disappears from the default interface list (ip addr).

Example commands to add a namespace and associate an interface:

ip netns add mynamespace
ip link set eth3.123 netns mynamespace

Docker Compose example with namespace access:

services:
  actuate-agent:
    container_name: "my-actuate-agent"
    image: "gcr.io/sky-agents/agent-actuate-amd64:r25.07.1"
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - SYS_ADMIN
    restart: always
    network_mode: host
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    environment:
      AGENT_MANAGEMENT_PROXY: "12.13.14.15"
    volumes:
      - type: bind
        source: /home/agentuser/secrets/ae311d23-5ca7-fake-8317-25example54a.yaml
        target: /run/secrets/secrets.yaml
      - type: bind
        propagation: rslave
        source: /var/run/netns
        target: /var/run/netns
Note:
  1. SYS_ADMIN capability is required when binding to namespaces
  2. NET_ADMIN and NET_RAW are recommended for actuate, trace, and throughput agents
  3. Propagation mode rslave allows the agent to see new entries in /var/run/netns — without this flag the agent cannot attach to namespaces created after the agent has started

Environment Variable Reference


The following environment variables can be provided to Sensor Agents at container startup. All variables are set through -e / --env flags (Docker CLI) or the environment: section (Docker Compose).

General variables

Variable Required Default Description
AGENT_MANAGEMENT_PROXY Mandatory — IP address or FQDN of the Sensor Collector management service
AGENT_MANAGEMENT_PROXY_PORT Optional 55777 Port for the management connection
AGENT_ID Optional from secrets file UUID for the agent. Overrides the value in the secrets file if both are provided
AGENT_AUTHENTICATION_TOKEN Optional from secrets file Authentication token. Requires AGENT_ID to also be set. Secrets file takes precedence if mounted
AGENT_SECRETS_PATH Optional /run/secrets/secrets.yaml Path to the secrets file inside the container
AGENT_CA_TRUST_BUNDLE Optional intersect CA trust bundle selection. Values: core, intersect, union, fedramp, mozilla
AGENT_MANAGEMENT_PROXY_USE_SSL Optional true Enable or disable SSL/TLS for the management connection
AGENT_MANAGEMENT_PROXY_SSL_ALLOW_SELFSIGNED Optional true Allow self-signed certificates on the management connection
AGENT_MANAGEMENT_PROXY_SSL_ALLOW_EXPIRED_CERTS Optional false Allow expired certificates on the management connection
AGENT_MANAGEMENT_PROXY_SSL_HOSTNAME_CHECK Optional true Verify hostname/Common Name on the management connection certificate
AGENT_SOURCE_INTERFACE Optional eth0 Default network interface for measurement sessions. The preferred method is to set the interface per-session via the orchestration API. Only needed when running in host network mode or when multiple interfaces are attached to the container
AGENT_MEASUREMENT_NETNS Optional — Default namespace (VRF) for measurement traffic. The preferred method is to set the namespace per-session via the orchestration API. Only needed when running in host network mode or when multiple namespaces are available to the container
AGENT_MANAGEMENT_NAMESPACE Optional — Namespace for the management connection (connection to Sensor Collector)
AGENT_METADATA_* Optional — Custom metadata key-value pairs. The AGENT_METADATA_ prefix is stripped and the key is lowercased (max 64 bytes per key). Example: AGENT_METADATA_LOCATION=Montreal sets metadata key location to Montreal
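An illustration (not the agent's actual implementation) of how an AGENT_METADATA_* variable maps to a metadata key under the strip-and-lowercase rule described above:

```shell
# Hypothetical illustration of the documented mapping rule:
# AGENT_METADATA_LOCATION=Montreal becomes metadata key "location".
var_name="AGENT_METADATA_LOCATION"
var_value="Montreal"
key=$(printf '%s' "${var_name#AGENT_METADATA_}" | tr '[:upper:]' '[:lower:]')
printf '%s=%s\n' "$key" "$var_value"
```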
Additional configuration

For additional configuration parameters specific to each agent type, refer to the documentation for the individual Sensor Agent.

The agent is now ready to be configured using the orchestration API. See Working with the agents control API for more information, or the agent orchestration API reference for full details.

Troubleshooting


Symptom Possible Cause Resolution
Agent shows (unhealthy) in docker ps Cannot reach Sensor Collector on management port Verify network connectivity and firewall rules. Check AGENT_MANAGEMENT_PROXY and port are correct
Agent exits immediately after start Authentication failure Check that the secrets file is mounted correctly and is writable. Verify the token has not expired
NET_ADMIN capability errors in logs Missing capability Ensure cap_add: NET_ADMIN is set in the compose file or --cap-add=NET_ADMIN is used on the command line
Agent cannot resolve Sensor Collector FQDN DNS not available Verify /etc/resolv.conf on the host points to a working DNS server, or use an IP address for AGENT_MANAGEMENT_PROXY
Disk space filling up No log rotation configured Configure Docker log rotation — see Logging Configuration
Agent will not reconnect after docker compose down Token expired after container destruction Generate a new authentication token from the orchestration API. Use docker compose stop instead of down to preserve containers and updated tokens
Namespace not visible to agent Missing propagation flag Set propagation: rslave on the /var/run/netns volume mount
TLS connection refused Custom CA not trusted Mount CA certificate in PEM format to /usr/local/share/ca-certificates/ — see Custom CA Certificates
Agent cannot connect through proxy Proxy not configured or unreachable Verify proxy settings with env | grep -i proxy on host and docker exec <container> env | grep -i proxy in container. Ensure proxy URL includes the http:// scheme
Traffic bypasses proxy despite settings NO_PROXY matches the destination Check NO_PROXY patterns — wildcards like * or broad CIDR ranges will bypass the proxy
SSL connections fail through proxy Proxy does not support CONNECT tunneling Verify that the proxy supports the HTTP CONNECT method for SSL/TLS tunneling. Check that https_proxy is configured
Proxy credentials visible in logs Credentials embedded in proxy URL Use IP-based or certificate-based proxy authentication. See HTTP/HTTPS Proxy security guidance

© 2026 Cisco and/or its affiliates. All rights reserved.

For more information about trademarks, please visit:
Cisco trademarks 
For more information about legal terms, please visit:
Cisco legal terms