Agent deployment in Kubernetes using helm
Article Summary

You can run Sensor agents in either plain Docker or Kubernetes. This article walks through getting up and running in Kubernetes. The instructions use helm.sh and Helm charts for simplified installation.

Helm is a package manager for Kubernetes that lets operators define services and their dependencies in YAML files. Accedian provides the necessary Helm charts and pre-seeded values files to deploy Sensor agents as pods in a Kubernetes cluster.

Working knowledge of Kubernetes is a prerequisite for this workflow.


Environment Requirements

Ensure you have the following:

  1. Kubernetes v1.26 or later running on the compute resource where you want the agent to run
  2. A Kubernetes cluster created where you want to deploy the Sensor agent
  3. kubectl configured to manage the Kubernetes cluster
  4. Helm v3 or later available on the machine from which the installation will be performed, with connectivity to your Kubernetes cluster
  5. Enough vCPU and RAM to deploy the agent (see the Sensor agent release notes for the latest resource requirements)
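
A quick way to verify items 3 and 4 before proceeding (standard kubectl and helm commands):

$ kubectl version     # prints client and cluster versions; confirms cluster reachability
$ kubectl get nodes   # confirms the kubeconfig points at the intended cluster
$ helm version        # must report v3.x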



Preparation

To start, you'll need to grab two artifacts and make them available in the working directory where you will run helm.

  1. Sensor agent Helm chart bundle
  2. An API key for the agent's authentication towards Analytics

Getting Sensor agent Helm Charts

The helm charts can be downloaded from our software page here:
Sensor agents software repository
Navigate the directory structure to the helm chart directory within each sensor agent type.


Or download the helm chart via CLI (replacing the release and version tags as necessary):

$ wget https://software.accedian.io/agents/actuate/23.07/helm/agent-actuate-23.07-helm-1.3.0.tgz

Or use helm pull to retrieve it:

$ helm pull https://software.accedian.io/agents/actuate/23.07/helm/agent-actuate-23.07-helm-1.3.0.tgz

The above example results in a file named agent-actuate-23.07-helm-1.3.0.tgz in your working directory. The version numbers refer to the agent version (23.07) and the helm chart version (1.3.0). Using the latest of both is always recommended.

Notes:

Internet access

  • If the environment you are planning to deploy the Sensor agent into has internet access, you can skip downloading the helm chart and use the URL directly in your helm install command, as shown in the pull example above.

Future helm chart location

  • We will soon be making the Helm charts available via the Skylight artifact registry, which should allow helm to list Skylight as a repo for direct chart navigation.

Unpack the helm chart bundle and (optionally) edit values.yaml

Unpack the helm chart bundle that was downloaded in the previous step:

$ tar xf agent-actuate-23.07-helm-1.3.0.tgz

This creates a subdirectory "actuate" in the current directory with the following contents:

actuate/
├── Chart.yaml
├── _secrets.tpl
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── secrets.yaml
│   ├── serviceaccount.yaml
│   ├── service-echo.yaml
│   └── service-twamp.yaml
└── values.yaml
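
Before editing anything, you can print the chart's user-configurable values straight from the chart directory:

$ helm show values actuate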
Startup parameters

Three parameters are required for the agent to start up:

  • agentManagement - set this to the IP address where the Roadrunner service is running
  • agentAgentId - set this to a random UUID or the UUID of an already provisioned agent configuration
  • agentAuthenticationToken - set this to the agent authentication token, or the global tenant-wide bootstrap token (api-key); this is described in the next section of this article

Either specify these directly on the command line when launching the agent, or define them in the values.yaml file in the unpacked helm chart directory.
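
For example, the relevant lines in values.yaml might look like the following (placeholder values shown; that the keys sit at the top level of the file is an assumption, matching the --set usage later in this article):

agentManagement: 192.168.12.33
agentAgentId: 9c5d66a3-abcd-efef-0123-3ea38a4fbcf3
agentAuthenticationToken: <your-authentication-token>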

Retrieving authentication token for agent

In order for the Sensor agent to register with Analytics, it requires an authentication token. The token can be provided to the agent in several ways; see Agent secrets file options for examples. This guide uses the direct method of specifying the token in the agentAuthenticationToken variable, either in the values.yaml file or on the command line when deploying the agent with helm install.

There are three ways to fetch an authentication token from Analytics:

  • using the Analytics graphical user interface
  • by calling the API for a specific agentID
  • by calling the API for the tenant-wide api-key

All three methods are described below.

Fetch authentication token via Analytics UI

If using the UI, first create an agent definition, then select "Generate auth token".

Fetch authentication token for a specific agent via API

To use the API, POST to the secrets endpoint with the agentID that the agent will use; the response is a secrets file for that agentID. The agentID is a formatted UUID and can be generated randomly, for example with "uuidgen".

POST {{tenant-server}}/api/orchestrate/v3/agents/{{agentId}}/secrets

The response is a secrets document containing the agentId and the authenticationToken, like below:

agentConfig:
  identification:
    agentId: 9c5d66a3-abcd-efef-0123-3ea38a4fbcf3
    authenticationToken: eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhY2NlZGlhbi5jb20iLCJzdWIiOiJobnlkZWxsQGFjY2VkaWFuLmNvbaa..........vU6QQ3cBsHinzLOLysOAjigqMSmnf-RY6s

Both the agentId and the authenticationToken strings need to be put in the values.yaml file, or specified on the command line when deploying the agent with helm.
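
For reference, here is a minimal curl sketch of this call. The tenant host and the Authorization header are placeholders, standing in for however your tenant authenticates API requests:

$ AGENT_ID=$(uuidgen)
$ curl -X POST \
    -H "Authorization: Bearer <your-user-token>" \
    https://<tenant-server>/api/orchestrate/v3/agents/$AGENT_ID/secrets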

Retrieve tenant-wide API key from orchestration service

The third option is to use the tenant-wide API key. This token can be used to bootstrap many agents, as it is not specific to an agent ID.

The API key can only be retrieved by calling the orchestration service API; there is no graphical UI on Analytics for this operation:

POST {{tenant-server}}/api/orchestrate/v3/agents/api-key

EXAMPLE RETURN:
eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhY2NlZGlhbi5jb20iLCJzdWIiOiJhZG1pbkBkYXRhaHViLmNvbSIsImV4csampleXVkIjoiYWNjZWRpYW4uY29tIiwidG9rZW5JRCI6NTA4LCJ0ZW5hbnRJRCI6ImFmYjEwOGQ4LTg3MDMtNDIwNy1hYmYexample1MGJiZWU5NiIsInBlcm1pc3Npb25zIjpbImRhdGEtaW5ncmVzcyJdfQ.8yjsKQWX3xKJTZlsp_dC04b9ZrSgJpc-kXhLm_22abc

Place this api-key in the values.yaml file or use it on the command line when deploying the agent with helm.
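
A matching curl sketch, with the same placeholder host and Authorization header as above:

$ curl -X POST \
    -H "Authorization: Bearer <your-user-token>" \
    https://<tenant-server>/api/orchestrate/v3/agents/api-key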


Deployment

Deploying with Helm

You should now have downloaded the helm package, unpacked it, and retrieved an authentication token using one of the three options outlined above. Optionally, the three startup parameters have been edited in the values.yaml file:
  • agentManagement - set this to the IP address or FQDN (for example myroadrunner.mycompany.com) where the Roadrunner service is running
  • agentAgentId - set this to a random UUID or the UUID of an already provisioned agent configuration
  • agentAuthenticationToken - set this to the agent authentication token, or the global tenant-wide bootstrap token (api-key)

Deploying the agent can now be done using "helm install".
Go to the directory where the helm chart was unpacked and deploy using one of the following two command line examples.

If values.yaml has been populated with the startup parameters, use this command:

$ helm install myactuate actuate

If instead specifying the startup parameters on the command line, use this command:

$ helm install \
   --set agentManagement=192.168.12.33 \
   --set agentAgentId=9c5d66a3-abcd-efef-0123-3ea38a4fbcf3 \
   --set agentAuthenticationToken=eyJhbGciO.......m_22abc \
   myactuate actuate

The agentAuthenticationToken has been shortened in the example above.
myactuate in the example above is the Helm release name; the deployed agent's resources in the Kubernetes cluster are named after it. It is considered good practice to use the same name in the agent orchestration system for easier correlation, but it is not a requirement.
actuate is the helm chart directory to deploy - in this case an agent actuate.

A successful deployment will look like the following:

$ helm install myactuate actuate

NAME: myactuate
LAST DEPLOYED: Thu Oct 12 14:53:48 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. The IP address of this service can be obtained by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.

           You can watch the status of by running 'kubectl get --namespace default svc -w myactuate-twamp-reflector'
  export SERVICE_IP=$(kubectl get svc --namespace default myactuate-twamp-reflector --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo $SERVICE_IP:862

           You can watch the status of by running 'kubectl get --namespace default svc -w myactuate-echo-reflector'
  export SERVICE_IP=$(kubectl get svc --namespace default myactuate-echo-reflector --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  echo $SERVICE_IP:7
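
If you later need to change one of the startup parameters, the release can be updated in place using standard Helm commands, for example:

$ helm upgrade myactuate actuate --reuse-values \
   --set agentManagement=myroadrunner.mycompany.com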

Agent reflectors (actuate and throughput)

As can be seen in the startup output above, the agent (in this case an agent actuate) has mapped two ports for incoming data: port 862 and port 7. These are the two built-in reflectors, TWAMP (862) and UDP echo (7).
Kubernetes automatically maps these ports to external ports on the external loadbalancer IP. To see the mapped ports, use kubectl as below:

$ kubectl get services
NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
myactuate-echo-reflector          LoadBalancer   10.91.1.6     34.124.114.2     7:32745/UDP      4d21h
myactuate-twamp-reflector         LoadBalancer   10.91.1.221   34.130.122.145   862:30883/UDP    4d21h
mysecondactuate-echo-reflector    LoadBalancer   10.91.0.156   34.124.123.190   7:30503/UDP      4d21h
mysecondactuate-twamp-reflector   LoadBalancer   10.91.1.150   34.130.176.185   862:30332/UDP    4d21h

To reach, for example, the TWAMP reflector of the "myactuate" agent, use external IP 34.130.122.145 and UDP port 30883.

If the sender agent is in the same cluster, the local Kubernetes DNS name (myactuate-twamp-reflector) can be used instead, directly on port 862.
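
As a rough reachability check from outside the cluster, you can send a UDP packet to the echo reflector's external address and mapped port (assuming netcat is installed; this verifies the UDP path only, not reflector behaviour):

$ echo ping | nc -u -w 2 34.124.114.2 32745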

View deployments with helm

Use helm to view all of your deployments. In this example there are two instances of agent actuate deployed, named "myactuate" and "mysecondactuate":

$ helm list
NAME              NAMESPACE   REVISION   UPDATED                                    STATUS     CHART           APP VERSION
myactuate         default     1          2023-10-12 16:03:30.787852487 +0200 CEST   deployed   actuate-1.3.0   r23.07
mysecondactuate   default     1          2023-10-12 16:03:20.399765511 +0200 CEST   deployed   actuate-1.3.0   r23.07

Checking new pod status

Use kubectl to confirm the new pod is running correctly.

  1. Find out the pod name:
$ kubectl get pods

NAME                               READY   STATUS    RESTARTS   AGE
myactuate-7647f74f58-xw4sg         1/1     Running   0          106m
mysecondactuate-868c8f66f8-s7k2n   1/1     Running   0          106m
  2. Describe it:
$ kubectl describe pod myactuate
Name:         myactuate-7647f74f58-xw4sg
Namespace:    default
Priority:     0
Node:         gk3-autopilot-cluster-1-nap-1pll0c4a-12123331-rr61/10.188.42.206
Start Time:   Thu, 12 Oct 2023 16:03:27 +0200
Labels:       app.kubernetes.io/instance=myactuate
              app.kubernetes.io/name=actuate
              pod-template-hash=7647f74f58
Annotations:  <none>
Status:       Running
IP:           10.90.108.18
IPs:
  IP:           10.90.108.18
Controlled By:  ReplicaSet/myactuate-7647f74f58
Containers:
  actuate:
    Container ID:   containerd://be7bff0cc56f2cca3d4e5a2689774186ae8dee367797b3b54d6a780cf8e007ff
    Image:          gcr.io/sky-agents/agent-actuate-amd64:r23.11
    Image ID:       gcr.io/sky-agents/agent-actuate-amd64@sha256:490a740eb92342342342341854b333333b74de0495cacc9c73d30642821667f2
    Ports:          862/UDP, 7/UDP
    Host Ports:     0/UDP, 0/UDP
    State:          Running
      Started:      Thu, 12 Oct 2023 16:03:29 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             2Gi
    Requests:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             2Gi
    Liveness:             exec [/bin/health] delay=0s timeout=10s period=30s #success=1 #failure=3
    Readiness:            exec [/bin/healthy] delay=0s timeout=10s period=30s #success=1 #failure=3
    Startup:              exec [/bin/healthz] delay=0s timeout=10s period=30s #success=1 #failure=3
    Environment:
      AGENT_MANAGEMENT_PROXY:          10.99.91.8
      AGENT_MANAGEMENT_PROXY_PORT:     55777
      AGENT_REFLECTORS_DEFAULT_STATE:  true
    Mounts:
      /var/run/secrets from secrets-yaml (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ns6pt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  secrets-yaml:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  myactuate-secrets
    Optional:    false
  kube-api-access-ns6pt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 kubernetes.io/arch=amd64:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
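
To inspect the agent's own output, the usual kubectl log commands apply:

$ kubectl logs deployment/myactuate             # logs from one of the deployment's pods
$ kubectl logs -f myactuate-7647f74f58-xw4sg    # follow a specific pod's logs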


Deploying Roadrunner in Kubernetes



