Prerequisites to Set Up the Service Pipeline
This section details the prerequisites for installing the service pipeline in an offline (air-gapped) cluster. The guide is written for an on-premises cluster environment.
Deployment Prerequisites
You will need at least 11 VMs with the following resources (Dev environment):
VM Count | Node Type | CPU | RAM | DISK |
---|---|---|---|---|
3 Worker | matrix-core | 12 | 16 GB | 300 GB |
N Worker (Minimum 3) | matrix-dp | 12 | 16 GB | 300 GB |
2 Worker | matrix-db | 12 | 16 GB | 300 GB |
3 Worker | matrix-fm | 12 | 16 GB | 300 GB |
In case Collector and AO are deployed in the future, additionally:
VM Count | Node Type | CPU | RAM | DISK |
---|---|---|---|---|
3 Worker | matrix-collector | 12 | 16 GB | 300 GB |
3 Worker | matrix-ao | 12 | 16 GB | 300 GB |
VM interfaces should be configured for dual stack (IPv4 and IPv6).
Each VM should have a single interface.
Server access must be set up.
The RKE2 Kubernetes cluster should be up and running.
Four IPv4 virtual IPs are required from the MetalLB CIDR pool.
Four IPv6 virtual IPs are required from the MetalLB CIDR pool.
A local registry is required for image management.
Setting up PM/FM Pipeline on K8s
Download Helm Charts (PM and FM)
Installation requirements: download the necessary Helm charts from the SharePoint link below and upload them to the required server as per the following table.
Cross-Domain Analytics - Service-Deployment - All Documents
Required Files | Description | Upload Machine/Server | Upload Path |
---|---|---|---|
service-deployment.zip | Contains all required Helm charts and Spark prerequisites | K8s-CP-1 server | /matrix/ |
All deployment files should reside in /matrix/ on master node M1.
# Navigate to the Matrix Directory
cd /matrix/
# Extract the Spark K8s Deployment Archive
unzip service-deployment.zip
# Verify the Extracted Files
ls /matrix/service-deployment
Download Required Images
To pull the required Docker images for the matrix installation, you need an internet-connected machine and a DockerHub login as prerequisites.
Additionally, if you do not already have it, you must request access to the matrixcx-repo-pull group:
Go to groups.cisco.com.
Click on Groups:
Click on Available groups on the left:
Search for matrixcx-repo-pull
Select it and you will see “Add me as member” on the right. Click that:
When you submit the group request, a list of authorizers is shown on the right. Reach out to one of the authorizers to have your access approved.
Once you have received access, log in to dockerhub.cisco.com from the internet-connected machine using your Cisco username and password:
# Execute the following command to authenticate with DockerHub:
docker login dockerhub.cisco.com
List of Required Docker Images for Deployment:
Service Name | New Image Available |
---|---|
Metallb | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-metallb-speaker:v0.14.9 |
Rabbitmq | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-rabbitmq:3.13.7 |
Redis-Cluster | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-redis-cluster:7.2.4 |
Zookeeper | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-zookeeper:3.9.3 |
Kafka | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-kafka:3.9.0 |
Webapp | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-base:pca-1.0.0 |
Celerybeat | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-base:pca-1.0.0 |
Coordinator | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-coordinator:pca-1.0.0 |
CeleryWorker | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-base:pca-1.0.0 |
Fileservice | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-fileservice:pca-1.0.0 |
PGadmin4 | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-pgadmin4:9.3.0 |
Flower | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-flower:2.0 |
Redis-Insight | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-redisinsight:2.68.0 |
Nginx | dockerhub.cisco.com/matrixcx-docker/matrix4/nginx:1.28.1 |
Timescaledb | dockerhub.cisco.com/matrixcx-docker/matrix4/timescaledb:pg15.13-ts2.20.0 |
Consumers | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-base:pca-1.0.0 |
SNMPpipeline | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-of-snmppipeline:pca-1.0.0 |
Snmptrapd | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-of-snmptrapd:pca-1.0.0 |
Alert-service | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-of-alertservice:pca-1.0.0 |
Alert manager | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-of-alertmanagerwhitelist:pca-1.0.0 |
OF-Framework | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-of-offramework:pca-1.0.0 |
OF-consumer | dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-of-ofconsumer:pca-1.0.0 |
OTEL-Transformer | TBD |
OTEL-Collector | TBD |
OTEL-Loadbalancer | TBD |
Note: Upload these images to the same local registry that was created earlier during the RKE2 cluster environment setup.
# Run the following command to check if the local registry is running:
docker ps
# Authenticate with DockerHub to download the required images:
docker login dockerhub.cisco.com
# Pull, Tag, and Push Images to the Local Registry
docker pull <image_name>
docker tag <image_name> <local_registry_name>/<repository>/<image_name>:<tag>
docker push <local_registry_name>/<repository>/<image_name>:<tag>
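For illustration, here is the same sequence with concrete values, using the RabbitMQ image from the table above and assuming a hypothetical local registry reachable at 10.126.87.96 with a matrix-pm project path (substitute your own registry address and path):
# Example pull/tag/push (registry address and project path are placeholders)
docker pull dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-rabbitmq:3.13.7
docker tag dockerhub.cisco.com/matrixcx-docker/matrix4/matrix4-rabbitmq:3.13.7 10.126.87.96/matrix-pm/matrix4-rabbitmq:3.13.7
docker push 10.126.87.96/matrix-pm/matrix4-rabbitmq:3.13.7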
Application Deployment Prerequisites
Step 1: Create Namespace
Before proceeding with the deployment, ensure the required namespaces are created.
# Connect to the server via SSH:
ssh root@<control_plane_ip>
# Execute the following command to create the namespace for pm:
kubectl create namespace matrix-pm-analytics
# Execute the following command to create the namespace for fm:
kubectl create namespace matrix-fm-analytics
Step 2: Add Taints
#Add Taints to Master Nodes
#A taint prevents workloads from being scheduled on a node unless they have a matching toleration.
#Run this command for all master nodes:
kubectl taint nodes <MASTER-NODE> node-role.kubernetes.io/control-plane=:NoSchedule
#Check if the taint was applied correctly:
kubectl describe node <NODE_NAME> | grep Taint
Step 3: Configure Node Labels
#Add labels to nodes
#matrix-core for messaging services
kubectl label node <worker-node-name> app=matrix-core
#matrix-dp for worker services
kubectl label node <worker-node-name> app=matrix-dp
#matrix-db for database services
kubectl label node <worker-node-name> app=matrix-db
#matrix-fm for FM services
kubectl label node <worker-node-name> app=matrix-fm
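To confirm the labels were applied, you can list the nodes per label (a quick check using the label keys and values set above):
# Verify the node labels
kubectl get nodes -l app=matrix-core
kubectl get nodes -l app=matrix-dp
kubectl get nodes -l app=matrix-db
kubectl get nodes -l app=matrix-fm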
Step 4: Generate Certificates
Note: For the Matrix GUI, the NGINX certificates are provided by the customer and can be reused; new certificate generation is not required.
The steps below describe how to create new self-signed Kafka client and server certificates.
The Kafka certificates consumed by the Helm charts are in JKS format and located at the path below:
cd /matrix/on-premis/helm_charts/matrix-pm/certificates
[root@alma8-8-matrix2 certificates]# ls -lrt
total 28
-rw-r--r-- 1 root root 4080 Mar 16 09:45 kafka.keystore.jks
-rw-r--r-- 1 root root 978 Mar 16 09:45 kafka.truststore.jks
-rw-r--r-- 1 root root 1375 Mar 16 09:45 nginx-selfsigned.crt
-rw-r--r-- 1 root root 1708 Mar 16 09:45 nginx-selfsigned.key
Follow the steps below to create new self-signed certificates.
Steps to create an RSA private key and a self-signed certificate for a client:
a)- Generate a private key
=> openssl genrsa -out clientCA.key 2048
b)- Create an x509 certificate
=> openssl req -x509 -new -nodes -key clientCA.key -sha256 -days 3650 -out clientCA.pem
Steps to create an RSA private key and a self-signed certificate for a server:
a)- Generate a private key
=> openssl genrsa -out serverCA.key 2048
b)- Create an x509 certificate
=> openssl req -x509 -new -nodes -key serverCA.key -sha256 -days 3650 -out serverCA.pem
c)- Create a PKCS12 keystore from the private key and public certificate
=> openssl pkcs12 -export -name server-cert -in serverCA.pem -inkey serverCA.key -out serverkeystore.p12
d)- Convert the PKCS12 keystore into a JKS keystore
=> keytool -importkeystore -destkeystore kafka.keystore.jks -srckeystore serverkeystore.p12 -srcstoretype pkcs12 -alias server-cert -storepass servpass
e)- Import the client's certificate into the server's truststore
=> keytool -import -alias client-cert -file clientCA.pem -keystore kafka.truststore.jks -storepass servpass
f)- Import the server's certificate into the server's truststore
=> keytool -import -alias server-cert -file serverCA.pem -keystore kafka.truststore.jks -storepass servpass
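Optionally, you can verify the generated stores before packaging them with the Helm charts (a quick check using the same store password as above):
=> keytool -list -keystore kafka.keystore.jks -storepass servpass
=> keytool -list -keystore kafka.truststore.jks -storepass servpass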
Step 5: Generate Secrets
#For webapp inside webapp folder:
cd /pathtowebapp/cert
kubectl create secret generic my-certs --from-file=fullchain.pem=/matrix/onprem/helm_charts/matrix-pm/matrixweb/cert/fullchain.pem --from-file=ca-key=/matrix/onprem/helm_charts/matrix-pm/matrixweb/cert/ca-key --from-file=root-ca.cer=/matrix/onprem/helm_charts/matrix-pm/matrixweb/cert/root-ca.cer -n matrix-pm-analytics
#For celery inside webapp folder
cd /pathtowebapp/cert
kubectl create secret generic matrix-worker-cert --from-file=root-ca.cer=/matrix/onprem/helm_charts/matrix-pm/matrixweb/cert/root-ca.cer -n matrix-pm-analytics
# Fileservice patch inside the fileserver folder
kubectl create cm patch-cm --from-file=run_server.sh -n matrix-pm-analytics
#Kafka (PM)
cd on-premis/helm_charts/matrix-pm/Certificate
kubectl create secret generic <secret_name> --from-file=kafka.keystore.jks=<keystore_certificate> --from-file=kafka.truststore.jks=<truststore_certificate> --from-literal=password=<password> -n <namespace_name>
Example: kubectl create secret generic matrix-kafka-tls --from-file=kafka.keystore.jks=kafka.keystore.jks --from-file=kafka.truststore.jks=kafka.truststore.jks --from-literal=password=servpass -n matrix-pm-analytics
#Nginx(PM)
kubectl create secret tls <secret_name> --cert=<nginx_selfsigned_cert> --key=<nginx-selfsigned.key> -n <namespace_name>
Example: kubectl create secret tls matrix-nginx-tls-secret --cert=nginx-selfsigned.crt --key=nginx-selfsigned.key -n matrix-pm-analytics
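After creating the secrets and ConfigMap, you can confirm they exist in the namespace (a quick sanity check):
# Verify the created secrets and ConfigMaps
kubectl get secrets -n matrix-pm-analytics
kubectl get cm -n matrix-pm-analytics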
PM Pipeline Deployment
Metallb
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/metallb/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
...
image:
repository: <local_repository_name> #example: 10.126.87.96/matrix4-metallb-controller
tag: <update_tag> #example: latest
...
storageClass: "longhorn" #We use longhorn in our environment; select according to your cluster
...
Step 3: Update IP-Address Range as per your Setup
vi metallb-cr.yaml
#Make the following changes to the operative sections of the file:
...
spec:
addresses:
- <IPv4-CIDR>/32
- <IPv6-CIDR>/128
autoAssign: false
...
Step 4: Install the Helm Charts and Address Pool
helm install -n metallb-system matrix-metallb ./
#Once the deployment is done, apply the manifests below to add the IP address range
kubectl apply -f metallb-cr.yaml -n metallb-system
kubectl apply -f metallb-l2.yaml -n metallb-system
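For reference, a minimal sketch of what metallb-cr.yaml and metallb-l2.yaml typically contain (MetalLB IPAddressPool and L2Advertisement resources; the pool and advertisement names here are placeholders, so keep the names already used in your chart):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: matrix-pool            # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - <IPv4-CIDR>/32
    - <IPv6-CIDR>/128
  autoAssign: false
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: matrix-l2              # placeholder name
  namespace: metallb-system
spec:
  ipAddressPools:
    - matrix-pool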
Timescale DB (Local Storage)
Setting Up Local Storage Paths on Worker Nodes
To ensure a seamless installation, create the designated local storage paths on the worker nodes that match the assigned affinity rule.
Step 1: Enter the Following for Each Worker Node
#Worker-node-1 (matrix-db1):
#Log in to worker1 and create the storage path
mkdir -p /matrix/vmount/data
mkdir -p /matrix/vmount/wal
mkdir -p /matrix/backup
# Set ownership to UID/GID 1000 on the storage paths on worker1
chown -R 1000:1000 /matrix/vmount/data
chown -R 1000:1000 /matrix/vmount/wal
chown -R 1000:1000 /matrix/backup
#Worker-node-2 (matrix-db2):
#Log in to worker2 and create the storage path
mkdir -p /matrix/vmount/data
mkdir -p /matrix/vmount/wal
mkdir -p /matrix/backup
#Set ownership to UID/GID 1000 on the storage paths on worker2
chown -R 1000:1000 /matrix/vmount/data
chown -R 1000:1000 /matrix/vmount/wal
chown -R 1000:1000 /matrix/backup
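If you prefer to run the same preparation from the control-plane node, here is a minimal sketch assuming root SSH access and the hypothetical hostnames matrix-db1 and matrix-db2 (replace with your DB worker hostnames):
# Run the directory setup on both DB workers over SSH (hostnames are placeholders)
for node in matrix-db1 matrix-db2; do
  ssh root@"$node" 'mkdir -p /matrix/vmount/data /matrix/vmount/wal /matrix/backup && chown -R 1000:1000 /matrix/vmount/data /matrix/vmount/wal /matrix/backup'
done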
Step 2: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/timescaledb-local-storage/
vi values.yaml
Step 3: Make the Following Changes to the Operative Sections of the File
...
image:
repository: <local_repo_name> #example: 10.126.87.96/matrix-pm/matrix4-timescaledb-ha
tag: <image_tag> #example: we are using the latest tag, update accordingly
...
#Update the storage paths in the templates/local_persistent_volume_node1.yaml and templates/local_persistent_volume_node2.yaml files within the PersistentVolume section, as shown below:
#vi templates/local_persistent_volume_node1.yaml
#vi templates/local_persistent_volume_node2.yaml
...
spec:
capacity:
storage: 3000Gi # Storage size for data volume
spec:
capacity:
storage: 500Gi # Storage size for wal volume
...
hostPath:
type: DirectoryOrCreate
path: /matrix/vmount/data # First local path created on worker
hostPath:
type: DirectoryOrCreate
path: /matrix/vmount/wal # Second local path created on worker
...
#Update the worker node name in affinity section:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "kubernetes.io/hostname"
operator: In
values:
- "<node_name>" #Example: "matrix-devops-w6"
- "<node_name>" #Example: "matrix-devops-w7"
Step 4: Storage Class Setup
# Inside the templates directory where the storage class files are located
cd templates
kubectl apply -f local_storage_class_wal.yaml
kubectl apply -f local_storage_class_data.yaml
kubectl apply -f local_storage_class_temp.yaml
# Check if the storage classes are created successfully
kubectl get sc
Step 5: Annotation of Storage Classes
kubectl label sc local-storage-data app.kubernetes.io/managed-by=Helm
kubectl label sc local-storage-wal app.kubernetes.io/managed-by=Helm
kubectl label sc local-storage-temp app.kubernetes.io/managed-by=Helm
kubectl annotate sc local-storage-data meta.helm.sh/release-name=matrix-timescaledb
kubectl annotate sc local-storage-wal meta.helm.sh/release-name=matrix-timescaledb
kubectl annotate sc local-storage-temp meta.helm.sh/release-name=matrix-timescaledb
kubectl annotate sc local-storage-data meta.helm.sh/release-namespace=matrix-pm-analytics
kubectl annotate sc local-storage-wal meta.helm.sh/release-namespace=matrix-pm-analytics
kubectl annotate sc local-storage-temp meta.helm.sh/release-namespace=matrix-pm-analytics
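Before installing the chart, you can confirm the labels and annotations were applied (a quick check):
# Verify labels and annotations on the storage classes
kubectl get sc local-storage-data local-storage-wal local-storage-temp --show-labels
kubectl describe sc local-storage-data | grep -i -E 'managed-by|meta.helm.sh'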
Step 6: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-timescaledb ./
Note: For very large databases you will either need to set wal_keep_size to be very large or to enable restore_command.
The service should detect your system specs and set the configuration based on that. You may wish to configure additional tuning parameters based on your deployment VM sizing.
Refer to the following website for tuning recommendations: https://pgtune.leopard.in.ua/
vi postgresql.conf
parameters:
archive_command: "/etc/timescaledb/scripts/pgbackrest_archive.sh %p"
archive_mode: 'on'
archive_timeout: 1800s
autovacuum_analyze_scale_factor: 0.02
autovacuum_naptime: 5s
autovacuum_max_workers: 10
autovacuum_vacuum_cost_limit: 500
autovacuum_vacuum_scale_factor: 0.05
log_autovacuum_min_duration: 1min
hot_standby: 'on'
log_checkpoints: 'on'
log_connections: 'on'
log_disconnections: 'on'
log_line_prefix: "%t [%p]: [%c-%l] %u@%d,app=%a [%e] "
log_lock_waits: 'on'
log_min_duration_statement: '1s'
log_statement: ddl
max_connections: 1000
max_prepared_transactions: 150
shared_preload_libraries: timescaledb,pg_stat_statements
ssl: 'on'
ssl_cert_file: '/etc/certificate/tls.crt'
ssl_key_file: '/etc/certificate/tls.key'
tcp_keepalives_idle: 900
tcp_keepalives_interval: 100
temp_file_limit: 1GB
timescaledb.passfile: '../.pgpass'
unix_socket_directories: "/var/run/postgresql"
unix_socket_permissions: '0750'
wal_level: hot_standby
wal_log_hints: 'on'
use_pg_rewind: true
use_slots: true
retry_timeout: 10
ttl: 30
Rabbitmq
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/rabbitmq/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
...
image:
registry: <local_registry_name> #example: 10.126.87.96
repository: <local_repository_name> #example: matrix4-rabbitmq
tag: <update_tag> #example: latest
...
#Update the storage class accordingly.
persistence:
## @param persistence.enabled Enable RabbitMQ data persistence using PVC
##
enabled: true
storageClass: "longhorn" # update the storage class as per your requirements
size: 20Gi # update the storage size as per your requirements
...
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: app ##Define your node label key as per your requirement
operator: In
values:
- matrix-core ## Define your node label value as per your requirement
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app ##Define your node label key as per your requirement
operator: In
values:
- matrix-core ##Define your node label value as per your requirement
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-rabbitmq ./
# Verify all Pods are running
kubectl get all -n matrix-pm-analytics | grep -i rabbitmq
# Enable stable feature flags -> this helps with future upgrades
kubectl -n matrix-pm-analytics exec -it <rabbitmq-pod> -- bash
# Enable the feature flags with the command below
rabbitmqctl enable_feature_flag all
Step 4: Configure Vhost
#Configure the vhost with port-forward command
kubectl port-forward -n matrix-pm-analytics svc/matrix-rabbitmq 15672:15672 --address='::'
#Open a web browser at http://<master-ip>:15672/#/vhosts (credentials: matrixadm | matrixadm)
#Go to Admin -> Virtual Hosts -> Add a new virtual host -> name "matrix", default queue type "classic"
#Exit the browser
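Alternatively, the vhost can be created from inside the RabbitMQ pod without the UI. A sketch that assumes the default queue type for the vhost can remain classic, and that grants the matrixadm user full permissions on it:
kubectl -n matrix-pm-analytics exec -it <rabbitmq-pod> -- rabbitmqctl add_vhost matrix
kubectl -n matrix-pm-analytics exec -it <rabbitmq-pod> -- rabbitmqctl set_permissions -p matrix matrixadm ".*" ".*" ".*"
kubectl -n matrix-pm-analytics exec -it <rabbitmq-pod> -- rabbitmqctl list_vhosts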
Redis-Cluster
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/redis-cluster/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
...
image:
registry: <local_registry_name> #example: 10.126.87.96
repository: <local_repository_name> #example: matrix4-redis-cluster
tag: <update_tag> #example: latest
...
storageClass: "<storage_class_name>" #Example: “longhorn”
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix ./
Zookeeper
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/zookeeper/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File:
...
image:
registry: <local_registry_name> #example: 10.126.87.96
repository: <local_repository_name> #example: matrix4-zookeeper
tag: <update_tag> #example: latest
...
storageClass: "<storage_class_name>" #We use longhorn in our enviroment select according to your cluster
...
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-zookeeper ./
Kafka
Notes:
To avoid having to change the service names inside the ConfigMap, keep these naming conventions for the installation.
Zookeeper should be installed before Kafka.
Before installing Kafka, the kafka-tls secret must be created. From the Kafka folder containing kafka.keystore.jks and kafka.truststore.jks:
kubectl create secret generic kafka-tls --from-file=kafka.keystore.jks=kafka.keystore.jks --from-file=kafka.truststore.jks=kafka.truststore.jks --from-literal=password=servpass -n <namespace>
In the Kafka values.yaml, change the service name of the external Zookeeper:
#servers: matrix-zookeeper.matrix-analytics.svc.cluster.local:2181
servers: <zookeeper_service_name>.<namespace>.svc.cluster.local:2181
In matrix-base-configmap, update site_url, the fileserver path, allowed hosts, RabbitMQ details, Redis details, and the database configuration.
Check for the correct namespace name in the deployment and ConfigMap files before installing.
First, create the secrets for Kafka (if not created earlier):
cd /matrix/service-deployment/matrix-pm/certificate
kubectl create secret generic matrix-kafka-tls --from-file=kafka.keystore.jks=kafka.keystore.jks --from-file=kafka.truststore.jks=kafka.truststore.jks --from-literal=password=servpass -n matrix-pm-analytics
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/kafka/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
...
image:
registry: <local_registry_name> #Example: 10.126.87.96
repository: <repo_name> # matrix4-kafka
tag: latest
...
existingSecrets:
- <kafka_tls_secret_name> # matrix-kafka-tls
- <kafka_tls_secret_name> # matrix-kafka-tls
- <kafka_tls_secret_name> # matrix-kafka-tls
...
password: <password> # We used our certificate password
...
loadBalancerIP: "x.x.x.x" #Use an available virtual IP from the MetalLB pool
...
storageClass: "<storageclass_name>" #We use longhorn in our environment; select according to your cluster
...
Set autoDiscovery: false in values.yaml.
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-kafka ./
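After the installation you can confirm the brokers are up and that MetalLB assigned the external IP (a quick check, assuming the release name used above):
# Verify the Kafka pods and the external (LoadBalancer) service
kubectl get pods -n matrix-pm-analytics | grep -i kafka
kubectl get svc -n matrix-pm-analytics | grep -i kafka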
Webapp
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/matrixweb/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
image:
registry: <local_registry_name> #example: 10.126.87.96/matrix4-base
tag: <update_tag> #example: rjio_feature.2024.01.31
…
Update storageClassName in the templates/matrix4-web-consumerpvc.yaml and templates/matrix4-web-pvc.yaml files:
vi templates/matrix4-web-consumerpvc.yaml
vi templates/matrix4-web-pvc.yaml
storageClassName: <storage-class-name> # we are using longhorn, but you can update accordingly
In templates/matrix4-base-config.yaml, add FILE_SERVICE_PORT and update the following settings:
vi templates/matrix4-base-config.yaml
FILE_SERVICE_PORT: "443"
IMPORT_EXPORT_PERMISSION_CHECK="0" [0 to disable the permission check and 1 to enable]
CSRF_STRICT_CHECK_QUERY="1"
CSRF_TRUSTED_ORIGINS="http://10.126.87.98 https://10.126.87.98 http://localhost https://localhost" #This should be the allowed load balancer IP, either IPv4 or IPv6
AUDITLOG_TIME_ZONE=Asia/Kolkata
NODE_CACHE_REFRESH_TIME: "300"
REPORT_TIME_DELTA: "- INTERVAL '1 minutes'"
USER_CONCURRENT_SESSION_VALIDATION: "1"
USER_STICK_SESSION_CHECK: "1"
USER_STICK_SESSION_COUNT: "1"
WHITELIST_HOST_IP_CONCURRENCY: "fileservice.matrix-fileservice.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDnsDomain }}"
Instead of the WHITELIST_HOST_IP_CONCURRENCY value above, you can use the latest change:
WHITELIST_HOST_IP_CONCURRENCY: matrix-file-service
Step 3: Update the NGINX Load Balancer IP and Other Settings in the ConfigMaps
cd templates/
vi matrix4-base-config.yaml
Change the NGINX load balancer IP wherever it is mentioned.
vi matrix4-web-configmap.yaml
Change the NGINX load balancer IP wherever it is mentioned, and ensure the following settings are present:
SECURE_INTERSERVICE_COMMUNICATION: "1"
SECURE_CONNECTION: "1"
CERT_FILE: "/matrixnis/certs/fullchain.pem"
KEY_FILE: "/matrixnis/certs/ca-key"
CA_FILE: "/matrixnis/certs/root-ca.cer"
VERIFY_HOSTNAME: "1"
Step 4: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-webapp ./
#Now open templates/matrix4-web-deployment.yaml in a vim editor and remove the command below
python manage.py init_matrix
(In case the above command is not present in the file, exec into the pod and run it manually.)
#Now upgrade the matrixweb Helm chart
helm upgrade -n matrix-pm-analytics matrix-webapp ./
#Now validate whether the static files are present
kubectl exec -it -n matrix-pm-analytics <pod-name> -- bash
cd static
ls -lrt
#If the files were not copied, follow the steps below:
kubectl exec -it -n matrix-pm-analytics <pod-name> -- bash
cd app
python manage.py collectstatic
Note: if the static files are present but the GUI is not opening, follow the steps below to access the GUI:
kubectl exec -it -n matrix-pm-analytics <pod-name> -- bash
cd app
python manage.py collectstatic
Celery Beat
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/celerybeat/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
...
image:
registry: <local_registry_name> #example: 10.126.87.96/matrix4-base
tag: <update_tag> #example: rjio_feature.2024.01.31
…
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-celerybeat -f values.yaml --values ../values.yaml ./
## To apply the secrets, the chart must be deployed with the above command
Coordinator
Step 1: Configure the values.yaml File
cd /matrix/on-premis/helm_charts/matrix-pm/coordinator/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
...
coordinator:
name: matrix4-coordinator
image: <local_repository_name_with_image_tag> #example: 10.126.87.96/matrix4-coordinator:latest
namespace: <update-namespace> #Example: matrix-pm-analytics
...
cd /templates
vim matrix4-coordinator-base-configmap.yaml
CONSUMER_IMAGE: <image_tag> #Example: 10.126.87.96/matrix4-base:rjio_feature.2024.01.31
…
Step 3: Update the NGINX Load Balancer IP in the ConfigMaps
cd templates/
vi matrix4-coordinator-base-configmap.yaml
Change the NGINX load balancer IP wherever it is mentioned.
vi matrix4-coordinator-configmap.yaml
Change the NGINX load balancer IP wherever it is mentioned.
Step 4: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-coordinator -f values.yaml --values ../values.yaml ./
## To apply the secrets, the chart must be deployed with the above command
Celery Worker
Step 1: Configure the values.yaml File
cd /matrix/on-premis/helm_charts/matrix-pm/celeryworker/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
...
celery:
name: <update_image_name> #Example: 10.126.87.96/matrix4-base
tag: <image_tag> #Example: rjio_feature.2024.01.31
#Update storage class in /templates/matrix-celeryworker-pvc.yaml
storageClassName: <storage_class> ##Example:longhorn
Add the following environment variables to the Celery worker ConfigMap:
SNMP_FIELDS_TO_CONVERT_TIME_STAMP: "alert_start_ts,alert_end_ts"
SNMP_NESTED_FIELDS_TO_CONVERT_TIME_STAMP: "_db"
RULE_CORRELATED_FIELDS_TO_CONVERT_TIME_STAMP: "alert_snmp_event_id_list"
NBI_STATUS_DB_LOOKUP: "0"
REPORT_TIME_DELTA: "- INTERVAL '1 minutes'"
WORKER_PREFETCH_MULTIPLIER: "1"
ENABLE_AUDITLOG=0
…
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-celeryworker ./
File Service
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/fileservice/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
image:
registry: <local_registry_name> #Example: 10.126.87.96
repository: <local_repository_name> #Example: matrix4-fileservice
tag: <update_image_tag> #Example: rjio_feature.2024.01.31
storageClass: "<storage_class_name>" #We use longhorn in our environment; select according to your cluster
…
Edit the fileservice-svc.yaml file and add the details below:
- name: https
port: 443
targetPort: 443
#Add the below in values.yaml
…
#Update storage class in /templates/fileservice-pvc.yaml
storageClassName: <storage_class> ##Example:longhorn
…
#Add the below parameters in /templates/matrix-fileserver-deployment.yaml
spec:
hostname: fileservice
subdomain: matrix-fileservice #service name
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-fileservice ./
DBSync
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/dbsync/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
image: <local_registry_with_image_tag> #Example: 10.126.87.14/matrix-pm/matrix4-dbsync:latest
Step 3: Install the Helm Charts:
helm install -n matrix-pm-analytics matrix-dbsync ./
PGAdmin4
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/pgadmin4/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
image:
registry: <local_registry_name> #Example: 10.126.87.96
repository: <local_repository_name> #Example: matrix4-pgadmin4
tag: <update_image_tag> #Example: latest
…
storageClass: "<update_storage_class_name>" #Example: we are using longhorn
…
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-pgadmin4 ./
Flower
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/flower/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
image:
repository: <local_repository_name> #Example: 10.126.87.96/matrix4-flower
tag: <update_image_tag> #Example: latest
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-flower ./
#Execute the commands below to access the Flower UI via NGINX
kubectl exec -it -n matrix-pm-analytics <flower-pod-name> -- sh
celery flower --url_prefix=flower
Redis-Insight
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/redis-insight/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
image:
repository: <local_repository_name> #Example: 10.126.87.96/matrix4-redisinsight
tag: <update_image_tag> #Example: latest
Step 3: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-redisinsight ./
Nginx
Step 1: Configure the values.yaml File
cd /matrix/service-deployment/matrix-pm/nginx/
vi values.yaml
Step 2: Make the Following Changes to the Operative Sections of the File
image:
repository: <local_repository_name> #Example: 10.126.87.96/matrix4-nginx-io
tag: <update_image_tag> #Example: latest
Step 3: Update the NGINX Load Balancer IP in the service.yaml File
annotations:
metallb.universe.tf/loadBalancerIPs: "ipv4,ipv6" #Example: "10.126.87.111,2001:420:54ff:84::26f"
Update the IPv4 and IPv6 addresses.
Note: Comment out any services that are not running in matrix4-nginx-configmap.yaml.
Note: Also update the NGINX ConfigMap according to the certificate you are using, whether customer-provided or self-signed.
Step 4: Install the Helm Charts
helm install -n matrix-pm-analytics matrix-nginx ./
Note: By default, the NGINX ports are 80 and 443. If you want to expose NGINX on other ports, change the files below.
Open matrix4-nginx-service.yaml in a vim editor and change the ports:
ports:
Example:
- name: http
protocol: TCP
port: 9080
targetPort: 80
- name: https
protocol: TCP
port: 9443
targetPort: 443
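Once the chart is installed, confirm that MetalLB assigned the requested IPv4/IPv6 addresses to the NGINX service (a quick check):
# Verify the external IPs on the NGINX LoadBalancer service
kubectl get svc -n matrix-pm-analytics | grep -i nginx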
FM Pipeline Deployment
Note: Please ensure that you update the values.yaml file to reflect the appropriate resource allocations and Persistent Volume Claim (PVC) sizes in accordance with your existing deployment specifications.
SNMP Pipeline
# Change directory to the snmppipeline helm chart
cd /matrix/service-deployment/matrix-fm/snmppipeline
# Edit the value.yaml to update image name and tag
vi values.yaml
...
image:
repository: <repository_name> #example: caloregistry3.io:5000/matrix4/matrix-ent/matrix
tag: <tag> #example: 4-of4.4.3-snmppipeline-osfix-15052025
...
replicaCount: 1 ##Update the replica count as per requirement
# Update the resources block as per requirement
resources:
limits:
cpu: <cpu> #example: 500m
memory: <memory> #example: 500Mi
requests:
cpu: <cpu> #example 500m
memory: <memory> #example: 500Mi
affinity:
snmppipeline:
key: app ##Define your node label key as per your requirement
values: matrix-fm ##Define your node label value as per your requirement
# Install the helm charts:
helm install matrix-snmppipeline -n matrix-fm-analytics -f values.yaml ./
# Verify all Pods are running after the installation
kubectl get all -n matrix-fm-analytics | grep -i pipeline
Snmptrapd
# Change directory to the snmptrapd chart
cd /matrix/service-deployment/matrix-fm/snmptrapd
# Edit the value.yaml to update image name and tag
vi values.yaml
...
image:
repository: <repository_name> #example: caloregistry3.io:5000/matrix4/matrix-ent/matrix
tag: <tag> #example: 4-of4.4.3-snmptrapd-osfix-20052025
...
replicaCount: 1 ##Update the replica count as per requirement
# Update the resources block as per requirement
resources:
limits:
cpu: <cpu> #example: 500m
memory: <memory> #example: 500Mi
requests:
cpu: <cpu> #example 500m
memory: <memory> #example: 500Mi
affinity:
snmptrapd:
key: app ## Define your node label key as per your requirement
values: matrix-fm ## Define your node label value as per your requirement
# Install the helm charts:
helm install matrix-snmptrapd -n matrix-fm-analytics -f values.yaml ./
# Verify all Pods are running after the installation
kubectl get all -n matrix-fm-analytics | grep -i snmptrapd
Alert-service
# Change directory to the alertservice chart
cd /matrix/service-deployment/matrix-fm/alertservice/
# Edit the value.yaml to update image name and tag
vi values.yaml
...
image:
repository: <repository_name> #example: caloregistry3.io:5000/matrix4-of-alertservice
tag: <tag> #example: 4-of4.4.3-alertservice-osfix-20052025
...
replicaCount: 1 ##Update the replica count as per requirement
# Update the resources as per requirement
resources:
limits:
cpu: <cpu> #example: 500m
memory: <memory> #example: 500Mi
requests:
cpu: <cpu> #example 500m
memory: <memory> #example: 500Mi
affinity:
alertservice:
key: app ## Define your node label key as per your requirement
values: matrix-fm ## Define your node label value as per your requirement
#Change one command in the deployment file
cd /matrix/service-deployment/matrix-fm/alertservice/templates
#To publish the alerts to Kafka topics, configure the credentials.json file:
cd /alertservice/config/alertservice
vi credentials.json
...
{
"services": {
"kafka": {
"factory": "KafkaConnectionFactory",
"renew_timeout": 3600,
"pool_size": 2,
"connections": [
{
"topic": "<topic_name>",
"server": "matrix-kafka-0.matrix-kafka-headless.matrix-pm-analytics.svc.cluster.local:29092,matrix-kafka-1.matrix-kafka-headless.matrix-pm-analytics.svc.cluster.local:29092,matrix-kafka-2.matrix-kafka-headless.matrix-pm-analytics.svc.cluster.local:29092",
"cafile": "/app/ssl/ca-cert",
"keyfile": "/app/ssl/cert.pem"
},
{
"topic": ""<topic_name>",
"server": "matrix-kafka-0.matrix-kafka-headless.matrix-pm-analytics.svc.cluster.local:29092,matrix-kafka-1.matrix-kafka-headless.matrix-pm-analytics.svc.cluster.local:29092,matrix-kafka-2.matrix-kafka-headless.matrix-pm-analytics.svc.cluster.local:29092",
"cafile": "/app/ssl/ca-cert",
"keyfile": "/app/ssl/cert.pem"
}
]
}
},
"number_of_workers" : 20
}
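Since a malformed credentials.json will prevent alert publishing, it is worth validating the file syntax after editing (a quick check using the standard Python JSON tool, assuming python3 is available on the host):
python3 -m json.tool credentials.json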
# Install the helm charts:
helm install matrix-alertservice -n matrix-fm-analytics -f values.yaml ./
# Verify all Pods are running after the installation
kubectl get all -n matrix-fm-analytics | grep -i alertservice
Alert Manager
# Change directory to the of-alertmanager chart
cd /matrix/service-deployment/matrix-fm/of-alertmanager/
# Edit the value.yaml to update image name and tag
vi values.yaml
...
image:
repository: <repository_name> #example: caloregistry3.io:5000/matrix4-of-alertmanager
tag: <tag> #example: 4-of4.4.3-of-alertmanager-osfix-20052025
...
replicaCount: 1 ##Update the replica count as per requirement
# Update the resources block as per requirement
resources:
limits:
cpu: <cpu> #example: 500m
memory: <memory> #example: 500Mi
requests:
cpu: <cpu> #example 500m
memory: <memory> #example: 500Mi
persistence:
enabled: true
storageClass: "longhorn" # update storage class as per requirements
accessModes:
- ReadWriteMany
size: 1Gi ## update storage size as per requirements
affinity:
alertmanager:
key: app ## Define your node label key as per your requirement
values: matrix-fm ## Define your node label value as per your requirement
# Install the helm charts:
helm install matrix-alertmanager -n matrix-fm-analytics -f values.yaml ./
# Verify all Pods are running after the installation
kubectl get all -n matrix-fm-analytics | grep -i alertmanager
OF-framework
# Change directory to the of-framework chart
cd /matrix/service-deployment/matrix-fm/of-framework/
# Edit the value.yaml to update image name and tag
vi values.yaml
replicaCount: 1 ##Update the replica count as per requirement
...
image:
repository: <repository_name> #example: caloregistry3.io:5000/matrix4-of-framework
tag: <tag> #example: 4-of4.4.3-of-framework-osfix-20052025
...
affinity:
offramework:
key: app ## Define your node label key as per your requirement
values: matrix-fm ## Define your node label value as per your requirement
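The chart is then installed the same way as the other FM charts; a sketch assuming the release name matrix-offramework (the release name is an assumption, so adjust it to your convention):
# Install the helm chart (release name is a placeholder)
helm install matrix-offramework -n matrix-fm-analytics -f values.yaml ./
# Verify all Pods are running after the installation
kubectl get all -n matrix-fm-analytics | grep -i framework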
OF-consumer
# Change directory to the of-consumer chart
cd /matrix/service-deployment/matrix-fm/of-consumer/
# Edit the value.yaml to update image name and tag
vi values.yaml
...
image:
repository: <repository_name> #example: caloregistry3.io:5000/matrix4-of-consumer
tag: <tag> #example: 4-of4.4.3-of-consumer-osfix-20052025
...
affinity:
ofconsumer:
key: app ## Define your node label key as per your requirement
values: matrix-fm ## Define your node label value as per your requirement
# Install or upgrade the helm chart:
helm upgrade --install matrix-consumer -n matrix-fm-analytics -f values.yaml ./
# Verify all Pods are running after the upgrade
kubectl get all -n matrix-fm-analytics | grep -i consumer
#Note: In case epc.yaml is not mounted, execute the below command
kubectl create cm epc-new-config --from-file=/matrix/on-prem/helm-charts/security-fix/matrix-fm/of-framework/epc/epc.yaml -n matrix-fm-analytics
OTEL-Pipeline Deployment
Docker Compose
otel-collector-1:
image: dockerhub.cisco.com/matrixcx-docker/matrix4/matrix:4-of4.4.3-opentelemetry-collector-contrib-latest
container_name: otel-collector-1
ports:
- "4318:4318" # OTLP HTTP
- "13133:13133" # health_check
- "1777:1777" # pprof
- "55679:55679" # zpages
volumes:
- /matrix/matrix-of/vmount/otel_collector/otel-collector-config.yaml:/app/otel-collector-config.yaml
- /matrix/matrix-of/vmount/otel_collector/logs/otel-traces.json:/hostlogs/otel-traces.json
- /matrix/matrix-of/vmount/otel_collector/certs:/etc/otel/certs
- /matrix/matrix-of/vmount/otel_collector/certs:/otel/certs
command: ["--config", "/app/otel-collector-config.yaml"]
env_file:
- otel-pipeline.env
networks:
- matrix-network
otel-load-balancer:
image: dockerhub.cisco.com/matrixcx-docker/matrix4/matrix:4-of4.4.3-otel-load-balancer-latest
container_name: otel-load-balancer
ports:
- "8085:8085" # Load balancer listens on port 8085 for HTTP telemetry data
volumes:
- /matrix/matrix-of/vmount/otel_nginx_conf/nginx.conf:/etc/nginx/nginx.conf:ro
env_file:
- otel-pipeline.env
depends_on:
- otel-collector-1
networks:
- matrix-network
otel-transformer:
image: containers.cisco.com/matrixcx/matrix-ent/matrix:4-of4.4.4-otel-transformer-15072025
container_name: otel-transformer
ports:
- "8087:8080" # Service for sending telemetry data
env_file:
- otel-pipeline.env
depends_on:
- otel-load-balancer
networks:
- matrix-network
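To bring up the OTEL pipeline with this compose file, a minimal sketch (it assumes the compose file and otel-pipeline.env sit in the current directory and that the matrix-network network is defined in the same file or created beforehand):
# Start the OTEL pipeline containers in the background
docker compose up -d otel-collector-1 otel-load-balancer otel-transformer
# Check container status and follow the transformer logs
docker compose ps
docker compose logs -f otel-transformer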
OTEL Environment Variables
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-load-balancer:8085
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_EXPORTER_OTLP_COMPRESSION=none
© 2025 Cisco and/or its affiliates. All rights reserved.
For more information about trademarks, please visit: Cisco trademarks
For more information about legal terms, please visit: Cisco legal terms
For legal information about Accedian Skylight products, please visit: Accedian legal terms and trademarks