Selecting a deployment profile that scales beyond 5,000 network elements requires that operators configure the virtual machine with data disks. This ensures that sufficient disk space and I/O performance are available to the application.
Out of the box, the Skylight orchestrator virtual machine ships with its OS disk configured for the 5K profile. This profile starts a single process, called a mediation instance, that is responsible for network element communications as well as performance data collection. These operations are disk I/O intensive.
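To see which device and filesystem back the /data directory on a given deployment, a simple check from the orchestrator shell is the standard df command (shown here as an illustrative check, not an official procedure):
df -h /data    # reports the filesystem, size, and mount point backing /data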
For Virtual Machine version 23.04.1 or prior
Whereas the 5K profile uses a single mediation instance, the 15K, 30K, and 60K deployment profiles use three (3), six (6), and twelve (12) mediation instances, respectively. Each mediation instance is assigned a work area under the /data directory. These areas are named according to the mediation instance number, as follows:
- /data/medn1instance
- …
- /data/medn12instance
In order to spread the load, each mediation work area is assigned to a data disk. The rule is that each data disk serves three mediation instances. This means that the 15K profile requires one (1) data disk, the 30K profile requires two (2) data disks, and the 60K profile requires four (4) data disks.
As an example, the output below shows a listing of the /data directory when running a 30K profile. The two data disks are mounted under the /so/sdb1 and /so/sdc1 mount points.
visionems@visionems:/data$ ls -l
total 40
drwxrwxr-x 2 root visionems 4096 Feb 11 04:00 backups
drwxr-x--- 3 visionems visionems 4096 Feb 10 21:03 datfiles
drwxr-x--- 3 visionems visionems 4096 Feb 10 21:01 export
drwx------ 2 visionems visionems 16384 Feb 10 17:38 lost+found
lrwxrwxrwx 1 visionems visionems 22 Feb 11 11:27 medn1instance -> /so/sdb1/medn1instance
lrwxrwxrwx 1 visionems visionems 22 Feb 11 11:27 medn2instance -> /so/sdb1/medn2instance
lrwxrwxrwx 1 visionems visionems 22 Feb 11 11:27 medn3instance -> /so/sdb1/medn3instance
lrwxrwxrwx 1 visionems visionems 22 Feb 11 11:27 medn4instance -> /so/sdc1/medn4instance
lrwxrwxrwx 1 visionems visionems 22 Feb 11 11:27 medn5instance -> /so/sdc1/medn5instance
lrwxrwxrwx 1 visionems visionems 22 Feb 11 11:27 medn6instance -> /so/sdc1/medn6instance
drwxr-xr-x 2 visionems visionems 4096 Feb 10 21:01 sdmm
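To verify that each mediation work area resolves to the data disk you expect, you can resolve the symbolic links directly. The loop below is an illustrative sketch for the six instances of this 30K example:
for i in 1 2 3 4 5 6; do
    # print the data disk location backing each mediation work area
    echo "medn${i}instance -> $(readlink /data/medn${i}instance)"
done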
For Virtual Machine version 23.12 and later
Whereas the 5K profile uses a single mediation instance, the 15K, 30K, and 60K deployment profiles use two (2), three (3), and six (6) mediation instances, respectively. Each mediation instance is assigned a work area under the /data directory. These areas are named according to the mediation instance number, as follows:
- /data/medn1instance
- …
- /data/medn6instance
In order to spread the load, the mediation work areas are distributed across the data disks. The data disk requirements are unchanged from earlier versions: the 15K profile requires one (1) data disk, the 30K profile requires two (2) data disks, and the 60K profile requires four (4) data disks.
As an example, the output below shows a listing of the /data directory when running a 30K profile. The two data disks are mounted under the /so/sdb1 and /so/sdc1 mount points. In this layout, each mediation instance exposes export, s1, and s2 symbolic links, and a single instance's links can span both disks.
visionems@visionems:~$ ls -l /data/*/
/data/backups/:
total 0
/data/lost+found/:
total 0
/data/medn1instance/:
total 0
lrwxrwxrwx 1 visionems visionems 29 Aug 14 03:01 export -> /so/sdb1/medn1instance/export
lrwxrwxrwx 1 visionems visionems 11 Aug 14 03:01 s1 -> /so/sdb1/s1
lrwxrwxrwx 1 visionems visionems 11 Aug 14 03:01 s2 -> /so/sdb1/s2
/data/medn2instance/:
total 0
lrwxrwxrwx 1 visionems visionems 29 Aug 14 03:01 export -> /so/sdc1/medn2instance/export
lrwxrwxrwx 1 visionems visionems 11 Aug 14 03:01 s1 -> /so/sdb1/s3
lrwxrwxrwx 1 visionems visionems 11 Aug 14 03:01 s2 -> /so/sdc1/s1
/data/medn3instance/:
total 0
lrwxrwxrwx 1 visionems visionems 29 Aug 14 03:01 export -> /so/sdc1/medn3instance/export
lrwxrwxrwx 1 visionems visionems 11 Aug 14 03:01 s1 -> /so/sdc1/s2
lrwxrwxrwx 1 visionems visionems 11 Aug 14 03:01 s2 -> /so/sdc1/s3
/data/sdmm/:
total 12
drwxr-x--- 2 visionems visionems 4096 Aug 10 00:14 csv_export
drwxr-x--- 2 visionems visionems 4096 Aug 10 00:13 result
drwxr-x--- 2 visionems visionems 4096 Aug 10 00:14 resync
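Since each instance now exposes several links (export, s1, and s2), a verification pass should resolve each of them. Again, this is only an illustrative sketch:
for link in /data/medn*instance/*; do
    # resolve each per-instance symbolic link to its backing data disk
    echo "${link} -> $(readlink "${link}")"
done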
The configuration of the mount points and the assignment of the symbolic links to each mediation instance are handled by the deployment profile configuration command, which requires that the data disks already be present. The remainder of this section describes how to assign data disks to a virtual machine in VMware and KVM environments.
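Before running the deployment profile configuration command, you can confirm that the guest actually sees the expected data disks. An illustrative check from the orchestrator shell is:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # each 750 GB data disk should appear as its own block device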
Assigning a Data Disk in VMware
Requirements
Ensure that you have the following items before starting this procedure.
A. A web browser with access to the VMware management console.
B. The root level credentials for the VMware management console.
C. Sufficient disk capacity in your VMware environment. You will need 750 GB per data disk.
Steps
1. Open your browser to the VMware console and log in with the root credentials.
2. Once you are logged in, identify your Skylight orchestrator virtual machine.
Note: In this example, the virtual machine is called Skylight orchestrator. Please adapt the procedure to your specific virtual machine names.
3. If the virtual machine is running, stop it by doing the following:
a. Select the virtual machine.
b. Click Shut down.
4. Select the virtual machine, click Actions, and then click Edit Settings.
5. Click Add hard disk.
6. For the New Hard Disk, enter 750 GB for the disk size.
7. Expand the New Hard Disk entry, and locate the Location attribute.
8. Click the Browse… button, and select the datastore that will be used for this data disk.
Notes: The default selection of thick provisioning with lazy initialization is sufficient.
Each data disk must be on its own VMware datastore, and each datastore should be backed by independent storage.
9. Once the datastore is selected, click Save. This adds a data disk to the virtual machine.
10. Repeat steps 5 through 9 to add additional data disks.
11. Restart the virtual machine by selecting it, and then clicking Power on.
The new disks will now be visible and can be used to provision the deployment profile.
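If you manage vSphere from the command line, the same disk can be added with the govc CLI from the VMware govmomi project. The block below is an optional, illustrative alternative to the GUI steps; the virtual machine and datastore names are placeholders, and you should verify the flags against your installed govc version:
# add a 750 GB thick-provisioned (lazy-zeroed) data disk to the virtual machine
govc vm.disk.create -vm "Skylight orchestrator" -name data-disk-1 -size 750G -ds datastore1 -thick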
Assigning a Data Disk in KVM
Requirements
Ensure that you have the following items before starting this procedure.
A. SSH client (such as PuTTY).
B. A user with virsh rights for the KVM host.
C. Sufficient disk capacity in your KVM host environment. You will need 750 GB per data disk.
D. A formatted partition per data disk. The partition should be on an independent disk or disk array that is not shared with the OS disk or any other data disk.
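How such a partition is prepared varies by site. As a purely hypothetical illustration, a dedicated disk /dev/sdb on the KVM host could be partitioned and formatted as follows (the device name and filesystem type are assumptions):
# on the KVM host: create a single partition spanning the dedicated disk
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdb mkpart primary ext4 0% 100%
# format the new partition
sudo mkfs.ext4 /dev/sdb1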
Steps
1. Using an SSH client, log in to your KVM host using credentials that give you access to the virsh commands.
2. Once you are logged in, enter the virsh command shell by entering: virsh
3. At the virsh prompt, list the virtual machines by entering: list
Note: In this example, the virtual machine is called orchestrator. Please adapt the procedure to your specific virtual machine names.
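By default, list shows only running virtual machines. If the orchestrator machine is already stopped, the standard --all flag includes shut-off domains as well:
list --all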
4. If the orchestrator virtual machine is running, stop it by issuing the shutdown command: shutdown orchestrator
5. Once the virtual machine is stopped, edit the configuration of the orchestrator machine by entering: edit orchestrator
Note: This opens a text editor with the configuration details of the machine.
6. Using the arrow keys, scroll down to the first disk block. It will look similar to the following:
<disk type='file' device='disk'>
  <driver name='qemu' io='native' cache='none' type='qcow2'/>
  <source file='/home/visionems/vm/devAndy60K/SO_v21.08_8_vm_core.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
</disk>
7. Using the editor, add a new disk block in the following format. The source device attribute must be set to the partition you wish to use for your data disk. In the example below, it is named /dev/sdb1:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/sdb1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
8. Save the changes by entering Ctrl-X and, when asked, confirm the change by entering: Y
9. Repeat steps 5 through 8 to add additional disks, taking care to specify a distinct partition for each new disk entry.
10. Once the virtual machine has been modified, restart it in preparation for provisioning the deployment profile by entering: start orchestrator
The new disks will now be visible and can be used to provision the deployment profile.
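Once the machine is running again, you can confirm from the KVM host that the new block devices are attached using the standard virsh domblklist subcommand:
virsh domblklist orchestrator    # lists each guest disk target (vda, vdb, ...) and its source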
© 2024 Cisco and/or its affiliates. All rights reserved.
For more information about trademarks, please visit: Cisco trademarks
For more information about legal terms, please visit: Cisco legal terms
For legal information about Accedian Skylight products, please visit: Accedian legal terms and trademarks