Configuring redundancy involves the following tasks:
A. Obtain all the information that you will need for the procedures.
See Information Needed to Configure Hot Standby Redundancy.
B. Ensure that all required appliances are installed.
See Ensuring All Required Appliances Are Installed.
C. Perform basic configuration of all appliances at both sites.
See Basic Appliance Configuration.
D. Copy the license file for redundancy to both sites.
See Copying the License File to Both Sites.
E. Configure the replication partition on both sites.
See Configuring Replication Partition on Both Sites.
F. Configure and start the redundancy.
See Configuring and Starting Redundancy.
CAUTION: Carefully read and adhere to the following rules. Failure to do so may result in improper functioning of the Hot Standby Redundancy system:
1. Redundancy control commands MUST ONLY be executed on one site at a time.
2. Ensure that all redundancy control command operations (such as start, stop, restart, etc.) are fully completed before performing any other actions.
3. Contact our technical support team if any unexpected errors occur.
If you want to import a license, the license file must be present on the Legacy Orchestrator appliance. See Copying the License File to Both Sites.
Note: The redundancy feature must be stopped before reconfiguring the hostname of the Docker host.
Information Needed to Configure Hot Standby Redundancy
Information | Site-A | Site-B | Notes |
---|---|---|---|
IP addresses of Docker hosts and username/password | | | Only for Legacy Orchestrator deployments on Docker hosts. One user must have sudo privileges or root access on the Docker host. |
Host name of each Docker host | | | Must be unique across the entire deployment (both sites). Root credentials or a user with sudo privileges are required. |
IP address/CIDR for the management interface | | | Used for interface eth0. |
IP address/CIDR for the replication interface | | | Typically used for interface eth1. |
IP address/CIDR for the monitoring interface | | | Typically used for interface eth2. |
Default gateway IP address | | | |
Static routes | | | |
Preferred site | | | Optional. Possible values: none (default), site-A, site-B. See Preferred Site and Recovery After Failover. |
Virtual IP address | [single address for both sites] | [single address for both sites] | Optional. The same subnet should be present at both sites. |
Primary interface for the virtual IP address | | | Optional. The virtual IP primary interface name (for example, eth4) defaults to eth0 if not set. |
IP addresses of DNS servers | | | One or two can be set. |
IP addresses of NTP servers | [list of NTP servers used for all appliances] | [list of NTP servers used for all appliances] | Two or more can be set. |
Redundancy license file | | | Obtained from Accedian Technical Support. |
Automatic failover | Enable (default) or Disable | Enable (default) or Disable | For more information, see Disabling Automatic Failover. |
Ensuring All Required Appliances Are Installed
You must ensure that all appliances required at Site-A and Site-B have been installed and are connected to the network.
If you are setting up redundancy for an existing Legacy Orchestrator system, you will need to install the required appliance(s) at the additional site.
Once the Docker host is configured to meet all requirements in the "Basic Docker Host Configuration for Hot Standby Redundancy" table, the Legacy Orchestrator (SO) Docker deployment can then be performed on that host.
For detailed information about installing Legacy Orchestrator on a Docker host, see:
If the Docker host runs RedHat 9.3, follow the steps below to ensure that partitions are identified in the correct order after the Docker host reboots.
1. Log in to your Docker host with the administrator login credentials.
2. Edit the /etc/default/grub file:
sudo nano /etc/default/grub
3. Add the sd_mod.probe=sync option to the GRUB_CMDLINE_LINUX line in the file. For example:
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet sd_mod.probe=sync"
4. Run the following command to update GRUB:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
5. Reboot the system.
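After the reboot, you can optionally confirm that the option is active by inspecting the kernel command line (a standard Linux check, not part of the original procedure):
grep sd_mod.probe /proc/cmdline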
Accessing the climanager Service (socli)
All procedures must be performed in the climanager service (socli). You can connect to the climanager service (socli) in one of the following ways:
- Run the socli.sh command after connecting to the Docker host over SSH (port 22)
- Use an SSH client to connect directly to port 2200 of the Docker host
The procedures must be executed as the skylight user. You must know the account credentials.
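For reference, both access methods might look like the following sketch, assuming the Docker host is reachable at 192.0.2.10 (a hypothetical address; replace it, and the port-22 login account, with the values used in your deployment):
Option 1 (connect to the Docker host on port 22, then launch the climanager service):
ssh admin@192.0.2.10
socli.sh
Option 2 (connect directly to the climanager service on port 2200 as the skylight user):
ssh -p 2200 skylight@192.0.2.10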
Basic Docker Host Configuration
The following table summarizes the basic configuration that is required on all Docker hosts at both sites. Ensure that each Docker host meets these requirements.
Basic Docker Host Configuration for Hot Standby Redundancy
Configuration task | Notes |
---|---|
Configure the management interface | The management interface is normally eth0. |
Set the host name | Host names for all appliances must be unique for the entire deployment (both sites). |
Configure the NTP client | The same list of NTP servers must be set on all appliances at both sites. |
Configure DNS servers | The same list of DNS servers must be set on all appliances at both sites. |
Add an interface for data replication | Typically assigned to interface eth1. The address must be in IPv4 format. |
Add an interface for monitoring | Typically assigned to interface eth2. The address must be in IPv4 format. |
Add routes (optional) | Although not required, we recommend routing the traffic of the monitoring and replication interfaces over a distinct gateway (see the sketch after this table). Sending all traffic to the default gateway will work, but that gateway then becomes a single point of failure that could result in a split-brain condition. |
Dedicate an empty partition to the Hot Standby Redundancy function on each site | The partition must be the same size at both sites. |
Use the same network interface names on each Docker host | The network interface names on the Docker hosts must be identical at both sites (for example: eth0, eth1, eth2). |
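As an illustration of the optional routes for replication and monitoring traffic, the following is a minimal sketch using the standard ip command, assuming the remote site's replication and monitoring networks are 198.51.100.0/24 and 203.0.113.0/24 and are reached through dedicated gateways 10.10.1.1 on eth1 and 10.10.2.1 on eth2 (all hypothetical values; how you make the routes persistent depends on your Linux distribution):
sudo ip route add 198.51.100.0/24 via 10.10.1.1 dev eth1
sudo ip route add 203.0.113.0/24 via 10.10.2.1 dev eth2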
The basic configuration must be completed on all Docker hosts. All procedures must be performed on newly installed Docker hosts; on previously installed Docker hosts, certain procedures may already have been completed and can be skipped. The number of Docker hosts that must be configured depends on the installation scenario:
- If you are setting up a second site for an existing Legacy Orchestrator Docker system consisting of a single Legacy Orchestrator Docker, you must configure the Docker host at the new site only.
- If both sites are new installations and each site only includes a single Legacy Orchestrator Docker, you must configure both Docker hosts.
Copying the License File to Both Sites
The redundancy feature requires a license. The license must be available on the Docker host of both sites so that you can import it during the procedure in the next section.
You will need an SCP client (such as WinSCP) on your computer.
1. Obtain the license file from Accedian Technical Support and save it to your computer.
2. Copy the license file to Site-A (a command-line example appears after this procedure):
a. Use the SCP client and the skylight account to access the Docker host for Site-A.
b. Copy the redundancy license file from your computer to the /home/skylight/ directory on the appliance for Site-A.
3. Copy the license file to Site-B:
a. Use the SCP client and the skylight account to access the Docker host for Site-B.
b. Copy the redundancy license file from your computer to the /home/skylight/ directory on the appliance for Site-B.
4. If you are not already logged in on the socli, open an SSH terminal session to the Legacy Orchestrator CLI on port 2200 of Site-B and log in as the skylight user.
The prompt is displayed.
Note: Perform step 5 below in the socli of the Legacy Orchestrator for Site-B.
5. Import the license for the redundancy feature by entering:
redundancy license import filename fullPath/licenseFilename
Example of full path and filename: /data/drbd-proxy.license
6. If you are not already logged in on the socli, open an SSH terminal session to the Legacy Orchestrator CLI on port 2200 of Site-A and log in as the skylight user.
Note: Perform step 7 below in the socli of the Legacy Orchestrator for Site-A.
7. Import the license for the redundancy feature by entering:
redundancy license import filename fullPath/licenseFilename
Example of full path and filename: /home/skylight/drbd-proxy.license
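If you use a command-line SCP client rather than WinSCP, the transfer and import might look like the following sketch, assuming the license file is named drbd-proxy.license and the Site-A and Site-B Docker hosts are reachable at 192.0.2.10 and 192.0.2.20 (hypothetical addresses):
scp drbd-proxy.license skylight@192.0.2.10:/home/skylight/
scp drbd-proxy.license skylight@192.0.2.20:/home/skylight/
Then, in the socli of each site:
redundancy license import filename /home/skylight/drbd-proxy.license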
Configuring Replication Partition on Both Sites
1. If you are not already logged in on the socli, open an SSH terminal session to the Legacy Orchestrator CLI on port 2200 of Site-A and log in as the skylight user.
The prompt is displayed.
Note: Perform steps 2 and 3 below in the socli of the Legacy Orchestrator for Site-A.
2. Configure the replication partition for the redundancy feature by entering:
redundancy config replication-partition <partition name> host-admin-user <a user with sudo privilege>
CAUTION: While this operation is running, the partition in the command above will be unmounted and formatted. To prevent data loss on this partition, take care to specify the correct partition name before running this command.
3. If prompted, provide the password of the user that has sudo privileges.
You will need to provide the password twice (once to log in as the user with sudo privileges and once at the sudo prompt).
Example:
Skylight: redundancy config replication-partition /dev/sdc host-admin-user visionems
Password:
[sudo] password for visionems:
The partition '/dev/sdc' will be unmounted and formatted.
Proceed ? (y/N)
y
Skylight:
4. If you are not already logged in on the socli, open an SSH terminal session to the Legacy Orchestrator CLI on port 2200 of Site-B and log in as the skylight user.
The prompt is displayed.
Note: Perform steps 5 and 6 below in the socli of the Legacy Orchestrator for Site-B.
5. Configure the replication partition for the redundancy feature by entering:
redundancy config replication-partition <partition name> host-admin-user <a user with sudo privilege>
CAUTION: While this operation is running, the partition in the command above will be unmounted and formatted. To prevent data loss on this partition, take care to specify the correct partition name before running this command.
6. If prompted, provide the password of the user that has sudo privileges.
You will need to provide the password twice (once to log in as the user with sudo privileges and once at the sudo prompt).
Example:
Skylight: redundancy config replication-partition /dev/sdc host-admin-user visionems
Password:
[sudo] password for visionems:
The partition '/dev/sdc' will be unmounted and formatted.
Proceed ? (y/N)
y
Skylight:
Configuring and Starting Redundancy
The procedures in this section cover all the tasks required to configure and start the redundancy feature. This procedure must be executed on Site-A only.
You will need to:
- Set the preferred site (optional)
- Configure the virtual IP
- Start the redundancy feature
- Test that the redundancy feature is operating normally.
CAUTION: You must configure and activate redundancy on Site-A. The configuration will be automatically replicated to Site-B.
Note: The redundancy feature must be stopped before reconfiguring the hostname of the Docker host.
To configure redundancy
1. Configure redundancy by entering these commands (a consolidated example with sample values appears after this procedure):
redundancy config site-a hostname nameSiteA
redundancy config site-a replication-ip a.a.a.a
redundancy config site-a monitor-ip c.c.c.c
where:
nameSiteA is the hostname that was previously assigned to the Docker host of Site-A.
a.a.a.a is the address of the interface previously configured for data replication.
c.c.c.c is the address of the interface previously configured for monitoring.
2. Configure redundancy for Site-B by entering these commands:
redundancy config site-b hostname nameSiteB
redundancy config site-b replication-ip b.b.b.b
redundancy config site-b monitor-ip d.d.d.d
where:
nameSiteB is the hostname that was previously configured for the Docker host at Site-B
b.b.b.b is the address of the interface previously configured for data replication
d.d.d.d is the address of the interface previously configured for monitoring
3. If you want to designate the preferred site (this will be the active site at startup and after recovery from a failover), enter:
redundancy config preferred siteOption
where:
siteOption is your choice of preferred site. Possible values: none (default), site-a, site-b
4. Configure the virtual IP for the Legacy Orchestrator system as follows:
Note: By default, the virtual IP state is enabled.
If you need to configure the virtual IP, follow the two steps below.
a. Set the virtual IP address by entering:
redundancy config virtual-ip vip-address e.e.e.e
where:
e.e.e.e is the virtual IP address (previously configured for the Legacy Orchestrator system)
b. Configure the primary interface associated with virtual IP address:
redundancy config virtual-ip vip-primary-interface interfaceName
where:
interfaceName is the primary interface (previously configured for the virtual IP address)
Examples of interfaceName: eth0, eth1, ens160, ens224
If you do not need to configure the virtual IP, enter:
redundancy config virtual-ip vip-state disable
CAUTION: The next step (disabling auto-failover) is NOT recommended. For more information, see Disabling Automatic Failover.
5. If you want to disable automatic failover, enter:
redundancy config auto-failover disable
6. Display the redundancy configuration by entering:
redundancy show configuration
The configuration should be similar to the following:
7. Start the redundancy feature by entering:
redundancy control start
After a short delay, redundancy becomes operational and the prompt is displayed. If a preferred site has been set, it is the active site. If preferred site is set to none (default value), Site-A is the active site. Data is being replicated from the active site to the passive site. Connectivity between the two sites is being monitored.
8. Check whether the redundancy feature is operating normally by entering:
redundancy test
The test checks that redundancy is configured properly and that data replication is taking place. The results are displayed.
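For illustration only, a consolidated run of the commands above, including the optional preferred-site and virtual-IP settings, might look as follows with hypothetical values (host names so-site-a and so-site-b, replication addresses 10.10.1.11 and 10.10.1.12, monitoring addresses 10.10.2.11 and 10.10.2.12, virtual IP 192.0.2.100 on eth0); substitute the values recorded for your deployment:
redundancy config site-a hostname so-site-a
redundancy config site-a replication-ip 10.10.1.11
redundancy config site-a monitor-ip 10.10.2.11
redundancy config site-b hostname so-site-b
redundancy config site-b replication-ip 10.10.1.12
redundancy config site-b monitor-ip 10.10.2.12
redundancy config preferred site-a
redundancy config virtual-ip vip-address 192.0.2.100
redundancy config virtual-ip vip-primary-interface eth0
redundancy show configuration
redundancy control start
redundancy test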
Disabling Automatic Failover
By default, redundancy is configured with automatic failover enabled. The system will determine when it is necessary to switch from the active to the passive site and will do so without human intervention.
If you prefer to decide when to fail over from the active site to the passive site, you can change the redundancy configuration to disable automatic failover. If you disable automatic failover, replication and monitoring will continue. It will be necessary to manually switch from the active site to the passive site in the event of a failure on the active site. See the redundancy control switch command in Controlling Redundancy.
If you decide to disable automatic failover, we recommend that you do so during the initial configuration of redundancy. See Configuring and Starting Redundancy.
To change the automatic failover configuration
If you decide to disable automatic failover after redundancy has been started, you can do so as explained in this procedure. Perform this procedure on the appliance at Site-A (a consolidated command sequence appears after this procedure).
1. If you are not already logged in on the socli (SSH port 2200), log in as the skylight user.
2. Stop the redundancy feature by entering:
redundancy control stop
3. Ensure that redundancy has been stopped by entering:
redundancy show status
The output should indicate that the global status is Stopped.
4. To disable automatic failover, enter:
redundancy config auto-failover disable
5. Ensure that the redundancy configuration has changed by entering:
redundancy show configuration
The output should indicate that auto-failover has been Disabled.
6. Start the redundancy feature by entering:
redundancy control start
7. Ensure that redundancy has been started by entering:
redundancy show status
The output should indicate that the global status is Started.
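As a quick reference, the full command sequence from this procedure is shown below (commands exactly as documented above; run them on Site-A and wait for each one to complete before entering the next):
redundancy control stop
redundancy show status
redundancy config auto-failover disable
redundancy show configuration
redundancy control start
redundancy show status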
© 2025 Cisco and/or its affiliates. All rights reserved.