Installing and Upgrading a Manager in a Clustered Environment

You can install and run Manager+Agents in a clustered, or high availability (HA), environment on Red Hat Enterprise Linux. In a two-node cluster there is an active node, where the Manager runs, and a passive node, which remains in standby mode until needed. Both nodes are connected to a public network and have access to shared storage. The shared storage is mounted on the active node, whose Virtual IP (VIP) interface is up and answering requests to the cluster's IP address. If the active node becomes inoperative, the passive node takes over, with minimal interruption in service.

Note: It is recommended that your cluster have only two nodes.

[Diagram: active and passive node configuration]

The cluster is configured with a group of resources. This group includes components required by your Manager. The resource group can move from one cluster node to the other, allowing either to run the Manager.

Configuring a Cluster

To install and run your Manager in high-availability (HA) mode, you must first configure your Linux cluster.

For instructions on configuring a cluster on Red Hat Linux, refer to the Red Hat Enterprise Linux 7 High Availability Add-On Administration Guide.

Completing Installation

Once you have set up your clustered environment, you must copy the tar bundle from the active node to all passive nodes, configure Agents to use the Virtual IP (VIP) address, and define cluster resources.

Copying the Tar Bundle to Passive Nodes

To configure each passive node:

  1. On the active node, copy the tar bundle to the standby node:

    $ scp /var/opt/ha/sig_ha_bundle.tar root@<standby_host_name>:/tmp

  2. On the passive node, un-tar the bundle, preserving file permissions and absolute paths (the p and P flags):

    $ tar xvpPf /tmp/sig_ha_bundle.tar

  3. On the passive node, run the standby node configuration script:

    $ /var/opt/ha/bin/haConfigStandbyNode.sh
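The three steps above can be combined into a single script run from the active node. This is a sketch only: the standby hostname is a placeholder, and root ssh access to the standby node is assumed.

```shell
#!/bin/sh
# Sketch: push the HA bundle to a standby node and configure it.
# STANDBY is a placeholder hostname; substitute your standby node's name.
STANDBY=standby.example.com

# 1. Copy the tar bundle to the standby node.
scp /var/opt/ha/sig_ha_bundle.tar root@"$STANDBY":/tmp

# 2. Extract it on the standby node, preserving permissions (p)
#    and absolute paths (P).
ssh root@"$STANDBY" 'tar xvpPf /tmp/sig_ha_bundle.tar'

# 3. Run the standby node configuration script.
ssh root@"$STANDBY" '/var/opt/ha/bin/haConfigStandbyNode.sh'
```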

Configuring an Agent to Use the Virtual IP Address

When running on a cluster, the Manager's Agent should use the cluster's Virtual IP (VIP) address, rather than the active node's physical address, as its primary address.

To configure an Agent to use the VIP address:

  1. In your Manager, select Administration > Agents > List.
  2. Select the Manager’s Agent, and choose Edit.
  3. On the Network > General tab, specify the Virtual IP address in the IP Interface field.
  4. Log in to the Manager host as root.
  5. Restart the Signiant UDP Relay service by running /etc/init.d/siginit restart sigur.
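After the restart, you can confirm on the active node that the VIP is bound to an interface. The address below is a placeholder for your cluster's VIP.

```shell
# Placeholder VIP address; substitute your cluster's Virtual IP.
VIP=192.0.2.10

# The VIP should appear on one of the active node's interfaces.
ip addr show | grep -F "$VIP"
```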

Defining Cluster Resources

A clustered environment requires the creation of a resource group. This group contains the file system resource and the siginit_ha and sigHaBecomeActive start scripts needed to run high-availability services. These steps describe a cluster installation on Red Hat Enterprise Linux 7.

To define cluster resources:

  1. Shut down the Manager using siginit stop.

  2. Add a Filesystem resource for the cluster's shared storage.

    pcs resource create clusterfs Filesystem device=/dev/sdb1 directory="/shared" fstype=ext4

  3. Set the quorum policy to ignore (required in a two-node cluster, which cannot retain quorum after losing a node).

    pcs property set no-quorum-policy=ignore

  4. Add the siginit_ha LSB script as a resource with start and stop timeouts.

    pcs resource create siginit_ha lsb:siginit_ha op start timeout=180s stop timeout=180s

  5. To avoid timeout errors when siginit_ha starts and stops, increase the timeout values:

    pcs resource op add siginit_ha start timeout=180s
    pcs resource op add siginit_ha stop timeout=180s

  6. Add the sigHaBecomeActive LSB script as a resource.

    pcs resource create sigHaBecomeActive lsb:sigHaBecomeActive

Note: Add the cluster resources in the given order to ensure that the Manager starts and stops correctly.

  7. Create the resource group.

    pcs resource group add sig_group clusterfs siginit_ha sigHaBecomeActive

  8. Add a constraint so the VIP runs on the same node as clusterfs.

    pcs constraint colocation add SigniantVirtualIP with clusterfs score=INFINITY
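After defining the group and constraint, the configuration can be checked with standard pcs status commands. This is a sketch; subcommand names and output formatting vary slightly between pcs versions.

```shell
# Confirm the resources are defined and running in the expected group.
pcs status resources

# Confirm the colocation constraint between the VIP and the file system
# (newer pcs versions use "pcs constraint colocation config").
pcs constraint colocation show

# Full cluster overview: nodes, resource group placement, failed actions.
pcs status
```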

Verifying Cluster Function

With cluster installation complete, you can run tests to verify cluster functionality prior to starting the Manager installation.

To test cluster function:

  1. Start the HA service and verify that the shared storage is mounted and the VIP interface is up on the active node.
  2. Relocate the HA service to the passive node, and verify that the shared storage is mounted and the VIP is up on that node.
  3. Shut down the active node, and verify that the service has been re-located.
  4. Shut down the remaining node.
  5. Power on both nodes and check that all resources are available on the active node.
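Assuming the resource group is named sig_group as defined above, the first three tests can be exercised with pcs. Node hostnames are placeholders, and pcs subcommand names vary slightly across versions.

```shell
# Placeholders: substitute your cluster node hostnames.
ACTIVE=node1.example.com
PASSIVE=node2.example.com

# 1. Start cluster services, then verify storage and VIP on the active node.
pcs cluster start --all
pcs status
df -h /shared          # shared storage mounted?
ip addr show           # VIP interface up?

# 2. Relocate the resource group to the passive node, then re-verify there.
pcs resource move sig_group "$PASSIVE"

# 3. Simulate active-node failure by putting it in standby, then confirm
#    that sig_group has moved.
pcs cluster standby "$ACTIVE"
pcs status
```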

Upgrading a Manager in a Clustered Environment

Prior to upgrading a high-availability Manager, you must disable HA services. Once your Manager software is upgraded, configure the passive cluster node(s) and re-enable the HA services.

For more information on upgrading your Manager, see Upgrade Best Practices.

To disable HA services:

  1. In your Manager, disable the sig_HA service.
  2. Under Services, select Service HA and click Edit Services Properties.
  3. On the Service Management page, select sigHa, siginit_ha start script and file system and click Remove Selected Resource.
  4. Save your changes.
  5. Select Send to Cluster.
  6. Re-enable the sig_HA service.
  7. Manually mount the shared storage using mount -t <fs_type> <device> <mount_point>.

To configure passive cluster nodes:

See Copying the Tar Bundle to Passive Nodes.

To restart HA services:

  1. Run siginit stop to stop all components.
  2. Manually unmount the shared storage using umount <mount_point>.
  3. In your Manager, under Services, select Service HA and click Edit Services Properties.
  4. On the Service Management page, select sigHa, siginit_ha start script and file system and click Add A Shared Resource.
  5. Save your changes.
  6. Select Send to Cluster.
  7. Re-enable the sig_HA service.

Uninstalling a Manager in a Clustered Environment

In a clustered environment, HA resources must be removed before you can uninstall a Manager.

To remove HA resources:

  1. In your Manager, disable the sig_HA service.

  2. Under Services, select Service HA and click Edit Services Properties.

  3. On the Service Management page, select the sigHa, siginit_ha start script and file system resources and click Delete Resource.

  4. Save your changes.

  5. Click Send to Cluster.

  6. Re-enable the sig_HA service.

  7. Manually mount the shared storage by running mount -t <fs_type> <device> <mount point>.

    Note: Make sure that the HA service is located on the node that was initially configured as the active node, that is, the node where the Manager was installed.

After removing the HA services, you can uninstall the Manager.

Creating a CA Certificate For a Cluster

When creating a CA for your cluster, your cluster node hostnames, found in the cluster.conf file, must be set as altnames.

dds_cert getnewcert -org <organization_name> -key keyless -altnames node1.<domain_name>,node2.<domain_name> -noprompt

For detailed information on creating a CA certificate, see Offline Certificate Signing.

Obtaining Third-Party Web Server Certificates

In a clustered installation, web server certificates are issued to the cluster name. The certificate signing request (CSR) for the Manager must contain DNS aliases for each node in the cluster. These aliases are based on the hostnames of the cluster members as found in the cluster.conf file.

<clusternode name="hostname.example.com" nodeid="1"/>
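Your certificate tooling may differ, but as a generic illustration, a CSR carrying both node hostnames as DNS Subject Alternative Names can be generated with OpenSSL (1.1.1 or later, for the -addext flag). The organization and hostnames below are hypothetical.

```shell
# Generate a key and a CSR whose SANs cover both cluster nodes.
# ExampleOrg, cluster.example.com, and the node names are placeholders;
# substitute the hostnames from your cluster.conf.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout manager.key -out manager.csr \
  -subj "/O=ExampleOrg/CN=cluster.example.com" \
  -addext "subjectAltName=DNS:node1.example.com,DNS:node2.example.com"

# Inspect the CSR to confirm both SAN entries are present.
openssl req -in manager.csr -noout -text
```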

For more information on obtaining third-party web server certificates, see Importing Third-Party Web Server Certificates.