You can install and run Manager+Agents in a clustered, or high-availability (HA), environment on Red Hat Linux. In a two-node cluster, there is an active node, where the Manager runs, and a passive node, which remains in standby until needed. Both nodes are connected to a public network and have access to shared storage. The shared storage is mounted on the active node, whose Virtual IP (VIP) interface is up and answers requests sent to the cluster's IP address. If the active node becomes inoperative, the passive node takes over, ensuring minimal interruption in service.
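As a quick way to see which role a node currently holds, you can check whether the cluster's VIP is configured on a local interface. This is a minimal sketch, not a product command; the VIP value (192.0.2.10) is a placeholder for your cluster's IP address.

```shell
#!/bin/sh
# Sketch: report whether this node currently holds the cluster VIP.
# The VIP value below is a placeholder; substitute your cluster's IP.
VIP="192.0.2.10"

# Return success if the VIP appears in the given `ip -o addr show` output.
has_vip() {
    printf '%s\n' "$1" | grep -qw "$2"
}

addrs=$(ip -o addr show 2>/dev/null)
if has_vip "$addrs" "$VIP"; then
    echo "Active node: VIP $VIP is up on this host."
else
    echo "Passive node: VIP $VIP is not configured here."
fi
```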
Note: It is recommended that your cluster have only two nodes.
The cluster is configured with a group of resources. This group includes components required by your Manager. The resource group can move from one cluster node to the other, allowing either to run the Manager.
To install and run your Manager in high-availability (HA) mode, you must first configure your Linux cluster.
For instructions on configuring a cluster on Red Hat Linux, refer to the Red Hat Enterprise Linux 7 High Availability Add-On Administration Guide.
Once you have set up your clustered environment, you must copy the tar bundle from the active node to all passive nodes, configure Agents to use the Virtual IP (VIP) address, and define cluster resources.
To configure each passive node:
On the active node, copy the tar bundle to the standby node:
$ scp /var/opt/ha/sig_ha_bundle.tar root@<standby_host_name>:/tmp
On the passive node, make sure you have the correct execute permissions and un-tar the bundle:
$ tar -xvpPf /tmp/sig_ha_bundle.tar
On the passive node, run the standby node configuration script:
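The copy-and-unpack steps above can be sketched as a single helper run from the active node. The standby host name is a placeholder, and the use of ssh for the remote unpack is an assumption about your environment.

```shell
#!/bin/sh
# Sketch of the passive-node preparation steps (defined, not run here).
# Assumes root SSH access from the active node to the standby node.
configure_passive_node() {
    standby="$1"   # standby host name, e.g. node2.example.com
    # 1. Copy the HA bundle from the active node to the standby node.
    scp /var/opt/ha/sig_ha_bundle.tar "root@${standby}:/tmp" || return 1
    # 2. Unpack on the standby node, preserving permissions (-p) and
    #    absolute path names (-P), as in the manual step above.
    ssh "root@${standby}" 'tar -xvpPf /tmp/sig_ha_bundle.tar'
}
```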
When running on a cluster, the Manager's Agent must use the cluster's address, rather than an individual node's address, as its primary address. In cluster configurations, this is the Virtual IP (VIP) address.
To configure an Agent to use the VIP address:
/etc/init.d/siginit restart
A clustered environment requires the creation of a resource group. This group contains the file system, siginit_ha start script, and sigHaBecomeActive resources needed to run high-availability services. These steps refer to a cluster installed on Red Hat 7.
To define cluster resources:
Shut down the Manager using siginit stop.
Add a Filesystem resource to the cluster.
pcs resource create clusterfs Filesystem device=/dev/sdb1 directory="/shared" fstype=ext4
Set the quorum policy to ignore.
pcs property set no-quorum-policy=ignore
Add the lsb script.
pcs resource create siginit_ha lsb:siginit_ha op start timeout=180s stop timeout=180s
To avoid timeout errors when siginit_ha starts and stops, increase the timeout values:
pcs resource op add siginit_ha start timeout=180s
pcs resource op add siginit_ha stop timeout=180s
Add the lsb script.
pcs resource create sigHaBecomeActive lsb:sigHaBecomeActive
Note: Add the cluster resources in the given order to ensure that the Manager starts and stops correctly.
Create the resource group.
pcs resource group add sig_group clusterfs siginit_ha sigHaBecomeActive
Add a constraint to make the VIP run on the same machine as the clusterfs resource.
pcs constraint colocation add SigniantVirtualIP with clusterfs score=INFINITY
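The commands above can be kept together as one script so the resources are always created in the required order. This is a sketch only; the device, mount point, and resource names come from the steps above, and the script should be run on one cluster node.

```shell
#!/bin/sh
# Sketch: the pcs commands above gathered into one ordered function
# (defined, not run here). Run on a single cluster node.
define_cluster_resources() {
    pcs property set no-quorum-policy=ignore
    pcs resource create clusterfs Filesystem \
        device=/dev/sdb1 directory="/shared" fstype=ext4
    pcs resource create siginit_ha lsb:siginit_ha \
        op start timeout=180s stop timeout=180s
    pcs resource create sigHaBecomeActive lsb:sigHaBecomeActive
    pcs resource group add sig_group clusterfs siginit_ha sigHaBecomeActive
    pcs constraint colocation add SigniantVirtualIP with clusterfs score=INFINITY
}
```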
With cluster installation complete, you can run tests to verify cluster functionality prior to starting the Manager installation.
To test cluster function:
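On a Red Hat 7 Pacemaker cluster, a basic failover check can be sketched as follows. `pcs cluster standby` and `pcs status` are standard pcs commands; the resource group name comes from the steps above, and this sketch is not a substitute for the product's own verification steps.

```shell
#!/bin/sh
# Sketch: force a failover and watch the resource group move
# (defined, not run here; run interactively on a cluster node).
test_failover() {
    active="$1"    # node currently running sig_group
    pcs status resources                 # confirm sig_group is started
    pcs cluster standby "$active"        # push resources off the active node
    sleep 30
    pcs status resources                 # sig_group should now run on the peer
    pcs cluster unstandby "$active"      # return the node to service
}
```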
Prior to upgrading a high-availability Manager, you must disable HA services. Once your Manager software is upgraded, configure the passive cluster node(s) and re-enable the HA services.
For more information on upgrading your Manager, see Upgrade Best Practices.
To disable HA services:
mount -t <fs_type> <device> <mount point>
To configure passive cluster nodes:
To restart HA services:
Run siginit stop to stop all components.
In a clustered environment, HA resources must be removed before you can uninstall a Manager.
To remove HA resources:
In your Manager, disable the sig_HA service.
Under Services, select Service HA and click Edit Services Properties.
On the Service Management page, select the file system, siginit_ha start script, and sigHaBecomeActive resources, and click Delete Resource.
Save your changes.
Click Send to Cluster.
Re-enable the sig_HA service.
Manually mount the shared storage by running
mount -t <fs_type> <device> <mount point>
Note: Make sure that the HA service is located on the node that was initially configured as the active node, that is, the node where the Manager was installed.
After removing the HA services, you can uninstall the Manager.
When creating a CA for your cluster, your cluster node hostnames, found in the cluster.conf file, must be set as altnames.
dds_cert getnewcert -org <organization_name> -key keyless -altnames node1.<domain_name>,node2.<domain_name> -noprompt
For detailed information on creating a CA certificate, see Offline Certificate Signing.
In a clustered installation, web server certificates are issued to the cluster name. The certificate signing request (CSR) for the Manager must contain DNS aliases for each node in the cluster. These aliases are based on the hostnames of the cluster members as found in the cluster.conf file, for example:
<clusternode name="hostname.example.com" nodeid="1"/>
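For illustration, here is one way to produce a CSR carrying such aliases with OpenSSL (1.1.1 or later, for the -addext option). The common name and node hostnames are placeholders; whether your CA accepts an externally generated CSR depends on your certificate workflow.

```shell
#!/bin/sh
# Hypothetical example: create a key and CSR whose subjectAltName lists
# both cluster node hostnames (placeholder names below).
openssl req -new -newkey rsa:2048 -nodes \
    -keyout cluster.key -out cluster.csr \
    -subj "/CN=cluster.example.com" \
    -addext "subjectAltName=DNS:node1.example.com,DNS:node2.example.com"
```

You can confirm that the aliases made it into the request with `openssl req -in cluster.csr -noout -text`.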
For more information on obtaining third-party web server certificates, see Importing Third-Party Web Server Certificates.