Cluster manager

The use of the cluster manager requires a license. A separate license must be purchased for each INUBIT instance within a cluster.

Usage

With the Virtimo Cluster Manager, several INUBIT instances can be merged into a cluster, and instances can be added or removed at runtime.

In this way, for example, high availability of an INUBIT system can be achieved, or high loads can be handled in a targeted manner.

Prerequisites

The following components are necessary for the complete implementation of a clustered system:

  • INUBIT instances: at least 2 INUBIT installations, each with a license for using the cluster manager

  • Network infrastructure: INUBIT instances must be able to communicate with each other via multicast.

  • Load balancer: distributes the incoming requests to the INUBIT instances of the cluster

The cluster manager controls which nodes belong to the cluster and who is the current master. The load balancer takes care of the distribution of incoming requests to the individual INUBIT instances in the cluster.

This separation of responsibilities must be taken into account during configuration.

Multicast

Clustering is based on multicast, i.e. a special network address range and port. All nodes configured with the same multicast address and port join together to form a cluster.
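As a sketch, such a multicast setting in clusterManagerConfig.xml might look as follows. The element names here are illustrative assumptions only; the authoritative option names are documented inside the file itself (see the Configuration section).

```xml
<!-- Illustrative sketch, NOT the real schema: the actual option names are
     documented inside clusterManagerConfig.xml itself. All nodes that should
     form one cluster must share the same multicast address and port. -->
<clusterManager>
    <!-- 239.0.0.0/8 is the administratively scoped multicast range -->
    <multicastAddress>239.255.0.1</multicastAddress>
    <multicastPort>45588</multicastPort>
</clusterManager>
```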

Architecture

A cluster always consists of a master node and any number of backup nodes. If INUBIT instances are to work together in a cluster, this must be taken into account when implementing the technical workflows.

Master node

The master node takes over all tasks that are to be carried out within the system on a single node, e.g. scheduled workflows.

Backup node

All nodes (master and backup) take over the tasks that can be technically executed in parallel.

An additional workflow can be used to check whether the current INUBIT instance is the master or a backup node.

Application status

The cluster manager runs on every INUBIT instance and independently checks at regular intervals whether the instance is still available. If the INUBIT instance can no longer be reached, it is logged off from the cluster. Deregistering the instance from the load balancer is not the task of the cluster manager.

The status of an instance can additionally be checked via the INUBIT REST API.

Configuration

Activation of the cluster manager

The cluster manager can be activated via the file <inubit-installdir>/server/ibis_root/conf/ibis.xml:

<Properties>
     ...
     <!-- Cluster Manager configuration -->
     <Property name="ActivateClusterManager" type="Boolean">false</Property>
</Properties>

  1. Stop Process Engine

  2. Set property ActivateClusterManager to true

  3. Start Process Engine
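Step 2 can also be scripted. The following is a minimal sketch assuming GNU sed and the exact property line shown above; the function name is an illustrative assumption, and editing the file manually works just as well.

```shell
# Sketch: set ActivateClusterManager to true in ibis.xml (step 2 above).
# Assumes GNU sed (on BSD/macOS use: sed -i '').
enable_cluster_manager() {
  conf="$1"  # e.g. <inubit-installdir>/server/ibis_root/conf/ibis.xml
  sed -i 's|\(<Property name="ActivateClusterManager" type="Boolean">\)false\(</Property>\)|\1true\2|' "$conf"
}
```

Run this only while the Process Engine is stopped, as described in the steps above.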

Configuration of the cluster manager

The actual configuration details for the cluster manager are controlled via the file <inubit-installdir>/server/ibis_root/conf/clustermanager/clusterManagerConfig.xml. All the settings required for network communication are made here.

All possible configuration options are documented in the file.

Defining the master node

The priority setting determines which node should be active as the master in the cluster. Specify the highest integer value for exactly one instance in the configuration. As soon as this instance joins the cluster, it is set as the master.

If the master fails, a new master is selected based on the priority of the other instances. This is the instance with the currently highest priority.

Configure a different priority for each INUBIT instance in the cluster. This is the safest way to control the choice of the master node.
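For a two-node cluster, the priority settings might be sketched like this; the element name is an illustrative assumption, and the real option name is documented in clusterManagerConfig.xml.

```xml
<!-- Illustrative sketch only; the actual option name is documented in
     clusterManagerConfig.xml. -->

<!-- clusterManagerConfig.xml on node A (preferred master): highest value,
     so node A becomes master as soon as it joins the cluster -->
<priority>100</priority>

<!-- clusterManagerConfig.xml on node B (backup, takes over if A fails) -->
<priority>50</priority>
```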

Checking for the master node

A file is stored in the file system on the instance that is active as the master. The path to the storage can be adjusted via the configuration. An additional workflow checks for the existence of this file. This can be used to determine at runtime whether the respective INUBIT instance is the master node or not.

The workflow can be found at: <Installer_dir>/inubit/server/ibis_root/conf/clustermanager/inubit-cluster-manager-workflow.zip.
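The shipped workflow implements the file check described above. As a plain-shell sketch, the logic amounts to the following; the function name and example path are assumptions, since the real marker path comes from clusterManagerConfig.xml.

```shell
# Sketch of the master check: the cluster manager writes a marker file on
# the master node; its path is configured in clusterManagerConfig.xml.
# The function name and the example path below are illustrative assumptions.
node_role() {
  marker_file="$1"   # configured path of the master marker file
  if [ -f "$marker_file" ]; then
    echo "master"
  else
    echo "backup"
  fi
}

node_role "/opt/inubit/server/ibis_root/cluster-master.marker"
```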

Enable custom logging

To redirect the logging output of the cluster manager into a separate file, proceed as follows:

  1. Log in with the Workbench

  2. Go to the Administrator tab and select General Settings

  3. In the tree navigate to Logging > Trace

  4. On the right-hand panel, select Custom server trace and click the "…" button

  5. The Logger dialog opens

  6. Right click to open the context menu and select Add

  7. Enter the name de.virtimo.inubit.clustermanager, choose Rolling file logger and click OK

  8. If you want to change the log file name, select the newly added entry in the table and click the "…" button

  9. Select "Output file" and adjust the value to something like $ibis.root.directory$/log/clustermanager.log

  10. Close the "Rolling file logger" dialog with OK

  11. Close the "Logger" dialog with OK

  12. Save the general settings by clicking on the save icon in the global toolbar