BPC Cluster

A BPC Cluster consists of an OpenSearch Cluster and a network of Karaf installations.

Illustration 1. BPC Cluster

Preparations

On the available servers, install the desired number of OpenSearch nodes and Karaf instances that are to form the BPC cluster, following the installation instructions. The OpenSearch cluster should consist of at least 3 nodes; the Karaf cluster should consist of at least 2 instances.

OpenSearch Cluster

The necessary settings for setting up an OpenSearch Cluster are listed below. Changes to these settings require a restart of the affected Karaf and OpenSearch installations.

Further information on setting up an OpenSearch Cluster can be found in the OpenSearch documentation.

In an OpenSearch cluster, if the FS (file system) variant is to be used as the backup repository type, the target directory (opensearch_data/bpc_backup) must be located on a shared file system.

If a shared file system is not readily available, S3 can also be defined as the backup repository type for an on-premise installation by installing and using an S3-compatible solution such as MinIO. See MinIO as S3 backup for OpenSearch.

The backup repository to be used is defined via the Core Services setting Core_BackupRepository. The value of this setting is passed to OpenSearch unchanged; the available configuration options can be found in the OpenSearch documentation.
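For illustration, the value for an FS repository could look like the following snapshot repository definition (the location path is taken from the example above; for S3, the type would be s3 with the corresponding bucket settings):

{
  "type": "fs",
  "settings": {
    "location": "opensearch_data/bpc_backup"
  }
}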

OpenSearch-ClusterName

OpenSearch uses a cluster name to identify related nodes. By default, OpenSearch only binds to local network interfaces, so instances installed on different servers cannot see each other and can therefore be operated with the same cluster name without complications. However, if several separate instances are operated on one server, or if the listeners are deliberately bound to more than just localhost (e.g. to form clusters) and the resulting clusters are to remain separate from each other (e.g. several stages), the cluster names must be changed.

This is done in the file OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml:

cluster.name: opensearch_virtimo
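If, for example, two stages are operated on the same host, separate cluster names could look like this (the names are examples; each entry belongs in the opensearch.yml of the respective stage):

# opensearch.yml of the test stage
cluster.name: opensearch_virtimo_test

# opensearch.yml of the production stage
cluster.name: opensearch_virtimo_prod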

OpenSearch-NodeName

Giving each node in the cluster a descriptive name is optional but recommended. If no name is set, OpenSearch assigns a machine-generated name, which makes monitoring and troubleshooting the node more difficult.

This is done in the file OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml:

node.name: bpc-opensearch-node1

OpenSearch-NodeRoles

Different roles can be assigned to the nodes in the cluster. By default, a node is assigned all roles: cluster_manager, data, ingest, remote_cluster_client.

If many nodes (> 3) are available, you can, for example, define a dedicated cluster manager node. Such a node does not store any data and takes care solely of managing the cluster.

This is done in the file OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml:

node.name: bpc-opensearch-cluster_manager
node.roles: [ cluster_manager ]

The following is the default setting:

node.roles: [ cluster_manager, data, ingest, remote_cluster_client ]
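Conversely, a node that is only supposed to hold and process data could be restricted to the corresponding roles (a sketch; the node name and role combination are examples):

node.name: bpc-opensearch-data1
node.roles: [ data, ingest ]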

OpenSearch-NetworkInterface

If you want OpenSearch to listen on more than just the local network interfaces, this can also be configured in the file OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml (e.g. by setting it to _global_):

network.host: _global_

IP addresses, host names or special placeholders can be set here. The possible special placeholders are:

  • _local_ → Only local interfaces

  • _[networkInterfaceName]_ → a specific network interface, e.g. eth0 → _[eth0]_

  • _site_ → Site-local addresses of the host

  • _global_ → Globally assigned addresses of the host

A combination of these is also possible:

network.host: ["192.168.1.3", "unserbpchost.virtimo.net", "_[tun0]_"]

If an IP address is set under network.host, it must also be set in karaf/etc/de.virtimo.bpc.core.cfg under de.virtimo.bpc.core.opensearch.host. "localhost" can then no longer be used.
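A sketch of the corresponding entry in karaf/etc/de.virtimo.bpc.core.cfg, using the IP address from the example above:

de.virtimo.bpc.core.opensearch.host=192.168.1.3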

OpenSearch-DiscoveryHosts

For a node to make contact with the other cluster nodes, at least one other node (preferably all others) must be made known to it.

This is done in the file OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml:

discovery.seed_hosts: ["<opensearch-1>", "<opensearch-2>", "<opensearch-3>"]
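A port can optionally be appended to each host; if it is omitted, OpenSearch uses the transport port (9300 by default). A sketch with example host names:

discovery.seed_hosts: ["bpc-opensearch-node1:9300", "bpc-opensearch-node2:9300", "bpc-opensearch-node3:9300"]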

OpenSearch-InitialClusterManagerNode

When the cluster is started for the first time, the initial cluster manager nodes must be defined. The names listed must match the node.name values of the respective nodes.

This is done in the file OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml:

cluster.initial_cluster_manager_nodes: ["bpc-opensearch-node1"]
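According to the OpenSearch documentation, this setting is only evaluated the first time the cluster forms; after the initial bootstrap it can be removed again.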

Firewall

If a firewall is in use, at least the following connections must be open (a sketch with example port settings follows the list).

  • Karaf → OpenSearch : HTTP Port (OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml | http.port)

  • Karaf → OpenSearch : WebSocket Port (OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml | os-bpc-plugin.websocket.port)

  • OpenSearch → OpenSearch : Transport Port (OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml | transport.tcp.port)
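A sketch of the corresponding entries in OPENSEARCH_CONFIG_DIRECTORY/opensearch.yml; 9200 and 9300 are the OpenSearch defaults, while the WebSocket port value is purely an example:

http.port: 9200
transport.tcp.port: 9300
os-bpc-plugin.websocket.port: 9201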

Karaf Cluster

For each Karaf, the URLs of the individual nodes of the OpenSearch cluster are stored in the BPC configuration file, separated by commas, using the option de.virtimo.bpc.core.opensearch.hosts. This way all nodes are known and there are no problems if one of them is unreachable.
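Such an entry could look like this, assuming three nodes reachable via HTTPS on the default HTTP port (host names are examples):

de.virtimo.bpc.core.opensearch.hosts=https://opensearch-1:9200,https://opensearch-2:9200,https://opensearch-3:9200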

