Optimizing Performance

This section explains options for improving the performance of the INUBIT software.

General information

  • Due to time constraints, workflows are often developed and tested with smaller files than those used later in production. Therefore, take the actual or maximum file size into account right at the workflow design stage.

  • Measure the performance before and after every system optimization to determine reliably whether the optimization has actually improved performance.

Under high load, it is recommended to set up a virus scanner exclusion for the folder <install-installdir>/inubit/server/ibis_root/ of a standard Process Engine (PE) installation.

Optimizing the INUBIT Software

Parallelization

By default, the INUBIT software may execute Technical Workflows in several independent threads. The maximum number of threads that may be executed depends on the license and the configuration of the workflow thread pool. Additionally, several processes of a Technical Workflow may be executed in parallel.

Even when using the INUBIT software on a single-processor machine, the use of up to ten parallel threads increases message throughput and improves the machine’s utilization. On an average machine, a portion of the computing time often cannot be used because the software has to wait for relatively slow, external events such as user input or the network connection; this wait time can be used by other threads. However, if the machine is already used to capacity, for example by computation-intensive threads, additional parallel threads merely receive a proportional share of the computing time instead of each thread having to wait for another computation-intensive thread to finish.

Refer to

Application Server

Information on changes to increase the performance of the application server you are using may be obtained from the respective manufacturer’s technical documentation.

Performance analysis

The following overview shows each object and where it is displayed:

  • System load (hard disk, processor, memory)

    Reporting tab, as default reports

  • Execution times of workflows and modules

    You can customize the report settings, refer to Reporting and Business Process Monitoring

  • Available working memory

    Administration tab > General Settings > Administration > Server

  • Memory load of the INUBIT Workbench

    INUBIT Workbench > Status information

  • Memory load of the JVM

    Read out with a workflow containing the IS Configuration utility.

Optimizing Process Engine JVM

This section explains the options for optimizing the memory allocation and garbage collection of the JVM. Both factors are relevant to the performance of the Tomcat application server in high-load scenarios.

Optimizing the JVM performance should always be completed before upgrading the server hardware; the performance improvements that can be achieved are often underestimated. Linux systems are used successfully, especially in the high-end segment, because they allow a larger maximum JVM heap than comparable Windows systems.

Background

The JVM has a special area in the working memory, known as the heap, where Java objects are created and managed. One portion of this heap is permanently allocated; the other is only reserved virtually and is used only once the allocated memory is no longer sufficient.

The heap is cleaned up regularly by the garbage collector (GC) in order to free up unused memory. The work of the GC impairs performance: the less frequently a GC run starts, the less time it consumes and the better the application server performs.

Memory allocation and garbage collection can be optimized with parameters that are defined when the JVM starts.

  • -Xms specifies the size of the fixed, initially allocated memory space of the JVM.

  • -Xmx defines the maximum size of the heap. This size is the fixed allocated memory plus the virtually reserved memory. The maximum size also depends on the operating system.

    If the heap memory is too small, then garbage collections will occur frequently. During these periods, the application will respond slowly, depending on the GC mechanism selected. This will lead to poor performance and slower response times.

    The heap is divided into the following segments:

    • Young Generation (YG):

      For newly created objects, which are often short-lived and are therefore collected frequently as part of a simple GC run.

    • Tenured Generation (TG):

      Contains older objects that have already survived several GC runs in the young generation. A GC run in this segment is often time-consuming. The frequency of GC runs in the tenured generation is essentially determined by the size of the TG: the larger the TG, the less frequently the GC must search for unused objects to free up memory space.

    • Permanent Generation (PG):

      Segment where objects that are expected to be long-lived from the start, such as class objects, are created.

All of these together constitute the JVM memory.

A rule of thumb for dimensioning the working memory is:

Xmx + (40% of Xmx) + memory for the operating system. Example: 1 GB + 400 MB + 500 MB = 1.9 GB memory.

It is best to increase the Xmx value in 32 MB steps. Make sure you provide enough real, physical memory: swapping leads to drastic performance losses.
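For illustration, the heap parameters could be passed to the JVM as start options like the following (a minimal sketch; the file and variable in which these options are set, for example setenv.sh or a JAVA_OPTS-style variable, depend on your installation):

  # Sketch: initial heap 1 GB, maximum heap 2 GB
  # -Xms = fixed, initially allocated heap; -Xmx = maximum heap size
  JAVA_OPTS="$JAVA_OPTS -Xms1024M -Xmx2048M"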

The following table gives some examples for Xmx settings for given working memory sizes and several configurations.

  Xmx value     RAM size
  1024M / 1G    2048M / 2G
  2048M / 2G    4096M / 4G
  3072M / 3G    5120M / 5G
  5120M / 5G    8192M / 8G

Note that all optimization information depends directly on the JVM version used. If necessary, Virtimo AG is happy to provide support in optimizing your system.

Optimizing Workbench JVM

Usage

To optimize the JVM memory for the INUBIT Workbench, for example, if memory issues occur

Proceed as follows

  1. Shut down the Workbench.

  2. Open the start_workbench.sh or start_workbench.bat file in the <inubit-installdir>/client/bin directory.

  3. Go to the line containing: Xmx512M

  4. Increase the existing value Xmx512M (512 MB), for example to: Xmx1G (1 GB)

  5. Save the start_workbench file.

  6. Start the Workbench.

  7. If the Workbench runs into memory issues again, repeat steps 1 to 6, increasing the Xmx value by another 1 GB each time.
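For example, the relevant entry in start_workbench.sh might change as follows (a sketch; the surrounding start options differ per installation, only the Xmx value is adjusted):

  # before:  ... -Xmx512M ...
  # after:   ... -Xmx1G ...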

Optimizing Network and Hardware

Another factor influencing system performance is the capacity of the connections between the INUBIT Workbench and the INUBIT Process Engine as well as between the INUBIT Process Engine and third-party systems, for example databases or partner systems.

Hardware

The hardware equipment also has significant influence on the performance of a system. In principle, a faster processor, more working memory and a faster hard disk drive will increase system performance.

The following rule of thumb may be applied when dimensioning the working memory: in a workflow with non-streaming-capable modules, messages and variables are retained in memory during processing, so a message of about 100 MB requires a multiple of that size in working memory. If several processes of a Technical Workflow are executed in parallel, the working memory must be dimensioned for the maximum number of processes running in parallel.
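As a worked example (the process count is illustrative): if a non-streaming workflow holds a message of about 100 MB plus the variables derived from it, and four such processes run in parallel, the memory needed for message processing alone is already several times 4 × 100 MB, in addition to the base memory of the INUBIT Process Engine.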

The following module types are capable of streaming. Refer to:

Optimizing Memory Settings of Remote Connector and INUBIT

This section explains the recommended settings for optimizing the memory allocation of Remote Connector and INUBIT based on file size.

Currently, files larger than 1 GB are not supported.

The following table gives the recommended Xmx values (in MB) to be set in these files:

  • Remote Connector: start_rc.[bat/sh]

  • INUBIT: setenv.[bat/sh]

  File size             Remote Connector   INUBIT    Remarks
  < 250 MB              2048 MB            4096 MB   Default values
  >= 250 MB, < 500 MB   4096 MB            4096 MB   Increase the Remote Connector memory to 4096 MB.
  >= 500 MB, < 1 GB     8182 MB            8182 MB   Increase the Remote Connector and INUBIT memory to 8182 MB.
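For example, for files between 250 MB and 500 MB the maximum heap would be set as follows (a sketch; where exactly the option appears in start_rc.[bat/sh] and setenv.[bat/sh] depends on your installation):

  # Remote Connector (start_rc.sh / start_rc.bat): set the maximum heap to -Xmx4096M
  # INUBIT (setenv.sh / setenv.bat):               keep the maximum heap at -Xmx4096M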

Benchmark Workflows

Usage

You can use benchmark workflows to compare the hardware on which the INUBIT software is to be used with a reference hardware and to determine the following values:

  • Absolute number of workflows executed

  • Average execution time of the workflows

  • CPU load

How it works

The benchmark workflows supplied are each executed once, five times, and ten times in parallel within a pre-defined time period (default: 3 minutes). The number of completed workflow runs is compared with the values stored for the reference hardware.

One factor, the INUBIT benchmark index, specifies the extent to which the tested hardware deviates from the reference hardware. The reference value is 10 and corresponds to 100 %. Deviations downwards or upwards show that fewer or more workflows have been processed on the tested hardware than on the reference hardware.
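For example, assuming the index scales proportionally with the number of completed workflow runs, an index of 12 would indicate roughly 20% more runs than on the reference hardware, and an index of 8 roughly 20% fewer.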

Reference hardware

You will find the features of the reference hardware used in the report file under the ServerTest entry with the attribute host='benchmarkReferenceServer'.

Own benchmarks

Alternatively, you can create your own benchmark workflows and use them in your own benchmark suites. You can use your own reference hardware to determine your own reference values and determine your own benchmark index.

The indexes determined from your own reference values are not comparable with the INUBIT benchmark index included in the scope of delivery.

Preparing and Executing Benchmark Workflows

Proceed as follows

  1. Import the diagram with the benchmark workflows:

    <inubit-installdir>/benchmark/benchmark.diagramgroup.zip

  2. Activate all the benchmark workflows.

    The Benchmark Scenario 1 and the Benchmark Scenario 2 contain licensed modules. With a basic license, these modules and the relevant scenarios cannot be executed.

  3. Go to directory <inubit-installdir>/benchmark.

  4. Open a command line window and set the width of the window to at least 140 characters so that only the overview line is refreshed.

  5. Start the benchmark script suitable for your system:

    • Unix/Linux: benchmark.sh

    • Windows: benchmark.bat

      When a workflow is started for the first time, the server is optimized for that workflow. Therefore, in a warm-up phase, the benchmark workflows are executed fifty times before the actual benchmark is started.

  6. Display the benchmark report.
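Steps 3 to 5 on a Unix/Linux system might, for example, look like this:

  cd <inubit-installdir>/benchmark
  ./benchmark.sh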

Calling Syntax of the Benchmark Script

The values that you set when calling the script overwrite the values in the configuration file <inubit-installdir>/benchmark/benchmark.properties.

Call up

Start the benchmark script from the command line with the following call:

benchmark [OPTIONS] [-server <server_name>] [-r <report_file>]

  • -b|-benchmark <benchmark Suite>

    Benchmark suite to be executed, for example: -b fast

  • -c <configuration file>

    Path and file name of the configuration file

  • -h|--help

    Display help

  • -port <port number>

    Port via which the INUBIT Process Engine can be reached, for example: -port 9000

  • -pr <report file>

    Report file to be output

  • -r <report file>

    Path and file name of a report file to be updated

    The report file must already exist. If you omit the -r option, the report file <inubit-installdir>/benchmark/benchmarkReport.xml is created.

  • -rt|-runtime <duration>

    Maximum duration for the benchmark suite in minutes, for example: -rt 10

  • -server <IP address or hostname>

    IP address or DNS name of the computer on which the INUBIT Process Engine is running (default: localhost), for example: -server 192.168.0.223

  • -t|-thread [<Thread1> <Thread2> <Thread3> …]

    Number of parallel threads or a group of threads, for example: -t [1 5 10]
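For example, to run the fast suite for 10 minutes against a Process Engine on another host with thread groups of 1, 5, and 10 (the values are taken from the option examples above and are illustrative):

  ./benchmark.sh -b fast -server 192.168.0.223 -port 9000 -rt 10 -t [1 5 10]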

Configuring the Benchmark Suite and the Benchmark Procedure

Prerequisites

You have prepared and executed the benchmark workflows.

Proceed as follows

  1. Open the benchmark.properties file in the <inubit-installdir>/benchmark directory.

    This file is created during the first start of the benchmark script.

  2. Adjust the following parameters:

    • benchmark.config.thread=[1 5 10]

      Number of workflows executed in parallel for each of the configured runs.

    • benchmark.config.runtime=3

      Runtime for the benchmark of a workflow in a specific configuration, in minutes

    • benchmark.config.server=localhost

      Host name or IP address of the system with the INUBIT Process Engine

    • benchmark.config.port=8000

      Port for the communication with the INUBIT Process Engine

    • benchmark.config.callTimeout=10

      Call timeout in seconds: if the benchmark suite could not be started within this period, the script terminates with an error message.

    • benchmark.suite.full

      Suite with the names of the HTTP connectors of the benchmark workflows to be executed (option -b full)

    • benchmark.suite.normal

      Standard suite with the names of the HTTP connectors of the benchmark workflows to be executed

    • benchmark.suite.fast

      Suite with the names of the HTTP connectors of the benchmark workflows to be executed (option -b fast)

  3. Below the section List of Benchmark sets, modify an existing benchmark suite or create a new benchmark suite based on the following sample:

    • Name of the benchmark suite

      benchmark.suite.<Name of the suite>

    • Names of the HTTP connectors of the workflows to be executed, separated by a comma and a space:

      <HTTP connector1>[, <HTTP connector2> …]

      Example: benchmark.suite.suite1=http_conn_twf1

  4. Save your changes.

  5. Run the benchmark test with the changed configuration file.
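Taken together, the relevant entries in benchmark.properties might then look like the following (the second connector name is an illustrative placeholder):

  benchmark.config.thread=[1 5 10]
  benchmark.config.runtime=3
  benchmark.config.server=localhost
  benchmark.config.port=8000
  # custom suite; the names must match the HTTP connectors of your workflows
  benchmark.suite.suite1=http_conn_twf1, http_conn_twf2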

Creating Your Own Benchmark Workflows

Proceed as follows

  1. Create a new workflow or use an existing workflow.

  2. Insert an HTTP connector and connect it with the first module of the workflow.

  3. Create a new benchmark suite.

  4. Enter the name of the workflow as a parameter value.

  5. Save the modified configuration file under another name.

  6. Start the benchmark script with the modified configuration file and with a modified report file.
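A possible call for step 6 might look like this (the configuration file and suite names are placeholders; add -r <report file> if you want to update an existing report file, see the option description above):

  ./benchmark.sh -c myBenchmark.properties -b mysuite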

Setting Your Own Reference Values

Proceed as follows

  1. On the reference hardware, execute the benchmark suite that you created from your own benchmark workflows.

    Refer to

  2. Open the report file.

  3. Remove the ServerInfo element and the TestCase elements under the ServerTest entry with the attribute host='benchmarkReferenceServer'.

  4. Copy the ServerInfo element and the TestCase elements under the ServerTest entry with the attribute host='<Name of the reference server>'.

  5. Insert the copied entries under the ServerTest entry with the attribute host='benchmarkReferenceServer'.

  6. Save the report file under a new name.

  7. Transfer the new report file and the related configuration file to the systems whose INUBIT benchmark index you want to determine.

  8. Start the benchmark script with the modified configuration file and the modified report file.

  9. Open the benchmark report.

The index determined indicates how your current hardware deviates from your reference hardware.
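The relevant part of the report file has roughly the following structure (a sketch based on the element names used in steps 3 to 5; the actual file contains further elements and attributes):

  <ServerTest host='benchmarkReferenceServer'>
      <!-- replace the ServerInfo and TestCase elements here with the copies
           taken from the ServerTest entry of your own reference server -->
      <ServerInfo ... />
      <TestCase ... />
  </ServerTest>
  <ServerTest host='<Name of the reference server>'>
      <ServerInfo ... />
      <TestCase ... />
  </ServerTest>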

Displaying the Benchmark Report

Proceed as follows

  1. Go to directory <inubit‑installdir>/benchmark.

  2. Start the benchmark script with the option -pr (display selected report).

    • Unix/Linux: benchmark.sh -pr [report file]

    • Windows: benchmark.bat -pr [report file]

      You only have to specify the report file (default: benchmarkReport.xml) if you specified another path and/or file name when starting the benchmark suite.

  3. Open the report file in the workbench editor.