Load Distribution

In large enterprise installations, very large volumes of data are transferred, processed and historized by the system at any one time. This can place considerable load on the processing and memory resources of the component host machines. In such cases, the load placed on certain components needs to be distributed or balanced so that performance is maintained across all the different functionalities of the system.

Secondary Core

In systems where the Master Core host is under heavy load managing data flow whilst also servicing client calls for reading data from the system, the Secondary Core configuration allows the workload to be distributed so that these two task categories are broadly separated. The Secondary Core essentially works as a "Read Only" component that handles all client access to data (reads, history calls and so on). The Master Core host is therefore not burdened with such client calls, and users on the client side do not experience the performance slowdown they might otherwise see when requesting data from an overloaded Master Core component.

The Secondary Core is installed on a second host machine with the Master Core as its direct parent object. Server and Web Server components can be installed and configured on the same host to give access to connected clients. The Secondary Core is connected to the MongoDB repository, so it can handle history calls directly without routing them through the Master Core. The Secondary Core is able to see all the objects that the Master Core can see.

Secondary Core - Overview
Figure 1. Secondary Core - Overview

As the Secondary Core is designed primarily to work as a "Read Only" component, certain actions, such as creating or deleting objects, changing object configurations and writing data (for example, to dynamic properties), are prohibited. This extends to all interfaces connecting to Core, for example DataStudio, Web API and OPC server connections.
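
For illustration, the sketch below shows how this behaviour might be checked from a client application going through a Web API exposed on the Secondary Core host. The host name, port, endpoint paths and payload shapes are assumptions made purely for this example and are not the documented Web API contract; only the distinction between permitted reads and prohibited writes reflects the behaviour described above.

# Minimal sketch, assuming hypothetical Web API endpoints on the Secondary Core host.
import requests

SECONDARY_CORE = "http://secondary-core-host:8002"  # assumed Web API address

# Reading data through the Secondary Core is permitted.
read = requests.get(SECONDARY_CORE + "/api/read",
                    params={"path": "/System/Core/Examples/Item1"})
print("read:", read.status_code)

# Writing data through the Secondary Core is prohibited, so an error response is expected.
write = requests.post(SECONDARY_CORE + "/api/write",
                      json={"path": "/System/Core/Examples/Item1", "value": 42})
if not write.ok:
    print("write rejected as expected:", write.status_code)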

Performance counter information is not transferred from the Secondary Core to the Master Core. Therefore, depending on which Core you are connected to (for example, with DataStudio), you will see the performance counters relevant to that Core.

There is also an option for the Secondary Core to receive Realtime Data from the Master Core. However, this option is not usually recommended, as it can slow down the performance of the system.

Please see the How to create a Secondary Core section of the documentation for more information on how to install and create a Secondary Core in DataStudio.

Secondary Core - Local Core

A Secondary Core can also operate as a secondary to a Local Core. In this configuration (specified by the Core Role at installation, using either the Windows Installer or the command line), the Secondary Core has access to the custom data stores at the Local Core level.

Secondary Core - Local Core
Figure 2. Secondary Core - Local Core

Secondary Local Cores are installed and configured in a similar way to regular Secondary Cores. Please see the How to create a Secondary Core section of the documentation for more information.

Secondary Core Logging and Log Storage

Secondary Cores use the same log store settings as the Primary Master Core and logs are always forwarded to the Primary Master Core. Logs can only be queried from the Secondary Core if the LogBackend property is set to MongoDB. Please read more about configuring the System Logging Data Store for the Primary Master Core and the choice of MongoDB or SQLite as the backend log store.
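
As a rough illustration of querying logs directly when the Log Store Backend is MongoDB, the sketch below reads recent log records from a shared MongoDB log store. The connection address, database name, collection name and field names are assumptions made for this example; the actual log schema is defined by the system.

# Minimal sketch, assuming hypothetical log database, collection and field names.
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://mongodb-host:27017")  # assumed MongoDB address
logs = client["logging_data_store"]["logs"]           # assumed database and collection names

# Fetch the newest log records from the last hour.
since = datetime.now(timezone.utc) - timedelta(hours=1)
for record in logs.find({"timestamp": {"$gte": since}}).sort("timestamp", -1).limit(20):
    print(record.get("timestamp"), record.get("severity"), record.get("message"))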

If the Log Store Backend is set to use SQLite and there is a Secondary Master Core configured, then the Secondary Master Core will have its own database. In this case, logs stored on the Master Core will be inaccessible on the Secondary Master Core.

Log Store Configurations for Local and Secondary Cores
Figure 3. Log Store Configurations for Local and Secondary Cores

MongoDB and SQLite for Logging

It is important to note that using SQLite for the Logging Data Store may require more disk space than using MongoDB. Once a large number of log records (around 250,000) are stored in the Logging Data Store, the disk usage for SQLite starts to exceed the disk usage of MongoDB for the same number of records. The frequency with which logs are purged from the Logging Data Store can be configured using the General Log Retention property.
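
As a conceptual illustration of retention-based purging, the sketch below deletes log records older than a chosen number of days from a MongoDB log store. It reuses the assumed database, collection and field names from the previous sketch and only sketches the general idea; the system's own purging is controlled through the General Log Retention property.

# Minimal sketch of retention-based purging, using the same hypothetical names as above.
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

RETENTION_DAYS = 30  # illustrative retention period

client = MongoClient("mongodb://mongodb-host:27017")  # assumed MongoDB address
logs = client["logging_data_store"]["logs"]           # assumed database and collection names

# Remove all log records older than the retention period.
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
result = logs.delete_many({"timestamp": {"$lt": cutoff}})
print(f"purged {result.deleted_count} log records older than {RETENTION_DAYS} days")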

Query times vary depending on the configuration and hardware of the system. The overall query performance of the Logging Data Store should therefore not be significantly affected by the choice of MongoDB or SQLite as the Log Store Backend.