How to create a Redundant Data Flow

This page describes how Redundant Data Flows are realized in an existing multi-Core system configuration.

Typical Use Case Configuration

A typical use case for Redundant Data Flow involves the following components:

  • an external data lake for data archival

  • a Local Core component, responsible for forwarding data to the data lake during normal conditions

  • a second Core component, responsible for forwarding data to the data lake during abnormal conditions; here, the Master Core is used.

  • an Ingress Connector connected to the Master Core

  • an Egress Connector connected to the Master Core

  • an Ingress Connector connected to the Local Core

  • an Egress Connector connected to the Local Core

  • an external data source / endpoint

The data lake and the data source / endpoint are typically not system:inmation components.


 

A corresponding I/O Model may look like this:

Redundancy Relationships

Redundancy relationships define which objects act as backup alternatives to each other. They need to be established for the entry points to the system (the Ingress Connectors) as well as for the exit points from the system, for which Generic Time Series Buffers (GTSBs) or Generic Event Buffers (GEBs) are used.

The Redundancy Relationship For Incoming Data (Ingress Connectors)

To establish a redundancy relationship between the Ingress Connectors, both these Connectors need to be added to the same Connector Group on the Master Core.

To create a new Connector Group …​

1 In the I/O Model, select the Master Core.

2 In the Property Panel, expand the Redundancy Configuration section and open the Connector Group table property by clicking on the table icon.

 
3 Double-click in the Name column of the first empty row and provide an individual name for the new Connector Group.

4 In the Connectors column, select the 'Ingress' Connectors from the drop-down menu.

5 Click OK to close the table property.

6 Click Apply in the Property Panel to confirm the changes.

This property only needs to be configured on the Master Core object.
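The effect of a Connector Group can be pictured as a simple failover selection: data enters the system through one healthy Connector, with the other group member acting as a backup alternative. The following Python sketch models this idea; it is purely illustrative (the connector names and the ordered failover rule are assumptions, not product code).

```python
# Illustrative failover model for a Connector Group (not system:inmation code).
# Assumption: data is accepted via the first healthy connector in the group.

def active_connector(group, health):
    """Return the first healthy Connector in the group, or None if all failed."""
    for name in group:
        if health.get(name, False):
            return name
    return None

group = ["IngressLocal", "IngressMaster"]  # hypothetical connector names

# Normal conditions: the Local Core's Ingress Connector carries the data.
assert active_connector(group, {"IngressLocal": True, "IngressMaster": True}) == "IngressLocal"
# Abnormal conditions: the Local Core's connector is down, the Master takes over.
assert active_connector(group, {"IngressLocal": False, "IngressMaster": True}) == "IngressMaster"
```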

The Redundancy Relationship For Outgoing Data (GTSBs or GEBs)

The data is written into the data lake through Generic Time Series Buffer (GTSB) (or Generic Event Buffer (GEB)) objects configured on the Egress Connectors. To create a redundancy relationship between these buffer objects, they have to be added to the same dedicated Data Store Set on all Cores which are involved in the Redundant Data Flow.

Data Store Set configurations in which the same store is part of both a redundant set and a regular (non-redundant) set are not supported. Configuring a system in such a way leads to a non-functional data flow for data archived to the regular (non-redundant) Data Store Set.

Master Core Configuration

To create a new Data Store Set for the Master Core …​

1 In the I/O Model, select the Master Core.

2 In the Master Core’s Property Panel, expand the Data Store Configuration section and open the Data Store Sets table property by clicking on the table icon.

 

3 Double-click in the Name column of the first empty row and provide an individual name for the new Data Store Set dedicated to the Redundant Data Flow.

4 In the Data Stores column, select all GTSBs (or GEBs) which are involved in this Data Flow from the drop-down menu and adjust their priority with the Up and Down arrows.

The order of GTSB objects in this column determines the priority of the data flow through each object, with the highest priority first.

5 In the Primary Data Store column, select the GTSB (or GEB) object to receive data for the data flow via this Core component. Only this object will receive SaF data for items configured to archive data to this Data Store Set.

The buffer object specified as the Primary Data Store only receives data during abnormal conditions: if a higher-priority buffer (according to the order in the Data Stores column) is known to successfully receive and forward data, the buffer object selected as Primary Data Store will not receive this data.

If nothing is selected in the Primary Data Store column, the Data Store Set is a regular set, in which all Data Stores receive data. When configuring the Data Store Sets of a Master Core, only GTSB or GEB objects can be selected from its Local Cores; MongoDB store objects from Local Cores cannot be selected.

6 Click OK to close the table property.

7 Click Apply in the Property Panel to confirm the changes.
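The delivery rules described in steps 4 and 5 can be sketched as follows. The Python model below is illustrative only (the buffer names, the priority order, and the health-checking mechanism are assumptions, not product behavior): data goes to the highest-priority buffer known to be working, and the Primary Data Store receives it only when every higher-priority buffer fails.

```python
# Illustrative model of a redundant Data Store Set (not system:inmation code).
# 'stores' mirrors the Data Stores column in priority order (highest first);
# 'primary' is the Primary Data Store; 'healthy' is the set of buffers known
# to successfully receive and forward data.

def receiving_store(stores, primary, healthy):
    """Return the single store that receives a value in a redundant set."""
    for store in stores:
        if store in healthy and store != primary:
            return store    # a higher-priority buffer works, the primary is skipped
        if store == primary:
            return primary  # all higher-priority buffers fail: abnormal conditions
    return None

# Hypothetical Master Core set: the Local Core's buffer has the higher priority.
stores = ["GTSB_Local", "GTSB_Master"]

# Normal conditions: the highest-priority working buffer forwards the data.
assert receiving_store(stores, "GTSB_Master", {"GTSB_Local"}) == "GTSB_Local"
# Abnormal conditions: the Primary Data Store receives the data.
assert receiving_store(stores, "GTSB_Master", set()) == "GTSB_Master"
```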

Local Core Configuration

The Local Core also needs a Data Store Set configured.

To create a Data Store Set for the Local Core …​

1 Follow the same steps as for the Master Core: configure the Data Stores column exactly as on the Master Core, but set the Primary Data Store column to the GTSB (or GEB) on the Local Core's Egress Connector.

  • The Data Stores column needs to be configured exactly the same for the Master Core and the Local Core! Contradicting configurations (e.g. different priorities) will result in undefined behavior.

  • On the Local Core, the Primary Data Store column needs to specify the GTSB (or GEB) object on the Local Core's Egress Connector instead of the Master Core's.

High Availability and Data Duplication

During certain abnormal conditions, the default and the redundant data flows may forward data simultaneously for an extended period of time. The system favors high data availability and transport reliability over minimizing data duplication, since minimizing duplication would carry a risk of data loss.
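To make the duplication concrete: while both flows forward simultaneously, the data lake can receive the same value twice. A downstream consumer can remove such duplicates, for example by keying on item path and timestamp. The sketch below is illustrative only (the record layout and the key choice are assumptions, not part of the product):

```python
# Illustrative sketch: deduplicating records that arrived via both flows.
# Assumption: a record is uniquely identified by its item path and timestamp.

def deduplicate(records):
    """Keep the first record per (item, timestamp) pair, preserving order."""
    seen, unique = set(), []
    for rec in records:
        key = (rec["item"], rec["t"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

flow_local = [{"item": "/Plant/Temp", "t": 1, "v": 20.5}]
flow_master = [{"item": "/Plant/Temp", "t": 1, "v": 20.5}]  # same value, forwarded twice
assert len(deduplicate(flow_local + flow_master)) == 1
```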