System Data Stores

system:inmation provides different types of data stores depending on the type of data being archived. The variety of data types produced by different data sources means that a single data store design cannot ensure effective and efficient storage and retrieval for every type. The inmation MongoDB historian therefore uses separate data stores for different data types.

These data stores are known as System Data Stores and are created by default upon installation/integration of the system with MongoDB. The System prefix is used to distinguish these data stores from the Custom Data Stores which the user creates themselves as objects in the I/O Model.

From v1.96 on, System Data Stores are created with a unique, machine-generated name. This automatically generated database name is shown in the Generated Database Name property of the different Data Store compounds of the System object.
On systems upgraded from a pre-v1.96 version, the value of the Database Name property is used to refer to the database in MongoDB and the Generated Database Name property is empty.

Figure 1. Default MongoDB Data Stores

The configuration for the default data stores is controlled in the System Object.

Disabling MongoDB System Data Stores

In the Object Properties Panel for the System object, the System Data Store Settings > Disable MongoDB Stores property can be used to disable MongoDB for System Data Stores. While this property is enabled, System Data Stores do not monitor MongoDB server connectivity and do not change state. In addition, any Store and Forward (SaF) data held for System Data Stores is discarded automatically.

It is important to note that the Logging Data Store is independent of the System Data Store Settings > Disable MongoDB Stores property and can continue to use MongoDB to store log records.

Custom data stores continue to monitor MongoDB server connectivity, independent of the Disable MongoDB Stores property value.

Time Series Data Store

The time-series historization provides users with fast and secure access to data collected within any time interval and at any update rate. system:inmation makes use of aggregation concepts to remain reliable at all times and to answer requests in minimal time. All aggregates are calculated as specified within the OPC UA specification.
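To illustrate the kind of aggregate the OPC UA specification defines, the following is a minimal, hypothetical sketch of a stepped time-weighted average over raw samples. It is an illustration of the concept only, not inmation's implementation, and the function name and sample layout are assumptions:

```python
# Illustrative sketch (not inmation's implementation): a stepped
# time-weighted average in the spirit of the OPC UA Aggregates spec.
# Each raw sample holds its value until the next timestamp (stepped
# extrapolation); the average weights each value by the time it held.

def time_weighted_average(samples, start, end):
    """samples: list of (timestamp, value) pairs sorted by timestamp."""
    total = 0.0
    for i, (t, v) in enumerate(samples):
        seg_start = max(t, start)
        seg_end = min(samples[i + 1][0], end) if i + 1 < len(samples) else end
        if seg_end > seg_start:
            total += v * (seg_end - seg_start)
    return total / (end - start)

# A value of 10 for the first half of the interval and 20 for the
# second half averages to 15.
print(time_weighted_average([(0, 10), (5, 20)], 0, 10))  # 15.0
```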

Writing to MongoDB uses an efficient write mechanism that relies on periodic bulk writes. The bulk writes follow this principle: historical data is accumulated in a buffer before it is written to MongoDB. A write cycle begins as soon as a minimum data volume threshold is met (BufferMin property) or a latency timer has elapsed (Latency property). This ensures efficient writing for both sustained and peak archiving loads.
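The buffering principle described above can be sketched as follows. This is a hypothetical illustration of the flush logic, not the actual inmation internals; the class name, the flush callback, and the use of a point count for BufferMin are assumptions:

```python
import time

# Hypothetical sketch of the buffered bulk-write principle: points
# accumulate in memory and are flushed either when a minimum count
# (BufferMin) is reached or when the oldest buffered point has waited
# longer than the latency window (Latency).

class WriteBuffer:
    def __init__(self, buffer_min, latency_s, flush):
        self.buffer_min = buffer_min  # minimum points before a bulk write
        self.latency_s = latency_s    # max seconds a point may wait
        self.flush = flush            # callback performing the bulk write
        self.points = []
        self.first_ts = None

    def add(self, point, now=None):
        now = time.monotonic() if now is None else now
        if not self.points:
            self.first_ts = now
        self.points.append(point)
        if (len(self.points) >= self.buffer_min
                or now - self.first_ts >= self.latency_s):
            self.flush(self.points)
            self.points = []
            self.first_ts = None

written = []
buf = WriteBuffer(buffer_min=3, latency_s=5.0, flush=written.append)
buf.add("a", now=0.0); buf.add("b", now=1.0)   # below both thresholds
buf.add("c", now=2.0)                           # BufferMin reached: flush
buf.add("d", now=10.0); buf.add("e", now=16.0)  # latency elapsed: flush
print(written)  # [['a', 'b', 'c'], ['d', 'e']]
```

Either trigger alone forces a write, which is why the mechanism stays efficient under a steady trickle of points (latency trigger) as well as under bursts (volume trigger).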

Test and Production Archives

The Time Series Data Store contains 'Test' and 'Production' archives for storing time series data.

  • The 'Test' archive can be used when testing or experimenting with different objects (for example when creating custom Lua scripts) or when trying out new storage and retrieval techniques. In its default configuration the 'Test' archive data will be retained for 7 days and then purged.

  • The 'Production' archive is the full production data store used to historize all production time series data. In the default configuration the data is stored permanently as Raw history data. For other options, see the Purge Options page.
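The 7-day 'Test' archive retention amounts to dropping documents older than the retention window. A minimal, hypothetical sketch of that idea, with assumed field names and day-based timestamps for simplicity:

```python
# Hypothetical illustration of retention-based purging: any document
# whose timestamp falls outside the retention window is discarded.
# Field names ('ts') and the day-based clock are assumptions.

RETENTION_DAYS = 7

def purge(docs, now_days):
    """Keep only documents still inside the retention window."""
    return [d for d in docs if now_days - d["ts"] <= RETENTION_DAYS]

docs = [{"ts": 1, "v": 10}, {"ts": 8, "v": 20}, {"ts": 9, "v": 30}]
print(purge(docs, now_days=10))  # the day-1 document is dropped
```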

Event (A&E) Data Store

Alarms and events are historized in the Event Data Store according to the specification of the originating data source. The Event (A&E) data store can store OPC Classic and OPC UA produced alarms and conditions (OPC UA A+C). Historized events can be quickly retrieved using the Event View in DataStudio or the inmation APIs (Lua, Web API).

For more information on the storage schema for event documents, please visit the Lua geteventhistory function documentation.

Logging Data Store

The Logging Data Store is used to historize all log messages produced within the system. Log messages can be produced by any of the inmation components or objects within the system (including OPC datasources). Log messages can also be created using the Lua API with the syslib.log function.

By default, logging data is stored in the system MongoDB. Alternatively, SQLite can be selected in the Logging Data Store > Log Store Backend property of the System object. When the Log Store Backend is set to SQLite, log messages are stored in 'C:\inmation.root\work\core\db\log_<objectid>.db'.

If the Log Store Backend is set to use SQLite and there is a Secondary Master Core configured, then the Secondary Master Core will have its own database. In this case, logs stored on the Master Core will be inaccessible on the Secondary Master Core.

Log messages can be retrieved from the Logging data store, either MongoDB or SQLite, using either the Log Display in DataStudio or the Lua API syslib.getlogs function.

Logs are purged from the Log Data Store based on the General Log Retention property.

The Logging Data Store can still use a MongoDB store when MongoDB is disabled for System Data Stores, because its behavior is independent of the System Data Store Settings > Disable MongoDB Stores property.

MongoDB and SQLite for Logging

It is important to note that using SQLite for the Logging Data Store may require more disk space than using MongoDB: once a large number of log records (~250,000) are stored, the disk usage for SQLite starts to exceed that of MongoDB for the same number of records. The frequency with which logs are purged from the Logging Data Store can be configured using the General Log Retention property.

Query times depend on the configuration and hardware of the system; overall, query performance for the Logging Data Store should not be greatly affected by choosing either MongoDB or SQLite as the Log Store Backend.

Production Tracking Data Store

The Production Tracking Data Store is part of the wider Production Tracking functionality for electronic batch record creation, modification, historization and retrieval. Electronic batch records can be created in or added to the Production Tracking Data Store using the BatchTracker object, using the Lua API, or by importing from other batch record historians using the Batch Record DataSource.

Batch Records can be retrieved in DataStudio using the Calendar Display or by using the Lua API.

Custom Data Store

The Custom Data Store can be used to store any custom data that the user specifies. Unlike the other data stores, there is no set schema for this repository, so there are no data type restrictions when using this data store. The Custom Data Store is currently only accessible using the Lua API. For more details on using the Custom Data Store, please contact inmation Customer Service.

It should be noted that the System Custom Data Store is different from the Custom Data Store objects that can be created by the user in the I/O Model.

File Store

The File Store is used to store all files attached to objects in the system using the File Pointer Properties. These properties have the File or Filelist datatype, and the files are stored in the MongoDB repository using the MongoDB GridFS specification. In addition to the stored file, File Pointer Properties carry accompanying metadata that contains a link (or 'pointer') to the actual stored file as well as information related to the file (such as the file name). This metadata can also contain custom user data.
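The pointer-plus-metadata pattern GridFS enables can be sketched with the standard PyMongo gridfs module. This is a generic GridFS illustration, not inmation's File Store code: the connection URI, database name, and metadata keys are assumptions, and running it requires a live MongoDB server, so it is shown as an uncalled function:

```python
from pymongo import MongoClient
import gridfs

def store_attachment(uri, data, filename, owner_path):
    """Sketch: store a file plus pointer-style metadata via GridFS."""
    db = MongoClient(uri)["filestore_demo"]  # hypothetical database name
    fs = gridfs.GridFS(db)
    # fs.put returns the new file's _id, which plays the role of the
    # 'pointer' held by a File Pointer Property; the filename and any
    # custom metadata travel with the stored file.
    return fs.put(data, filename=filename,
                  metadata={"attachedTo": owner_path})
```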

Audit Trail Data Store

The Audit Trail Data Store is used to store all records relating to the Audit Trail functionality. This feature creates a secure time stamped record of every object configuration change, component launch, update and connection in the system. For information on how to enable Audit Trail and begin using the Audit Trail Data Store please visit the Administrator Hands On section of the documentation.

Big Table Data Store

The Big Table Data Store is a repository for especially large tables containing millions of rows of data. It is linked to properties which have the Big Table property capability enabled and is designed to avoid performance issues by replacing the need to store such large property tables in RAM during operations.

The use of the Big Table can only be specified by configuring the Big Table property capability on the respective object property.

Custom Time Series Data Stores

It is also possible to create custom data stores for time series data by using the Archive Selector objects. These objects can be created under the Core and are configured in the object itself rather than in the System object properties.

Custom Time Series Data Stores are then available for selection in the Archive Selector menu for I/O items and other time series data producing objects in the namespace.