Configuration of History Transfer Objects
This section shows a general example of how to configure the History Transfer objects to make a simple transfer of historical data from tags in an external historian to a JSON file export.
This example demonstrates the general approach to setting up a history transfer; the configuration can be adapted to work with different historian systems.
Prerequisites and Example Setup
This example requires a connection to historical data stored in an external historian. In this example, we will connect to another inmation Core through the inmation Server and transfer historical data from there. To do this you will need two host machines (or VMs) with system:inmation fully installed on both (including all components and MongoDB), with one machine designated as the "Local" system and the other as "External".
-
On the "External" host, install a second Connector instance using the Node Installer Setup. Also configure the DemoData set of example Data Items from the MassConfig file to give you a set of example tags with a year’s worth of historical data.
-
On the "Local" host machine, create a new Connector object and use the IP address and port of the second Connector instance installed on the "External" machine. Also create a local Connector object using the IP and port settings of the "Local" Connector instance. You will also need to remove the "External" inmation Server from the Connector’s Datasource Blacklist so that the Connector client can discover and connect to the inmation Server.
To browse the Server you might also have to add an NT Authority User object under the "so" profile in the Access model of both the "Local" and "External" systems. It should have "NT AUTHORITY" set as the Authenticating Domain and "SYSTEM" as the Account Name.
You can also try out this example using a different external historian (Aspen or OSI PI) to transfer data from. The example will provide options for alternative interface configuration at the appropriate point.
Creating and Configuring a History Controller
The starting point for configuring a history transfer chain is creating a History Controller object underneath the "Local" Core object. The History Controller is the central point from where the whole history transfer process can be overseen, including the creation and configuration of History Transporter and History Sink objects. Although Transporter and Sink objects can be created and configured individually, using a Controller as the central point of configuration means configuration changes will be passed down to other objects automatically and other History Transporter objects can be added later.
-
Connect to the "Local" Core with DataStudio and make sure you can browse the inmation Server on the "External" system through the second Connector instance. Check that you can subscribe to the DemoData > Process Data I/O items and see the values changing (this step is not necessary if you plan to transfer data from another external historian system).
Figure 1. Example Namespace - Browsing the "External" I/O Model
-
Select and then right click on the "Local" Core object and select
from the context menu. When the Create Object wizard opens, enter a name for the controller in the Object Name field and click Create to create the History Controller.
Figure 2. History Controller Wizard
-
Select the History Controller in the I/O model and then open the Common property compound in the Object Properties panel. For the moment, we will only change the Recurrence property to "Each minute" so that you can troubleshoot any errors in this example setup more quickly. However, it is important to note some of the other settings here:
-
Processing Mode: This switches the Controller from "Configuration" mode to "Operation" mode and essentially acts as an "on" switch to begin the History Transfer process. This property is only switched to "Operation" after configuration is finished.
-
Controller Status: This read-only table is populated once transfer begins and gives detailed information on the progress of the transfer, tag by tag.
-
Reset Status: This option will clear the Controller Status table.
-
Reload: This will reload the code behind the Controller without resetting the Controller Status table. This is useful if the transfer process freezes or halts.
Figure 3. History Controller - Common Properties
-
Click Apply to commit the change, then move to the next step.
Transporter Configuration
-
Click the + button by the Transporter Configuration property compound to add a Transporter.
The History Controller can be used to oversee history transfer options for multiple History Transporters (transporting history from different interfaces and sources). Clicking the + button again will add another Transporter.
Figure 4. History Controller - Transporter Configuration
-
This example will transfer data from the "External" historian using the OPC HDA protocol, so we will choose inmation.OpcServer.1 as the target Datasource by clicking on the … button and selecting it from the Object Picker; click OK to confirm the selection. If you want to transfer data from a PI Server using a Transporter with an alternative interface, please visit the instruction page here.
Figure 5. History Controller - Transporter Configuration, pick Datasource
-
Enter a unique Object Name for the Transporter. When configuration is eventually finished, the Transporter will be created beneath inmation.OpcServer.1. If you also wish to create a Folder/Node hierarchy to place the Transporter in, it can be entered in the Folder Hierarchy field.
Figure 6. History Controller - Transporter Configuration, Name
-
Click Apply in the Object Properties panel then go to the Tag Configuration section.
Tag Configuration
Click on the Tag Configuration … button to open the Tag Configuration table. This table is populated with the tags from the external historian that you want to transfer data from.

Brief explanations of the columns are below, more detailed descriptions are available here.
-
ExternalID: the External IDs of the items from which the historical data will be transferred (mandatory)
-
Aggregate: the required aggregate for the data (only Raw History currently supported)
-
NodeName: the folder that will be created to house the newly created I/O Item/DataHolder items that the data will be transferred to.
-
ObjectName: the name of the object that will be created to transfer the historical data to (mandatory).
-
Expected Frequency: the expected frequency of the time points in the external historian (e.g. 5s would be every 5 seconds)
-
Target ID: the unique identifier in the destination historian system in case the item is used with a history sink function.
For this example, the ExternalID should match the Item ID of the tag in the "External" system, for example:

To populate the table:
-
Copy the ExternalID for each external tag you wish to transfer data for.
-
Enter an ObjectName for each tag row. This will be the name of the newly created I/O Item/DataHolder that will historize the transferred data in the "Local" system.
-
Optionally, a NodeName can be added. In this example the NodeName is used to create folders to house the different data types.
The completed Tag Configuration table looks like this:

Click OK to close the table and click Apply in the Object Properties panel to confirm.
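Conceptually, each row of the Tag Configuration table maps one external tag to a new local item. The following Python sketch is illustrative only (it is not a product API): the column names mirror the table above, and the example Item IDs are hypothetical. It simply checks that the mandatory columns are filled in before the configuration is applied:

```python
# Illustrative sketch of Tag Configuration rows (not a product API).
# ExternalID and ObjectName are mandatory; NodeName, Aggregate,
# ExpectedFrequency and TargetID are optional.

MANDATORY = ("ExternalID", "ObjectName")

def validate_rows(rows):
    """Return a list of (row_index, missing_column) problems."""
    problems = []
    for i, row in enumerate(rows):
        for col in MANDATORY:
            if not row.get(col):
                problems.append((i, col))
    return problems

rows = [
    # Hypothetical Item IDs standing in for tags in the "External" system.
    {"ExternalID": "/System/Core/Examples/DemoData/ProcessData/DC4711",
     "ObjectName": "DC4711", "NodeName": "ProcessData",
     "Aggregate": "Raw History", "ExpectedFrequency": "5s"},
    {"ExternalID": "/System/Core/Examples/DemoData/ProcessData/FC4711",
     "ObjectName": "FC4711", "NodeName": "ProcessData"},
]

print(validate_rows(rows))  # an empty list means all mandatory columns are set
```

A row missing its ObjectName, for instance, would be reported before any transfer is attempted.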
Processing Mode
Two different processing modes can be selected for the Transporter. These are explained below.
-
Subscription: The historical data is transferred to newly created I/O items that historize the data and also subscribe to new values from the original tags.
-
Continuous: The historical data will be transferred to newly created DataHolder items that historize the data. New values from the original tag are transferred via history calls (known as near time data).
If you wish to create the new items within the namespace of the Datasource/External Historian, both "Subscription" and "Continuous" mode can be selected as both I/O items and DataHolder objects can be created underneath a Datasource object. However, if you wish to create the new items outside of the namespace of a Datasource, you should use "Continuous" mode as only DataHolder object creation is supported beneath other objects, for example Connectors.
In this example we will create the objects within the Datasource namespace so will select "Subscription".
Click Apply in the Object Properties panel to confirm changes.
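The placement rule above can be summarized in a small helper (an illustrative sketch, not part of the product): I/O items can only be created beneath a Datasource, so only "Continuous" (DataHolder) mode is valid elsewhere, for example beneath a Connector.

```python
# Sketch of the mode-selection rule described above (illustrative only).
# "Subscription" mode creates I/O items, which require a Datasource
# parent; "Continuous" mode creates DataHolders, which can also live
# beneath other objects such as Connectors.

def allowed_modes(parent_type):
    if parent_type == "Datasource":
        return ["Subscription", "Continuous"]
    return ["Continuous"]

print(allowed_modes("Datasource"))  # ['Subscription', 'Continuous']
print(allowed_modes("Connector"))   # ['Continuous']
```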
Other History Transporter options
The other History Transporter options regarding performance and control of the number of data transfer points will be left as default but more information about these options can be found here.
For this example, change the Depth property to "*-30 days" to transfer the last 30 days of history for the selected tags. For the purposes of demonstration, we will also change the Recurrence property to "Each minute" so the results of history transfer can be seen and evaluated more quickly.
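The Depth value is a relative time expression, with "*" meaning "now". As a rough sketch of how such an expression resolves to a start timestamp (the syntax handling below is a simplified assumption for illustration, not the product's actual parser):

```python
import re
from datetime import datetime, timedelta, timezone

# Illustrative resolver for relative time expressions like "*-30 days"
# ("*" = now). A simplified assumption of the syntax, not the actual
# product parser.

UNITS = ("days", "hours", "minutes")

def resolve_depth(expr, now=None):
    """Resolve an expression like '*-30 days' to an absolute start time."""
    now = now or datetime.now(timezone.utc)
    m = re.fullmatch(r"\*\s*-\s*(\d+)\s*(days|hours|minutes)", expr.strip())
    if not m:
        raise ValueError(f"unsupported expression: {expr!r}")
    amount, unit = int(m.group(1)), m.group(2)
    return now - timedelta(**{unit: amount})

now = datetime(2024, 1, 31, tzinfo=timezone.utc)
print(resolve_depth("*-30 days", now))  # 2024-01-01 00:00:00+00:00
```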
History Sink Configuration
The History Sink is the other end of the history transfer chain that allows you to also write history data to an external historian or system. In this example we will configure the History Sink to export the historical data to disk in JSON file format. If you wish to configure the History Sink to write the data to a PI historian using PI Bridge, please contact inmation Customer Service for assistance.
-
Go to Sink Configuration property compound in the History Controller Object Properties panel.
A History Controller can only be connected to one History Sink object. It is not possible to transfer data to multiple History Sink interfaces using the same controller.
Figure 10. Sink Configuration Options
-
Click on the … button next to the Connector property and select a Connector object using the Object Picker. This is the Connector that the History Sink object will be created under when the Controller is activated and the transfer started. For this example, we are using the Sink to export data to file, so we will choose the Connector on the "Local" machine. Click OK when finished.
Figure 11. Sink Configuration Options - Connector
-
Enter a unique Object Name for the History Sink and then under the Configuration property compound select "Disk (JSON Dump Files)" from the Interface drop-down menu. This will reveal the JSON Dump Files File Path property, where you can enter a path to a directory where the exported files will be placed.
Figure 12. Sink Configuration Options - File Export Settings
-
The Statistics checkbox is selected by default, meaning that the Sink object will create Variable child objects to record and historize statistical data for the transfer/export. Click Apply in the History Controller Object Properties panel to confirm the property changes.
Starting the History Transfer
When you have finished the configuration of the Transporter and Sink objects in the History Controller Object Properties panel, you can begin the History Transfer.
-
Firstly, make one final check of the configuration of the History Controller to make sure all mandatory properties have been set (Object Names, Connector or Datasource selection, connection settings, etc.) and check for any typos or other errors that might cause problems.
-
When ready, change the History Controller Processing Mode to "Operation" and click Apply to start the transfer.
Figure 13. Start Transfer - Processing Mode
-
Once activated, the History Controller will create the History Transporter and History Sink objects beneath the designated Datasource and Connector. The History Transporter then creates the I/O Nodes and I/O items using the NodeName and ObjectName entries from the Tag Configuration table.
Figure 14. Start Transfer - I/O Model -
The Transporter creates I/O Nodes and I/O Items when in subscription mode. As can be seen in the screenshot above, the I/O items are subscribed and receiving real-time values (which are being historized). The transfer of the 30 days of historical data proceeds in the background.
-
The History Transporter and History Sink objects will initially show one green "good" light (indicating an overall good object state) and will show two green lights (indicating an overall good object state AND a good communication state) once the first successful transfer cycle is completed. If any of the objects shows a red "bad" state, the Diagnostics section of the Object Properties panel should be checked first for errors (see below for more information).
Diagnostics
-
To monitor the overall progress of the history transfer, open the Diagnostics property compound in the History Controller Object Properties panel. This gives information regarding the most recent run (including data size, runtime, errors and end time of last run). The Process State will show the time until the next run and, when in operation, will show what the Controller is doing (for example, "Waiting for HistorySink").
Figure 15. Diagnostics - History Controller
-
Selecting the History Transporter object in the I/O Model and opening the Diagnostics property compound allows you to see the progress of the history transfer just for the History Transporter function (this is the progress of the history transfer from the "External" historian to the I/O items historizing in the MongoDB repository).
Figure 16. Diagnostics - History Transporter
-
The History Transporter progress will normally be completed before the Controller transfer. This is due to the relatively fast speed of transferring and writing to the internal MongoDB compared to the History Sink’s output to a file or external historian.
Status Table
The status of the history transfer can be viewed by opening the Controller Status table in the History Controller Object Properties panel.

The status table allows the information about and progress of the history transfer to be viewed on a tag-by-tag basis. The table contains information about the start and end times of the up to date (UTD) values (the values that have been transferred as subscription or near time values since the moment the history transfer was started) and the history (HIS) values that are transferred and backfilled to the Depth value specified during the Transporter configuration.
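The UTD and HIS ranges together describe how much of the requested depth has been covered. A hedged sketch of that bookkeeping (the field names are assumptions chosen to mirror the table, not the product's schema): backfill for a tag is complete when its HIS range reaches back to the requested depth start and forward to the beginning of its UTD (live) range.

```python
from datetime import datetime, timezone

# Illustrative per-tag coverage check (field names are assumptions
# mirroring the status table, not the product schema).

def backfill_complete(row, depth_start):
    """True when HIS values cover depth_start up to the start of UTD."""
    return (row["his_start"] <= depth_start
            and row["his_end"] >= row["utd_start"])

row = {
    "utd_start": datetime(2024, 1, 31, tzinfo=timezone.utc),
    "his_start": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "his_end":   datetime(2024, 1, 31, tzinfo=timezone.utc),
}
print(backfill_complete(row, datetime(2024, 1, 1, tzinfo=timezone.utc)))  # True
```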
Statistics
The Statistics checkbox is selected by default upon creation of the History Controller (and for the subsequent Transporter and Sink configurations). This creates a series of performance monitoring Variable objects beneath the Controller object that can be used to analyze all aspects of the transfer.

Statistics objects are also created beneath the Transporter and Sink objects giving more specific performance data for their individual functions.
Checking the Result of the History Transfer and Exported Files
Once Diagnostics indicate that the transfer is complete, you can check the tags under the History Transporter for transferred history data by adding to a History Trend or History Grid display.

To check the History Sink export data, go to the File Path specified in the Sink Configuration to see the exported history data in JSON format.

The files are named with the timestamp of each export cycle, so each cycle produces a new JSON file until the transfer is completed. Each file contains the history VQT data transferred during that cycle, organized by tag.
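Assuming each dump file contains VQT records grouped by tag (the schema used below is an assumption for demonstration, not the documented file format; inspect a real exported file for the actual structure), the per-cycle files could be summarized like this:

```python
import json
from pathlib import Path

# Illustrative reader for the exported JSON dump files. The assumed
# schema is {tag: [{"V": value, "Q": quality, "T": timestamp}, ...]};
# this is a guess for demonstration, not the documented format.

def summarize_dumps(directory):
    """Return {tag: total VQT count} across all dump files in a folder."""
    counts = {}
    for path in sorted(Path(directory).glob("*.json")):
        data = json.loads(path.read_text())
        for tag, vqts in data.items():
            counts[tag] = counts.get(tag, 0) + len(vqts)
    return counts
```

Running this over the export directory gives a quick per-tag sanity check that the number of exported values matches the depth and frequency you configured.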
