Dropzone Processing Options
The Dropzone object has three main processing options for data files containing VQT data:
-
Column Based
-
Line by line
-
Custom Script
Column Based Dropzone Processing
If the import file contains tabulated data with distinct columns (including a timestamp column) separated by tabs, the column-based processing option is often sufficient to parse these files and import them into the system. The header at the top of the columns is used to create distinct I/O items in the I/O Model tree. As an example, we will use the Column_Based_Dropzone_simple.txt file, which can be downloaded from this page. It has four tab-separated columns, each with a header:
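For orientation, here is a purely illustrative sketch of such a file (the column names and values are hypothetical and the actual downloadable file may differ; columns are tab-separated):
Timestamp	Temperature	Pressure	Flow
2016-08-04T09:26:43.000Z	21.5	1.013	0.82
2016-08-04T09:26:44.000Z	21.6	1.014	0.85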

Using a text editor which can display formatting characters (such as tabs, spaces and carriage returns) is useful when troubleshooting import difficulties, as it is often a formatting mistake (for example, a space used rather than a tab) that leads to Dropzone errors.
To import this file we will use the Dropzone that was created in the first section.
-
Check that the Dropzone Processing option of your Dropzone object is set to Column Based. You can do this in either the Create Object wizard or in the Object Properties panel if you have already created the Dropzone in the I/O tree.
-
Copy the above text file into the Disk Folder location specified when the Dropzone was created.
By default, all files placed in the Dropzone Folder will be processed and immediately deleted from the Dropzone Folder! Make sure you copy your data files into this folder. DO NOT move them into this folder or they will be lost. To preserve the data you put into a Dropzone folder, check Save in 'Processed' folder in the 'Dropzone Options' compound for your Dropzone object in the Properties Panel. To economize on storage space, you can limit the number of hours for which your data is preserved using the Retention Time property.
-
After the file is copied a “work” folder will appear in the Disk Folder indicating that the file is being processed:
If there is a problem in processing the file, a separate folder named “error” is created and a text file containing the error message(s) will be placed there. When troubleshooting processing errors it is also useful to check the Log Display for the Dropzone object (to open the Log Display, select the Dropzone object in the I/O Model panel, right-click and select Admin → Open Log → Last 10 minutes). Double-click the log entries to see more details.
-
I/O Item objects are created underneath the Dropzone object with names corresponding to the column headers in the import file:
Figure 2. Dropzone after Column-Based File Import
-
Select one of the new I/O items under the Dropzone, right-click and select Add item to History Grid → Entire Data from the context menu (Shortcut
Shift + Alt + J
). You may also need to change the Aggregate Type to 'Raw data' in the menu bar of the History Grid. The History Grid should look like below:
Figure 3. History Grid Display - Imported Dropzone Data
-
The data from each column is imported to the corresponding I/O item along with a timestamp from the first column.
The entry in the top row of the HistoryGrid display with the BadWaitingForInitialData quality and no value is created by the Auxiliary State Management (ASM) to indicate the initial auxiliary state of the item as it was created. For more about auxiliary states and auxiliary state management (including how to turn off the ASM) see the System Documentation.
In this example we used a file containing ISO 8601 format timestamps. Other timestamp formats can be used if the Dropzone is configured to accept these, see the System Documentation for more details.
Adding Contextual Information
Extra contextual information contained in the column headers can be used to add property information to the created items, for instance limits, datatype and engineering units. In this example, we will add contextual data that is contained on extra rows in the column header. Use the file Column_Based_Dropzone_Multi_Column_Header.txt, which can be downloaded from this page and is shown below, as the import file:
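As a rough, hypothetical sketch of the idea (the actual downloadable file may be structured differently), the extra header rows might carry an Engineering Unit and a Location for each data column:
Timestamp	Temperature	Pressure
	°C	bar
	Reactor 1	Reactor 1
2016-08-04T09:26:43.000Z	21.5	1.013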

-
We will use the same Dropzone object created in the first example and change the configuration. If you haven’t created a Dropzone data source yet follow the instructions: Creating a Dropzone Datasource.
-
Delete the objects below the Dropzone by right-clicking on the Dropzone and selecting the delete option from the context menu.
-
In the Object Properties panel for the Dropzone object, expand the Server Type and Dropzone Options sections of the panel.
-
From the Column Header Processing drop-down menu, select Multi-line Column Header.
-
Click … next to Column Header Rows to open a MassConfig sheet that you can use to fill in the Column Header Row information. On each line fill out the information contained on the row and the row number. It should look like the example below:
Figure 5. MassConfig for Column Header Row Information
-
Click Ok to return to the Object Properties panel.
-
If you expand the Column Header Rows section of the panel all the header information should now be entered.
-
It may be necessary to change the code page in the Object Properties panel to match that of your input file. This is particularly the case if the file contains special characters or symbols. In this example, we will change the Code Page to Unicode UTF-8 to ensure that the special characters in the Engineering Units are displayed correctly.
-
Click Apply to confirm the changes to the Dropzone settings.
-
Copy the input file with the column header rows into the configured Disk Folder and then check to see the items created under the Dropzone object.
-
Select one of the items created and check to see that the contextual Engineering Unit and Location information have been added in the Object Properties panel.
Contextual information can also be imported from the column header using the line-by-line syntax or by enclosing properties in brackets. This is described in more detail in the Dropzone section of the system documentation.
Line by Line Dropzone Processing
The line by line processing option reads VQT data and other information from each line of the import file. The lines of the file are made up of information fields, with each line corresponding to a VQT value.
The different information fields and formats are listed in more detail in the system documentation, so this section will demonstrate a simple example of line-by-line syntax processing.
The import files are structured with each line containing up to nine different information fields.
-
The Server and Node fields give information about the namespace location of the item. If the Server and Node paths don’t already exist under the Dropzone object then they will be created.
-
The Item field is the tag name of the item. This item will also be created beneath the specified Node path under the Dropzone if it doesn’t already exist.
-
The Value, Quality and Timestamp fields contain the VQT data for the item (Timestamp in ISO 8601 format).
-
An example of a line from an import file is shown below. The fields are separated by semicolons:
Server=DemoData;Node=/Reactor 4711/LabData/;Item=Density;Value=20;Quality=192;Timestamp=2016-08-04T09:26:43.000Z
In this example, the VQT data is written to the item “Density” under the specified node path and server.
To try this out, first re-configure the Dropzone object created in the first example to Line by Line processing (If you have not already created a Dropzone datasource see the how-to section above and configure the Disk Folder and Archive options in the same way).
-
Select the Dropzone object in the I/O tree and in the Object Properties panel change the Dropzone Processing option to Line-by-line VQT values. Click Apply to confirm the change.
Figure 6. Configure Dropzone for Line by Line Processing
-
Using a text editor, create an import file (.txt or .csv) with a similar line structure to the one shown in the sketch after this list, or download Line_By_Line_Dropzone_Example_Simple.csv from here.
-
Each line contains the fields specified in the example file. Save the file as a text or CSV file and copy it into the Disk folder, configured in the Dropzone properties, for processing.
-
The Density items are created under their respective Node paths, and both paths are created under the same DemoData Server object. Open a HistoryGrid display for each item to check that the VQT data was imported with the correct timestamps.
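As a minimal sketch of what such an import file might contain (the second node path here is invented for illustration; the downloadable example file may differ), two lines creating Density items under different Node paths of the same DemoData Server could look like this:
Server=DemoData;Node=/Reactor 4711/LabData/;Item=Density;Value=20;Quality=192;Timestamp=2016-08-04T09:26:43.000Z
Server=DemoData;Node=/Reactor 4712/LabData/;Item=Density;Value=22;Quality=192;Timestamp=2016-08-04T09:26:44.000Z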
Property Values
Along with namespace information and VQT data, other contextual information regarding the items can also be imported using line-by-line processing. The item datatype and location can be imported using the Datatype and Location fields (see Datatype Field section for more info). In the case of the Location field we can specify Latitude, Longitude, Altitude and Location name as shown below (download the Line_By_Line_Dropzone_Example_Location.csv file from here) as part of the Line by line syntax.
After import, the separate location properties are added as I/O property items below the I/O object.
Other property values outside of the ones with dedicated fields can also be imported using the Property field.
In these cases, the numerical or textual property code is defined in brackets after a label indicating the property field (this can be prop, p or prp), followed by the value, for example prop(DESCRIPTION)="Simulated Density" or prp(EU_UNITS)="g/cm³".
These fields are added to the import file lines like above (download Line_By_Line_Dropzone_Example_Propertyfield.csv file from here).
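For illustration only (this exact line is not taken from the downloadable file), a line with property fields appended might look like this:
Server=DemoData;Node=/Reactor 4711/LabData/;Item=Density;Value=20;Quality=192;Timestamp=2016-08-04T09:26:43.000Z;prop(DESCRIPTION)="Simulated Density";prp(EU_UNITS)="g/cm³"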
The properties are then added as I/O property items below the I/O object:

Importing Data to Items Outside of the Dropzone Namespace
As demonstrated in the Line by line syntax examples above, data can be imported to already existing items. If the items don’t exist, they are created underneath the Dropzone using the Server, Node and Item field entries to create the folder structure and item. Data can also be imported to objects outside of the Dropzone namespace; however, the object must already exist for the import to be successful. The Dropzone will not create the object (or the path to the object) for objects outside the Dropzone namespace.
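As a hypothetical illustration of such a line (the folder and item names are placeholders; use the names of your own existing objects or the downloadable example file):
Node=/TestFolder/;Item=TestObject;Value=42;Quality=192;Timestamp=2016-08-04T09:26:43.000Z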
-
Create a file in a text editor with Node, Item, Value, Quality and Timestamp fields, like the line sketched above, or download here (we do not need a Server field for this example).
-
Save this as a .txt or .csv file and copy it into the Disk Folder of the Dropzone that you used for the previous Line by line syntax examples (if you haven’t created the Dropzone object yet, see the Create a Dropzone section).
-
The file will be processed by the Dropzone (if this is the first time you use the Dropzone, you will see a work folder created in the Disk Folder), but the Node path and Item are not created as they lie outside the Dropzone namespace.
A processing error is not produced as the file was successfully processed by the Dropzone (an Information log message will be produced in the System Log confirming this).
-
Create the items in the I/O model tree that correspond to the Node and Item fields in your import file (right-click on the Connector and select Admin → New → Data Processing → Folder from the context menu). In the case of the example file above, the I/O Model should look like below:
Figure 8. Existing Objects Outside of Dropzone
-
Re-copy the import file into the Disk Folder and wait for processing to complete. The TestObject HolderItem should change state and go green as the data in the file is written to the object.
-
Right-click on the TestObject and select Add item to… → History Grid → Entire Data from the context menu (Shortcut
Shift + Alt + J
). The HistoryGrid should look similar to below, showing that the file was imported and the data written to the TestObject.
Figure 9. Importing Data to Items Outside of Dropzone Namespace - History Grid
Custom Script
Not all import file types or formats are covered by the Column Based and Line by line processing options. In these cases, the Custom Script option can be used to write a specific processing method in Lua to import your file type. The Custom Script option can also be used to expand the processing method to create new objects, perform calculations or anything else that is possible with Lua scripting. It is not possible to cover every different file type that may be encountered; however, this section will demonstrate a simple custom scripting example that will give you the foundation to create your own custom Lua script for the Dropzone.
If you are new to Lua, it is helpful to read the Using Lua Scripting Jump Start as an introduction. Also, the Lua Scripting section is the most useful reference point for using the system Lua functions.
The script entered in the Lua Script Body property of the Dropzone Datasource must be structured as a Lua Function with the processing script contained within it. For example:
return function(file)
    -- processing script including IO file handling code
end
The function takes the file that is dropped into the disk folder as its input; some file handling code needs to be included to open the file. The Lua IO library is supported in the system and can be loaded using 'require'. Please see the Line Count example below for more details.
Create a Dropzone to process files using Custom Scripting
-
Creating a Dropzone that uses Custom Scripting to process files is the same as creating any other Dropzone (see the Creating a Dropzone section above), except Custom Script should be chosen as the Dropzone Processing type.
Figure 10. Dropzone - Custom Script Dropzone Processing
-
Give the Dropzone a name and choose a Disk folder location in which to deposit the import files.
-
Click Create to create the Dropzone underneath the Connector in the I/O Model tree.
Line Count Example
In this first simple example, we will use the scripting functionality to count the number of lines in a file and write the result to a HolderItem. Custom Dropzone scripts are structured as a single Lua function, with the file that is dropped into the Disk Folder as the input to that function. With Custom Script processing we have to define exactly how to process that file within the function. Unlike the other processing options, where a lot of the actions occur automatically, these actions must be coded manually in the Lua script of the Custom Script Dropzone. This includes opening the file, and to do that in Lua we need to use the Lua IO library (supported in the system). The following script performs a simple task but also demonstrates the basic structure of a Custom Dropzone script.
Select the Dropzone in the I/O Model and click on the Lua Script Body “…” to open the script editor. Enter the following script and click Ok:
return function(file)
    local ioLib = require("io")
    local name = "Line Count"
    local parentPath = syslib.getselfpath()
    local lines = {}
    -- open the dropped file and read each line into the lines table
    local f = ioLib.open(file, "rb")
    if f ~= nil then
        for line in ioLib.lines(file) do
            lines[#lines + 1] = line
        end
        f:close()
    end
    -- create the "Line Count" HolderItem below the Dropzone if it doesn't exist yet
    local ioitem = syslib.getobject(parentPath .. "/" .. name)
    if ioitem == nil then
        ioitem = syslib.createobject(parentPath, "MODEL_CLASS_HOLDERITEM")
        ioitem.ObjectName = name
        ioitem.ArchiveOptions.ArchiveSelector = 2
        ioitem.ArchiveOptions.StorageStrategy = 1
        ioitem:commit()
    end
    -- write the number of lines to the HolderItem
    syslib.setvalue(ioitem, #lines)
end
The first line returns the function which takes the import file as its argument.
The rest of the script consists of roughly three parts:
-
The Lua IO library is loaded using require("io") and the variables name, parentPath and lines are defined.
-
The Lua IO library open function is called to open the file and each line is read into the lines table. The file is then closed using f:close().
-
A HolderItem is created and its properties are set to archive data. The number of entries in the lines table is then written to the value property of the HolderItem.
To test the script, drop any multi-line text or CSV file into the Disk Folder (configured when setting up the Dropzone). The I/O Model should look like below after the file has been processed, with the “Line Count” value matching the number of lines in the imported file.
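If you want to double-check the result with a small script rather than by browsing the I/O Model, something along these lines can be run in a Lua console (the Dropzone path is a placeholder; adjust it to your system):
-- illustrative check only; replace the path with the actual path of your Dropzone
local count = syslib.getvalue("/System/Core/Connector/Dropzone/Line Count")
return count  -- should equal the number of lines in the dropped file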

Although this is a simple example, it demonstrates how the Custom Script Dropzone can be used to read data files into a Lua table and then process them in any desired fashion. Logic can be applied to the data and objects can be created in the I/O Model. The next example will demonstrate how this can be used to process a non-standard file format and successfully create objects and write VQT data to them.
Processing a Non-Standard File Format
Custom Scripting is particularly useful for processing non-standard file formats, that is, file formats that are not processed correctly using the other processing options. Take the following file as an example (download Custom_Script_Non_Standard_File_Format.txt from here), where the coordinates of different objects are recorded at different timestamps:
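As a hypothetical illustration of this kind of file (the actual downloadable file may use different names and values), it is a tab-separated table with a Timestamp column, a Name column and x, y, z coordinate columns:
Timestamp	Name	x	y	z
2016-08-04T09:26:43.000Z	A	1.2	3.4	5.6
2016-08-04T09:26:43.000Z	B	2.1	4.3	6.5
2016-08-04T09:26:44.000Z	A	1.3	3.5	5.7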

This type of file would initially look like a candidate for the Column Based processing option. However, if we set up a Dropzone object with Column Based processing and drop the file into the configured disk folder, the column names are used to create I/O items beneath the Dropzone and we end up with the following:

The more appropriate result would be that the Name column would be used to create separate nodes (A, B, C, D etc.) and the x, y, z coordinates for each node would be imported to separate I/O items.
To do this, we again use the Custom Scripting processing option to process the file in the desired fashion. As with the previous Line Count example, we need to write a script that does everything we need. In this case that includes:
-
Opening the file and reading all file lines into a Lua table
-
Reading the “Name“ column to create nodes
-
Reading the column headers to create “x”, “y”, ”z” HolderItems
-
Writing the coordinate data to the HolderItems
In this example we will first define some Lua functions to perform these tasks separately. They can be housed in the same script and called in the main function at the end. Later you might want to use or modify these separate functions to construct your own script or place them in a script library.
The Lua function readFileLines opens the file and reads the lines into a table:
--function to read the lines of an input file into a Lua table
readFileLines = function(file)
    local ioLib = require("io")
    local lines = {}
    local f = ioLib.open(file, "rb")
    if f ~= nil then
        for line in ioLib.lines(file) do
            lines[#lines + 1] = line
        end
        f:close()
    end
    return lines
end
The Lua function lineColumnsToTable puts each column in a line into a Lua table; it takes the line and the column separator type as arguments. This function is very versatile and can be used for many different file formats:
--function to put columns in a line into a Lua table. arguments are the line and the column separator type
lineColumnsToTable = function(line, sep)
    local res = {}
    local pos = 1
    sep = sep or ';'
    while true do
        local c = string.sub(line, pos, pos)
        if (c == "") then break end
        if (c == '"') then
            -- quoted value (ignore separator within)
            local txt = ""
            repeat
                local startp, endp = string.find(line, '^%b""', pos)
                txt = txt .. string.sub(line, startp + 1, endp - 1)
                pos = endp + 1
                c = string.sub(line, pos, pos)
                if (c == '"') then txt = txt .. '"' end
                -- check first char AFTER quoted string, if it is another
                -- quoted string without separator, then append it
                -- this is the way to "escape" the quote char in a quote. example:
                -- value1,"blub""blip""boing",value3 will result in blub"blip"boing for the middle
            until (c ~= '"')
            table.insert(res, txt)
            assert(c == sep or c == "")
            pos = pos + 1
        else
            -- no quotes used, just look for the first separator
            local startp, endp = string.find(line, sep, pos)
            if (startp) then
                table.insert(res, string.sub(line, pos, startp - 1))
                pos = endp + 1
            else
                -- no separator found -> use rest of string and terminate
                table.insert(res, string.sub(line, pos))
                break
            end
        end
    end
    return res
end
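As an illustrative check of what this function returns, splitting a (hypothetical) tab-separated header line gives a table with one entry per column:
-- purely illustrative usage of lineColumnsToTable
local cols = lineColumnsToTable("Timestamp\tName\tx\ty\tz", "\t")
-- cols[1] == "Timestamp", cols[2] == "Name", cols[3] == "x", cols[4] == "y", cols[5] == "z"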
The next two Lua functions create the Folder and HolderItem objects, taking the name and parent object path as arguments. The functions check whether the objects already exist and create them if they don’t. The createioitem function automatically sets the items to historize data:
--function creates a folder
function createFolder(name, parentPath)
    local parent = syslib.getobject(parentPath)
    local folder = nil
    if parent ~= nil then
        folder = syslib.getobject(parentPath .. "/" .. name)
        if folder == nil then
            folder = syslib.createobject(parentPath, "MODEL_CLASS_GENFOLDER")
            folder.ObjectName = name
            folder:commit()
        end
    end
    return folder
end
--creates holder items
function createioitem(name, parentPath)
    local parent = syslib.getobject(parentPath)
    local ioitem = nil
    if parent ~= nil then
        ioitem = syslib.getobject(parentPath .. "/" .. name)
        if ioitem == nil then
            ioitem = syslib.createobject(parentPath, "MODEL_CLASS_HOLDERITEM")
            ioitem.ObjectName = name
            ioitem.ArchiveOptions.ArchiveSelector = 2
            ioitem.ArchiveOptions.StorageStrategy = 1
            ioitem:commit()
        end
    end
    return ioitem
end
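For instance (illustrative only), calling these helpers from a Dropzone script would create a folder “A” below the Dropzone and an “x” HolderItem below that folder:
-- illustrative usage of the helper functions defined above
local dzpath = syslib.getselfpath()
createFolder("A", dzpath)
createioitem("x", dzpath .. "/A")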
These functions can be copied into the Dropzone Lua script body or, alternatively, referenced from a script library. Finally, the main function that calls each of the functions should be copied into the script body:
--main function, takes file as input, creates I/O tree under DZ from file, writes history data
return function(filepath)
    local datasourcepath = syslib.getselfpath()
    local lines = readFileLines(filepath)
    local headercolumns = lineColumnsToTable(lines[1], "\t")
    for i = 2, #lines do
        local columns = lineColumnsToTable(lines[i], "\t")
        -- create a folder named after the Name column (column 2)
        createFolder(columns[2], datasourcepath)
        local itempath = datasourcepath .. "/" .. columns[2]
        -- create a HolderItem for each coordinate column and write its value with the timestamp
        for j = 3, #headercolumns do
            createioitem(headercolumns[j], itempath)
            local dhpath = itempath .. "/" .. headercolumns[j]
            local utc = syslib.gettime(columns[1])
            syslib.setvalue(dhpath, columns[j], 0, utc)
        end
    end
end
In the first lines of the function, the file is opened and each line is placed in a table (using the readFileLines function), then the lineColumnsToTable function reads the header columns into another table (using tab “\t” as the separator type). The remaining lines are then iterated through using a for loop, and lineColumnsToTable is used again to separate each line into columns. The createFolder function creates a folder using the Name column, and then the “x”, “y”, “z” HolderItems are created below it using the createioitem function in a nested for loop (the names coming from the header columns table). The VQT values are written to the objects in the same for loop using the syslib.setvalue function. Once the Lua script is finished, click Ok to commit the script and try dropping the file into the configured disk folder. After processing, the tree should look like below.

Open a HistoryGrid display for one or more of the items to see that the historized values have been recorded.

The Custom Scripting processing option allows you to use the full Lua scripting engine of system:inmation to process files however you want. The two examples here only touch the surface of what is possible and the potential is there to completely customize the Dropzone object to match the requirements of your system. Please visit the Lua Scripting section to learn more about the system Lua library and the Lua scripting engine. For more examples of Lua applications in the system, look at the Using Lua Scripting Jump Start document.