Actions
Definitions
If you are new to WebStudio make sure to scan through the "Getting Started" page for some context to the reference material presented here. Below is a quick summary of a few key concepts needed to make sense of WebStudio actions documentation.
{
"type": "button",
"actions": { // 1. All widgets have an actions property
"onClick": // 2. Respond to the mouse click Action Hook
[ // 3. The property value is an action pipeline,
// containing 2 actions
{
"type": "passthrough",
"message": { // 4. Add data to the message for use by the next action
"payload": {
"url": "https://docs.inmation.com/home/index.html"
}
}
},
{ // 5. OpenLink action Loads the page referenced in the message
"type": "openLink"
}
]
}
}
-
Widget model actions property: The actions property is available in all widget models. It is used to configure action pipelines for selected events/action hooks, allowing the compilation to respond to triggers such as onClick or willRefresh.
-
Action hooks: Child properties of actions used to implement the logic that triggers when specific events occur. Refer to the Widget reference pages for details about the hooks exposed by each widget; the information is found under the action heading. The value assigned to a named action hook is an action pipeline.
-
Action pipeline: A pipeline can consist of a single action, or it can be an array containing actions and nested action arrays. The actions used in a pipeline are described on this page. They are executed one after another, in the order defined in the pipeline.
-
Messages: Every action pipeline receives an input message, in the form of a JSON object, when it is executed, and passes this to the first action in the pipeline. The initial message content depends on the type of widget and the event that triggered the pipeline. Messages are passed from action to action as the pipeline executes. The content of a message can be modified by actions in the pipeline.
Nested action arrays defined sequentially are handled slightly differently from what you might expect based on the description so far.
In essence, they behave as if they execute in parallel, with each array of actions receiving the same input message as the first one in the sequence. The output from the last array in the sequence is passed down to the next single action.
Refer to the nested action arrays section for an example of how this works in a bit more detail.
-
Actions: Implement logic based on their type and configured parameters. They can also read and modify the content of the message they receive. Actions in a pipeline often pass information to downstream actions using the message object, usually by manipulating the payload field. Each action has the following basic model.
{
  "type": "",       // Type of the action.
  "message": {      // (Optional) To be merged with the input message.
    "topic": "",    // (Optional) Can be a string or null.
    "payload": {}   // (Optional) Can be any number, string, object or array.
  },
  "catch": {}       // (Optional) Define treatment of action errors
}
-
Action parameters: In addition to passing data to actions, the message can also be used to supply parameters that are not explicitly defined in the model. For example, the text to show in a notify action can be read from the incoming message payload.text property by omitting the property from the model definition.
[
  {
    "type": "passthrough",  // Define a message to be passed through to the next action
    "message": {
      "payload": {          // Provide action parameters for the notify action
        "title": "Title from the message",
        "text": "Text from the message"
      }
    }
  },
  {
    "type": "notify"        // Text and title will be taken from the incoming message
  }
]
Catch
All actions can optionally define a catch
property, which is used to deal with errors that might occur when the action is executed.
{
"catch": { // (Optional) Define treatment of action errors
"type": "break",
"action" : [] // Catch action-pipeline
}
}
The action
property is used to define a pipeline to "handle" the error in some way, by notifying the user for example. Information about the error is provided in the payload
field of the message sent to the catch
pipeline.
The exact payload
content depends on the specific action that was invoked. Refer to the action documentation below for more information.
The catch.action pipeline has no access to the incoming message received by the action itself and thus cannot affect its content. In other words, the main pipeline message is not affected by logic in the catch.action implementation.
The type
field determines how the main pipeline execution proceeds after an error has been detected. Options are:
-
continue: The main action pipeline continues to execute once
catch.action
completes. -
break: The main action pipeline execution is aborted once
catch.action
completes. -
throw: The error is "re-thrown" in WebStudio, typically resulting in information being written to the browser console. This option is similar to not implementing
catch
in the action, except that the message is also accessible in the compilation, which it otherwise isn’t.
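As an illustrative sketch (the path and texts are placeholders), a read action could use catch to notify the user and let the rest of the pipeline continue:
{
  "type": "read",
  "path": "/System/Core/Examples/Demo Data/Process Data/FC4711",
  "catch": {
    "type": "continue",        // Resume the main pipeline once the catch pipeline completes
    "action": [
      {
        "type": "notify",
        "title": "Read failed" // Text can be supplied here or taken from the catch message payload
      }
    ]
  }
}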
Supported actions:
-
action: Refers to another action to be executed.
-
collect: Collect data from a widget.
-
consoleLog: Write to the browser’s console log.
-
convert: Converts data to and from JSON, Base64.
-
copy: Copy to clipboard.
-
delegate: delegate the execution context of an action pipeline.
-
dismiss: Dismiss an active floating tab or the last shown prompt
-
function: Advanced Endpoint call to the system.
-
gettime: Converts relative, ISO UTC and milliseconds since Epoch timestamps.
-
load-compilation: Load a compilation from the system.
-
modify: Change the model of a widget.
-
notify: Display a notification.
-
open-file: Load data from file
-
openLink: Opens a URL in the browser.
-
passthrough: Passes the input message to the next action with the option to merge a message and/or payload.
-
prompt: Show a dialog.
-
read: Reads a value of an object.
-
read-historical-data: Read historical data
-
read-raw-historical-data: Read raw historical data
-
read-write: Used for data sources, supports
read
andwrite
. -
refresh: Refresh a widget.
-
save-file: Save data to file.
-
send: Send data to another widget.
-
subscribe: Subscribe to data changes in the system.
-
switch: Execute different actions based on conditions.
-
tpm-oee: Read configuration table models from the backend relating to OEE monitoring.
-
transform: Transform the data using MongoDB’s Aggregation Pipeline logic.
-
wait: Adds a delay before executing the next action.
-
write: Writes a value to an object.
Action
Invoke a named action defined in the widget’s own actions
collection or in the actions
collection at compilation
level. Actions defined at the widget level take precedence over those at compilation level. If a widget refers to a
named action which exists both in its own model and at compilation level, the one in the widget collection will be executed.
{
"type": "action",
"name": "NAME OF THE ACTION"
}
Refer to the write-example-01 compilation to see how named actions are defined and used.
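As a minimal sketch (the IDs and the action name are illustrative), a named action can be defined in a widget's own actions collection and invoked from its onClick hook:
{
  "type": "text",
  "text": "Open documentation",
  "id": "text01",
  "actions": {
    "showDocs": [                  // Named action pipeline; widget-level definitions win over compilation-level ones
      {
        "type": "passthrough",
        "message": {
          "payload": { "url": "https://docs.inmation.com/home/index.html" }
        }
      },
      { "type": "openLink" }
    ],
    "onClick": {
      "type": "action",            // Invoke the named action defined above
      "name": "showDocs"
    }
  }
}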
Collect
Collect data from a widget. The specific data retrieved depends on the source widget. In general, collect returns the same payload data provided to action pipelines defined directly on the referenced widget. Refer to the widget specific documentation for more details. The collected information is assigned to the provided key name in the message payload. If a key is not specified, the whole payload will be overwritten with the collected data.
{
"type": "collect",
"from": "Place the ID of the widget here",
"key": "collectedData"
}
The from
field can be set to one of the following:
-
Widget ID: ID of a widget at the same level in the compilation. Note: Dot-notation cannot be used to collect data from widgets nested inside tabs.
-
Route: The route notation is typically used to access widgets contained in nested tab compilations.
-
"self": This allows the pipeline to fetch data from the widget that initiated the actions. This might seem like an odd thing to do since the message at the beginning of a pipeline will be initialized by the source widget, but as the execution progresses this information may be overwritten. Using the
collect
action on "self" provides a way to get back to the original content.
Note: Collect is mainly intended to grab the "data" content of a widget. If you need access to any other fields of a widget during pipeline execution, the modify action can be used together with set.
Console Log
Writes the input message payload
to the browser’s console log.
{
"type": "consoleLog"
}
Add context by providing a tag
.
{
"type": "consoleLog",
"tag": "fetch response"
}
Example to set a fixed text message on the payload
:
{
"type": "consoleLog",
"tag": "fetch response",
"message": {
"payload": "This is just a message"
}
}
Example to merge the input message payload
with other key-value data.
{
"type": "consoleLog",
"tag": "fetch response",
"message": {
"payload": {
"otherText": "This is just some other text message"
}
}
}
Convert
Converts the payload
to a specified format. Supported formats are:
-
json
-
base64
Encode the payload:
{
"type": "convert",
"encode": "json"
}
Decode the payload:
{
"type": "convert",
"decode": "json"
}
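By analogy with the JSON examples above, a small sketch of decoding a Base64 payload back to text (the encoded string is just an illustration):
[
  { // Provide a Base64 encoded string as the payload
    "type": "passthrough",
    "message": {
      "payload": "SGVsbG8gV29ybGQ="
    }
  },
  { // Decode it back to the original text ("Hello World")
    "type": "convert",
    "decode": "base64"
  }
]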
Delegate
Using tabs
widgets, compilations can be created that contain sub-compilations in each tab. Widgets defined within
these cannot directly interact with other widgets in peer level compilations.
The delegate
action provides a mechanism to address this constraint.
{
"type": "delegate",
"action": []
}
This action is probably easiest to understand by considering an example.
Suppose we have a tabs
widget with two tab instances, each containing their own text
widget, text01 and text02. We want to change the text of text02 when clicking on text01. As a first attempt, we might try something like this:
{
"type": "text",
"text": "Click Me",
"id": "text01",
"actions": {
"onClick": {
"type": "send",
"to": {
"route": [
"tabs01",
"tab02",
"text02"
]
},
"message": {
"payload": "Text on tab 1 was clicked"
}
}
}
}
This fails to update text02 since, from within the tab01 compilation, there is no widget that resolves to the provided route. The route only makes sense when it is traversed starting at the root compilation.
Root (compilation) | +- tabs01 (widget) | +- tab01 (compilation) | | | +- text01 (widget) <-- Action starts here | +- tab02 (compilation) | +- text02 (widget)
To get the correct execution context we need to define the action at either the root compilation level, or in the actions section of the tabs01 widget.
Note: The execution context refers to the compilation model from which routes and widget ids are resolved.
A named action can be defined in the root compilation and invoked from text01. This works since named actions are resolved by searching upwards in the containment hierarchy. In this case the onClick
could be defined like so:
{
"type": "text",
"text": "Click Me",
"id": "text01",
"actions": {
"onClick": {
"type": "action",
"name": "modifyWidgetOnSecondTab"
}
}
}
with the named action modifyWidgetOnSecondTab
in the root compilation:
{
"actions": {
"modifyWidgetOnSecondTab": {
"type": "send",
"to": {
"route": [
"tabs01",
"tab02",
"text02"
]
},
"message": {
"payload": "Text on tab 1 was clicked"
}
}
}
}
Unfortunately, just declaring the named action in the root compilation is not enough. The execution context is not affected by where the action is declared, unless delegate
is used.
{
"actions": {
"modifyWidgetOnSecondTab": {
"type": "delegate",
"action": [
{
"type": "send",
"to": {
"route": [
"tabs01",
"tab02",
"text02"
]
},
"message": {
"payload": "Text on tab 1 was clicked"
}
}
]
}
}
}
In other words, delegate
changes the context of the execution pipeline to be at the level where it is defined in the compilation. Using the new context, the pipeline defined in the action
property is executed.
When the delegate
action returns, the context is restored and any subsequent actions will be executed in the context that was there before.
Note: The context also includes the widget that initiated the pipeline, and is referred to as "self"
. If the
delegate
action is declared at compilation level, the self
widget will not be set.
Dismiss
This action is used to close a modal dialog shown using the prompt action or to close a floating tab from within the tab compilation.
{
"type" : "dismiss"
}
Note: The dismiss action closes the last shown prompt. Prompts can be "stacked" by invoking a prompt
action from
within an active popup dialog. When dismiss
is called, only the top most instance is closed.
The example below shows the use of the dismiss action to close a tab.
{
"type": "tabs",
"appearance": {
"type": "floating"
},
"tabs": [
{
"id": "tab01",
"name": "Tab01",
"indicator": {
"title": "Tab 01"
},
"compilation": {
"version": "1",
"widgets": [
{
"type": "text",
"text": "Dismiss",
"captionBar": false,
"actions": {
"onClick": {
"type": "dismiss"
}
},
"layout": { "x": 0, "y": 0, "w": 32, "h": 32,
"static": false
},
"id": "txt"
}
],
"options": {
"stacking": "vertical",
"numberOfColumns": 32,
"width": 200,
"height": 150,
"numberOfRows": {
"type": "count",
"value": 32
}
}
}
}
],
"options": {
"tabAlignment": "top"
},
"layout": { "x": 31, "y": 0, "w": 29, "h": 5,
"static": false
},
"id": "tabs"
}
GetTime
The gettime
action is used to convert one or more input time-values to a desired output format.
{
"type" : "gettime",
"set": [ // List of values to convert (optional)
{
"value": "*", // Input value to be converted
"name": "now", // Name of the converted value in the returned payload
"asEpoch": false, // Return time as an epoch integer (optional)
"timezone" : "America/Sao_Paulo", // Time zone name (optional)
"format": "HH:mm:ss" // Format expression (optional)
}
]
}
Input Value Format
The input values can be provided in one of the three forms listed below, each of which has a default output conversion format. The defaults are overridden by providing additional action parameters:
-
Relative time expression: Converts to:
-
An ISO time string by default. The result is expressed in UTC. An optional
timezone
name can also be set, resulting in a time offset being added. -
A formatted time string is returned when a
format
expression is provided. The timezone can also be specified. The structure of the format
expression is documented here -
An epoch integer number, expressed in milliseconds, is returned when the
asEpoch
property is set to true. Note that the asEpoch flag is ignored if either a format or timezone
is provided.
-
-
ISO time string: Converts to:
-
An epoch number by default.
-
A time zone specific ISO string is returned when a
timezone
name is provided. -
A formatted time string results when a
format
expression is added.
-
-
Epoch number: Converts to:
-
An ISO UTC string by default.
-
A time zone specific ISO string is returned when a
timezone
name is provided. -
A formatted time string when a
format
expression is added.
-
Refer to the conversion matrix for more details.
Input Value Source
The most direct way to use gettime
is by explicitly configuring the set
property which, as shown above, is an array of objects each defining a value to be converted.
This works really well for fixed input values, but less so when they are dynamically set while the compilation is being used. The source could be user input from a form in which a custom relative time is entered for example or a value received from the back-end etc.
To accommodate the conversion of dynamic values gettime
merges properties from the input message payload as follows:
-
The incoming message.payload is a string containing either a relative or ISO time, or an integer representing an epoch time: gettime treats the payload as a value parameter and overwrites the payload with the converted output. format, timezone and asEpoch can be set directly in the action body. The set property must be omitted from the action definition (see the sketch after this list).
-
The incoming message.payload is an object with a set property (see example). If gettime already contains a set property, the entries from the message.payload.set array are merged with it.
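A sketch of the first case, assuming an upstream action (here a passthrough) has placed a relative time string in the payload:
[
  {
    "type": "passthrough",
    "message": {
      "payload": "*-1d"             // Relative time coming from user input or an earlier action
    }
  },
  {
    "type": "gettime",              // No "set" property: the payload itself is treated as the value
    "format": "DD/MMM/YYYY HH:mm"   // Optional output format; timezone and asEpoch could be set here as well
  }
]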
Relative time expressions
A time relative expression consists of a *
, representing the time now, optionally followed by an offset expression subtracted from, or if required, added to the current time.
The offset is stated as a sequence of one of more duration expressions. Each duration expression consists of an integer or decimal* number followed by a timescale unit. The supported timescale units are:
-
ms - millisecond
-
s - second
-
m - minute
-
h - hour
-
d - day
-
w - week
-
M - month
-
Q - quarter
-
y - year
Care should be taken when using decimal numbers for the time offset, since the value is interpreted differently depending on which timescale unit it applies to.
Here are some examples of relative time expressions:
Expression | Description |
---|---|
* |
Now |
*-5d |
5 days ago |
*-1h30m |
1 hour, 30 minutes ago |
*+30m |
30 minutes from now |
Conversion Matrices
The tables below illustrate the conversions performed by gettime
for each of the supported input formats in combination with the modifier properties asEpoch
, format
and timezone
.
Input value as a relative time = "*-1d"
asEpoch | format | timezone | Result | Comment
---|---|---|---|---
 | | | "2022-07-21T12:02:17.336Z" | Default conversion
true | | | 1658404937336 | Force epoch
 | "DD/MMM/YYYY HH:mm" | | "21/Jul/2022 13:02" | Local time
 | | "Asia/Tokyo" | "2022-07-21T21:02:17+09:00" | Tokyo time
 | "DD/MMM/YYYY HH:mm" | "Asia/Tokyo" | "21/Jul/2022 21:02" | Tokyo time
Input value as an ISO UTC String = "2022-07-21T12:02:17.336Z"
asEpoch | format | timezone | Result | Comment
---|---|---|---|---
 | | | 1658404937336 | Default conversion
 | "DD/MMM/YYYY HH:mm" | | "21/Jul/2022 13:02" | Local time
 | | "Asia/Tokyo" | "2022-07-21T21:02:17+09:00" | Tokyo time
 | "DD/MMM/YYYY HH:mm" | "Asia/Tokyo" | "21/Jul/2022 21:02" | Tokyo time
Input value as an ISO String with time offset = "2022-07-21T21:02:17+09:00"
asEpoch | format | timezone | Result | Comment
---|---|---|---|---
 | | | 1658404937000 | Default
false | | | "2022-07-21T12:02:17.000Z" | Convert to UTC
 | "DD/MMM/YYYY HH:mm" | | "21/Jul/2022 13:02" | Local time
 | | "GB" | "2022-07-21T13:02:17+01:00" | London Time
 | "DD/MMM/YYYY HH:mm" | "GB" | "21/Jul/2022 13:02" | London Time
Examples
Convert relative times
The JSON below shows an example of gettime
used to convert relative times to absolute ISO UTC strings:
{
"type": "gettime",
"set": [
{
"name": "starttime",
"value": "*-5d"
},
{
"name": "endtime",
"value": "*-1d"
}
]
}
Resulting message:
{
"payload": {
"starttime": "2021-04-24T10:31:04.091Z",
"endtime": "2021-04-28T10:31:04.092Z"
}
}
Convert to Epoch
This example shows how to convert a relative time to its Epoch equivalent:
{
"type": "gettime",
"set": [
{
"name": "starttime",
"value": "*-5d",
"asEpoch": true
}
]
}
yields this output message:
{
"payload": {
"starttime": 1619260264103
}
}
Convert a value from the input message payload
The action can also take the set
instruction from the message payload to do dynamic conversions.
Action message input:
{
"payload": {
"set": [
{
"name": "myTimestamp",
"value": "*-2d"
}
]
}
}
Action:
{
"type": "gettime" // Notice it does not contain a set field.
}
Result is the message output:
{
"set": [], // Since the action result is merged with the message input the set is still present.
"myTimestamp": 1630497600000
}
Function
Invoke an Advanced Endpoint.
{
"type": "function",
"lib": "LIBRARY NAME",
"func": "FUNCTION NAME", // (Optional) Function name in case library is Lua table.
"farg": {}, // (Optional) Function argument.
"ctx": "", // (Optional) System object path.
"catch":{ // (Optional) Handle server errors.
"type": "break",
"action": [
{
"type": "notify"
}
]
}
}
If the function call fails for some reason, by returning a 4xx or 5xx HTTP response code, an error will be logged to the browser console, which the user will only see if they manually review the log.
By defining a catch section the failure can be made visible, or "handled" in some way, directly in the compilation. Error information is sent to the catch.action
pipeline in a message which will have the following structure (also see Advanced Endpoints hlp:createResponse documentation):
{
"payload": {
"body": {
"error": [
{
"msg": "Error message from the hlp:createResponse call in the invoked Lua script"
}
]
},
"error": {
"msg": "Error message from the hlp:createResponse call in the invoked Lua script"
},
"headers": {
"content-length": "97",
"content-type": "application/json"
},
"method": "POST",
"statusCode": 500,
"url": "http://vps01.inmation.eu:8003/api/v2/execfunction/http-results/result_500"
}
}
This payload is generated by WebStudio by processing the HTTP response received back from the system.
name | description
---|---
body | HTTP response body as received by WebStudio.
error | Error information extracted from the response; the exact content depends on the response body returned by the call.
headers | Response headers received back from the call.
method | HTTP call method used.
statusCode | HTTP status code.
url | Advanced endpoint URL.
When the notify action is invoked in the catch action pipeline, it will use the value of the message payload.error.msg as the text field.
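As a sketch (the library, function and argument names are hypothetical), a pipeline might call an advanced endpoint and log the returned payload:
[
  {
    "type": "function",
    "lib": "MyReportLib",            // Hypothetical library name
    "func": "getDailyReport",        // Hypothetical function within the Lua library table
    "farg": { "day": "2022-07-21" }, // Hypothetical function argument
    "catch": {
      "type": "break",               // Abort the pipeline on server errors
      "action": [
        { "type": "notify" }         // Shows payload.error.msg as the notification text
      ]
    }
  },
  {
    "type": "consoleLog",            // Inspect the function result in the browser console
    "tag": "daily report"
  }
]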
Load-Compilation
WebStudio can retrieve an initial compilation from the back-end by providing appropriate load parameters in the URL. With the load-compilation
action this can also be done interactively from within a compilation.
The compilation source can be static JSON stored on the property of an object or it can be dynamically generated by invoking a Lua advanced endpoint which returns the desired JSON.
Reading static compilation JSON
In this scenario a compilation, interactively created in WebStudio, is stored in the back-end from where it can be retrieved using the following action expression:
{
"type": "load-compilation",
"subType": "object-name", // Read compilation JSON from the provided object spec and property name
"history": { // Update the browser history
"type": "replaceState" // replace the current history entry with the target compilation's URL
},
"objspec": "/System/Core/WebStudio/Hello World Compilation Holder", // Source object on which the JSON is held.
"name": "hello-compilation" // Name of the property containing the JSON.
}
The history
property must be provided, with the type
set to either "replaceState" or "pushState" if the load action should also update the browser history. If the property is not present or the history.type
field is set to "none" then the browser history is left untouched. In this case, users will typically need to manually reload the source compilation if they want to return to the originating compilation.
Note: Use DataStudio to save the compilation JSON on a selected object. Typically a custom property is used for this purpose. (To apply the JSON in DataStudio, select the property input field and press Shift-F2 to bring up the JSON editor)
Post Processing
In many cases it is convenient to be able to add some dynamic elements to an otherwise static compilation.
Prerequisites
Consider the following example: Let’s say, we start with the "Hello World" static compilation shown below, which is stored in the system object "Hello World Compilation Holder" on the custom property "hello-compilation".
{
"version": "1",
"widgets": [
{
"type": "text",
"text": "Hello World",
"captionBar": false,
"layout": { "x": 32,"y": 2,"w": 21,"h": 6,"static": true},
"id": "txt1"
}
],
"options": {
"stacking": "none",
"numberOfColumns": 96
}
}
But now we want to modify the txt1.text
field to show the current release version of the core. This can be done dynamically when the compilation is loaded, by adding a script function called "webstudio-hello-compilation" to either the "WebStudio" folder or directly on the compilation holder.
Using DataStudio the script function can be added to the "WebStudio" folder for example.
The logic to set the widget text to the system version might look something like this:
return function(arg, req, hlp, comp)
-- Get the text widget from the compilation
local txt = comp.widgets[1]
-- Make sure this is the widget we expect and bail out if not.
if not txt or txt.id ~= "txt1" then return comp end
-- Change the text to show the system version
txt.text = "System version: " .. syslib.queryenvironment("MODULE_VERSION")
-- return the modified compilation
return comp
end
The mechanism by which this code is invoked is exactly the same as that used to customize the index page. Here is how it all works:
-
When the load-compilation action is invoked, WebStudio checks the subType to see if it equals "object-name" and whether the ctx execution context was provided. The ctx is required since only scripts that appear somewhere in the context path will be accessible to the loader.
-
If ctx is not set, an attempt is made to provide a default. The objspec property is inspected to determine whether it is assigned a path string which includes the "WebStudio" folder object. If so, this path is used as the execution context. Alternatively, the path to the WebStudio folder object is used when the objspec is equal to the string "WebStudio".
-
If the objspec points to an object outside of the WebStudio folder, the post processing is not invoked.
-
-
WebStudio then makes a call to a compilation loader Lua function in the system, passing in the collected parameters.
-
The loader retrieves the model JSON from the custom property on the referenced
objspec
. In this example the host object is called "Hello World Compilation Holder" and the property containing the static JSON is called "hello-compilation". -
Before the JSON is returned to WebStudio, the loader tries to execute a function called "webstudio-<property name>", which in this case resolves to "webstudio-hello-compilation".
It does this by accessing the function using a "require" statement:
-- Build up the name of the script to require, based on the name of the
-- custom property from which the compilation was loaded
local scriptName = ("webstudio-%s"):format(propertyName)
-- Get the function based on the name
local postProcessingScript = require( scriptName )
-- Call the function to do the post processing
if type(postProcessingScript) == "function" then
    postProcessingScript( args, req, hlp, compilation)
end
Note that the above code shows the gist of loader logic, not the actual implementation which has to deal with permission and error checking…
-
Since the function exists in the context of the holder object, it gets executed and the result is returned to the front end.
Generating Compilations Dynamically
For more flexibility, Lua functions can be implemented in the core to dynamically generate and return the compilation JSON. Using this approach, the parameters passed to the function provide dynamic context to the generation logic.
In the example, a Lua function called GetCompilation
is used to generate a simple view containing a single text widget whose text property is passed in as a parameter.
{
"type": "load-compilation",
"subType": "function", // Get the compilation from a advanced endpoint
"history": { // Update the browser history
"type": "pushState" // Add an entry to the browser's history stack
},
"ctx": "/system/Core/MyObject", // Function execution context
"lib": "CompMainLib", // Name of library containing the advanced endpoint
"func": "GetCompilation", // Name of the function
"farg": { // Optional function arguments.
"showCap": true,
"text": "This is a generated compilation"
}
}
To see this in action apply the Lua snippet shown as a custom library to a folder object in the IO model called "MyObject" and name the library: "CompMainLib". Invoking the above action from a button object’s onClick
, for example, will result in the content being replaced by the constructed compilation:
local lib = {}
function lib.GetCompilation( _, arg, req, hlp)
return { -- The Lua table returned by the advanced endpoint is automatically converted to JSON
version = "1",
widgets = {
{
["type"] = "text",
text = arg.text or "Hello World", -- Assign the passed in text to the widget
captionBar = arg.showCap or false, -- Optionally show the caption bar
layout = { x = 30, y = 9, w = 25, h = 10, static = false },
id = "txt1"
}
},
options = {
stacking = "none",
numberOfColumns = 96,
padding= { x = 0, y = 2},
spacing= { x = 2, y = 2}
}
}
end
return lib
Modify
With modify
the model of a widget can be changed. Rather than directly altering the underlying widget, this action applies changes by performing the following steps:
-
Copy the work model of the widget being modified into the
model
field of the message received by the action. There is a special case to consider, which arises when the message received by the modify action already contains the model of the widget being modified. In this case, the model is not copied into the message again; the one already present is used.
Consider the situation when a widget is modified in a pipeline action and also contains a
dataSource
in which a modify of "self" is used, typically to read values from the model for use in the dataSource logic. The sequence of execution would look something like this:
-
Property x of Widget y is modified in an action pipeline by assigning a new value to message.model.x using an expression like:
{
  "type": "modify",
  "id": "y",
  "set": [
    {
      "name": "model.x",
      "value": 123
    }
  ]
}
-
The change triggers a refresh of widget y, which in turn causes the
dataSource
to evaluate. -
The dataSource pipeline receives a message containing the model loaded and changed in the first modify action.
-
Any modify actions in the pipeline will use the provided model rather than retrieving it from the widget again. This ensures that changes made in the first modify, but not yet committed, are available to the later actions.
-
-
Merge operator fields from the
message.payload
into the action definition. (Have a look at this example to see the process in action). -
Apply one or more modification operators. These operators typically change values in the
message.model
. -
Update the work model of the widget with the content of
message.model
. This triggers update and possibly refresh lifecycle hooks depending on what was changed and the action settings. (Also see Using modify to collect model information)
If no modification operator is specified a mergeObjects
will be performed. This will merge / overwrite the properties from the message.payload
into the model rather than applying the message.model
. (See example below)
The id
field is used to point to the widget which needs to be modified. The value is normally a string with the ID of
the relevant widget.
{
"type": "modify",
"id": "TextWidget" // Pointing to a Text widget which has an ID of 'TextWidget`.
}
Should you need to reference a nested widget, which is to say a tab instance of a Tabs widget or a widget which resides within a tab, you need to use a route
expression.
{
"type": "modify",
"id": {
"route" : [
"MyTabs", // ID of the Tabs widget.
"Tab01", // ID of the single tab.
"TextWidget" // Pointing to a Text widget which has an ID of 'TextWidget` which.
]
}
}
The id field may be set to "self" resulting in the action being applied to the currently scoped widget.
Modification Operators
The following modification operators are supported, multiples of which can be specified in the same action. They are
evaluated in the order shown. In other words, if multiple modification operators are used, the set
modification is
done first, followed by unset
and so on.
-
set
: Add field to model or update field. -
unset
: Remove field from model. -
addToArray
: Adds an item to an array field. -
removeFromArray
: Removes one or more items from an array that matches the provided fields. -
filter
: Removes items from an array field based on a condition.
A transform
action will be performed under the hood with completeMsgObject
set to true. As stated earlier, the model
field is added to the input message by the modify action before the transform
action starts. It is read from the model of the widget being modified (designated by the id
field).
The message payload
, which is passed down from the originating widget or previous action in the pipeline is typically
used as the source for setting model fields.
Main signature of the modify
action is:
{
"type": "modify",
"id": "TextWidget",
"refresh": true, // Default is true. Set it to false to prevent a widget refresh.
"debug": false // If true, writes the model to the console log, after the modification.
}
Examples
A number of examples are presented below to illustrate how modify actions are used and what the underlying transform logic looks like.
Merge the widget model with the message payload
{
"type": "modify",
"id": "TextWidget"
}
Will result in a transform
action:
{
"type": "transform",
"completeMsgObject": true,
"aggregateOne": [
{
"$project": {
"model": {
"$mergeObjects": [
"$model",
"$payload"
]
},
"payload": "$payload"
}
}
]
}
To modify a specific sub document of the model, make use of the supported modification operators.
Example to update the font size:
{
"type": "modify",
"id": "TextWidget",
"set": [
{
"name": "model.options.style.fontSize",
"value": "50px"
},
{
"name": "model.options.style.fontFamily",
"value": "Courier New"
}
]
}
Will result in a transform
action:
{
"type": "transform",
"aggregateOne": [
{
"$set": {
"model.options.style.fontSize": "50px",
"model.options.style.fontFamily": "Courier New"
}
}
]
}
Example to update the style:
With this example the custom style will only include fontSize
.
{
"type": "modify",
"id": "TextWidget",
"set": [
{
"name": "model.options.style",
"value": {
"fontSize": "50px"
}
}
]
}
Will result in a transform
action:
{
"type": "transform",
"aggregateOne": [
{
"$set": {
"model.options.style": {
"fontSize": "50px"
}
}
}
]
}
Example to remove the style example:
This way the widget uses the default style.
{
"type": "modify",
"id": "TextWidget",
"unset": ["model.options.style"]
}
Will result in a transform
action:
{
"type": "transform",
"aggregateOne": [
{
"$project": {
"model.options.style": 0
}
}
]
}
Example to update an item by index:
This example changes the aggregate
of the first pen in a chart. Since the field notation is passed one-to-one to the Aggregation Pipeline, the index is zero-based.
{
"type": "modify",
"id": "chartWidget",
"set": [
{
"name": "model.chart.pens.0.aggregate",
"value": "AGG_TYPE_AVERAGE"
}
]
}
Will result in a transform
action:
{
"type": "transform",
"completeMsgObject": true,
"aggregateOne": [
{
"$set": {
"model.chart.pens.0.aggregate": "AGG_TYPE_AVERAGE"
}
}
]
}
Example to add an item to the data
array of a table widget:
{
"type": "modify",
"id": "fixedTableWidget",
"addToArray": [
{
"name": "model.data",
"value": {
"column1": 1,
"column2": 2
}
}
]
}
Will result in a transform
action:
{
"type": "transform",
"completeMsgObject": true,
"aggregateOne": [
{
"$set": {
"model.data": {
"$concatArrays": [
{
"$cond": [
{
"$isArray": [
"$model.data"
]
},
"$model.data",
[]
]
},
[
{
"column1": 1,
"column2": 2
}
]
]
}
}
}
]
}
Example to remove an item from an array by index:
Remove the third row from a fixed data table.
{
"type": "modify",
"id": "fixedTableWidget",
"removeFromArray": [
{
"name": "model.data",
"idx": 2
}
]
}
Will result in a transform
action:
{
"type": "transform",
"completeMsgObject": true,
"aggregateOne": [
{
"$project": {
"model.data.1": 0
}
}
]
}
The index provided in the removeFromArray action is zero-based, and must be a static number. This begs the question "How do I delete a row based on user input?". The answer is to do the modify in two steps:
-
Set the removeFromArray property in the message payload passed to the modify action.
-
Then do the modify, leaving out the removeFromArray property, which will be taken from the message payload.
The sketch below illustrates the approach.
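A sketch of the two-step approach; the index used here stands in for a value that would normally be determined dynamically, for example from user input earlier in the pipeline:
[
  { // Step 1: place the removeFromArray instruction in the message payload
    "type": "passthrough",
    "message": {
      "payload": {
        "removeFromArray": [
          {
            "name": "model.data",
            "idx": 2          // In practice this index would be set dynamically
          }
        ]
      }
    }
  },
  { // Step 2: modify without removeFromArray; the operator is merged in from the payload
    "type": "modify",
    "id": "fixedTableWidget"
  }
]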
Example to remove an item from an array based on a match expression:
Remove the rows from a fixed data table for which the name
field equals Inside Temperature
and the value
field is equal to 26.
{
"type": "modify",
"id": "fixedTableWidget",
"removeFromArray": [
{
"name": "model.data",
"item": {
"name": "Inside Temperature",
"value": 26
}
}
]
}
Alternative structure:
{
"type": "modify",
"id": "fixedTableWidget",
"removeFromArray": [
{
"name": "model.data",
"item": [
{
"name": "name",
"value": "Inside Temperature"
},
{
"name": "value",
"value": 26
}
]
}
]
}
Will result in a transform
action:
{
"type": "transform",
"completeMsgObject": true,
"aggregateOne": [
{
"$set": {
"model.data": {
"$filter": {
"input": "$model.data",
"as": "item",
"cond": {
"$or": [
{
"$ne": [
"$$item.name",
"Inside Temperature"
]
},
{
"$ne": [
"$$item.value",
26
]
}
]
}
}
}
}
}
]
}
Make sure the fields referenced in the item property exist in the array to be modified. Referring to model fields which are not present when deciding which rows to remove from an array can yield unexpected results.
To illustrate the point, consider the following:
Suppose we have a table with name
and value
fields as before and want to remove all rows where the value
is null. We create the modify
action with a typo in the item
property (referring to valueX rather than value):
{
"type": "modify",
"id": "fixedTableWidget",
"removeFromArray": [
{
"name": "model.data",
"item": {
"valueX": null
}
}
]
}
When this action executes, the transform filter will look up model.data.valueX
in each row, which always yields null since the field is not there, and compare it to the target value of null, resulting in all rows being deleted.
Example to filter items from an array:
Filters the rows from a fixed data table of which the column value
is greater than or equal to 20. The condition
is
an Aggregation Pipeline filter condition. Within the condition $$item
is reserved for referencing an item in the
array.
{
"type": "modify",
"id": "fixedTableWidget",
"filter": [
{
"name": "model.data",
"condition": {
"$gte" : [
"$$item.value", 20
]
}
}
]
}
Will result in a transform
action:
{
"type": "transform",
"completeMsgObject": true,
"aggregateOne": [
{
"$set": {
"model.data": {
"$filter": {
"input": "$model.data",
"as": "item",
"cond": {
"$gte" : [
"$$item.value", 20
]
}
}
}
}
}
]
}
Using modify
to collect model information
The modify
action can also be used to copy properties from the widget’s work model into the message payload, from where they are available to downstream actions. In doing so, the source widget is not changed at all, so the update and refresh lifecycle hooks are not triggered in this scenario.
The table below summarizes the behavior of the modify
action based on the setting of the refresh
flag, and whether the model was modified.
refresh value | Model altered | Lifecycle hooks triggered |
---|---|---|
undefined | false | No refresh or update
undefined | true | Refresh and update
false | false | No refresh and no update
false | true | Only update
true | false | Refresh and update
true | true | Refresh and update
Example:
Suppose we want to toggle the background color of a text widget between say green and transparent each time
it is clicked. To achieve this a switch action can be used which "looks at" the current background color
and sets the opposite one in onClick. The tricky part is getting access to the current color. The JSON snippet below
shows how this might be achieved:
{
"actions": {
"onClick": [
{ // Start by reading the current background color.
"type": "modify",
"id": "self",
"set": [
{ // Note how the model and payload are referenced to
// read the background color and save it in the message payload
"name": "payload",
"value": "$model.options.style.backgroundColor"
}
]
},
{ // Swap the colors in the message payload.
"type": "switch",
"case": [
{
"match": { // if the current bg is transparent
"payload": "transparent"
},
"action": { // then set it to green
"type": "passthrough",
"message": {
"payload": "green"
}
}
},
{
"match": {}, // otherwise
"action": { // set to back to transparent
"type": "passthrough",
"message": {
"payload": "transparent"
}
}
}
]
},
{ // Apply the payload color to the widget background.
"type": "modify",
"id": "self",
"set": [
{
"name": "model.options.style.backgroundColor",
"value": "$payload"
}
]
}
]
},
}
Notify
Displays a notification in the top right of WebStudio.
{
"type": "notify",
"title": "Copied",
"text": "📋 Copied to Clipboard",
"duration": 2500,
"transition": "slide",
"style": {},
"styleByTheme" : {}
}
name | description
---|---
title | This is an optional field. If it is not provided, a default is used.
text | Optional text to display under the title. The field can be provided either explicitly in the action or in the message payload.
duration | Optional. How long the notification is shown, in milliseconds.
transition | Optional. The transition animation used when the notification appears.
style / styleByTheme | Optional properties used to adjust the appearance of the notify bubble. As with most action properties, they can also be provided in the payload of the message passed to the notify action. If both the incoming message and the action model contain style settings, these are merged at run-time, with the ones from the message taking precedence over those from the model. In the sketch below, the text color will be "black" for both the light and dark themes, since it is defined in the message payload rather than in the action model.
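A sketch of that behaviour; the styleByTheme structure shown here (per-theme style objects keyed by theme name) is an assumption, as are the colors. Because the black color arrives via the message payload, it takes precedence over the theme specific values from the action model in both themes:
[
  {
    "type": "passthrough",
    "message": {
      "payload": {
        "text": "Saved",
        "style": { "color": "black" }   // Color provided in the message payload
      }
    }
  },
  {
    "type": "notify",
    "styleByTheme": {                   // Assumed structure: theme name -> style object
      "light": { "color": "blue" },
      "dark": { "color": "white" }
    }
  }
]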
Open-File
Loads a file from disk into the payload
of the action message.
{
"type": "open-file"
}
Example: Load table data from disk.
In this example a "load" button is added to a table widget’s toolbar. The action pipeline of the new button performs an open-file
action, prompting the user to select a file.
Once selected, the content of the file is assigned to the message payload
. Since the intent here is to use the loaded text to populate the table data, the payload
is decoded back to JSON and applied to the table data
property.
{
"type": "table",
"name": "Fixed Data",
"toolbars": {
"top": {
"tools": {
"load": { // Add a "Load" button
"type": "button",
"title": "Load",
"actions": {
"onClick": [ // Click Action Pipeline
{ // Open a file
"type": "open-file"
},
{ // Convert the content to JSON
"type": "convert",
"decode": "json"
},
{ // Apply the payload to the data field
"type": "modify",
"id": "self",
"set": [
{
"name": "model.data",
"value": "$payload"
}
]
}
]
}
}
},
"toolsOrder": {
"leftOrTop": [
"load"
]
}
}
},
"actions": {
"onSave": [
{
"type": "convert",
"encode": "json"
},
{
"type": "save-file",
"filename": "table.json"
}
]
}
// etc...
}
In the example, it is expected that the file being loaded will contain valid JSON, matching the structure of the data expected by the table.
Open Link
Opens a hypermedia link.
{
"type": "openLink",
"url": "https://www.lipsum.com",
"target": "_blank"
}
target
is optional and by default _blank
.
-
_self
Opens the document in the same window/tab as it was clicked. -
_blank
Opens the document in a new window or tab.
Passthrough
The input message will be passed through to the next action with the option to merge with a defined message.
{
"type": "passthrough",
"message": {
"someField": "This field will be merged besides input message topic and payload",
"payload": {
"attrib": "This field will be merged within the input message payload"
}
}
}
Prompt (Dialog)
This action causes a popup dialog (prompt) to be shown.
-
Content: The content/model of the prompt is declared in a message payload and must be a single widget. It is possible to indirectly show a complete compilation by using a tabs widget containing one or more tab instances. The appearance of a single compilation is achieved by defining one tab and hiding the indicator.
-
Dialog title: Text to display in the dialog title bar can be defined using the
captionBar.title
element of the widget.
{
  "type": "prompt",
  "width": "500px",
  "height": "500px",
  "message": {
    "payload": {
      "type": "text",
      "text": "Hello world",
      "captionBar": {
        "title": "Prompt Title"
      },
      "actions": {
        "onClick": {
          "type": "action",
          "name": "some-action"
        }
      }
    }
  }
}
Since the prompt repurposes the widget’s
captionBar
, any of thecaptionBar
properties can be set. If the widget-edit button {} is visible (see showDevTools), the work model of the prompt can be inspected, albeit not edited. -
Closing the prompt: The most direct way to close the prompt dialog is to click on the "X" icon in the title bar. Pressing the ESC key on the keyboard will also hide the prompt.
If you choose not to show the title bar, then users of your compilations may be left wondering how to close the prompt, since there is no visual indication that the ESC key can be used. In this situation you can invoke the
dismiss
action from an onClick handler as shown below.
{
  "type": "prompt",
  "message": {
    "payload": {
      "type": "text",
      "text": "Click or press ESC to Hide prompt",
      "captionBar": {
        "hidden": true
      },
      "options": {
        "style": {
          "textAlign": "center",
          "fontSize": "20px",
          "fontWeight": "bold"
        }
      },
      "actions": {
        "onClick": {
          "type": "dismiss"
        }
      }
    }
  }
}
-
Using Named Actions: Actions implemented inside the
prompt
can be made to manipulate widgets outside its scope by using named actions. When invoking named actions declared at root compilation or source widget level, the delegate action is usually required to ensure that the changes are applied in the appropriate context and directed at the right widget.
To make sense of this statement it might be helpful to refer to the prompt-02 example. It shows how to load a prompt from a click in the main compilation, containing a tabs widget with its indicator hidden. The text widgets in the prompt are clickable, resulting in a further popup and changes to other text widgets.
Named actions declared on the widget from which the prompt was invoked take precedence over named actions at compilation level.
Read
Reads the dynamic value of an object or the value of an object property by executing a Read endpoint request. The action model comes in three variants depending on how the path and options are specified:
Path and options provided at root level
This form of the read request returns only the value of the object or property referred to in the path, omitting the timestamp, id and quality.
{
"type": "read",
"path": "/System/Core/Examples/Demo Data/Process Data/FC4711",
"opt" : {
"q": "COREPATH"
}
}
Name | Description
---|---
path | Path to the object or property.
opt | Optional query details field, used to request information other than the object value. Supported arguments are described in the Read endpoint section in the Web API documentation.
Example read object’s dynamic value.
{
"type": "read",
"path": "/System/Core/Examples/Demo Data/Process Data/FC4711"
}
The result is returned as a single value assigned to the message payload. For example:
{
"payload": 49.87
}
Example read object’s property value.
{
"type": "read",
"path": "/System/Core/Examples/Demo Data/Process Data/FC4711.OPCEngUnit"
}
Example to read a Lua KPI table object.
{
"type": "read",
"path": "/System/Core/Examples/Demo Data/Process Data/FC4711",
"opt": {
"STARTTIME": "2020-10-13T00:00:00.000Z",
"ENDTIME": "2020-10-14T00:00:00.000Z",
"TIMESTAMP": "1602633600"
}
}
Example of the query mechanism supported by the Read Web API endpoint.
{
"type": "read",
"path": "/System/Core/Examples/Demo Data/Process Data/DC4711",
"opt": {
"q": "COREPATH"
}
}
Path and options provided in the item property
Read all VQT properties for a single entity.
{
"type": "read",
"item": {
"p": "/System/Core/Examples/Demo Data/Process Data/FC4711",
"opt": {} // options
}
}
Name | Description
---|---
item | Single item for which to read the VQT values.
p | Path to the object or property.
opt | Optional query details field, used to request information other than the object value. Supported arguments are described in the Read endpoint section in the Web API documentation.
The output from the read looks something like this:
{
"i": 281474980511744,
"p": "/System/Core/Examples/Demo Data/Process Data/FC4711",
"q": 0,
"t": "2022-04-27T14:07:06.781Z",
"v": 36.58641815185547
}
Path and options provided in the items property
Read all VQT properties for a list of entities.
{
"type": "read",
"items": [
{
"p": "/System/Core/Examples/Demo Data/Process Data/FC4711",
"opt": {} // options
},
{
"p": "/System/Core/Examples/Demo Data/Process Data/DC4711",
"opt": {} // options
}
]
}
Name | Description
---|---
items | Array of items for which to read the VQT values.
p | Path to the object or property.
opt | Optional query details field, used to request information other than the object value. Supported arguments are described in the Read endpoint section in the Web API documentation.
The output from the read looks something like this:
[
{
"i": 281474980511744,
"p": "/System/Core/Examples/Demo Data/Process Data/FC4711",
"q": 0,
"t": "2022-04-27T14:12:45.781Z",
"v": 43.22888946533203
},
{
"i": 281474980118528,
"p": "/System/Core/Examples/Demo Data/Process Data/DC4711",
"q": 0,
"t": "2022-04-27T14:12:24.781Z",
"v": 10.687927246093746
}
]
Read-historical-data
Returns aggregated time series data for one or more system objects. The action properties largely match those described in the Lua API documentation.
{
"type": "read-historical-data",
"query": {
"aggregate": "AGG_TYPE_INTERPOLATIVE",
"fields_as_lowercase": true,
"identifier": "/System/Core/Examples/Demo Data/Process Data/DC4711",
"starttime": "2022-05-24T08:00:00.000Z",
"endtime": "2022-05-24T08:00:30.000Z",
"numberOfIntervals": 10,
"treat_uncertain_as_bad": true
}
}
Query fields
name | description
---|---
aggregate | Optional parameter to indicate how data should be aggregated. The default value is "AGG_TYPE_INTERPOLATIVE". Refer to the Aggregate Coding Group documentation for a description of all available aggregation types.
data_store | Optional store spec of a data store from which the time-series data should be retrieved.
fields_as_lowercase | Optional boolean to control the character case used for the VQT field names in the returned dataset. By default the keys are in uppercase, but can be forced to be lowercase by assigning true to this parameter.
identifier | The path of the object to read history for. To read the history for multiple tags at once, use the items property instead (see the example below).
numberOfIntervals | Number of equally spaced intervals to divide the query time range (starttime to endtime) into.
 | This parameter affects the calculated status of an aggregated data point. The value chosen determines what percentage of the data points in the aggregation interval need to be bad for the status of the aggregate to be set to bad as well. The default is 100.
 | This parameter affects the calculated status of an aggregated data point. The value chosen determines what percentage of the data points in the aggregation interval need to be good for the status of the aggregate to be set to good as well. The default is 100.
res_as_item_values | Optional property indicating how the returned data set should be structured.
 | Indicates how the server interpolates data when no boundary value exists (i.e. extrapolating into the future from the last known value). This property is optional and is set to false by default.
starttime / endtime | Time range for the query. The values may be expressed as epoch numbers in milliseconds or as ISO strings. Relative timestamps are not supported.
time_zone | Controls the time zone used for the returned timestamps. Currently only a limited set of options can be selected; the examples below use "UTC".
treat_uncertain_as_bad | Indicates that data points in the raw data with a status code of "Uncertain" are treated as "Bad" values by the aggregation algorithm. The value is optional and defaults to "False".
Example
In this example the history data for two tags are retrieved, each using their own aggregation type.
{
"dataSource": {
"type": "read-historical-data",
"query": {
"items": [
{
"p": "/System/Core/Examples/Demo Data/Process Data/DC4711",
"aggregate": "AGG_TYPE_INTERPOLATIVE"
},
{
"p": "/System/Core/Examples/Demo Data/Process Data/DC666",
"aggregate": "AGG_TYPE_AVERAGE"
}
],
"starttime": "2022-05-19T12:30:00.000Z",
"endtime": "2022-05-19T13:30:00.000",
"fields_as_lowercase": true,
"numberOfIntervals": 3,
"res_as_item_values": true
}
}
}
The dataset returned from this call looks like this:
{
"payload": [
{
"aggregate": "AGG_TYPE_INTERPOLATIVE",
"intervals": [
{
"q": 17179869184,
"t": 1652963400000,
"v": 10.89651489257811
},
{
"q": 17179869184,
"t": 1652964600000,
"v": 9.790826416015614
},
{
"q": 17179869184,
"t": 1652965800000,
"v": 9.618603515624994
}
],
"p": "/System/Core/Examples/Demo Data/Process Data/DC4711"
},
{
"aggregate": "AGG_TYPE_AVERAGE",
"intervals": [
{
"q": 8589934592,
"t": 1652963400000,
"v": 11.617119598388705
},
{
"q": 8589934592,
"t": 1652964600000,
"v": 11.729839935302772
},
{
"q": 8589934592,
"t": 1652965800000,
"v": 11.395158386230502
}
],
"p": "/System/Core/Examples/Demo Data/Process Data/DC666"
}
]
}
Read-raw-historical-data
Read raw, that is un-aggregated, historical data for one or more objects. The action properties largely match those described in the Web API documentation. Also refer to the Lua API documentation.
{
"type": "read-raw-historical-data",
"query": {
"modified_data_mode": true,
"identifier": "/System/Core/ModHistDataTest2",
"starttime": "2022-05-19T12:30:00.000Z",
"endtime": "2022-05-19T13:30:00.000",
"bounds": false,
"data_store": "",
"fields": [],
"filter": {},
"queryCountLimit": 0,
"time_zone": "UTC"
}
}
Query fields
name | description
---|---
bounds | Indicates whether additional bounding values (at the beginning or end of the time range) should be returned. Bounding values are data points which are not necessarily part of the original data set, but may be used to interpolate the actual data for calculations.
data_store | Optional store spec of a data store from which the time-series data should be retrieved.
fields | String array of fields to include in the response.
filter | Used to restrict the data returned by the query. Refer to the example below and the Web API documentation for more details.
identifier | The path of the object to read history for.
modified_data_mode | Indicates what modification information to return; see the examples below.
starttime / endtime | Time range for the query. The values may be expressed as epoch numbers in milliseconds or as ISO strings. Relative timestamps are not supported.
time_zone | Controls the time zone used for the returned timestamps. Currently only a limited set of options can be selected; the examples below use "UTC".
Example: modified_data_mode
= false
Get the current data but indicate which points have been changed.
{
"dataSource": {
"type": "read-historical-data",
"query": {
"modified_data_mode": false, // Flag changed data, but return latest values
"identifier": "/System/Core/ModHistDataTest2",
"starttime": "2022-05-19T12:30:00.000Z",
"endtime": "2022-05-19T13:30:00.000Z",
"time_zone": "UTC" // Show timestamps in string format.
}
}
}
Returned data:
{
"payload": {
"p": "/System/Core/ModHistDataTest2",
"q": [
1032, // The first two values have been updated.
1032,
0
],
"t": [
"2022-05-19T12:45:00.000Z",
"2022-05-19T13:00:00.000Z",
"2022-05-19T13:15:00.000Z"
],
"v": [
158.67,
165.16,
144
]
}
}
Example: modified_data_mode
= true
Now the returned data-set only contains the points that have changed.
{
"payload": {
"p": "/System/Core/ModHistDataTest2",
"q": [
0, // Note the change bits are no longer set
0,
0,
0
],
"t": [
"2022-05-19T12:45:00.000Z", // The first 2 entries refer to the same data-point
"2022-05-19T12:45:00.000Z",
"2022-05-19T13:00:00.000Z",
"2022-05-19T13:00:00.000Z"
],
"v": [
158.67,
160.38,
165.16,
163.16
]
}
}
Example: Filter
In this example the data is filtered to return only data points that have values between 150 and 200
{
"dataSource": {
"type": "read-raw-historical-data",
"query": {
"_modified_data_mode": false,
"identifier": "/System/Core/ModHistDataTest2",
"starttime": "2022-05-19T12:30:00.000Z",
"endtime": "2022-05-19T13:30:00.000Z",
"time_zone": "UTC",
"filter": {
"$match": {
"$and": [
{
"v": {
"$gt": 150
}
},
{
"v": {
"$lt": 200
}
}
]
}
}
}
}
}
Read-write
Can be used for widgets which support reading and writing.
{
"dataSource": {
"type": "read-write",
"path": "/System/Core/Examples/WebStudio/Tables/SalesOrders.TableData"
}
}
Refresh
Refresh a widget. No message will be sent to the target widget. In case you want to send a message to a widget, use a
send
action with the message topic
set to refresh
.
{
"type": "refresh",
"id": "Place the ID of the widget here"
}
The id
field can have the value self
should the action pipeline need to refresh its own widget. It can also be a
route expression allowing widgets embedded in nested tabs compilations to be refreshed.
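For example, a sketch of refreshing a widget nested inside a tab (all IDs are placeholders):
{
  "type": "refresh",
  "id": {
    "route": [
      "tabs01",    // ID of the Tabs widget
      "tab02",     // ID of the tab
      "chart01"    // ID of the widget to refresh
    ]
  }
}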
Save-File
This action is currently in preview. It is used to save data from the message payload to file.
{
"type": "save-file",
"filename": "data.txt", // name of the output file. saved to the download folder
"options": { // Optional mime type
"type": "application/json;charset=utf-8"
}
}
The action inspects the incoming message, locating the data to be saved as follows:
- If the message payload is a string, this string is saved.
- If the payload is an object which does not contain a content field, the payload object as a whole is saved. JSON payloads must first be encoded to strings before being saved, as shown in the example below.
- If the payload is an object with a content field, the value of that field is saved to the file rather than the whole payload (see the sketch below).
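As a small illustration of the last case, the following pipeline sketch (file name and payload fields are made up) would save only the content string and ignore the rest of the payload:
[
  {
    "type": "passthrough",
    "message": {
      "payload": {
        "content": "Hello world", // only this string ends up in the file
        "other": "this field is ignored by save-file"
      }
    }
  },
  {
    "type": "save-file",
    "filename": "hello.txt" // hypothetical file name
  }
]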
Example: Save the data of a table widget to file:
To see this work, add the following JSON to a table widget, starting from the "Fixed Data" template. When the save toolbar button is pressed, the data content of the table will be saved to a file called table-data.json in the browser download folder.
{
"actions": {
"onSave": [
{
"type": "convert",
"encode": "json" // Encode in incoming payload data to a string
},
{
"type": "save-file",
"filename": "table-data.json"
}
]
}
}
Send
Widget data exchange
Sending a message from one widget to another can be done using the send action. This action does not change the output message: the output message is the same as the input message.
{
"type": "send",
"to": "Place the ID of the widget here"
}
The value of the to field can be set to self in case the pipeline needs to send data to its own widget. The update behavior differs per widget.
Using a route to point to a widget within a tab
{
"type": "send",
"to": {
"route" : [
"MyTabs", // ID of the Tabs widget.
"Tab01", // ID of the single tab.
"TextWidget" // Pointing to a Text widget which has an ID of 'TextWidget` which.
]
}
}
Supported topics:
- refresh: the recipient widget will perform a refresh. (default)
- update: the receiving widget will perform an update. This will bypass the data source action.
Refresh Topic
The recipient widget will perform a refresh. It will execute the data source action (pipeline) with the provided message payload. The refresh life cycle will be performed, including a fetch if a data source is present.
Example: send a message to another widget to make it refresh itself:
{
"type" : "send",
"to" : "Place the ID of the widget here",
"message": {
"topic": "refresh" // Can be omitted because it is default.
"payload": {} // Can be any type of value. Typically an object is used.
}
}
Update Topic
The recipient widget will perform an update. It only updates its known properties with the provided message payload. The update life cycle will be performed.
{
"type" : "send",
"to" : "Place the ID of the widget here",
"message": {
"topic": "update",
"payload": {} // Can be any type of value. Typically an object is used.
}
}
Subscribe
Subscribe to object data changes in the system. Typically used in dataSource configurations. If necessary, use the willUpdate action hook to transform the data, as sketched after the basic example below.
{
"type": "subscribe",
"path": "/System/Core/Examples/Variable"
}
Switch
Execute actions based on rules. A rule is checked by performing a queryOne transformation. If the result of the queryOne transformation is something other than null, the action(s) defined in the case statement will be executed. If one rule matches, its action will be executed and, unless checkAll is set to true, the subsequent rules will not be tested.
When checkAll is set to true, the 'initial' input message of the switch will be passed to each action pipeline of the matched rules. The output message of the last executed action pipeline will be the output message of this switch action. When no rule matches, the output of the switch action is the same as the input.
To declare a default action which applies when none of the rules match, add an extra rule at the end of the case array with an empty match condition.
{
"type": "switch",
"checkAll": false,
"case": [
{
"match" : { // test if payload.temp == 10
"temp": 10
},
"action" : { // Can be a single action or action pipeline.
"type": "action",
"name": "doSomething"
}
},
{
"match" : { // test if payload.temp >= 20
"temp": {
"$gte": 20
}
},
"action" : [ // Can be a single action or action pipeline.
{
"type": "action",
"name": "doSomethingFirst"
},
{
"type": "action",
"name": "doSomethingExtra"
}
]
},
{
"match" : {}, // Default action
"action" : [] // Can be a single action or action pipeline.
}
]
}
In its simplest form, a match expression tests whether the named payload field equals the provided value. An explicit comparison operator can also be used, as illustrated above.
Even more complex match conditions can be formulated using the MongoDB $expr operator. For example:
{
"match" : {
"$expr": {
"$lt": [ // check if payload.value < payload.maxValue
"$value",
"$maxValue"
]
}
}, // ...
}
TPM-OEE
This action reads table models relating to OEE monitoring from the back end. It is typically used in the dataSource of either a timeperiodtable or a table widget. Depending on the value of the subject property, the action returns widget schema, actions and data elements for specific OEE tables, allowing data records to be edited in WebStudio.
{
"type": "tpm-oee",
"subject": "big-table_stops",
"path": "/Enterprise/Site/Area-S1-A2/Cell-S1-A2-C1/OEE/Production/Equipment Stops"
}
Name | Description |
---|---|
path | Path to an OEE object. The specific object referred to depends on the selected subject. |
subject | The selected subject determines which OEE table model is returned, for example "big-table_stops" or "big-table_production-runs". |
Query parameters
Listed in the table below are the additional function arguments that can be provided with the tpm-oee
action when the subject is either "big-table_production-runs" or "big-table_stops". While these can be assigned directly in the action farg
property, it is more common to add them to the message payload received by the action (see example). Any payload properties that match farg
parameters will be read from the incoming message.
{
"type": "tpm-oee",
"subject": "big-table_stops",
"path": "/Enterprise/Site/Area/Cell/OEE/Production/Equipment Stops",
"farg": {
"StartUTC": "2022-07-01T10:21:34.747Z", // override starttime and endtime
"EndUTC": "2022-07-01T11:21:34.747Z"
}
}
Name | Description |
---|---|
StartUTC, EndUTC | ISO time strings denoting the time period for which to retrieve big-table records. They override the default starttime and endtime values carried in the message received by the action. |
pipeline | When provided, replaces the default aggregation expression used to query big-table records (see the custom aggregation pipeline example below). |
Example custom aggregation pipeline
Let’s say we want to show a timeperiodtable listing all the recorded OEE stops for a given time interval which are as yet "un-classified", in that they don’t have a proper "reason" code assigned to them (their reason code is 0). We also want to exclude any stops shorter than a certain duration, say 4 seconds, which are treated as micro-stops and for which no specific reason code is required. This is achieved by configuring the widget dataSource as shown:
{
"dataSource": [
{
// 1. set up the query pipeline by adding it to the incoming message
"type": "passthrough",
"message": {
"payload": {
"pipeline": [
{
"$match": {
"$and": [
{
"$or": [
{
"StartUTC": {
"$gte": {
// placeholder for actual start-time
"$date": "DUMMY_STARTTIME"
}
}
},
{
"EndUTC": {
"$gte": {
"$date": "DUMMY_STARTTIME"
}
}
},
{
"EndUTC": 0
}
]
},
{
"StartUTC": {
"$lt": {
"$date": "DUMMY_ENDTIME"
}
}
},
{
"Reason": {
"$eq": 0
}
},
{
"Duration": {
"$gt": 4000
}
}
]
}
},
{
"$sort": {
"StartUTC": -1
}
}
]
}
}
},
{
// 2. replace the values for start and end time in the expression.
"type": "transform",
"aggregateOne": [
{
"$set": {
"pipeline.0.$match.$and.0.$or.0.StartUTC.$gte.$date": "$starttime",
"pipeline.0.$match.$and.0.$or.1.EndUTC.$gte.$date": "$starttime",
"pipeline.0.$match.$and.1.StartUTC.$lt.$date": "$endtime"
}
}
]
},
{
// 3. Invoke the action to get the table data and schema. The args will be read from the message payload
"type": "tpm-oee",
"subject": "big-table_stops",
"path": "/Enterprise/Site/Area/Cell/OEE/Production/Equipment Stops"
}
]
}
Transform
Transformation of data can be performed by means of the MongoDB Aggregation framework. The input data is normally the value of the payload field of the input message. In case the whole message object needs to be available to the transform logic, you can set completeMsgObject to true.
The aggregation pipeline often returns an array of one or more elements. In most cases, however, pipeline actions which ingest the output of the transformation expect a single object. As a convenience, the WebStudio-specific aggregateOne option can be used instead of aggregate. It returns the first element of the resulting transformation.
Besides aggregate and aggregateOne, query and queryOne can also be used in scenarios where filtering is required. As expected, queryOne returns the first matching object. Refer to the MongoDB documentation for details on the aggregation framework.
Transform Example 01
This example transforms the data to an object with a different key.
- Input message payload:
{
"name": "Company A",
"location": "Eindhoven"
}
- Transform logic:
{
"type": "transform",
"aggregateOne": [
{
"$project": {
"company": "$name"
}
}
]
}
- Output message payload:
{
"company": "Company A"
}
Example to search for the object with 'location' value 'Cologne':
- Input message payload:
[
{
"name": "Company A",
"location": "Cologne"
},
{
"name": "Company B",
"location": "Eindhoven"
}
]
- Transform logic:
{
"type": "transform",
"aggregate": [
{
"$match": {
"location": "Cologne"
}
}
]
}
- Output message payload:
[
{
"name": "Company A",
"location": "Cologne"
}
]
Note: Even though only one element was matched, the returned message payload is still an array. If aggregateOne were used instead of aggregate, the output would look like this:
{
"name": "Company A",
"location": "Cologne"
}
Transform Example 02
This example uses a query to search for objects with a 'location' value of 'Cologne'.
- Input message payload:
[
{
"name": "Company A",
"location": "Cologne"
},
{
"name": "Company B",
"location": "Eindhoven"
},
{
"name": "Company C",
"location": "Cologne"
}
]
- Transform logic:
{
"type": "transform",
"query": {
"location": "Cologne"
}
}
- Output message payload:
[
{
"name": "Company A",
"location": "Cologne"
},
{
"name": "Company C",
"location": "Cologne"
}
]
Transform Example 03
This example uses queryOne to search for the object with 'location' value 'Cologne'. Like aggregateOne, it returns only the first match.
- Input message payload:
[
{
"name": "Company A",
"location": "Cologne"
},
{
"name": "Company B",
"location": "Eindhoven"
},
{
"name": "Company C",
"location": "Cologne"
}
]
- Transform logic:
{
"type": "transform",
"queryOne": {
"location": "Cologne"
}
}
- Output message payload:
{
"name": "Company A",
"location": "Cologne"
}
Wait
Waits before the next action in the pipeline is executed. The duration is specified in milliseconds.
{
"type": "wait",
"duration": 1000
}
Write
Writes a value to an object in the system. The write action inspects the message payload passed to it, looking for the variables v, item or items, which it uses to update the object identified by the path or p property, depending on the variant of the action used.
Fixed path with "v" read from message payload
{
"type": "write",
"path": "/System/Core/Examples/Variable"
}
Name | Description |
---|---|
path | Path to the object or property to update. |
v | Value to write. The value is read from the payload of the incoming message. It can be a property of the payload, or the value can be assigned directly to the payload itself. |
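A minimal sketch of the value travelling in the message (the value itself is arbitrary): a passthrough places v in the payload and the write action then picks it up:
[
  {
    "type": "passthrough",
    "message": {
      "payload": {
        "v": 42 // value picked up by the write action below
      }
    }
  },
  {
    "type": "write",
    "path": "/System/Core/Examples/Variable"
  }
]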
Write parameters are provided in the "item" property
This version of the write action can be used to update any of the VQT properties of an object. Only the action type is mandatory. The item can be statically set or read from the message payload.
{
"type": "write",
"item": {
"p": "/System/Core/Examples/Variable",
"v": 215.24,
"t": "2022-04-27T15:25:10.331Z",
"q": 0 // Quality is GOOD
}
}
Name | Description |
---|---|
item | Details of the item to be written. The property is usually not explicitly configured but read from the message payload. |
item.p | Path to the object or property to update. |
item.v | Value to write. |
item.t | Timestamp of the value, as an ISO string (see the example above). |
item.q | The OPC quality value. It can be omitted in most cases, in which case 0 (GOOD) is assumed. |
If the write was successful, the message is updated with the values written.
{
"payload": {
"i": 281474983198720,
"p": "/System/Core/Examples/Variable",
"v": 215.24,
"q": 0,
"t": "2022-04-27T15:25:10.331Z",
}
}
Write parameters are provided in the "items" property
This variant of the write action is very similar to the previous one, except that instead of a single item, an array of items is provided. The items can be statically configured or read from the message payload, as sketched after the example below.
{
"type": "write",
"items": [
{
"p": "/System/Core/Examples/Variable",
"v": 215.24,
"t": "2022-04-27T15:25:10.331Z",
"q": 0 // Quality is GOOD
},
{
"p": "/System/Core/Examples/Variable",
"v": 215.24,
"t": "2022-04-27T15:26:13.331Z",
"q": 0 // Quality is GOOD
}
]
}
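A sketch of the read-from-payload case (values are arbitrary): a passthrough places an items array in the payload and the write action consumes it:
[
  {
    "type": "passthrough",
    "message": {
      "payload": {
        "items": [ // array consumed by the write action below
          {
            "p": "/System/Core/Examples/Variable",
            "v": 101.5
          },
          {
            "p": "/System/Core/Examples/Variable",
            "v": 102.3
          }
        ]
      }
    }
  },
  {
    "type": "write"
  }
]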
Nested Action Arrays
As stated in the definition section, action pipelines can be made up of arrays of actions which execute one after the other.
The example below shows a simple onClick handler for a button widget, in which a message with a payload is initialized and its content written to the browser console.
{
"actions": {
"onClick": [
{
"type": "passthrough",
"message": {
"payload": {
"prop1": 123
}
}
},
{
"type": "consoleLog"
}
]
}
}
In addition, pipelines can also contain nested arrays of actions as shown below.
{
"actions": {
"onClick": [
{
"type": "passthrough",
"message": {
"payload": {
"prop1": 123
}
}
},
[ // Nested array with one element.
{
"type": "transform",
"aggregateOne": [
{
"$set": { // Add another property to the payload
"prop2": "Hello world"
}
}
]
}
],
{
"type": "consoleLog"
}
]
}
}
Ordinarily, there would be little point in adding a nested array as shown. The result from executing this pipeline is exactly the same as if the actions were all defined at the same level. The message returned from the pipeline looks like this:
{
"payload": {
"prop1": 123,
"prop2": "Hello world"
}
}
Where things get interesting is when we add another array immediately after the first one. For example:
{
"actions": {
"onClick": [
{
"type": "passthrough",
"message": {
"payload": {
"prop1": 123
}
}
},
[ // Nested array with one element.
{
"type": "transform",
"aggregateOne": [
{
"$set": { // Add another property to the payload
"prop2": "Hello world"
}
}
]
}
],
[ // Second action array immediately following the first
{
"type": "transform",
"aggregateOne": [
{
"$set": { // Add prop3 to the payload
"prop3": "Another property"
}
}
]
}
],
{
"type": "consoleLog"
}
]
}
}
Ordinarily, you would expect the input to the second array to be the output message from the first. What happens instead is that both arrays receive the same message as the first array. In addition, only the output from the last array in the sequence is passed on to the next action in the outer pipeline.
The message written to the browser console in this example therefore contains only prop1
and prop3
.
{
"payload": {
"prop1": 123,
"prop3": "Another property"
}
}
At this point you might be wondering what the point of all this is. The answer is that it provides a way of executing multiple action sequences that all start with the same input message. Within each sequence, the message can be modified as much as required without affecting the other sequences.
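For example, the following sketch (widget IDs and payload fields are made up for illustration) lets two sequences receive the same click payload: the first forwards it unchanged to one widget, while the second adds a field before sending it to another:
{
  "actions": {
    "onClick": [
      {
        "type": "passthrough",
        "message": {
          "payload": {
            "selection": "A1" // value shared by both sequences
          }
        }
      },
      [ // first sequence: forward the selection as-is
        {
          "type": "send",
          "to": "ChartWidget" // hypothetical widget ID
        }
      ],
      [ // second sequence: add a field, then send elsewhere
        {
          "type": "transform",
          "aggregateOne": [
            {
              "$set": {
                "note": "sent from the button" // extra field only this sequence sees
              }
            }
          ]
        },
        {
          "type": "send",
          "to": "TableWidget" // hypothetical widget ID
        }
      ]
    ]
  }
}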