Generic Time Series Buffer

The Generic Time Series Buffer object is used to receive VQT data at Core or Connector level from any other component of a distributed system. The object enables buffered, delivery-ensured data transport from one component to another using the Store and Forward mechanism, for example Connector to Connector or Master Core to Local Core.

The Generic Time Series Buffer works similarly to the Custom Time Series Data Store, except that the destination for the data is a buffer that can be read by a Lua script instead of MongoDB. Data received by the Generic Time Series Buffer is buffered in the memory of the component hosting it (Connector, Core, etc.) and made available to a Lua script for consumption. The data is only deleted from the buffer when the Lua script acknowledges it.

It can be used to push VQT data to external interfaces such as Kafka and MQTT.

Use Cases

  • Transfer data between components without having to first archive data in MongoDB

  • Custom processing of VQT data using Lua scripting

  • Forward VQT data to external interfaces - Kafka, MQTT

Quick Configuration

  1. Select a parent object (Core, Connector, DataStoreGroup), right-click, and select Admin > New > Data Stores > Generic Time Series Buffer from the context menu to open the Create Object wizard.

  2. Give the object a unique name and enter a description if necessary. The Lua processing script will be added later.

  3. Select the Disabled SaF Mode. The default is "Discard", which means that any data in the SaF buffer will be purged if the generic buffer is disabled. Please see the property page for more details.

  4. Click on Processing in the side bar to open the processing options:

    • The Parallel Processing option allows multiple threads to process the received data. When selected, a separate thread for invoking the Lua script is used for each source component delivering data to the Generic Time Series Buffer. If all received data originates from the same component (same source Connector for example), this setting has no effect.

      All threads execute the Lua script for data processing independently of each other; however, all of them stop working if one thread hits a runtime error.
    • The Suppress Acknowledgements option enables you to receive data for processing without subsequently deleting it from the source buffer. This can be used during testing to prevent accidental acknowledgement, and therefore deletion, of source data while implementing the Lua script for data consumption.

      Note that if you leave this option on, data will never be removed from the source component buffer!
    • The Retry Latency value specifies the number of seconds after which an object in an error state tries to resume processing data.

  5. Click on Limits in the side bar to define when, or under which conditions, received data is made available for consumption:

    • The Latency value defines a limit in seconds after which the processing Lua script is automatically invoked, independently of the amount of data received.

    • Minimum Buffer Size and Minimum Buffer Items define limits on the received data, in kilobytes and number of data points respectively, that trigger the invocation of the processing Lua script when reached.

    • Maximum Buffer Size and Maximum Buffer Items are upper limits, in kilobytes and number of data points, on how much data the Generic Time Series Buffer holds in the memory of the hosting component for processing. If one of these limits is reached, no further data is sent from the source component until some of the buffered data has been processed.

  6. Click Create to create the object in the I/O model tree.

Configure Lua Processing Script

  1. Select the Generic Time Series Buffer and open the Lua Script Body script editor from the Object Properties panel.

    For data received, the same concept applies as when data is stored in MongoDB in a time series store: data is buffered in the source component until the consumer acknowledges successful processing. While this happens automatically within the system for data being stored in MongoDB, the Lua script of a Generic Time Series Buffer has to do it explicitly. The following script provides an example of this and can be used for the purposes of this configuration.
  2. Enter the following script into the Lua Script Body editor (the script is annotated to highlight key parts). Click OK to save the script:

    --dynamic iterator (1)
    local iter = ...
    --keep track of the last processed saf_id
    local last_saf_id
    
    --object id of the source component
    local comp_id = iter.compoid (3)
    
    --iterate over the currently available data to consume (2)
    for saf_id, prp_id, v, q, t, d in iter() do
        --saf_id: id of data point from source, required for acknowledgement
        --prp_id: id of ItemValue property of source, required for acknowledgement
        --v:      value of data point
        --q:      inmation quality of data point
        --t:      timestamp of data point as UTC posix msec
        --d:      code of source data type (syslib.model.codes.VariantTypes)
    
        --track saf_id; for performance we do not acknowledge every single point
        last_saf_id = saf_id
    
        --do something with the data, here store in a receiver item (4)
        syslib.setvalue("/System/Core/Online Connector/Generic Buffers/Time Series/Data Receiver", v, q, t)
        --do something with metadata, here store in separate items for debugging (5)
        syslib.setvalue("/System/Core/Online Connector/Generic Buffers/Time Series/Data Receiver/last_saf_id", last_saf_id)
        syslib.setvalue("/System/Core/Online Connector/Generic Buffers/Time Series/Data Receiver/prp_id", prp_id)
        syslib.setvalue("/System/Core/Online Connector/Generic Buffers/Time Series/Data Receiver/comp_id", comp_id)
    end
    
    --finally acknowledge the last saf_id and delete the data at the source (6)
    if last_saf_id then
        iter:deleteupto(last_saf_id)
    end
    1 The Lua script body receives an iterator that can be used to process the list of data points handed over to it on execution of the script. The iterator supports the following methods and fields:
    • (): the call operator returns a Lua iterator which can be used in for loops to iterate over the currently available data

    • :deleteupto(saf_id): acknowledge (mark for deletion) all SaF data up to and including saf_id (which is the first value returned when iterating over items in the buffer)

    • :ack(saf_id): same as deleteupto(saf_id)

    • .length: the number of items available in the iterator

    • .size: the amount of bytes used by the items available in the iterator

    • .consumeraddr: an integer identifying this buffer object as a consumer of Store and Forward data

    • .senderoid: the inmation object id of the object sending the data

    • .senderaddr: an integer identifying the sender of the data

    • .sendercompoid: the inmation object id of the Core or Connector component the sending object belongs to

    • .compoid: same as sendercompoid
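
    Taken together, these members let a script inspect the buffer before iterating. A hedged sketch of such a guard follows; the "Buffer Stats" item paths are placeholders invented for illustration, not part of the product:

    ```lua
    --dynamic iterator, as in the processing script
    local iter = ...

    --skip this invocation if the latency timer fired with an empty buffer
    if iter.length == 0 then
        return
    end

    --in parallel processing scenarios, record which source component this
    --thread serves and the current buffer pressure (placeholder item paths)
    syslib.setvalue("/System/Core/Examples/Buffer Stats/source_component", iter.sendercompoid)
    syslib.setvalue("/System/Core/Examples/Buffer Stats/pending_items", iter.length)
    syslib.setvalue("/System/Core/Examples/Buffer Stats/pending_bytes", iter.size)
    ```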

    2 The list can be processed in a for loop by calling the iterator, which returns each data point with:
    • saf_id: the data point's unique ID from the source component that sent the data. This is used for the acknowledgement of successful processing.

    • prp_id: the inmation ID of the ItemValue property of the object the data point originates from. This is used for the acknowledgement of successful processing.

    • v,q,t: the value, quality, and timestamp of the data point

    • d: the source’s original data type code as defined in the VariantType coding group.

    3 In addition to the data points’ information, the iterator also contains information about the source component whose data is currently being processed, in the form of its inmation ID. In parallel processing scenarios this can be used to identify which source component the current thread is processing data for.
    4 In the above example, data is written to receiver items for demo purposes.
    5 Metadata is stored in separate items for debugging. In real-life use cases, data could for example be sent to a DataSink object for forwarding to cloud systems via MQTT.
    6 After the data is processed, the script calls the iterator’s acknowledgement function deleteupto (equivalent to ack). Although only the ID of the last data point is passed, the acknowledgement implicitly also contains the information about the source component and the source object (ref. comp_id and prp_id above). With these three IDs the system knows which source component can be informed that which data points can be released from its Store and Forward buffer, as they were successfully processed.
If a source object uses a Data Store Set with multiple stores/buffers, all of these have to acknowledge successful processing before data will be released from the Store and Forward buffer. Also note the above script does not acknowledge each data point individually but only the last one. Acknowledging a data point ID of a certain source object automatically acknowledges all lower IDs of that source object. This causes less traffic back to the source, improving performance.
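
The batch-acknowledgement behaviour can be reasoned about outside the platform with plain Lua. In the following self-contained sketch, make_mock_iter and its sample data are invented for illustration only (they are not part of the syslib API); the processing loop at the bottom follows the same pattern as the main script above:

```lua
--mock of the buffer iterator, invented for illustration only
local function make_mock_iter(points)
    local m = { length = #points }
    --acknowledge (mark for deletion) all data up to and including saf_id
    function m:deleteupto(saf_id) self.acked = saf_id end
    m.ack = m.deleteupto
    --calling the object returns a Lua iterator over the buffered points
    return setmetatable(m, {
        __call = function()
            local i = 0
            return function()
                i = i + 1
                local p = points[i]
                if p then
                    return p.saf_id, p.prp_id, p.v, p.q, p.t, p.d
                end
            end
        end,
    })
end

local iter = make_mock_iter({
    { saf_id = 10, prp_id = 7, v = 1.5, q = 0, t = 0, d = 5 },
    { saf_id = 11, prp_id = 7, v = 2.5, q = 0, t = 1, d = 5 },
    { saf_id = 12, prp_id = 7, v = 3.5, q = 0, t = 2, d = 5 },
})

--same pattern as the main script: only track the last saf_id
local last_saf_id
for saf_id, prp_id, v, q, t, d in iter() do
    last_saf_id = saf_id
end

--one acknowledgement releases all lower IDs of that source as well
if last_saf_id then
    iter:deleteupto(last_saf_id)
end
assert(iter.acked == 12)
```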

Configure Data Store Set and Objects for sending Data

  1. To add the new Generic Time Series Buffer as an archive option for the Archive Selector property for I/O items under the respective component, it is also necessary to configure the Data Store Sets table property in the relevant Core object.

  2. Select the Core in the I/O Model tree and open the Data Store Configuration property compound in the Object Properties panel.

  3. Open the Data Store Sets table property (the property grid should be empty if this is the first data store set being configured).

  4. Enter a unique name in the 'Name' column. This is the name that will be displayed in the drop-down menu of the Archive Selector property for I/O items.

  5. From the drop-down menu in the 'Data Stores' column, select the name of the Generic Time Series Buffer object that you just created. You can also select the System Production/Test archive to archive the data in the centralized System archives.

  6. Add a description to the 'Description' column if required.

  7. Click OK to close the property grid.

  8. Click Apply in the Core Object Properties panel to apply the changes.

  9. To check that the new Generic Time Series Buffer is now available, select an I/O item (or other time series data producing object) in the namespace below the respective Core and navigate to the Archive Selector property in the Object Properties panel.

  10. Select the data store from the drop-down menu and click Apply to archive the data for that object in the new Generic Time Series Buffer.

    If you are configuring I/O items that are below a Local Core to historize in a Generic Time Series Buffer that is below the Master Core, the Data Store Sets property on the Local Core must also be configured for the Time Series Buffer to be available as an archive option. To do this, you must be logged into the Master Core with DataStudio when configuring the Local Core properties because the buffer will not be visible if you are logged into the Local Core.
  11. Once data starts to accumulate in the buffer, any errors generated by the Lua script will be displayed in the volatile properties of the Status property compound. The 'Generic Time Series Buffer' object will also show a "bad" red light status in the I/O model.

Advanced Configuration

Configure Cloud Interface extension

  1. In the Object Properties window for Generic Time Series Buffer, open the Interface Extension property compound and choose one of the cloud interface options from the Extension Selector drop-down menu ("Kafka" or "MQTT"). This will open up further configuration options.

  2. Both Kafka and MQTT extensions can be configured in the same way as the Cloud Sink object. Please refer to Cloud Sink with Kafka Interface or to Cloud Sink with MQTT Interface for more information.

  3. Enter the following script into the Lua Script Body in the Common property compound of the object (the script is annotated to highlight key parts). Click OK to save the script:

    return function(...)
        local iter, sink = ... (1)
        if iter.length > 0 then
            local payload = { id = {}, message = {} } (2)
            for saf_id, prp_id, v, q, t, d in iter() do (3)
                table.insert(payload.id, saf_id)
                table.insert(payload.message, ("%s: %s"):format(prp_id, v))
            end
            local suc, err, context = sink:SEND(payload.message) (4)
            if context.acked > 0 then (5)
                iter:ack(payload.id[context.acked])
            end
            if not suc then error(err) end (6)
        end
    end
    1 Variables passed to the script by the Generic Time Series Buffer:
    • iter - the iterator object (as per GTSB documentation)

    • sink - the Virtual Cloud Sink object that is automatically created when a cloud interface extension is selected.

    2 For better performance, we will send multiple messages in a single batch.
    3 When creating the message batch, we also need to remember the message IDs for proper acknowledgement.
    4 Sending messages to the cloud interface.
    5 When messages have been successfully acknowledged by the cloud interface, these messages can be deleted from the buffer. Now we need to use our stored IDs to properly acknowledge our message batch.
    6 An exception is thrown if an error occurs in the cloud interface.
  4. If custom callbacks are required, add the following code to the script above:

    sink:SETWRITECALLBACK(function(sink_data) end) -- write callback function
    sink:SETSUCCESSCALLBACK(function(sink_data, sink_perf) end) -- success callback function
    sink:SETERRORCALLBACK(function(sink_data, sink_perf, err_msg) end) -- error callback function
    sink:SETLOGCALLBACK(function(log_msg, other_data) end) -- log callback function
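
    As a hedged illustration, the callbacks might mirror sink activity into debug items. The callback signatures are taken from the snippet above; the "Sink Debug" item paths are placeholders invented for this sketch:

    ```lua
    --illustrative callback bodies; the target item paths are placeholders
    sink:SETWRITECALLBACK(function(sink_data)
        --invoked when a payload is handed to the cloud interface
        syslib.setvalue("/System/Core/Examples/Sink Debug/last_write", os.time())
    end)
    sink:SETSUCCESSCALLBACK(function(sink_data, sink_perf)
        syslib.setvalue("/System/Core/Examples/Sink Debug/last_success", os.time())
    end)
    sink:SETERRORCALLBACK(function(sink_data, sink_perf, err_msg)
        syslib.setvalue("/System/Core/Examples/Sink Debug/last_error", tostring(err_msg))
    end)
    sink:SETLOGCALLBACK(function(log_msg, other_data)
        syslib.setvalue("/System/Core/Examples/Sink Debug/last_log", tostring(log_msg))
    end)
    ```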

Object Properties

Common

Object Name

The user-modifiable object name. This name overrides the name which has been supplied by the external system. It must be unique within the collection of objects of the parent object.

Object Description

This is the user-modifiable object description. This text overrides the description which has been supplied by the external system.

Disabled SaF Mode

SaF behavior if the object is disabled.

Display Alias

Alternate label for objects to be used for easier identification in the displays.

Lua Script Body

Script editor to enter an advanced Lua script.

Attachments

File attachments stored in MongoDB file store.

Limits

Configuration of data processing limits.

Latency

Wait at most the specified number of seconds until invoking the script.

Minimum Buffer Size

Wait until the specified number of bytes is available in the buffer, before invoking the script.

Maximum Buffer Size

The maximum receive buffer size for each component serving data to this generic SaF buffer.

Minimum Buffer Items

Wait until the specified number of items is available in the buffer, before invoking the script.

Maximum Buffer Items

The maximum number of buffered items for each component serving data to this generic SaF buffer.

Processing

Configuration of parallel data processing.

Parallel Component Processing

Enables parallel execution of script instances, one for each component serving data to this generic SaF buffer.

Suppress Acknowledgments

Suppress acknowledgments for data executed by the script, preventing the deletion from SaF.

Retry Latency

If a script error occurred, wait the specified number of seconds before invoking the script again.

Interface Extension

Configuration for available interface extensions.

Extension Selector

Select one of available extensions to expand Generic Time Series Buffer capabilities.

None

No extension.

Kafka Producer Parameters

The parameters for the Kafka Producer protocol.

Topic

A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.

Partition

Each partition is an ordered, immutable sequence of records that is continually appended to - a structured commit log. Please see the datasource documentation for this property's effect when it belongs to an IO-Item.

Key

Key is an optional part of a message payload. Please see the datasource documentation for this property's effect when it belongs to an IO-Item.

Offset

The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.

Polling Parameters

The Kafka Producer polling parameters.

Timeout

This timeout is used internally for polling only. The default of 1000 ms is recommended for remote servers. For local servers, 100 ms should be fine.

Repeat

The number of polling repetitions.

Enable Polling Timeout Errors

Set to true to raise an error in case of a polling timeout. It is recommended to set this property to false if .PollingTimeout is 0.

Global Configuration Properties

Kafka Producer’s global configuration properties.

Bootstrap Servers

Initial list of brokers as a CSV list of broker host or host:port.

Security Protocol

Protocol used to communicate with brokers.

Plaintext

Plaintext.

SSL Parameters

SSL configuration parameters.

SSL Certificate Authority Location

Certificate Authority (CA certificate) file for verifying the broker’s key. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslCaLocation or .SslCa should be specified; .SslCaLocation property will be prioritized over .SslCa if both specified.

SSL Certificate Authority

Certificate Authority (CA certificate) for verifying the broker’s key. Note: Either .SslCaLocation or .SslCa should be specified; .SslCaLocation property will be prioritized over .SslCa if both specified.

SSL Certificate Revocation List Location

Path to Certificate Revocation List (CRL certificate) file for verifying broker’s certificate validity. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslCrlLocation or .SslCrl should be specified; .SslCrlLocation property will be prioritized over .SslCrl if both specified.

SSL Certificate Revocation List

Certificate Revocation List (CRL certificate) for verifying broker’s certificate validity. Note: Either .SslCrlLocation or .SslCrl should be specified; .SslCrlLocation property will be prioritized over .SslCrl if both specified.

SSL Key Location

Path to client’s private key file (PEM) used for authentication. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslKeyLocation or .SslKey should be specified; .SslKeyLocation property will be prioritized over .SslKey if both specified.

SSL Key

Client’s private key (PEM) used for authentication. Either .SslKeyLocation or .SslKey should be specified; .SslKeyLocation property will be prioritized over .SslKey if both specified.

SSL Key Password

Private key passphrase.

SSL Key PEM

Client’s private key string (PEM format) used for authentication.

SSL Certificate Location

Path to client’s public key file (PEM) used for authentication. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslCertificateLocation or .SslCertificate should be specified; .SslCertificateLocation property will be prioritized over .SslCertificate if both specified.

SSL Certificate

Client’s public key (PEM) used for authentication. Note: Either .SslCertificateLocation or .SslCertificate should be specified; .SslCertificateLocation property will be prioritized over .SslCertificate if both specified.

SSL Certificate PEM

Client’s public key string (PEM format) used for authentication.

SSL Keystore Location

Path to client’s keystore file (PKCS#12) used for authentication. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslKeystoreLocation or .SslKeystore should be specified; .SslKeystoreLocation property will be prioritized over .SslKeystore if both specified.

SSL Keystore

Client’s keystore (PKCS#12) used for authentication. Note: Either .SslKeystoreLocation or .SslKeystore should be specified; .SslKeystoreLocation property will be prioritized over .SslKeystore if both specified.

SSL Keystore Password

Client’s keystore (PKCS#12) password.

SSL Cipher Suites

A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.

SSL Curves List

The supported-curves extension in the TLS ClientHello message specifies the curves the client is willing to have the server use.

SSL Signal Algorithms List

The client uses the TLS ClientHello signature_algorithms extension to indicate to the server which signature/hash algorithm pairs may be used in digital signatures.

SASL Parameters

The SASL parameters.

SASL Mechanism

SASL mechanism to use for authentication.

  • GSSAPI: GSSAPI.

  • PLAIN: PLAIN.

  • SCRAM-SHA-256: SCRAM-SHA-256.

  • SCRAM-SHA-512: SCRAM-SHA-512.

  • OAUTHBEARER: OAUTHBEARER.

SASL Username

SASL username for use with the PLAIN and SASL-SCRAM-.. mechanisms.

SASL Password

SASL password for use with the PLAIN and SASL-SCRAM-.. mechanism.

SASL/OAUTHBEARER configuration

SASL/OAUTHBEARER configuration.

SASL-SSL Parameters

SASL-SSL configurable parameters.

SASL Parameters

The SASL parameters.

SASL Username

SASL username for use with the PLAIN and SASL-SCRAM-.. mechanisms.

SASL Password

SASL password for use with the PLAIN and SASL-SCRAM-.. mechanism.

SASL/OAUTHBEARER configuration

SASL/OAUTHBEARER configuration.

SSL Parameters

SSL configuration parameters.

SSL Certificate Authority Location

Certificate Authority (CA certificate) file for verifying the broker’s key. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslCaLocation or .SslCa should be specified; .SslCaLocation property will be prioritized over .SslCa if both specified.

SSL Certificate Authority

Certificate Authority (CA certificate) for verifying the broker’s key. Note: Either .SslCaLocation or .SslCa should be specified; .SslCaLocation property will be prioritized over .SslCa if both specified.

SSL Certificate Revocation List Location

Path to Certificate Revocation List (CRL certificate) file for verifying broker’s certificate validity. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslCrlLocation or .SslCrl should be specified; .SslCrlLocation property will be prioritized over .SslCrl if both specified.

SSL Certificate Revocation List

Certificate Revocation List (CRL certificate) for verifying broker’s certificate validity. Note: Either .SslCrlLocation or .SslCrl should be specified; .SslCrlLocation property will be prioritized over .SslCrl if both specified.

SSL Key Location

Path to client’s private key file (PEM) used for authentication. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslKeyLocation or .SslKey should be specified; .SslKeyLocation property will be prioritized over .SslKey if both specified.

SSL Key

Client’s private key (PEM) used for authentication. Either .SslKeyLocation or .SslKey should be specified; .SslKeyLocation property will be prioritized over .SslKey if both specified.

SSL Key Password

Private key passphrase.

SSL Key PEM

Client’s private key string (PEM format) used for authentication.

SSL Certificate Location

Path to client’s public key file (PEM) used for authentication. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslCertificateLocation or .SslCertificate should be specified; .SslCertificateLocation property will be prioritized over .SslCertificate if both specified.

SSL Certificate

Client’s public key (PEM) used for authentication. Note: Either .SslCertificateLocation or .SslCertificate should be specified; .SslCertificateLocation property will be prioritized over .SslCertificate if both specified.

SSL Certificate PEM

Client’s public key string (PEM format) used for authentication.

SSL Keystore Location

Path to client’s keystore file (PKCS#12) used for authentication. Note: The path specified in this property is relative to the system where the Connector is running. Note: Either .SslKeystoreLocation or .SslKeystore should be specified; .SslKeystoreLocation property will be prioritized over .SslKeystore if both specified.

SSL Keystore

Client’s keystore (PKCS#12) used for authentication. Note: Either .SslKeystoreLocation or .SslKeystore should be specified; .SslKeystoreLocation property will be prioritized over .SslKeystore if both specified.

SSL Keystore Password

Client’s keystore (PKCS#12) password.

SSL Cipher Suites

A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.

SSL Curves List

The supported-curves extension in the TLS ClientHello message specifies the curves the client is willing to have the server use.

SSL Signal Algorithms List

The client uses the TLS ClientHello signature_algorithms extension to indicate to the server which signature/hash algorithm pairs may be used in digital signatures.

Advanced Configuration

Kafka Producer’s advanced configuration options.

Builtin Features

Indicates the builtin features for this build of librdkafka.

  • Gzip: Gzip.

  • Ssl: Ssl.

  • Sasl: Sasl.

  • Regex: Regex.

  • Lz4: Lz4.

  • Sasl-Gssapi: Sasl-Gssapi.

  • Sasl-Plain: Sasl-Plain.

  • Sasl-Scram: Sasl-Scram.

  • Zstd: Zstd.

  • Sasl-Oauthbearer: Sasl-Oauthbearer.

Client ID

Client identifier.

Message Max

Maximum Kafka protocol request message size.

Message Copy Max

Maximum size for message to be copied to buffer. Messages larger than this will be passed by reference (zero-copy) at the expense of larger iovecs.

Receive Message Max

Maximum Kafka protocol response message size. This serves as a safety precaution to avoid memory exhaustion in case of protocol hiccups.

Max In-Flight Requests Per Connection

Maximum number of in-flight requests per broker connection. This is a generic property applied to all broker communication, however it is primarily relevant to produce requests. In particular, note that other mechanisms limit the number of outstanding consumer fetch requests per broker to one.

Metadata Request Timeout

Non-topic request timeout in milliseconds. This is for metadata requests, etc.

Topic Metadata Refresh Interval

Topic metadata refresh interval in milliseconds. The metadata is automatically refreshed on error and connect. Use -1 to disable the periodic refresh.

Metadata Max Age

Metadata cache max age. Defaults to topic.metadata.refresh.interval.ms * 3.

Topic Metadata Refresh Fast Interval

When a topic loses its leader a new metadata request will be enqueued with this initial interval, exponentially increasing until the topic metadata has been refreshed. This is used to recover quickly from transitioning leader brokers.

Topic Metadata Refresh Sparse

Sparse metadata requests (consumes less network bandwidth).

Topic Blacklist

Topic blacklist, a comma-separated list of regular expressions for matching topic names that should be ignored in broker metadata information as if the topics did not exist.

Socket Timeout

Default timeout for network requests.

Socket Send Buffer

Broker socket send buffer size. System default is used if 0.

Socket Receive Buffer

Broker socket receive buffer size. System default is used if 0.

Socket Keep-Alive Enable

Enable TCP keep-alives (SO_KEEPALIVE) on broker sockets.

Socket Nagle Disable

Disable the Nagle algorithm (TCP_NODELAY) on broker sockets.

Socket Max Fails

Disconnect from broker when this number of send failures (e.g., timed out requests) is reached. Disable with 0. WARNING: It is highly recommended to leave this setting at its default value of 1 to avoid the client and broker becoming desynchronized in case of request timeouts. NOTE: The connection is automatically re-established.

Broker Address TTL

How long to cache the broker address resolving results.

Reconnect Backoff

The initial time to wait before reconnecting to a broker after the connection has been closed. The time is increased exponentially until .ReconnectBackoffMaxMs is reached. -25% to +50% jitter is applied to each reconnect backoff. A value of 0 disables the backoff and reconnects immediately.

Reconnect Backoff Max

The maximum time to wait before reconnecting to a broker after the connection has been closed.

Internal Termination Signal

Signal that librdkafka will use to quickly terminate on rd_kafka_destroy(). If this signal is not set then there will be a delay before rd_kafka_wait_destroyed() returns true as internal threads are timing out their system calls. If this signal is set however the delay will be minimal.

API Version Request

Request broker’s supported API versions to adjust functionality to available protocol features.

API Version Request Timeout

Timeout for broker API version requests.

API Version Fallback

Dictates how long the fallback is used in the case the ApiVersionRequest fails. NOTE: The ApiVersionRequest is only issued when a new connection to the broker is made (such as after an upgrade).

Broker Version Fallback

Older broker versions (before 0.10.0) provide no way for a client to query for supported protocol features (ApiVersionRequest) making it impossible for the client to know what features it may use. As a workaround a user may set this property to the expected broker version and the client will automatically adjust its feature set accordingly if the ApiVersionRequest fails (or is disabled). The fallback broker version will be used for .ApiVersionFallbackMs. Valid values are: 0.9.0, 0.8.2, 0.8.1, 0.8.0. Any other value >= 0.10, such as 0.10.2.1, enables ApiVersionRequests.

Enable Idempotence

When set to true, the producer will ensure that messages are successfully produced exactly once and in the original produce order.

Queue Buffering Max Messages

Maximum number of messages allowed on the producer queue. This queue is shared by all topics and partitions.

Queue Buffering Max Kbytes

Maximum total message size sum allowed on the producer queue. This queue is shared by all topics and partitions. This property has higher priority than .QueueBufferingMaxMessages.

Queue Buffering Max Ms

Delay in milliseconds to wait for messages in the producer queue to accumulate before constructing message batches (MessageSets) to transmit to brokers. A higher value allows larger and more effective (less overhead, improved compression) batches of messages to accumulate at the expense of increased message delivery latency.

Message Send Max Retries

How many times to retry sending a failing Message. Note: retrying may cause reordering unless .EnableIdempotence is set to true.

Retry Backoff

The backoff time in milliseconds before retrying a protocol request.

Queue Buffering Backpressure Threshold

The threshold of outstanding not yet transmitted broker requests needed to backpressure the producer’s message accumulator. If the number of not yet transmitted requests equals or exceeds this number, produce request creation that would have otherwise been triggered will be delayed. A lower number yields larger and more effective batches. A higher value can improve latency when using compression on slow machines.

Compression Level

Compression level parameter for algorithm selected by configuration property .CompressionCodec. Higher values will result in better compression at the cost of more CPU usage.

Topic Configuration Properties

Kafka Producer’s topic configuration properties.

Request Required Acks

This field indicates the number of acknowledgements the leader broker must receive from in-sync replica (ISR) brokers before responding to the request: 0 = the broker does not send any response/ack to the client; -1 = the broker blocks until the message is committed and confirmed by all ISRs.

Request Timeout

The ack timeout of the producer request in milliseconds.

Message Timeout

Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. This is the maximum time librdkafka may use to deliver a message (including retries). Delivery error occurs when either the retry count or the message timeout are exceeded.
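The delivery-error rule above can be sketched as a small predicate. This is a hypothetical helper for illustration, not a product API.

```python
def delivery_failed(retries_used: int, max_retries: int,
                    elapsed_ms: int, message_timeout_ms: int) -> bool:
    """Sketch of the rule above (hypothetical helper): a delivery error
    occurs when either the retry count or the local message timeout is
    exceeded; a timeout of 0 means wait indefinitely."""
    if retries_used > max_retries:
        return True
    if message_timeout_ms != 0 and elapsed_ms > message_timeout_ms:
        return True
    return False
```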

MQTT Publisher Parameters

The MQTT client configuration parameters.

Topic

String that the broker uses to filter messages for each connected client.

Quality Of Service

The agreement between the sender of a message and the receiver of a message that defines the guarantee of delivery for a specific message.

Retain

Sets the retained flag to true. The broker stores the last retained message and the corresponding QoS for that topic.
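The three MQTT publisher properties above can be pictured as a parameter set like the following; the topic string is a hypothetical example.

```python
# Hypothetical publish parameters illustrating the MQTT properties above.
publish_params = {
    "topic": "plant/line1/temperature",  # broker filters messages by topic (example)
    "qos": 1,        # 0 = at most once, 1 = at least once, 2 = exactly once
    "retain": True,  # broker stores the last retained message for the topic
}
```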

Connection Parameters

The MQTT client connection parameters.

Broker Address

The hostname or IP address of the broker to connect to.

Broker Port

The network port to connect to. Usually 1883.

Client ID

String to use as the client ID. If NULL, a random client ID will be generated. If the client ID is NULL, .CleanSession must be true.

Clean Session

Set to true to instruct the broker to clean all messages and subscriptions on disconnect, false to instruct it to keep them.

Keep-Alive Interval

The number of seconds after which the broker should send a PING message to the client if no other messages have been exchanged in that time.

Credentials

Configure username and password for the MQTT client. This is only supported by brokers that implement the MQTT spec v3.1. By default, no username or password will be sent. If username is NULL, the password argument is ignored.

Username

The username to send as a string, or NULL to disable authentication.

Password

The password to send as a string. Set to NULL in order to send just the username.
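The credential rules above combine as follows; the helper is a hypothetical illustration, not a product API.

```python
def effective_credentials(username, password):
    """Sketch of the credential rule above (hypothetical helper): if the
    username is NULL (None), the password argument is ignored and no
    authentication is sent; a NULL password sends just the username."""
    if username is None:
        return (None, None)      # nothing is sent to the broker
    return (username, password)  # password may itself be None
```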

SSL/TLS Support

Configure the MQTT client for certificate based SSL/TLS support.

Certificate Authority File

Path to a file containing the PEM encoded trusted CA certificates. Either .CertificateAuthorityFile or .CertificateAuthority should be specified to enable certificate based SSL/TLS support. Note: The path specified in this property is relative to the system where the Connector is running. Note: The .CertificateAuthorityFile property is prioritized over .CertificateAuthority if both are specified.

Certificate Authority

The PEM encoded trusted CA certificates. Either .CertificateAuthorityFile or .CertificateAuthority should be specified to enable certificate based SSL/TLS support. Note: The .CertificateAuthorityFile property is prioritized over .CertificateAuthority if both are specified.

Client Certificate File

Path to a file containing the PEM encoded certificate for this client. If both the .ClientCertificateFile and .ClientCertificate properties are NULL, the .ClientKeyFile and .ClientKey properties must also be NULL and no client certificate will be used. Note: The .ClientCertificateFile property is prioritized over .ClientCertificate if both are specified.

Client Certificate

The PEM encoded certificate for this client. If both the .ClientCertificateFile and .ClientCertificate properties are NULL, the .ClientKeyFile and .ClientKey properties must also be NULL and no client certificate will be used. Note: The .ClientCertificateFile property is prioritized over .ClientCertificate if both are specified.

Client Key File

Path to a file containing the PEM encoded private key for this client. If both the .ClientKeyFile and .ClientKey properties are NULL, the .ClientCertificateFile and .ClientCertificate properties must also be NULL and no client certificate will be used. Note: The .ClientKeyFile property is prioritized over .ClientKey if both are specified.

Client Key

The PEM encoded private key for this client. If both the .ClientKeyFile and .ClientKey properties are NULL, the .ClientCertificateFile and .ClientCertificate properties must also be NULL and no client certificate will be used. Note: The .ClientKeyFile property is prioritized over .ClientKey if both are specified.
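The certificate and key properties above all share the same precedence rule, which can be sketched as a small resolver. This is a hypothetical helper for illustration only.

```python
def resolve_pem_source(file_path, inline_pem):
    """Sketch of the precedence rule shared by the certificate and key
    properties above (hypothetical helper): the *File variant is
    prioritized over the inline variant when both are specified."""
    if file_path is not None:
        return ("file", file_path)
    if inline_pem is not None:
        return ("inline", inline_pem)
    return None  # neither is set: no client certificate/key is used
```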

Publish Timeout

Maximum number of milliseconds to wait for network activity in the select() call before timing out. Set to 0 for instant return.

Enable Timeout Errors

Set to true to raise an error in case of a publishing timeout. It is not recommended to set this property to true if .PublishTimeout is 0.
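The recommendation above can be expressed as a simple configuration check; the helper below is a hypothetical illustration.

```python
def check_timeout_config(enable_timeout_errors: bool, publish_timeout_ms: int) -> str:
    """Sketch of the recommendation above (hypothetical helper): combining
    EnableTimeoutErrors with a PublishTimeout of 0 is discouraged, because
    a timeout of 0 makes the select() call return instantly."""
    if enable_timeout_errors and publish_timeout_ms == 0:
        return "warning"
    return "ok"
```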

Status

Information about the buffer’s script execution.

Last Execution Time

The last successful script execution time.

Script Error Time

The time when the last script execution error was encountered.

Last Error

The last encountered script execution error.

Script Error Count

The total number of script errors during this component’s uptime and since the last script update.

Custom Options

Compound to hold various structures to customize the object and to be read and written to by Lua-Script code or external interfaces.

Custom String

A generic string buffer to be used programmatically for custom purposes.

Custom Properties

This is an extensible set of named strings which can be used programmatically for custom purposes.

Property Name

A custom property name which can be used programmatically.

Property Value

The value of the custom property which can be read and written programmatically.

Custom Tables

This is an extensible set of named tables which can be used programmatically for custom purposes.

Table Name

A custom table name which can be used programmatically.

Table Data

Handles an entire table organized in columns and rows. The data can easily be exchanged (via cut, copy and paste) with table-oriented data of other software products, e.g. MS Excel.