Getting Started with Fast IoT Platform #
Introduction #
The purpose of this tutorial is to showcase the fundamental usage of the most widely used functionalities of ThingsBoard. By following this tutorial, you will be able to:
- Connect your devices to ThingsBoard.
- Transmit data from your devices to ThingsBoard.
- Create real-time dashboards for end-users.
- Set up thresholds and trigger alarms.
- Send notifications via email, SMS, or other systems when new alarms are triggered.
For simplicity, we will connect a temperature sensor and visualize its data.
Step 1. Provision Device #
To keep things simple, we will manually provision the device using the user interface (UI).
- To begin, log in to your ThingsBoard instance and navigate to the Devices page.
- Next, click on the “+” icon located at the top-right corner of the table and select “Add new device”. Enter the name of your device in the input field. For example, you can name it “My New Device”. No additional changes are required at this stage. Click on the “Add” button to add the device.
- Once added, your device should appear at the top of the list, since the table sorts devices by the time of creation by default.
Fig. 1.1 – Go to the devices page from the home page.
Fig. 1.2 – Click on the “+” icon located at the top right corner of the Devices page.
Fig. 1.3 – Enter the device name.
Fig. 1.4 – Once added, your device should appear at the top of the list, since the table sorts devices by the time of creation by default.
Additionally, you have the following options available:
- Bulk provisioning: You can provision multiple devices by uploading a CSV file through the UI.
- Device provisioning: You can enable device firmware to automatically provision the device, eliminating the need for manual configuration.
- REST API: You can programmatically provision devices and other entities using the ThingsBoard REST API.
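As a sketch of the REST API option, the following Python builds the two requests involved: logging in to obtain a JWT, then creating a device. The `/api/auth/login` and `/api/device` endpoints are the standard ThingsBoard REST endpoints; the base URL and credentials are placeholders you would replace with your own.

```python
import json
import urllib.request
from typing import Optional

# Assumption: ThingsBoard is running locally on the default port.
BASE_URL = "http://localhost:8080"

def json_request(url: str, payload: dict, jwt: Optional[str] = None) -> urllib.request.Request:
    """Build a JSON POST request, attaching the X-Authorization header when a JWT is given."""
    headers = {"Content-Type": "application/json"}
    if jwt:
        headers["X-Authorization"] = f"Bearer {jwt}"
    return urllib.request.Request(url, data=json.dumps(payload).encode("utf-8"), headers=headers)

def login(username: str, password: str) -> str:
    """Obtain a JWT from /api/auth/login using tenant administrator credentials."""
    req = json_request(f"{BASE_URL}/api/auth/login", {"username": username, "password": password})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]

def create_device(jwt: str, name: str) -> dict:
    """Provision a device by POSTing its name to /api/device."""
    req = json_request(f"{BASE_URL}/api/device", {"name": name}, jwt=jwt)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a running instance):
#   jwt = login("tenant@thingsboard.org", "tenant")
#   device = create_device(jwt, "My New Device")
```

The same pattern extends to other entities (assets, customers) by changing the endpoint path.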
Step 2. Connect device #
Before connecting the device, you need to obtain its credentials. ThingsBoard supports various types of device credentials, but for this guide we recommend the default auto-generated credential: an access token.
To obtain the access token:
- Click on the row corresponding to your device in the table to open its details.
- Click on the “Copy access token” button. The token will be copied to your clipboard.
- Save the access token to a safe location for later use.
Fig. 2.1 – Click on the row corresponding to your device in the table to open its details.
Fig. 2.2 – Click on the “Copy access token” button.
You are now ready to publish telemetry data on behalf of your device. In this example, we will use simple commands to publish data over HTTP or MQTT.
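The following Python sketch shows the shape of such a command over HTTP. The `/api/v1/{accessToken}/telemetry` path is the standard device HTTP API; the host and token values are placeholders for your instance and the token you saved above.

```python
import json
import urllib.request

# Placeholders -- substitute your instance URL and the access token from the previous step.
HOST = "http://localhost:8080"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def telemetry_request(host: str, token: str, values: dict) -> urllib.request.Request:
    """Build the HTTP POST request for the device telemetry endpoint /api/v1/{token}/telemetry."""
    return urllib.request.Request(
        f"{host}/api/v1/{token}/telemetry",
        data=json.dumps(values).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Usage (requires a running instance):
#   urllib.request.urlopen(telemetry_request(HOST, ACCESS_TOKEN, {"temperature": 26}))
```

Over MQTT the same JSON payload is published to the topic "v1/devices/me/telemetry", with the access token supplied as the MQTT user name.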
Step 3. Create Dashboard #
Next, we will create a dashboard and add the most commonly used widgets. Please follow the instructions below.
Step 3.1 Create Empty Dashboard #
To create a new dashboard and add widgets to it, please follow these steps:
- Open the Dashboards page on ThingsBoard.
- Click on the “+” icon located at the top right corner of the page, and select “Create new dashboard”.
- Enter a name for your dashboard, such as “My New Dashboard”.
- Click on the “Add” button to create the dashboard.
- Your newly created dashboard should appear at the top of the list, as the table sorts dashboards by the time of creation by default.
- Click on the “Open dashboard” icon to start adding widgets to your dashboard.
Fig. 3.1.1 – Open the Dashboards page. Select “Create new dashboard”.
Fig. 3.1.2 – Input dashboard name.
Fig. 3.1.3 – Click on the “Open dashboard” icon to start adding widgets to your dashboard.
Step 3.2 Add Entity Alias #
An alias is a reference to a single entity or a group of entities used in widgets. Aliases can be either static or dynamic. For simplicity, we will use a “Single entity” alias that references our only entity (“My New Device”). However, it is possible to configure an alias that references multiple devices, such as devices of a certain type or those related to a specific asset. You can learn more about different aliases here.
To add an alias to your dashboard, please follow these steps:
- Enter edit mode by clicking on the pencil button located in the bottom right corner of the screen.
- Click on the “Entity Aliases” icon located at the top right corner of the screen. This will display an empty list of entity aliases.
- Click on the “Add alias” button.
- Enter an alias name, such as “MyDevice”.
- Select “Single entity” as the filter type.
- Select “Device” as the type, and type “My New” to enable autocomplete. Choose your device from the list of autocomplete options, and click on it.
- Click on the “Add” button, followed by the “Save” button.
- Finally, click on the “Apply changes” button in the dashboard editor to save your changes. You will then need to enter edit mode again.
Fig. 3.2.1 – Enter edit mode.
Fig. 3.2.2 – Click on the “Entity Aliases” icon located at the top right corner of the screen.
Fig. 3.2.3 – Click on the “Add alias” button.
Fig. 3.2.4 – Select “Device” as the type, and type “My New” to enable autocomplete. Choose your device from the list of autocomplete options, and click on it.
Fig. 3.2.5 – Click “Add” and then “Save”.
Fig. 3.2.6 – Finally, click on the “Apply changes” button in the dashboard editor to save your changes.
Step 3.3 Add Table Widget #
To add the table widget, we open the widget library and select it there. Widgets are organized into widget bundles, and each widget has its own data source that tells it what data to display. To display the most recent temperature value we sent in Step 2, we must configure the data source accordingly.
- First, enter edit mode and click on the “Add new widget” button. Next, select the “Cards” widget bundle and go to the “Latest values” tab. From there, click on the header of the Entities widget to bring up the “Add Widget” window.
- To add the data source, click “Add”. While a widget may have multiple data sources, we will use only one in this case. Select the “MyDevice” entity alias, then click on the input field on the right. An auto-complete function will appear, allowing you to select the available data points. Select the “temperature” data point and click “Add”.
- Finally, resize the widget by dragging the bottom right corner to make it slightly larger. If you desire, you may also experiment with the advanced settings to modify the widget.
Fig. 3.3.1 – Enter edit mode. Click on the “Add new widget” button.
Fig. 3.3.2 – Select the Cards package and then choose the Entities table indicator.
Fig. 3.3.3 – Click “Add” to add the data source.
Fig. 3.3.4 – Select “MyDevice” entity alias. Select “temperature” data point and click “Add”.
Fig. 3.3.5 – Resize the widget to make it a little bigger.
Congratulations on adding your first widget! You can now send new telemetry readings and they will instantly appear in the table.
Step 3.4 Add Chart Widget #
To add the chart widget, we again open the widget library and select the appropriate widget. This widget displays the historical values of a particular data key, such as “temperature”. We also need to configure the time window the chart will use. Here are the steps:
- Enter Edit mode on the screen where you wish to add the chart widget.
- Click on the “Add new widget” icon located in the bottom right corner of the screen.
- Choose the “Create new widget” icon.
- Locate the “Charts” bundle and select it. Scroll down and click on the “Timeseries – Flot” chart widget.
- Press the “Add Datasource” button.
- Choose the “MyDevice” alias and then select the “temperature” key. Click “Add”.
- Drag and drop the widget to the desired location on the screen. Resize the widget as needed, and then apply any changes.
- Publish various telemetry values multiple times, noting that the widget only displays one minute of data by default.
- Re-enter Edit mode and open the time selection window.
- Modify the interval and aggregation function as desired. Update the time window and apply changes as needed.
Fig. 3.4.1 – Enter Edit mode.
Fig. 3.4.2 – Click the “Add new widget” icon.
Fig. 3.4.3 – Choose the “Create new widget” icon.
Fig. 3.4.4 – Select the “Charts” bundle and click on the “Timeseries Line Chart” chart widget.
Fig. 3.4.5 – Click the “Add Datasource” button.
Fig. 3.4.6 – Select “MyDevice” Alias. Select or create the “temperature” key.
Fig. 3.4.7 – Drag and Drop your widget to the desired space.
Fig. 3.4.8 – Modify the interval and aggregation function as desired. Update the time window and apply changes.
Well done! You have successfully incorporated a chart widget. Whenever you send fresh telemetry data, it will promptly reflect in the chart.
Step 3.5 Add Alarm Widget #
- Enter Edit mode by clicking on the corresponding button.
- Then, go to the bottom right corner of the screen and click on the “Add new widget” icon.
- Select the “Create new widget” icon.
- Next, choose the “Alarm widgets” bundle and click on the header of the “Alarms table” widget.
- Select “Entity” as the alarm source and “MyDevice” as the alias. Click on “Add”.
- After that, find the newly created “Alarms” widget and move it to the top right corner of the dashboard by dragging and dropping it.
- Finally, adjust the widget’s size and save the changes made.
Fig. 3.5.1 – Enter edit mode.
Fig. 3.5.2 – Click ‘+’ icon.
Fig. 3.5.3 – Click the “Create new widget” icon.
Fig. 3.5.4 – Choose the “Alarm widgets” bundle and click on the header of the “Alarms table” widget.
Fig. 3.5.5 – Select “Entity” as the alarm source and “MyDevice” as the alias. Click on “Add”.
Fig. 3.5.6 – Scroll down and locate the new “Alarms” widget.
Fig. 3.5.7 – Resize the widget and apply changes.
Well done! You’ve successfully added an alarm widget. The next step is to set up some alarm rules and trigger some alarms.
Step 4. Configure Alarm Rules #
To trigger an alarm when the temperature reading goes above 25 degrees, we’ll use the alarm rules feature. To do so, we need to modify the device profile by adding a new alarm rule. The device “My New Device” currently uses the “Default” profile. Although creating a separate device profile for each device type is recommended, we’ll skip that step for simplicity.
Here are the steps to follow:
- Go to the device profiles page.
- Click on the row corresponding to the default profile to open the profile details.
- Select the “Alarm Rules” tab and toggle edit mode.
- Click on “Add alarm rule”.
- Specify the alarm type and click the “+” icon to add an alarm rule condition.
- Click on the “Add key filter” button to specify a condition.
- Select the key type, input the key name, select the value type, and click “Add”.
- Choose the operation and input the threshold value, then click “Add”.
- Click “Save”.
- Finally, click “Apply changes”.
Fig. 4.1 – Click the default profile row.
Fig. 4.2 – Select the “Alarm Rules” tab and toggle edit mode.
Fig. 4.3 – Click on “Add alarm rule”.
Fig. 4.4 – Specify the alarm type and click the “+” icon to add an alarm rule condition.
Fig. 4.5 – Click on the “Add key filter” button.
Fig. 4.6 – Select the key type, input the key name, select the value type, and click “Add”.
Fig. 4.7 – Choose the operation and input the threshold value, then click “Add”.
Fig. 4.8 – Click “Save”.
Fig. 4.9 – Click “Apply changes”.
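The key filter we just configured (key “temperature”, numeric value type, operation “greater”, threshold 25) can be described with a small sketch. This is illustrative only, not the platform’s implementation, and the operation names are simplified:

```python
import operator

# Simplified operation names for a numeric key filter (assumption, for illustration).
OPERATIONS = {
    "GREATER": operator.gt,
    "LESS": operator.lt,
    "EQUAL": operator.eq,
}

def condition_matches(telemetry: dict, key: str, operation: str, threshold: float) -> bool:
    """Return True when the alarm rule condition holds for the incoming telemetry."""
    if key not in telemetry:
        return False
    return OPERATIONS[operation](telemetry[key], threshold)
```

With a “GREATER than 25” rule, a reading of 26 matches the condition and raises the alarm, while a reading of 25 does not.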
Step 5. Create Alarm #
After activating the alarm rule (as outlined in Step 4), it is necessary to send new telemetry on behalf of the device (as explained in Step 2) in order to trigger the alarm. It’s important to note that the temperature value needs to be 26 or higher in order to raise the alarm. Once the new temperature reading is sent, a new alarm should appear on the dashboard right away.
- Note that new temperature telemetry results in a new active alarm. As a user, you can acknowledge and clear the alarms.
Fig. 5.1 – The recent temperature readings trigger a new active alarm.
Fig. 5.2 – The user can acknowledge and clear the alarms.
Step 6. Alarm notifications #
Setting up email or SMS notifications for alarms is straightforward. We suggest reviewing the provided alarm rule examples and the documentation on alarm notifications.
Note that ThingsBoard currently supports AWS SNS and Twilio for sending SMS notifications; both require an account and are paid services. Nonetheless, you can integrate with other SMS/email gateways through the REST API call node.
Step 7. Assign Device and Dashboard to Customer #
One of the key features of ThingsBoard is the ability to assign Dashboards to Customers, which keeps the devices of different customers separate. You can also create one or more Dashboards and assign them to multiple customers. Each user belonging to a specific customer will then see only the devices associated with their account, and won’t have access to data or devices belonging to other customers.
Step 7.1 Create customer #
Create a new customer titled “My New Customer” by following these steps:
- Go to the Customers page.
- Click the “+” symbol to add a new customer.
- Enter the customer title and click the “Add” button.
Fig. 7.1.1 – To create a new customer, go to the Customers page and click on the symbol “+” to initiate the process.
Fig. 7.1.2 – Add customer title and click “Add”.
Step 7.2 Assign device to Customer #
Assign the device to the newly created customer by completing the steps below:
- Navigate to the Devices page.
- Click on “Assign to customer” for the device “My New Device”.
- Choose “My New Customer” from the available options and click on “Assign”. This will enable the customer’s users to view and modify the device’s telemetry, as well as send commands to it.
Fig. 7.2.1 – Open Devices page and click “Assign to customer” for “My New Device”.
Fig. 7.2.2 – Select “My New Customer” and click “Assign”.
Step 7.3 Assign dashboard to Customer #
To share a dashboard with the newly created customer, please follow the instructions given below:
- Open the Dashboards page.
- Click on “Manage assigned customers”.
- Select “My New Customer” from the list of available customers.
- Click on “Update”. This will provide read-only access to the Dashboard for the customer’s users.
Fig. 7.3.1 – Open Dashboards and click “Manage assigned customers”.
Fig. 7.3.2 – Select “My New Customer” and click “Update”.
Step 7.4 Create customer user #
To create a user who belongs to the customer and only has read-only access to the dashboard and device, follow these steps:
- Go to the “Customers” page and click the “manage customer users” icon.
- Click the “Add user” icon.
- Enter the email address that you will use to log in as a customer user and click “Add”.
- Copy the activation link and save it in a secure location. You will need it later to set the password.
- Open the user details.
- Toggle the edit mode.
- Select the default dashboard and check the “Always fullscreen” option.
- Apply the changes.
- Additionally, if you want the dashboard to appear immediately after the user logs in to the platform web UI, you can configure it accordingly.
Fig. 7.4.1 – Go back to the “Customers” page and click the “manage customer users” icon.
Fig. 7.4.2 – Click the “Add user” icon.
Fig. 7.4.3 – Enter the email address that you plan to use for logging in as a customer user and then click the “Add” button.
Fig. 7.4.4 – Copy the activation link and save it in a secure location.
Fig. 7.4.5 – Click to open user details.
Fig. 7.4.6 – Toggle the edit mode.
Fig. 7.4.7 – Select default dashboard and check “Always fullscreen”. Apply changes.
Step 7.5 Activate customer user #
To set a password for the customer user, use the activation link that you previously saved, and then click the “Create Password” button. Once you have done this, you will automatically be logged in as a customer user. From here, you can browse the data and take action to acknowledge or clear alarms as necessary.
Fig. 7.5.1 – Use the activation link to set the password. Click “Create Password”. You will automatically login as a customer user.
Fig. 7.5.2 – You have logged in as a Customer User. You may browse the data and acknowledge/clear alarms.
Rule Engine Overview #
https://Fast IoT Platform.io/docs/user-guide/rule-engine-2-0/overview/
FAST IoT PLATFORM Rule Engine is a highly customizable and configurable system for processing complex events. Using the rule engine, you can filter, enrich, and transform incoming messages from IoT devices and associated assets, and trigger a range of actions, such as notifications or communication with external systems.
Key Concepts #
Rule Engine Message #
A Rule Engine Message is a serializable and immutable data structure that represents various messages within the system. These can include:
- Incoming telemetry, attribute updates, or RPC calls from devices
- Entity lifecycle events, such as when an entity is created, updated, deleted, assigned, unassigned, or when attributes are updated
- Device status events, such as when a device is connected, disconnected, active, inactive, etc.
- Other system events
A Rule Engine Message contains the following information:
- Message ID: a universally unique identifier based on time
- Originator of the message: the identifier for the device, asset, or other entity that originated the message
- Type of message: e.g. “Post telemetry” or “Inactivity Event”
- Payload of the message: a JSON body containing the actual message payload
- Metadata: a list of key-value pairs containing additional data about the message.
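The structure above can be sketched as a small immutable data type. This is an illustrative Python sketch, not the platform’s actual (Java) implementation:

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen mirrors the message's immutability
class RuleEngineMessage:
    originator: str   # id of the device/asset/entity that produced the message
    msg_type: str     # e.g. "POST_TELEMETRY_REQUEST"
    payload: dict     # JSON body with the actual message payload
    metadata: dict = field(default_factory=dict)  # additional key/value data
    msg_id: str = field(default_factory=lambda: str(uuid.uuid1()))  # time-based UUID

# Example: a telemetry message as produced by the device from Step 2.
msg = RuleEngineMessage(
    originator="device-1",
    msg_type="POST_TELEMETRY_REQUEST",
    payload={"temperature": 26},
    metadata={"deviceName": "My New Device", "deviceType": "default"},
)
```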
Rule Node #
A Rule Node is the main logical unit of the Rule Engine: it processes a single incoming message at a time and produces one or more outgoing messages. A Rule Node can filter, enrich, or transform incoming messages, perform actions, or communicate with external systems.
Rule Node Connection #
Rule Nodes can be connected to other Rule Nodes, with each connection having a relation type that labels the logical meaning of the relation. When a Rule Node produces an outgoing message, it always specifies the relation type, which is used to route the message to the next nodes.
Typical relation types for Rule Nodes include “Success” and “Failure”. Rule Nodes representing logical operations may use “True” or “False”. Some specific Rule Nodes may use completely different relation types, such as “Post Telemetry”, “Attributes Updated”, “Entity Created”, and so on.
Some Rule Nodes support custom connection names. Simply enter your custom connection name and click the “Create a new one!” link:
Fig. 1.1 – Add Rule Chain.
All connection names are case-sensitive.
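The routing behavior described above can be sketched as a lookup from a node and a relation type to the next nodes. The node and connection names below are hypothetical:

```python
# Hypothetical node graph: each node maps a relation type to the nodes that follow it.
connections = {
    "filter_temperature": {
        "True": ["create_alarm"],
        "False": ["clear_alarm"],
        "Failure": ["log_error"],
    },
    "create_alarm": {"Success": []},
}

def next_nodes(node: str, relation: str) -> list:
    """Route an outgoing message: relation types are case-sensitive labels on connections."""
    return connections.get(node, {}).get(relation, [])
```

Because the lookup is an exact string match, "True" and "true" are different relation types, which is why connection names are case-sensitive.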
Rule Chain #
In the FAST IoT PLATFORM Rule Engine, a Rule Chain is a logical group of Rule Nodes and their relations. For example, the following Rule Chain:
- Saves all telemetry messages to the database
- Raises a “High Temperature Alarm” if the temperature field in the message is higher than 50 degrees
- Raises a “Low Temperature Alarm” if the temperature field in the message is lower than -40 degrees
- Logs failures to execute the temperature check scripts to the console in case of logical or syntax errors in the script.
Fig. 1.2 – Rule Chain.
The tenant administrator has the ability to define a single Root Rule Chain, and optionally multiple other rule chains. The Root Rule Chain will handle all incoming messages and can forward them to other rule chains for further processing. Other rule chains may also forward messages to different rule chains.
For example, the following rule chain will:
- Trigger a “High Temperature Alarm” if the temperature field in the message is greater than 50 degrees
- Clear the “High Temperature Alarm” if the temperature field in the message is less than 50 degrees
- Forward notifications about “Created” and “Cleared” alarms to an external rule chain that handles notifications to the corresponding users.
Message Processing Result #
There are three potential outcomes of message processing: Success, Failure, and Timeout. A message processing attempt is marked as “Success” if the last rule node in the processing chain successfully processes the message. A message processing attempt is marked as “Failure” if one of the rule nodes produces a “Failure” in message processing and there are no rule nodes to handle that failure. A message processing attempt is marked as “Timeout” when the overall processing time exceeds a configurable threshold.
Please see the diagram below and let’s review the possible scenarios:
If the “Transformation” script fails, the message is not marked as “Failed” because there is a “Save to DB” node connected with a “Failure” relation. If the “Transformation” script is successful, it will be pushed to the “External System” with a REST API call. If the external system is overloaded, the REST API call may hang for some time. Let’s assume the overall timeout for message pack processing is 20 seconds. Let’s ignore the Transformation script execution time because it is <1ms. If the “External System” replies within 20 seconds, the message will be successfully processed. Similarly, if the “Save to DB” call succeeds, the message will be successfully processed. However, if the external system does not reply within 20 seconds, the message processing attempt will be marked as “timed-out”. Similarly, if the “Save to DB” call fails, the message will be marked as failed.
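The decision logic in the scenarios above can be summarized in a small sketch. This is illustrative only; the 20-second timeout mirrors the example above and is a configurable threshold, not a fixed value:

```python
def processing_result(failed_node: bool, failure_handled: bool,
                      elapsed_seconds: float, timeout_seconds: float = 20.0) -> str:
    """Classify a message processing attempt as Success, Failure, or Timeout (sketch)."""
    if elapsed_seconds > timeout_seconds:
        # Overall processing time exceeded the configurable threshold.
        return "Timeout"
    if failed_node and not failure_handled:
        # A node produced "Failure" and no node was connected to handle it.
        return "Failure"
    return "Success"
```

For instance, a failed “Transformation” script with a “Failure” connection to “Save to DB” still counts as Success, while an external system that never replies within the threshold produces Timeout.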
Rule Engine Queue #
See new documentation
Queue submit strategy #
See new documentation
Queue processing strategy #
See new documentation
Default queues #
See new documentation
Predefined Message Types #
The list of predefined Message Types is presented in the following table:

| Message Type | Display Name | Description | Message metadata | Message payload |
|---|---|---|---|---|
| POST_ATTRIBUTES_REQUEST | Post attributes | Request from device to publish client-side attributes (see attributes API for reference) | deviceName – originator device name, deviceType – originator device type | key/value JSON: `{"currentState": "IDLE"}` |
| POST_TELEMETRY_REQUEST | Post telemetry | Request from device to publish telemetry (see telemetry upload API for reference) | deviceName – originator device name, deviceType – originator device type, ts – timestamp (milliseconds) | key/value JSON: `{"temperature": 22.7}` |
| TO_SERVER_RPC_REQUEST | RPC Request from Device | RPC request from device (see client-side RPC for reference) | deviceName – originator device name, deviceType – originator device type, requestId – RPC request id provided by the client | JSON containing method and params: `{"method": "getTime", "params": {"param1": "val1"}}` |
| RPC_CALL_FROM_SERVER_TO_DEVICE | RPC Request to Device | RPC request from server to device (see server-side RPC API for reference) | requestUUID – internal request id used by the system to identify the reply target, expirationTime – time when the request expires, oneway – request type: true – without response, false – with response | JSON containing method and params: `{"method": "getGpioStatus", "params": {"param1": "val1"}}` |
| ACTIVITY_EVENT | Activity Event | Event indicating that the device became active | deviceName – originator device name, deviceType – originator device type | JSON containing device activity information: `{"active": true, "lastConnectTime": 1526979083267, "lastActivityTime": 1526979083270, "lastDisconnectTime": 1526978493963, "lastInactivityAlarmTime": 1526978512339, "inactivityTimeout": 10000}` |
| INACTIVITY_EVENT | Inactivity Event | Event indicating that the device became inactive | deviceName – originator device name, deviceType – originator device type | JSON containing device activity information; see Activity Event payload |
| CONNECT_EVENT | Connect Event | Event produced when the device connects | deviceName – originator device name, deviceType – originator device type | JSON containing device activity information; see Activity Event payload |
| DISCONNECT_EVENT | Disconnect Event | Event produced when the device disconnects | deviceName – originator device name, deviceType – originator device type | JSON containing device activity information; see Activity Event payload |
| ENTITY_CREATED | Entity Created | Event produced when a new entity is created in the system | userName – name of the user who created the entity, userId – the user id | JSON containing created entity details: `{"id": {"entityType": "DEVICE", "id": "efc4b9e0-5d0f-11e8-8559-37a7f8cdca74"}, "createdTime": 1526918366334, … "name": "my-device", "type": "temp-sensor"}` |
| ENTITY_UPDATED | Entity Updated | Event produced when an existing entity is updated | userName – name of the user who updated the entity, userId – the user id | JSON containing updated entity details; see Entity Created payload |
| ENTITY_DELETED | Entity Deleted | Event produced when an existing entity is deleted | userName – name of the user who deleted the entity, userId – the user id | JSON containing deleted entity details; see Entity Created payload |
| ENTITY_ASSIGNED | Entity Assigned | Event produced when an existing entity is assigned to a customer | userName – name of the user who performed the assignment, userId – the user id, assignedCustomerName – assigned customer name, assignedCustomerId – id of the assigned customer | JSON containing assigned entity details; see Entity Created payload |
| ENTITY_UNASSIGNED | Entity Unassigned | Event produced when an existing entity is unassigned from a customer | userName – name of the user who performed the unassignment, userId – the user id, unassignedCustomerName – unassigned customer name, unassignedCustomerId – id of the unassigned customer | JSON containing unassigned entity details; see Entity Created payload |
| ADDED_TO_ENTITY_GROUP | Added to Group | Event produced when an entity is added to an Entity Group. This Message Type is specific to Fast IoT Platform PE. | userName – name of the user who performed the assignment, userId – the user id, addedToEntityGroupName – entity group name, addedToEntityGroupId – id of the entity group | empty JSON payload |
| REMOVED_FROM_ENTITY_GROUP | Removed from Group | Event produced when an entity is removed from an Entity Group. This Message Type is specific to Fast IoT Platform PE. | userName – name of the user who performed the unassignment, userId – the user id, removedFromEntityGroupName – entity group name, removedFromEntityGroupId – id of the entity group | empty JSON payload |
| ATTRIBUTES_UPDATED | Attributes Updated | Event produced when entity attributes are updated | userName – name of the user who performed the update, userId – the user id, scope – updated attributes scope (either SERVER_SCOPE or SHARED_SCOPE) | key/value JSON with updated attributes: `{"softwareVersion": "1.2.3"}` |
| ATTRIBUTES_DELETED | Attributes Deleted | Event produced when some entity attributes are deleted | userName – name of the user who deleted the attributes, userId – the user id, scope – deleted attributes scope (either SERVER_SCOPE or SHARED_SCOPE) | JSON with an "attributes" field containing the list of deleted attribute keys: `{"attributes": ["modelNumber", "serial"]}` |
| ALARM | Alarm event | Event produced when an alarm is created, updated, or deleted | all fields from the original message metadata, plus: isNewAlarm – true if a new alarm was just created, isExistingAlarm – true if the alarm already exists, isClearedAlarm – true if the alarm was cleared | JSON containing alarm details: `{"tenantId": {…}, "type": "High Temperature Alarm", "originator": {…}, "severity": "CRITICAL", "status": "CLEARED_UNACK", "startTs": 1526985698000, "endTs": 1526985698000, "ackTs": 0, "clearTs": 1526985712000, "details": {"temperature": 70, "ts": 1526985696000}, "propagate": true, "id": "33cd8999-5dac-11e8-bbab-ad47060c9431", "createdTime": 1526985698000, "name": "High Temperature Alarm"}` |
| REST_API_REQUEST | REST API Request to Rule Engine | Event produced when a user executes a REST API call | requestUUID – the unique request id, expirationTime – the expiration time of the request | JSON with the request payload |
Rule Node Types #
The available rule nodes are categorized based on their purpose:
- Filter nodes are used to filter and route messages.
- Enrichment nodes are used to update metadata of incoming messages.
- Transformation nodes are used to modify incoming message fields such as originator, type, payload, and metadata.
- Action nodes execute various actions based on incoming messages.
- External nodes are used to interact with external systems.
Configuration #
Each Rule Node may have specific configuration parameters that depend on the Rule Node Implementation. For example, the “Filter – script” rule node is configurable via a custom JS function that processes incoming data. The “External – send email” node configuration allows you to specify mail server connection parameters.
The Rule Node configuration window can be opened by double-clicking on the node in the Rule Chain editor.
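For example, a filter script equivalent to the high-temperature check from the rule chain example earlier might look like the sketch below. The real “Filter – script” node is configured with a JavaScript function of the same shape (receiving the message, its metadata, and its type, and returning true or false); Python is used here purely for illustration:

```python
def filter_script(msg: dict, metadata: dict, msg_type: str) -> bool:
    """Route the message via the "True" relation when temperature exceeds 50 degrees."""
    return msg.get("temperature", 0) > 50
```

A message whose payload lacks the temperature key simply takes the "False" relation rather than failing.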
Debugging #
Fast IoT Platform offers users the ability to review incoming and outgoing messages for each Rule Node, which can aid in the process of debugging. To enable the debug feature, users need to ensure that the “Debug mode” checkbox is checked in the main configuration window (the first image in the Configuration section illustrates this).
After enabling debug mode, users can view information about incoming and outgoing messages, along with their corresponding relation types.
Import/Export #
To transfer your rule chain to another Fast IoT Platform instance, or simply to save it in JSON format, use the platform’s export function. Navigate to the Rule Chains page, locate the rule chain you wish to export, and click the export button on the corresponding rule chain card. This generates a JSON file containing all the data needed to recreate the same rule chain on a different Fast IoT Platform instance.
To import a rule chain from JSON, go to the Rule Chains page and click the large “+” button in the lower-right corner of the screen. In the window that appears, select the import button and choose the JSON file containing the rule chain you wish to import. Fast IoT Platform will then validate the data and import the rule chain into your instance.
Custom REST API calls to Rule Engine #
Fast IoT Platform provides an API for sending custom REST API calls to the rule engine, processing the payload of the request, and returning the result of the processing in the response body. This feature is useful for various use cases, such as:
- Extending the existing REST API of the platform with custom API calls.
- Enriching a REST API call with the attributes of a device/asset/customer and forwarding it to an external system for complex processing.
- Providing a custom API for your custom widgets.
To execute a REST API call, you can use the rule-engine-controller REST APIs.
Note that if you specify an entity ID in your call, that entity will be designated as the originator of the Rule Engine message. If you do not include entity ID parameters, the originator of the message will default to your user entity.
Troubleshooting #
If you are using Kafka queue to process messages, Fast IoT Platform offers the capability to monitor if the rate of pushing messages to Kafka is faster than the rate of consuming and processing them. In such cases, there will be an increasing delay in message processing. To activate this functionality, you must ensure that Kafka consumer-stats are enabled by referring to the queue.kafka.consumer-stats section of the Configuration properties.
Once Kafka consumer-stats are enabled, you will be able to view logs (refer to Troubleshooting) that show the offset lag for consumer groups. These logs include consumer-group logs for tb-core, tb-rule-engine, and all transport services.
Here is an example of the log message:
2021-03-19 15:01:59,794 [kafka-consumer-stats-11-thread-1] INFO o.t.s.q.k.TbKafkaConsumerStatsService - [re-Main-consumer] Topic partitions with lag: [[topic=[tb_rule_engine.main.0], partition=[0], committedOffset=[5413], endOffset=[5418], lag=[5]]].
This message indicates that there are 5 messages that have been pushed to the Main queue (tb_rule_engine.main.0 Kafka topic) but have not yet been processed. This can be deduced from the difference between the two offset values, where 5418 is the latest offset and 5413 is the offset of the last processed message.
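The lag calculation can be reproduced from any such log line, since the lag is simply the difference between the end offset and the committed offset. A small sketch (the regex targets the bracketed key=[value] fields shown above; nothing beyond the log format is assumed):

```javascript
// Extract the two offsets from a consumer-stats log line and compute the lag.
const line = 'Topic partitions with lag: [[topic=[tb_rule_engine.main.0], partition=[0], ' +
  'committedOffset=[5413], endOffset=[5418], lag=[5]]].';
const m = line.match(/committedOffset=\[(\d+)\], endOffset=\[(\d+)\]/);
const committed = Number(m[1]); // last processed message offset
const end = Number(m[2]);       // last queued message offset
const lag = end - committed;    // number of unprocessed messages
```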
In general, the logs follow this structure:
TIME [STATS_PRINTING_THREAD_NAME] INFO o.t.s.q.k.TbKafkaConsumerStatsService - [CONSUMER_GROUP_NAME] Topic partitions with lag: [[topic=[KAFKA_TOPIC], partition=[KAFKA_TOPIC_PARTITION], committedOffset=[LAST_PROCESSED_MESSAGE_OFFSET], endOffset=[LAST_QUEUED_MESSAGE_OFFSET], lag=[LAG]],[topic=[ANOTHER_TOPIC], partition=[], committedOffset=[], endOffset=[], lag=[]],…].
Where:
CONSUMER_GROUP_NAME – the name of the consumer group responsible for processing messages (could be any of the rule-engine queues, core queue, etc.)
KAFKA_TOPIC – the name of the Kafka topic being monitored
KAFKA_TOPIC_PARTITION – the partition number of the Kafka topic
LAST_PROCESSED_MESSAGE_OFFSET – the sequence number of the last message processed by the consumer (last acknowledged message in the Rule Engine, etc.)
LAST_QUEUED_MESSAGE_OFFSET – the sequence number of the last message successfully pushed to the Kafka topic
LAG – the number of unprocessed messages still present in the Kafka topic partition.
NOTE: Logs regarding consumer lag will only be displayed if there is a lag present for that particular consumer group.
Filter Nodes #
Filter Nodes are utilized for message filtering and routing purposes. Below is a list of available nodes.
Asset profile switch #
This node routes incoming messages based on the name of the asset profile; the profile name match is case-sensitive. Available since version 3.4.4.
Output
The output connection of the rule node is linked to the asset profile name, such as “Freezer Room”, “Building”, and so on. Please refer to the connections of the rule node for further information.
Usage example
Experienced users of the platform make use of Asset Profiles and configure specific rule chains for each profile. This is a convenient way to automatically direct the platform-generated messages, such as Asset Created, Deleted, Attribute Updated, etc. However, most of the messages are generated from the sensor data. Let’s say we have temperature sensors installed in the assets of two rooms, namely “Freezer Room” and “Boiler Room”, which are related to the temperature devices using the “Contains” type relation. The following rule chain changes the source of the message from the device to the associated asset, and then directs incoming messages to the appropriate rule chain for either “Freezer Room” or “Boiler Room”.
Fig. 1 – Rule chain: asset profile switch
You can download and import the rule chain, but keep in mind that the “rule chain” nodes may refer to non-existent rule chains in your specific environment.
Device profile switch #
This node routes incoming messages based on the name of the device profile; the profile name match is case-sensitive. Available since v3.4.4.
Output
The output connection of the rule node corresponds to the device profile name, such as “Temperature sensor”, “Humidity sensor”, and so on. For additional information, please refer to the connections of the rule node.
Usage example
Experienced users of the platform make use of Device Profiles and configure specific Rule Chains for each Device Profile. This is generally a helpful approach, except in situations where the device data is obtained from another message source. For instance, if you are using a BLE to MQTT gateway in combination with BLE beacons, the payload of the gateway usually includes the MAC address of the beacon as well as the beacon data:
{"mac": "7085C2F13DCD", "rssi": -25, "payload": "AABBCC"}
Let’s assume you have different beacon profiles – indoor air quality (“IAQ sensor”) and leak sensors (“Leak sensor”). The following rule chain will change the message source from the gateway to the device and direct the message to the appropriate rule chain:
Fig. 2 – Rule chain: device profile switch
You can download and import the rule chain, but please note that the “rule chain” nodes may refer to non-existent rule chains in your specific environment.
Check alarm status #
This function verifies whether the Alarm status corresponds to one of the specified statuses.
Configuration
The alarm status filter contains a list of alarm statuses, including: ‘Active Acknowledged’, ‘Active Unacknowledged’, ‘Cleared Acknowledged’, and ‘Cleared Unacknowledged’.
Fig. 3.1 – Add check alarm status
Output
Output connection types: “True” or “False”.
Example
The rule chain below verifies whether an acknowledged alarm is currently active or has already been cleared.
Fig. 3.2 – Rule chain: check alarm status
You have the option to download and import the rule chain.
Check fields presence #
The function verifies whether the designated fields exist in the message and/or metadata. The message and metadata are usually represented as a JSON object. The user defines the names of the message and/or metadata fields in the configuration.
Configuration
-
Message field names refer to the list of field names that must be present in the message.
-
Metadata field names refer to the list of field names that must be present in the metadata.
-
The ‘Check that all specified fields are present’ checkbox allows the user to check the presence of all specified fields (if checked) or at least one field (if unchecked).
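The check described by these options can be sketched in plain JavaScript. This is an illustrative model of the node's semantics, not its source code:

```javascript
// Returns true if the required fields are present in the message and/or metadata.
// requireAll mirrors the 'Check that all specified fields are present' checkbox.
function checkFieldsPresence(msg, metadata, msgFields, mdFields, requireAll) {
  const checks = [
    ...msgFields.map((f) => f in msg),       // message field presence
    ...mdFields.map((f) => f in metadata),   // metadata field presence
  ];
  return requireAll ? checks.every(Boolean) : checks.some(Boolean);
}
```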
Output
Output connection types: “True” or “False”.
Example
See configuration screenshot.
Check relation #
The function verifies the existence of the relationship between the sender of the message and other entities. If ‘check relation to specific entity’ is chosen, the user must provide the name of the related entity. If not, the rule node verifies the presence of a relationship with any entity that meets the direction and relationship type criteria.
Configuration
-
The ‘check relation to specific entity’ checkbox allows the user to specify a particular entity to check the relationship with.
-
The ‘Direction’ option configures the direction of the relationship, which can be either ‘From’ or ‘To’. This value determines the direction of the relationship from the specified/any entity to the originator. See the example below.
-
The ‘Relation type’ option allows the user to define an arbitrary relation type. The default relation types are ‘Contains’ and ‘Manages’, but the user can create a relation of any type.
Fig. 4 – Add check relation
Output
Output connection types: “True” or “False”.
Example
Suppose you have temperature sensors located both in the office and in the warehouses. When processing the data, you may want to know whether a given sensor is located in the office or in a warehouse. To achieve this, create an "OfficeToDevice" relation from the Office asset to the sensor devices located in the office.
Please refer to the configuration screenshot to learn how to configure the rule node for this specific case.
Entity type #
The function filters incoming messages based on the type of entity that originated the message. It verifies whether the entity type of the message originator matches one of the specified values in the filter.
Configuration
-
The Originator Types Filter is a list of entity types, such as Device, Asset, User, etc.
Output
Output connection types: “True” or “False”.
Example
See configuration screenshot.
Entity type switch #
The function routes incoming messages based on the type of entity that originated the message.
Output
The output connection of the rule node corresponds to the entity type of the message originator, such as “Device”, “Asset”, “User”, etc.
Example
Assuming that you have messages from various entities processed within a single rule chain, you may want to split the message flow based on the entity type. An example is shown below:
Fig. 5 – Rule chain: entity type switch
Message type #
This node filters incoming messages by predefined or custom message types. It verifies whether the message type of each incoming message matches any of the configured filter values.
Configuration
-
The message type filter contains a list of predefined message types, with the option to add custom message types.
Fig. 6 – Add message type
Output
Output connection types: “True” or “False”.
Example
See configuration screenshot.
Message type switch #
Incoming messages are directed to their respective chains based on the message type value. If the incoming message has a recognized message type, it is routed to the corresponding chain. Otherwise, it is sent to the “Other” chain.
If custom message types are used, they can be handled by connecting the "Other" output of the Message Type Switch node to a node that implements the necessary routing logic.
Output
The output connection of the rule node corresponds to the message type, such as "Post telemetry", "Post attributes", "Entity created", etc.
Example
Assuming that messages of various types are processed within a single rule chain, you may want to split the message flow based on the message type. An example is shown below:
Fig. 7 – Rule chain: message type switch
Script #
An incoming message is used to assess a Boolean function, which can be composed using TBEL (recommended) or plain JavaScript. The script function should return a Boolean value and accept three parameters.
Configuration
The TBEL/JavaScript function accepts three input parameters:
-
msg is the message payload, usually presented as a JSON object or array.
-
metadata is the metadata associated with the message. It is represented as a Key-Value map, with both keys and values in string format.
-
msgType is the string message type.
Fig. 8 – Add script
Output
Output connection types: “True” or “False”.
Example
The message payload can be accessed using the "msg" variable, for instance: "msg.temperature < 10;"
The message metadata can be accessed through the "metadata" variable, for example: "metadata.deviceType === 'DHT11';"
The message type can be accessed using the "msgType" variable, for example: "msgType === 'POST_TELEMETRY_REQUEST'"
Here is a full example of a script:
if (msgType === 'POST_TELEMETRY_REQUEST') {
    if (metadata.deviceType === 'vehicle') {
        return msg.humidity > 50;
    } else if (metadata.deviceType === 'controller') {
        return msg.temperature > 20 && msg.humidity > 60;
    }
}
return false;
The TBEL/JavaScript condition can be validated using the test filter function.
Switch #
This node directs incoming messages to one or more output connections. The node executes a configured TBEL (recommended) or JavaScript function that returns an array of strings (connection names).
Configuration
The TBEL/JavaScript function accepts three input parameters:
-
msg represents the message payload, usually as a JSON object or array.
-
metadata represents the message metadata and is presented as a Key-Value map. Both keys and values are in string format.
-
msgType is the string message type.
The script must return an array of the next relation names where the message should be directed. If the returned array is empty, the message will not be routed to any node and will be discarded.
Fig. 9.1 – Add switch
Output
The output connection of the rule node corresponds to the result of the script execution. For instance, it could be “Low Temperature Telemetry,” “Normal Temperature Telemetry,” “Idle State,” and so on. For more information, see the rule node connections.
Example
The message payload can be accessed through the "msg" variable. For instance, you can check whether the temperature is below 10 using the expression "msg.temperature < 10".
The message metadata is available through the "metadata" variable. For instance, you can verify whether the customer's name is "John" using the expression "metadata.customerName === 'John'".
The message type is available through the "msgType" variable. For instance, you can check whether the message type is "POST_TELEMETRY_REQUEST" using the expression "msgType === 'POST_TELEMETRY_REQUEST'".
Full script example:
if (msgType === 'POST_TELEMETRY_REQUEST') {
    if (msg.temperature < 18) {
        return ['Low Temperature Telemetry'];
    } else {
        return ['Normal Temperature Telemetry'];
    }
} else if (msgType === 'POST_ATTRIBUTES_REQUEST') {
    if (msg.currentState === 'IDLE') {
        return ['Idle State', 'Update State Attribute'];
    } else if (msg.currentState === 'RUNNING') {
        return ['Running State', 'Update State Attribute'];
    } else {
        return ['Unknown State'];
    }
}
return [];
The TBEL/JavaScript function can be validated using the test filter function.
Fig. 9.2 – Rule chain – switch
GPS geofencing filter #
The incoming messages can be filtered based on GPS geofencing. This involves extracting the latitude and longitude parameters from the message and comparing them against the configured perimeter.
Here are the definitions for some of the key terms related to location-based messages:
-
Latitude key name: This is the name of the message field that contains the latitude information for a location-based message.
-
Longitude key name: This is the name of the message field that contains the longitude information for a location-based message.
-
Perimeter type: This refers to the shape of the area around the location that should trigger an action or notification. It can be either a Polygon or a Circle.
-
Fetch perimeter from message metadata: This is a checkbox option that should be enabled if the perimeter for the location is specific to a device or asset and is stored as a device/asset attribute.
-
Perimeter key name: This is the name of the metadata key that stores the perimeter information.
-
For Polygon perimeter type: The polygon definition is a string that contains an array of coordinates. The format for each coordinate is [latitude, longitude]. The array should be enclosed in double brackets and each coordinate should be separated by a comma. Example format: [[lat1, lon1],[lat2, lon2],[lat3, lon3], … , [latN, lonN]]
-
For Circle perimeter type: The circle perimeter is defined by its center point and its range. The center latitude and longitude are the coordinates of the center of the circle. The range is a double-precision floating-point value that specifies the radius of the circle. Range units can be specified as one of the following options: Meter, Kilometer, Foot, Mile, Nautical Mile.
If the “Fetch perimeter from message metadata” option is enabled and the “Perimeter key name” is not configured, the rule node will use default metadata key names. For the polygon perimeter type, the default metadata key name is “perimeter”. For the circle perimeter type, the default metadata key names are “centerLatitude”, “centerLongitude”, “range”, and “rangeUnit”.
The circle perimeter definition is stored as a server-side attribute with the following structure:
{"latitude": 48.198618758582384, "longitude": 24.65322245153503, "radius": 100.0, "radiusUnit": "METER"}
Available radius units: METER, KILOMETER, FOOT, MILE, NAUTICAL_MILE.
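For intuition, the two perimeter checks can be sketched as follows. This is illustrative only, not the platform's implementation: the polygon check uses the classic ray-casting algorithm, and the circle check uses the haversine great-circle distance in meters.

```javascript
// Ray casting: a point is inside a polygon if a ray from it crosses the boundary an odd number of times.
// Points and polygon vertices are [latitude, longitude] pairs, as in the perimeter definition above.
function insidePolygon([lat, lon], polygon) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [latI, lonI] = polygon[i];
    const [latJ, lonJ] = polygon[j];
    const crosses = (latI > lat) !== (latJ > lat) &&
      lon < ((lonJ - lonI) * (lat - latI)) / (latJ - latI) + lonI;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Haversine distance from the circle center, compared against the range in meters.
function insideCircle([lat, lon], [centerLat, centerLon], rangeMeters) {
  const R = 6371000; // mean Earth radius, meters
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(centerLat - lat);
  const dLon = toRad(centerLon - lon);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat)) * Math.cos(toRad(centerLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a)) <= rangeMeters;
}
```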
Output
Output connection types can be set as either “True” or “False”. The “Failure” connection will be utilized under two circumstances: a) If the incoming message does not contain a configured latitude or longitude key in its data or metadata, or b) if there is a missing perimeter definition.
Examples
Static circle perimeter
Suppose you want to verify whether the device is within 100 meters of Ukraine's Independence Monument in the center of Kyiv. The monument's coordinates are: latitude = 50.4515652, longitude = 30.5236963. The setup for the rule node is straightforward:
Fig. 10.1 – Add gps geofencing filter – circle perimeter
Static polygon perimeter
Consider a basic scenario of monitoring the location of livestock. Configure the rule node to monitor whether the sheep is within the specified area perimeter:
We will use the static polygon coordinates of the farm field:
[[48.19736726399899, 24.652353415807884], [48.19800374220741, 24.65060461551745], [48.19918370897885, 24.65317953619048], [48.19849718616351, 24.65420950445969]]
If you provide the following coordinates in the message, you can test whether the rule node returns ‘True’:
{"latitude": 48.198618758582384, "longitude": 24.65322245153503}
Fig. 10.2 – Add gps geofencing filter – polygon perimeter
Dynamic circle/polygon perimeter
Let’s take a look at a more intricate case of monitoring livestock locations, where sheep may be located in multiple farms. Suppose we have established two farms, namely Farm A and Farm B. Each device used for tracking livestock is associated with either Farm A or Farm B’s assets.
Fig. 10.3 – Each device used for tracking livestock is associated with either Farm A or Farm B’s assets.
We will configure a server-side attribute called "perimeter" with the JSON value: [[48.19736726399899, 24.652353415807884], [48.19800374220741, 24.65060461551745], [48.19918370897885, 24.65317953619048], [48.19849718616351, 24.65420950445969]]
Fig. 10.4 – We will configure server-side attribute called “perimeter” with the JSON value.
The rule chain below will retrieve the attribute from the associated asset (Farm A) and employ it in the geofencing node:
Fig. 10.5 – Rule chain: gps geofencing filter
The configuration of the rule node is quite straightforward. It should be noted that the key name for perimeter is without any prefix:
Fig. 10.6 – It should be noted that the key name for perimeter is without any prefix.
You can download and import the rule chain, but please note that the “rule chain” nodes will point to a non-existent device in the “Sheep Tracker Generator” node. To replicate the example, you will need to provision the device and asset.
Enrichment Nodes #
Enrichment nodes are utilized to update the metadata of incoming messages.
Calculate delta #
Fig. 1.1 – Rule node: Calculate delta
The node calculates the 'delta' between the previous and the current time-series reading and appends it to the message. The calculation is performed within the context of the message originator, such as a device, asset, or customer. This functionality is particularly useful for smart metering applications. For instance, if a water metering device reports only the absolute value of its pulse counter once a day, determining the consumption for the current day requires comparing the previous and current readings.
Here are the configuration parameters:
Input value key (default: ‘pulseCounter’): specifies the key used for delta calculation.
Output value key (default: ‘delta’): specifies the key that will store the delta value in the enriched message.
Decimals: sets the precision for delta calculation.
Use cache for latest value (default: enabled): enables the caching of the latest values in memory.
Tell ‘Failure’ if delta is negative (default: enabled): forces message processing to fail if the delta value is negative.
Add period between messages (default: disabled): adds the value of the period between the current and previous messages.
Rule node relations:
The rule node generates a message with one of the following relations:
-
Success – if the key configured via the ‘Input value key’ parameter is present in the incoming message;
-
Other – if the key configured via the ‘Input value key’ parameter is not present in the incoming message;
-
Failure – if the ‘Tell ‘Failure’ if delta is negative’ option is enabled and the delta calculation returns a negative value;
Let's examine the behavior of the rule node through an example, assuming the following configuration:
Fig. 1.2 – Add calculate delta
Let’s assume that the following messages are sent by the same device and arrive at the rule node in the order listed:
msg: {"pulseCounter": 42}, metadata: {"ts": "1616510425000"}
msg: {"pulseCounter": 73}, metadata: {"ts": "1616510485000"}
msg: {"temperature": 22}, metadata: {"ts": "1616510486000"}
msg: {"pulseCounter": 42}, metadata: {"ts": "1616510487000"}
The output will be the following:
msg: {"pulseCounter": 42, "delta": 0, "periodInMs": 0}, metadata: {"ts": "1616510425000"}, relation: Success
msg: {"pulseCounter": 73, "delta": 31, "periodInMs": 60000}, metadata: {"ts": "1616510485000"}, relation: Success
msg: {"temperature": 22}, metadata: {"ts": "1616510486000"}, relation: Other
msg: {"pulseCounter": 42}, metadata: {"ts": "1616510487000"}, relation: Failure
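The behavior shown in this example can be modeled in plain JavaScript. This is an illustrative sketch of the node's assumed semantics (per-originator cache, 'Other' for missing keys, 'Failure' for negative deltas), not the platform's source code:

```javascript
// Cache of the latest accepted reading per originator (mirrors 'Use cache for latest value').
const latestByOriginator = new Map(); // originatorId -> { value, ts }

function calculateDelta(originatorId, msg, ts) {
  const inputKey = 'pulseCounter'; // 'Input value key'
  const outputKey = 'delta';       // 'Output value key'
  if (!(inputKey in msg)) {
    return { msg, relation: 'Other' }; // input key absent -> Other
  }
  const prev = latestByOriginator.get(originatorId);
  const delta = prev ? msg[inputKey] - prev.value : 0;
  if (delta < 0) {
    return { msg, relation: 'Failure' }; // 'Tell Failure if delta is negative' enabled
  }
  latestByOriginator.set(originatorId, { value: msg[inputKey], ts });
  const out = { ...msg, [outputKey]: delta, periodInMs: prev ? ts - prev.ts : 0 };
  return { msg: out, relation: 'Success' };
}
```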
Customer attributes #
Fig. 2.1 – Rule node: customer attributes
The node searches for the customer entity of the message originator and adds the customer’s attributes or the latest telemetry value to the message metadata.
The administrator can configure the mapping between the original attribute name and the metadata attribute name.
The node configuration includes a ‘Latest Telemetry’ checkbox. If this checkbox is selected, the node will retrieve the latest telemetry for the configured keys. Otherwise, the node will retrieve the server scope attributes.
Fig. 2.2 – Add customer attributes.
If the configured attributes exist, they will be included in the outbound message metadata. To access the fetched attributes in other nodes, you can use the template ‘metadata.temperature’.
The following message originator types are allowed: customer, user, asset, and device. If an unsupported originator type is found, an error will be thrown.
If the originator does not have an assigned customer entity, the failure chain is used; otherwise, the success chain is used.
Note: You can use ${metadataKey} to retrieve a value from metadata and $[messageKey] to retrieve a value from the message body.
Example: Suppose you have the following metadata: {“country”: “England”}. Additionally, you have an attribute with a key that is a country name and a value that is a capital city ({“England”: “London”}).
The aim is to retrieve the capital city from the attribute for the country from the metadata and add the result to the metadata with the key “city”. To achieve this, you can use ${country} as the source attribute and “city” as the target attribute.
The result would be {"city": "London"}.
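The substitution used in this example can be sketched as follows. This is an illustrative model of the ${metadataKey}/$[messageKey] templating, not the platform's template engine, which may differ in edge cases:

```javascript
// Resolve ${key} placeholders from metadata and $[key] placeholders from the message body.
function resolvePattern(pattern, msg, metadata) {
  return pattern
    .replace(/\$\{(\w+)\}/g, (_, k) => metadata[k] ?? '')
    .replace(/\$\[(\w+)\]/g, (_, k) => msg[k] ?? '');
}
```

For the example above, `resolvePattern('${country}', {}, { country: 'England' })` yields 'England', which is then used as the attribute key to look up 'London'.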
Device attributes #
Fig. 3.1 – Rule node: device attributes
The node searches for the related device entity of the message originator using the configured query and adds the attributes (client/shared/server scope) and latest telemetry value to the message metadata.
Attributes are added into metadata with scope prefix:
-
shared attribute -> shared_
-
client attribute -> cs_
-
server attribute -> ss_
-
telemetry -> no prefix used
For example, a shared attribute named 'version' will be added to the metadata as 'shared_version'. Client attributes will use the 'cs_' prefix, and server attributes will use the 'ss_' prefix. The latest telemetry value is added to the message metadata without a prefix.
In the 'Device relations query' configuration, the administrator can select the required direction and relation depth level. The relation type can also be configured together with the required set of device types.
If multiple related entities are found, only the first entity is used for attribute enrichment, and the other entities will be discarded.
If no related entity is found, the failure chain is used; otherwise, the success chain is used.
If an attribute or telemetry is not found, it will not be added to the message metadata and will still be routed via the success chain.
The outbound message metadata will only contain configured attributes if they exist.
To access the fetched attributes in other nodes, you can use the template ‘metadata.temperature’.
Note: The Rule Node has the ability to enable/disable reporting failures if at least one selected key does not exist in the outbound message.
Fig. 3.2 – Add device attributes (1)
Fig. 3.3 – Add device attributes (2)
Originator attributes #
Fig. 4.1 – Rule node: originator attributes
The node adds message originator attributes (client/shared/server scope) and the latest telemetry value to the message metadata.
The attributes are added to the metadata with a scope prefix as follows:
-
shared attribute -> shared_
-
client attribute -> cs_
-
server attribute -> ss_
-
telemetry -> no prefix used
As an example, a shared attribute named ‘version’ will be added to the metadata with the name ‘shared_version’. Client attributes will use the ‘cs_’ prefix, while server attributes will use the ‘ss_’ prefix. The latest telemetry value is added to the message metadata as is, without a prefix.
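The prefixing rule can be sketched in plain JavaScript. This is an illustrative model of how the enriched metadata is shaped (metadata values are strings), not the node's source code:

```javascript
// Merge fetched attributes/telemetry into message metadata with the scope prefixes described above.
function addAttributesToMetadata(metadata, { shared = {}, client = {}, server = {}, telemetry = {} }) {
  const out = { ...metadata };
  for (const [k, v] of Object.entries(shared)) out['shared_' + k] = String(v);
  for (const [k, v] of Object.entries(client)) out['cs_' + k] = String(v);
  for (const [k, v] of Object.entries(server)) out['ss_' + k] = String(v);
  for (const [k, v] of Object.entries(telemetry)) out[k] = String(v); // telemetry: no prefix
  return out;
}
```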
Fig. 4.2 – Add originator attributes
If the configured attributes exist, the outbound message metadata will contain them.
To access the fetched attributes in other nodes, you can use the template ‘metadata.cs_temperature’.
Note: The Rule Node has the ability to enable/disable reporting failures if at least one selected key does not exist in the outbound message.
Originator fields #
Fig. 5.1 – Rule node: originator fields
The node fetches the field values of the message originator entity and adds them to the message metadata. The administrator can configure the mapping between the field name and the metadata attribute name. If a specified field is not part of the message originator entity fields, it will be ignored.
Fig. 5.2 – Add originator fields
The following message originator types are allowed: Tenant, Customer, User, Asset, Device, Alarm, Rule Chain.
If an unsupported originator type is found, the failure chain is used; otherwise, the success chain is used.
If a field value is not found, it is not added to the message metadata and will still be routed via the success chain.
The outbound message metadata will only contain configured attributes if they exist.
To access fetched attributes in other nodes, you can use the template ‘metadata.devType’.
Related attributes #
Fig. 6.1 – Rule node: related attributes
The node finds the related entity of the message originator entity using the configured query and adds attributes or the latest telemetry value to the message metadata.
The administrator can configure the mapping between the original attribute name and the metadata attribute name.
In the ‘Relations query’ configuration, the administrator can select the required direction and relation depth level. A set of relation filters can also be configured with the required relation type and entity types.
There is a ‘Latest Telemetry’ checkbox in the node configuration. If this checkbox is selected, the node will fetch the latest telemetry for the configured keys. Otherwise, the node will fetch server scope attributes.
If multiple related entities are found, only the first entity is used for attribute enrichment, and the other entities are discarded.
Fig. 6.2 – Add related attributes
If no related entity is found, the failure chain is used; otherwise, the success chain is used.
The outbound message metadata will contain configured attributes if they exist.
To access fetched attributes in other nodes, you can use the template ‘metadata.tempo’.
Note: Since TB Version 3.3.3, you can use ‘${metadataKey}’ for a value from metadata and ‘$[messageKey]’ for a value from the message body.
An example of this feature can be seen in the description for the Customer Attributes node.
Tenant attributes #
Fig. 7.1 – Rule node: tenant attributes
The Node identifies the Tenant of the entity that originated the message and incorporates the Tenant Attributes or the latest telemetry value into the Message Metadata.
The Administrator has the ability to customize the correlation between the original attribute name and the corresponding Metadata attribute name.
The Node configuration includes a checkbox labeled “Latest Telemetry.” If this option is enabled, the Node will retrieve the most up-to-date telemetry data for the specified keys. If the checkbox is not selected, the Node will retrieve attributes scoped to the server.
Fig. 7.2 – Add tenant attributes
If configured attributes exist, the Outbound Message Metadata will include them. To access retrieved attributes in other nodes, you can use the ‘metadata.tempo’ template.
The Message Originator can be one of the following types: Tenant, Customer, User, Asset, Device, Alarm, Rule Chain. If an unsupported Originator type is detected, an error will be generated.
If the Originator has no assigned Tenant Entity, the Failure chain will be used; otherwise, the Success chain will be used.
Note: Starting from TB Version 3.3.3, you can use ${metadataKey} to retrieve a value from metadata and $[messageKey] to retrieve a value from the message body.
An example of this feature can be found in the description for the Customer attributes node.
Originator telemetry #
Fig. 8.1 – Rule node: originator telemetry
The Node adds the telemetry values of the Message Originator from a specific time range, which was selected in the node configuration, to the Message Metadata.
Fig. 8.2 – Add originator telemetry
Telemetry values are added to Message Metadata without a prefix.
The rule node has three fetch modes:
-
FIRST: retrieves telemetry from the database that is closest to the beginning of the time range.
-
LAST: retrieves telemetry from the database that is closest to the end of the time range.
-
ALL: retrieves all telemetry from the database that falls within the specified time range.
Fig. 8.3 – The rule node has three fetch modes.
If the fetch mode FIRST or LAST is selected, the Outbound Message Metadata will contain JSON elements in the form of key/value pairs.
Alternatively, if the fetch mode ALL is selected, the telemetry data will be fetched as an array.
Note: The rule node is capable of extracting a limited number of records, up to 1000, into an array.
The resulting array will consist of JSON objects with the timestamp and corresponding values.
Note: The end of the time interval must always be greater than the beginning of the interval.
If the ‘Use metadata interval patterns’ checkbox is selected, the rule node will utilize the Start Interval and End Interval patterns from the metadata.
The units of these patterns are in milliseconds since the UNIX epoch (January 1, 1970 00:00:00 UTC).
Fig. 8.4 – If the ‘Use metadata interval patterns’ checkbox is selected, the rule node will utilize the Start Interval and End Interval patterns from the metadata.
-
If any pattern is missing from the Message metadata, the outbound message will be routed via the failure chain.
-
Additionally, if any pattern has an invalid data type, the outbound message will also be routed via the failure chain.
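A sketch of supplying the interval boundaries via metadata, in milliseconds since the UNIX epoch. The key names 'startInterval' and 'endInterval' are illustrative; use whichever patterns you configured in the rule node:

```javascript
// Build metadata interval patterns for "the last hour", as epoch-millisecond strings.
const now = Date.now();
const metadata = {
  startInterval: String(now - 60 * 60 * 1000), // one hour ago
  endInterval: String(now),                    // must be greater than the start
};
```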
If the configured telemetry fields exist and belong to the selected range, the outbound message metadata will contain them.
If an attribute or telemetry value is not found, it will not be added to the Message Metadata and will still be routed via the Success chain.
To access the fetched telemetry in other nodes, you can use this template: JSON.parse(metadata.temperature).
Note: The rule node has the capability to choose the telemetry sampling order when the Fetch mode is set to ALL.
Fig. 8.5 – The rule node has the capability to choose the telemetry sampling order when the Fetch mode is set to ALL.
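As a sketch of how a series fetched in ALL mode might be consumed by a downstream script node, consider the following. It is written as a plain function rather than a script-node body (inside a node, metadata and msg are provided directly), and the key name 'temperature' is illustrative:

```javascript
// Sketch of a downstream script-node body, written as a plain function.
// Assumes a 'temperature' series was fetched in ALL mode, so the metadata
// value is a JSON array string of {ts, value} objects.
function averageFetchedTemperature(metadata, msg) {
    var samples = JSON.parse(metadata.temperature);
    var sum = 0;
    for (var i = 0; i < samples.length; i++) {
        sum += samples[i].value;
    }
    msg.avgTemperature = samples.length ? sum / samples.length : 0;
    return msg;
}
```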
Tenant details #
Fig. 9.1 – Rule node: tenant details
The Rule Node adds fields from the Tenant details to either the message body or metadata.
In the Node configuration, there is a checkbox labeled ‘Add selected details to the message metadata.’ If this checkbox is selected, the existing fields will be added to the message metadata instead of the message data.
Fig. 9.2 – Add tenant details
The selected details are added to the metadata with the prefix ‘tenant_’. The outbound message will contain the configured details if they exist.
To access the fetched details in other nodes, you can use one of the following templates:
-
metadata.tenant_address
-
msg.tenant_address
If the Originator does not have an assigned Tenant Entity, the Failure chain is used. Otherwise, the Success chain is used.
Customer details #
Fig. 10.1 – Rule node: customer details
The Rule Node adds fields from the Customer details to either the message body or metadata.
In the Node configuration, there is a checkbox labeled ‘Add selected details to the message metadata.’ If this checkbox is selected, the existing fields will be added to the message metadata instead of the message data.
Fig. 10.2 – Add customer details
The selected details are added to the metadata with the prefix ‘customer_’. The outbound message will contain the configured details if they exist.
To access the fetched details in other nodes, you can use one of the following templates:
-
metadata.customer_email
-
msg.customer_email
The following Message Originator types are allowed: Asset, Device, Entity View. If an unsupported Originator type is found, an error is thrown.
If the Originator does not have an assigned Customer Entity, the Failure chain is used. Otherwise, the Success chain is used.
Transformation Nodes #
Transformation Nodes are utilized to modify incoming Message fields such as Originator, Message Type, Payload, and Metadata.
Change originator #
Fig. 1.1 – Rule node: change originator
All incoming Messages in Fast IoT Platform have an originator field that identifies the entity that submits the Message. It could be a Device, Asset, Customer, Tenant, and so on.
This node is used in cases where a submitted Message should be processed as a Message from another entity. For example, a Device submits telemetry and the telemetry should be copied into a higher level Asset or to a Customer. In this case, the Administrator should add this node before the Save Timeseries Node.
The originator can be changed to:
-
Originator’s Customer
-
Originator’s Tenant
-
Related Entity that is identified by Relations Query
In the ‘Relations query’ configuration, the Administrator can select the required Direction and relation depth level. A set of Relation filters can also be configured with the required Relation type and Entity Types.
Fig. 1.2 – Add change originator
If multiple Related Entities are found, only the first Entity is used as the new originator, and the other entities are discarded.
The Failure chain is used if no Related Entity/Customer/Tenant was found; otherwise, the Success chain is used.
The outbound Message will have a new originator ID.
Script Transformation Node #
Fig. 2.1 – Rule node: Transformation script
This Node modifies the Message payload, Metadata, or Message type using a configured JavaScript function.
The JavaScript function receives 3 input parameters:
-
msg – the Message payload
-
metadata – the Message metadata
-
msgType – the Message type
The script should return the following structure:
{ msg: new payload, metadata: new metadata, msgType: new msgType }
Fig. 2.2 – Add transformation script
All fields in the resulting object are optional and will be taken from the original Message if not specified.
The outbound Message from this Node will be a new Message that was constructed using the configured JavaScript function.
The JavaScript transform function can be verified using the “Test JavaScript Function” option.
Example
Node receives a Message with the payload:
{ "temperature": 22.4, "humidity": 78 }
Original Metadata:
{ "sensorType": "temperature" }
Original Message Type – POST_TELEMETRY_REQUEST
The following modifications should be performed:
-
Change the Message type to ‘CUSTOM_UPDATE’.
-
Add an additional attribute ‘version’ to the payload with the value ‘v1.1’.
-
Change the ‘sensorType’ attribute value in Metadata to ‘roomTemp’.
The following transform function will perform all necessary modifications:
var newType = "CUSTOM_UPDATE";
msg.version = "v1.1";
metadata.sensorType = "roomTemp";
return {msg: msg, metadata: metadata, msgType: newType};
To Email Node #
This Node transforms a Message into an Email Message by populating email fields using values derived from Message metadata. It sets the output Message type as ‘SEND_EMAIL’, which can be accepted later by the Send Email Node. All email fields can be configured to use values from metadata. This Node supports sending HTML pages and images.
Fig. 3 – Rule node: to email
For example, if the incoming Message has a ‘deviceName’ field in the metadata, and the email body should contain its value, the value of ‘deviceName’ can be referenced as “${deviceName}” in the email template, as shown in the following example:
Device ${deviceName} has high temperature
If you would like to send HTML or images, you can choose HTML or Dynamic as the Mail Body type. You can refer to the “Send HTML or Image Inside Email” examples for more information.
Moreover, if the incoming Message metadata contains an ‘attachments’ field with a reference to files stored in a database, this Node can prepare email attachments. Note that this feature is part of the File Storage feature supported by Fast IoT Platform.
Flow Nodes #
Flow Nodes are utilized to manage the processing flow of messages.
Acknowledge Node #
This node is designed to mark messages as successfully processed or acknowledged. For more information about message processing results, please refer to the documentation. This indicates to the rule engine that the message has been successfully processed.
This node is particularly useful when you do not want to reprocess failed messages. For example, the rule chain below will only reprocess failed messages for important messages. If an unimportant message fails, the failure will be ignored.
Fig. 1 – Acknowledge Node
Note that we advise placing the “acknowledge” rule node at the end of the message processing chain. While it is technically possible to add rule nodes after the “acknowledge” node, doing so may lead to out-of-memory (OOM) errors: if subsequent rule nodes process messages slowly, unprocessed messages will accumulate in memory and consume excessive amounts of RAM.
Checkpoint Node #
This node publishes a copy of the message to the selected rule engine queue. The original message is marked as successfully processed once the target queue acknowledges the publish of the copied message.
This is useful when you want to mark a message as high priority or process messages sequentially grouped by the originator of the message. Please refer to the default queues or define your own queue for more information.
Rule Chain Node #
This node forwards the message to the selected rule chain. The target rule chain may also output the results of processing using an output node. The output node enables the reuse of rule chains and extraction of the processing logic into modules (rule chains).
For instance, you can create a rule chain that validates incoming messages and processes valid and invalid messages separately.
Fig. 2 – Rule chain example
The logic for validating messages can be reused in other rule chains. To achieve this, the validation logic is extracted and placed in a separate rule chain.
Fig. 3 – Rule chain example
Please note the usage of the “Output” nodes in the validation rule chain. The names of these output nodes should match the outgoing relations of the “rule chain node” in the main rule chain.
Output Node #
The output node is used in conjunction with the rule chain node to publish the result of message processing to the caller rule chain. The name of the output rule node corresponds to the relation type of the output message and is used to forward messages to other rule nodes in the caller rule chain. Please refer to the documentation for the rule chain node for an example.
External Nodes #
External Nodes are utilized to interact with external systems.
AWS SNS Node #
Fig. 1.1 – Rule node: AWS SNS node
This node publishes messages to AWS SNS (Amazon Simple Notification Service).
Configuration:
Fig. 1.2 – Add AWS SNS
-
The topic ARN pattern can be set by providing either the direct topic name for message publishing or by using a pattern that will be resolved to the actual ARN topic name using message metadata.
-
The AWS Access Key ID and AWS Secret Access Key refer to the credentials of an AWS IAM User with programmatic access. More information on AWS access keys can be found here.
-
The AWS Region must correspond to the one in which the SNS Topic(s) are created. A current list of AWS Regions can be found here.
For instance, suppose the topic name depends on the device type, and the incoming message contains a “deviceType” field in the metadata with the value “controller”.
To publish the message to the controller’s topic, set the Topic ARN pattern to arn:aws:sns:us-east-1:123456789012:${deviceType}.
During runtime, the pattern will be resolved to arn:aws:sns:us-east-1:123456789012:controller.
The node will publish the full message payload to the SNS. If necessary, the rule chain can be configured to use a chain of transformation nodes to send the correct payload to the SNS.
The outbound message from this node will contain the response messageId and requestId in the message metadata. The original message payload, type, and originator will not be modified.
AWS SQS Node #
Fig. 2.1 – Rule node: AWS SQS
This node publishes messages to AWS SQS (Amazon Simple Queue Service).
Configuration:
Fig. 2.2 – Add AWS SQS
-
The queue type for this node is SQS (Amazon Simple Queue Service), which can be either Standard or FIFO.
-
The queue URL pattern is used for building the queue URL and can either be a direct queue URL for message publishing or a pattern that will be resolved to the actual queue URL using message metadata.
-
The delay parameter is used to delay a specific message by a certain number of seconds.
-
The message attributes are an optional list of attributes to be published with the message.
-
The AWS Access Key ID and AWS Secret Access Key refer to the credentials of an AWS IAM User with programmatic access. More information on AWS access keys can be found here.
-
The AWS Region must correspond to the one in which the SQS Queue(s) are created. A current list of AWS Regions can be found here.
For instance, suppose the queue URL depends on the device type, and the incoming message contains a “deviceType” field in the metadata with the value “controller”.
To publish the message to the controller’s queue, set the Queue URL pattern to https://sqs.us-east-1.amazonaws.com/123456789012/${deviceType}.
During runtime, the pattern will be resolved to https://sqs.us-east-1.amazonaws.com/123456789012/controller.
The node will publish the full message payload to the SQS. If necessary, the rule chain can be configured to use a chain of transformation nodes to send the correct payload to the SQS.
The optional list of attributes can be added for publishing a message in the SQS, which is a collection of pairs. Both the name and value can be static values or patterns that will be resolved using message metadata.
If a FIFO queue is selected, the message ID will be used as the deduplication ID and the message originator as the group ID.
The outbound message from this node will contain the response messageId, requestId, messageBodyMd5, messageAttributesMd5, and sequenceNumber in the message metadata. The original message payload, type, and originator will not be modified.
Kafka Node #
Fig. 3.1 – Rule node: Kafka
The Kafka Node is responsible for sending messages to Kafka brokers. It can handle messages of any type and sends the record via a Kafka producer to the Kafka server.
Configuration:
Fig. 3.2 – Add Kafka Node
-
The Topic pattern can be a static string or a pattern that is resolved using message metadata properties, such as ${deviceType}.
-
The bootstrap servers parameter specifies a list of Kafka brokers separated by a comma.
-
The automatically retry times parameter determines the number of attempts to resend a message if the connection fails.
-
The produces batch size parameter specifies the batch size in bytes for grouping messages with the same partition.
-
The time to buffer locally parameter sets the maximum local buffering window duration in milliseconds, while the client buffer max size parameter specifies the maximum buffer size in bytes for sending messages.
-
The number of acknowledgments parameter determines the number of acknowledgments the node requires to receive before considering a request complete.
-
The key serializer is, by default, org.apache.kafka.common.serialization.StringSerializer, and the value serializer is also, by default, org.apache.kafka.common.serialization.StringSerializer. Other properties can be added for the Kafka broker connection.
The node will send the full message payload to the Kafka topic. If required, the rule chain can be configured to use a chain of transformation nodes for sending the correct payload to the Kafka.
The outbound message from this node will contain the response offset, partition, and topic properties in the message metadata. The original message payload, type, and originator will not be modified.
Note that if you want to use Confluent Cloud as a Kafka broker, you should add the following properties:
-
CLUSTER_API_KEY – your access key from Cluster settings.
-
CLUSTER_API_SECRET – your access secret from Cluster settings.
MQTT Node #
Fig. 4.1 – Rule node: MQTT node
Publish the incoming message payload to the topic of the configured MQTT broker with at-least-once Quality of Service (QoS).
Configuration:
Fig. 4.2 – Add MQTT Node
-
Topic Pattern – It can be a static string or a pattern that can be resolved using Message Metadata properties, for example, ${deviceType}.
-
Host – The MQTT broker host.
-
Port – The MQTT broker port.
-
Connection Timeout – The timeout in seconds for connecting to the MQTT broker.
-
Client ID – An optional client identifier used for connecting to the MQTT broker. If not specified, the default generated clientId will be used.
-
Add Service ID as Suffix to Client ID – An optional flag. When enabled, the service ID will be added as a suffix to the client ID. This is useful when running in microservices mode, as it allows the rule nodes on each node to connect to the broker without errors.
-
Clean Session – Establishes a non-persistent connection with the broker when enabled.
-
SSL Enable/Disable – Enables/disables secure communication.
-
Credentials – MQTT connection credentials. It can be Anonymous, Basic, or PEM.
Different authentication credentials are supported for an external MQTT broker:
-
Anonymous – No authentication
-
Basic – Username/password pair is used for authentication
-
PEM – PEM certificates are used for authentication
If the PEM credentials type is selected, the following configuration should be provided:
-
CA certificate file
-
Certificate file
-
Private key file
-
Private key password
Published Body – The node will send the full Message payload to the MQTT topic. If required, the rule chain can be configured to use a chain of Transformation Nodes for sending the correct Payload to the MQTT broker.
In case of a successful message publishing, the original Message will be passed to the next nodes via the Success chain. Otherwise, the Failure chain is used.
Azure IoT Hub Node #
Fig. 5.1 – Rule node: Azure IoT Hub Node
Configuration:
Fig. 5.2 – Add Azure IoT Hub Node
-
Topic – For more information about IoT Hub topics, use the following link.
-
Hostname – The Azure IoT Hub hostname.
-
Device ID – Your device ID from Azure IoT Hub.
-
Credentials – Azure IoT Hub connection credentials. It can be either Shared Access Signature or PEM.
Different authentication credentials are supported for Azure IoT Hub:
-
Shared Access Signature – SAS key is used for authentication.
-
PEM – PEM certificates are used for authentication.
If Shared Access Signature credentials type is selected, the following configuration should be provided:
-
SAS Key – It is the key from your device in Azure IoT Hub.
-
CA certificate file, by default Baltimore certificate is used. More about certificates can be found here.
If PEM credentials type is selected, the following configuration should be provided:
-
CA certificate file, by default Baltimore certificate is used. More about certificates can be found here.
-
Certificate file
-
Private key file
-
Private key password.
Published body – The node will send the full Message payload to the Azure IoT Hub device. If required, the rule chain can be configured to use a chain of Transformation Nodes for sending the correct Payload to the Azure IoT Hub.
In case of a successful message publishing, the original Message will be passed to the next nodes via the Success chain. Otherwise, the Failure chain is used.
RabbitMQ Node #
Fig. 6.1 – Rule node: RabbitMQ
Publish the incoming message payload to RabbitMQ.
Configuration:
Fig. 6.2 – Add rabbitmq
-
Exchange name pattern – the exchange to which the message will be published. It can be a static string or a pattern that is resolved using Message Metadata properties. For example, ${deviceType}.
-
Routing key pattern – the routing key. It can be a static string or a pattern that is resolved using Message Metadata properties. For example, ${deviceType}.
-
Message properties – optional routing headers. Supported headers include BASIC, TEXT_PLAIN, MINIMAL_BASIC, MINIMAL_PERSISTENT_BASIC, PERSISTENT_BASIC, and PERSISTENT_TEXT_PLAIN.
-
Host – the default host to use for connections.
-
Port – the default port to use for connections.
-
Virtual host – the virtual host to use when connecting to the broker.
-
Username – the AMQP username to use when connecting to the broker.
-
Password – the AMQP password to use when connecting to the broker.
-
Automatic recovery – enables or disables automatic connection recovery.
-
Connection timeout – the TCP establishment timeout in milliseconds; zero for infinite.
-
Handshake timeout – the AMQP 0-9-1 protocol handshake timeout, in milliseconds.
-
Client properties – additional properties that are sent to the server during connection startup.
Published body – the node will send the full message payload to RabbitMQ. If required, the rule chain can be configured to use a chain of Transformation Nodes for sending the correct payload.
In case of successful message publishing, the original message will be passed to the next nodes via the Success chain. Otherwise, the Failure chain is used.
REST API Call Node #
Fig. 7.1 – Rule node: REST API Call Node
Make REST API calls to an external REST server.
Configuration:
Fig. 7.2 – Add Rest Api Call Node
-
Endpoint URL pattern – It can be a static string or a pattern that is resolved using Message Metadata properties. For example, ${deviceType}.
-
Request method – GET, POST, PUT, DELETE.
-
Headers – Request headers where the header or value can be a static string or a pattern that is resolved using Message Metadata properties.
Endpoint URL
The URL can be a static string or a pattern that uses only the message metadata for resolving patterns. Therefore, property names used in the patterns must exist in the message metadata. Otherwise, the raw pattern will be added to the URL.
For instance, if the message payload contains a property called deviceType with a value of container, the following pattern:
http://localhost/api/${deviceType}/update
will be resolved to:
http://localhost/api/container/update
Headers
A collection of header names and values can be configured, which will be added to the REST request. Patterns can be used to configure both the header name and value, for example, ${deviceType}. Only the message metadata is used for resolving patterns. Therefore, property names used in the pattern must exist in the message metadata. Otherwise, the raw pattern will be added to the header.
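The resolution behavior described above can be sketched as a small function. This mirrors the documented behavior, not the platform’s actual implementation: values come from message metadata only, and keys missing from the metadata leave the raw pattern in place.

```javascript
// Sketch of ${...} pattern resolution: values come from message metadata
// only; keys missing from metadata leave the raw pattern untouched.
function resolvePattern(pattern, metadata) {
    return pattern.replace(/\$\{(\w+)\}/g, function (match, key) {
        return Object.prototype.hasOwnProperty.call(metadata, key)
            ? metadata[key]
            : match;
    });
}
```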
The node will send the full message payload to the configured REST endpoint as the request body. If required, the rule chain can be configured to use a chain of Transformation Nodes to send the correct payload.
The outbound message from this node will contain the response status, statusCode, statusReason, and response headers in the message metadata. The outbound message payload will be the same as the response body. The original message type and originator will not be changed.
To send a single file as the request body, add a field called “attachments” to the message metadata with the file UUID stored in the database. In that case, any message data will be ignored, and only the file content will be sent. To define the request content type, use the header setting, such as:
Content-Type: application/json; charset=UTF-8
Here is an example of message metadata to send a single file:
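For illustration, the metadata could look like the following (the UUID is a placeholder, not a real file identifier):

```javascript
// Illustrative metadata shape only; the UUID below is a placeholder.
var metadata = {
    "attachments": "11111111-2222-3333-4444-555555555555"
};
```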
Note: This feature is part of the File Storage feature supported by Fast IoT Platform Professional Edition.
In the case of a successful request, the outbound message will be passed to the next nodes via the Success chain. Otherwise, the Failure chain will be used.
Send Email Node #
Fig. 8.1 – Rule node: Send Email
The node sends an incoming message using the configured mail server. This node only works with messages that were created using the To Email transformation node. Please connect this node with the To Email node using the Success chain.
Configuration:
Fig. 8.2 – Add send email
The node sends an incoming message using the configured mail server. If enabled, the default mail server configured at the system level will be used. The SMTP protocol and server host and port can be specified, along with the read timeout in milliseconds. If supported by the server, enabling TLS will use the STARTTLS command. If a username and password are required for the mail host, they can be specified as well.
This node can work with the default mail server configured at the system level. Please refer to the documentation for more details on how to configure the default system SMTP settings. If a specific mail server is required for this node, disable the ‘Use system SMTP settings’ checkbox and configure the mail server manually.
Additionally, this node can create email attachments if the incoming message has a ‘prepared attachments’ metadata field with references to files stored in the database. Multiple attachments are supported, and comma-separated UUIDs can be used to send multiple files. Here is an example of message metadata:
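For illustration, metadata referencing two stored files could look like this (the UUIDs are placeholders, not real file identifiers):

```javascript
// Illustrative metadata shape only; comma-separated placeholder UUIDs
// reference two files stored in the database.
var metadata = {
    "attachments": "11111111-2222-3333-4444-555555555555,66666666-7777-8888-9999-000000000000"
};
```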
In the case of successful mail sending, the original message will be passed to the next nodes via the Success chain. Otherwise, the Failure chain will be used.
You can see a real-life example of this node in the following tutorial:
Send SMS Node #
The node can construct an SMS message based on the metadata fields from the incoming message and send it using AWS SNS or Twilio SMS providers. We recommend enabling debug mode for this rule node.
Configuration:
Fig. 9 – Add send sms
If enabled, the node will use the default SMS provider server configured at the system level. Please refer to the SMS provider settings for more details. The ‘Phone Numbers To’ field allows multiple phone numbers to be specified, and the SMS message template can be configured as well. Message metadata fields can also be referenced in both of these fields.
This node can work with the default SMS provider configured at the system level. If the SMS message is successfully sent to all recipients, the original message will be passed to the next nodes via the Success chain. Otherwise, the Failure chain will be used.
Action Nodes #
The Action Nodes execute different actions based on the incoming message.
Math Function Node #
The rule node applies mathematical functions and saves the result in the message and/or database.
This rule node supports five types of arguments:
-
Constant
-
Value from the message body
-
Value from the message metadata
-
Value of the attribute belonging to the message originator (device, asset, etc.) that should be of numeric type or a string convertible to a float
-
Value of the latest time-series belonging to the message originator (device, asset, etc.) that should be of numeric type or a string convertible to a float.
The primary use case for this rule node is to modify one or more values from the database based on data from the incoming message. For instance, you can increase the totalWaterConsumption based on the deltaWaterConsumption reported by the device.
An alternative use case is to replace simple JS script nodes with a more lightweight and performant implementation. For example, you can use the CUSTOM operation to transform Fahrenheit to Celsius (C = (F - 32) / 1.8) using the expression (x - 32) / 1.8.
The execution is synchronized within the scope of the message originator (e.g., device) and server node. If you have rule nodes in different rule chains, they will process messages from the same originator synchronously within the scope of the server node.
The result of the function can be added to the message body or metadata. You can also save the result as an attribute or time-series in the database.
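The Fahrenheit-to-Celsius conversion mentioned above, written out as a plain function for reference:

```javascript
// The CUSTOM operation expression (x - 32) / 1.8 as a plain function.
function fahrenheitToCelsius(x) {
    return (x - 32) / 1.8;
}
```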
Create Alarm Node #
Fig. 1.1 – Rule node: Create Alarm
This node attempts to load the latest alarm with the configured alarm type for the message originator. If an uncleared alarm exists, it will be updated; otherwise, a new alarm will be created.
Node Configuration:
-
Alarm Details Builder script
-
Alarm Type – any string that represents the alarm type
-
Alarm Severity – {CRITICAL | MAJOR | MINOR | WARNING | INDETERMINATE}
-
Is Propagate – whether the alarm should be propagated to all parent related entities.
Note: The rule node has the ability to:
-
Read alarm config from message
-
Get the alarm type using a pattern with fields from the message metadata.
-
Filter the propagation of alarms to parent entities based on their relation types.
Fig. 1.2 – Add create alarm
The Alarm Details Builder script is used to generate a JsonNode containing the alarm details. It is useful for storing additional parameters within the alarm. For example, you can save attribute name/value pairs from the original message payload or metadata.
The Alarm Details Builder script should return the details object.
Fig. 1.3 – Alarm Details Builder
-
The message payload can be accessed using the msg property, for example msg.temperature.
-
The message metadata can be accessed using the metadata property, for example metadata.customerName.
-
The message type can be accessed using the msgType property, for example msgType.
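A minimal sketch of an Alarm Details Builder, expressed as a function over the script inputs (inside the node, msg, metadata, and msgType are provided directly and the script body simply ends with `return details;`; the field names below are illustrative):

```javascript
// Sketch: build the alarm details object from the original message,
// storing a payload value and a metadata value alongside the alarm.
function buildDetails(msg, metadata) {
    var details = {};
    details.temperature = msg.temperature;    // value from the message payload
    details.customer = metadata.customerName; // value from the message metadata
    return details;
}
```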