Time of Device vs. Cumulocity time issue

we have a finding related to a model built in Analytics Builder:

we were seeing inaccurate numbers (this time much lower than expected), and after long troubleshooting we reached the following conclusion, based on simulations we created for location updates:

    • if the time of the alarm (received from the device, referred to as "device time") is greater than the platform time, the alarm is processed by the analytics model
    • if the time of the alarm (received from the device, referred to as device time) is less than the platform time, the alarm is discarded and not processed by the analytics model, as if it happened in the past

it seems the analytics model processes alarms synchronously, and when the alarm time is before the 'Cumulocity platform current time' the alarm is dropped. Is that an accurate conclusion? How much delay in seconds does Cumulocity tolerate for an alarm, and how can it be tuned?

Hi,

this is a correct observation, but you can change the behavior inside models by enabling "Ignore Timestamps" on the input blocks, and you can change the timeout globally via a tenant option. The behavior is explained here:
https://cumulocity.com/docs/streaming-analytics/analytics-builder/#input-blocks-and-event-timing

Be sure to carefully consider both "Ignore Timestamps" and increasing the timeout, as both have consequences.

Best regards,
Harald

thanks. Tuning the timeout is probably safer than setting 'Ignore Timestamps', which may sometimes result in model loops, correct?

I am trying to check the current value of this key via Postman using GET before changing it using PUT: GET {{url}}/tenant/options/{{category}}/{{key}} (where category = analytics.builder and key = timedelay_secs)

but the result is 404 Not found
“message”: “Unable to find option by given key: analytics.builder/timedelay_secs”,
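For reference, a minimal Python sketch of how the GET/PUT endpoints for this tenant option are composed. The base URL here is a placeholder and authentication is omitted; the category/key are the ones from the request above, and the `{"value": "..."}` body is the standard Cumulocity tenant-option payload shape:

```python
import json

def tenant_option_url(base_url: str, category: str, key: str) -> str:
    """Build the Cumulocity tenant-option endpoint used above."""
    return f"{base_url}/tenant/options/{category}/{key}"

# Placeholder base URL -- substitute your own tenant.
url = tenant_option_url("https://example.cumulocity.com",
                        "analytics.builder", "timedelay_secs")
# GET url  -> 404 if the option was never set (the default applies)
# PUT url  -> sets the option, e.g. to 120 seconds:
put_body = json.dumps({"value": "120"})

print(url)
print(put_body)
```

A GET returning 404 is therefore not an error in itself; it just means the option is still at its default.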

by the way, where does our tenant take its time from? Via NTP? And can we adjust our tenant time to run late, for example by 2 minutes?

so far we have 4 options:

  1. to keep the time provided by the middleware as the "measurement time", just as now (the time the measurement was taken by the tag), and to adjust timedelay_secs to at least 120 seconds, maybe more. This is the most accurate time, but after this timestamp the tag's measurements still need the RTLS network time to reach the positioning engine (the biggest chunk; in the example below it was more than a minute, and it depends on the number of hops), plus the time needed by the positioning engine to compute the position (should be fast) and the time needed by the middleware (should be fast)

"measurement_time": "3/24/2024, 7:09:06 AM", "positioning_time_epoch": 1711264257770, "positioning_time": "3/24/2024, 7:10:57 AM"

  2. to use the positioning time instead of the measurement time as the device time inserted by the middleware. But the default timedelay_secs of 1 second may still not be enough sometimes, so timedelay_secs would also need to be increased

  3. to adjust, if possible, the Cumulocity time so that it is around 2 minutes behind the actual time

  4. to enable the "ignore timestamps" option very selectively, only on the models which receive location updates directly from the RTLS positioning engine (this one seems to be the most practical)
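As a sanity check on option 1, the end-to-end delay in the example record above can be computed directly. A small Python sketch, assuming both timestamps are in UTC:

```python
from datetime import datetime, timezone

# "measurement_time": "3/24/2024, 7:09:06 AM" (assumed UTC)
measurement = datetime(2024, 3, 24, 7, 9, 6, tzinfo=timezone.utc)
# "positioning_time_epoch": 1711264257770 (milliseconds since epoch)
positioning_epoch_s = 1711264257770 / 1000.0

delay_s = positioning_epoch_s - measurement.timestamp()
# roughly 112 s -- consistent with choosing timedelay_secs of at least 120
print(f"measurement -> positioning delay: {delay_s:.2f} s")
```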

which option do you think would be the optimal one?

If the tenant option does not exist, the default is used. So unless you have changed the timeout, it is expected that the tenant option does not exist.

The tenant gets the time from the underlying C8Y instance infrastructure which is synced automatically. So there is no way of changing that.

Regarding your options:

  1. Be aware that if you increase timedelay_secs, you also increase the delay before Analytics Builder processes the data. If you set it to 120s, Analytics Builder will wait up to 120s on received data (to be exact, it will wait (timestamp + 120s) - current time) before the model gets it. This can be a problem if the outputs are time-critical (note that the outputs will have correct timestamps, but they will be created with a delay). Data will also be buffered for timedelay_secs, which can increase resource consumption in high-frequency use cases.
  2. see 1
  3. this is not possible.
  4. Ignore timestamps can have other issues. The data will be sent to Analytics Builder as it arrives, which might be delayed or out of order. This can affect time-based aggregations or other logic you have implemented. If the cause of the problems you are observing is only unsynchronized clocks, you should be fine. If the device sends data in order and at the expected cadence, Analytics Builder should receive it in order and at the expected cadence most of the time.

The best option would actually be to ensure that the devices have synchronized clocks. I understand that this is not always possible. I would start by increasing timedelay_secs, but if you have to go to values significantly higher than 10-20s, I would consider using ignore timestamps. 10-20s is a general guideline: if the devices do not send much data and delays are acceptable, higher timedelay_secs configurations are fine.
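The buffering behavior described in point 1 can be sketched as a tiny helper (hypothetical function name; the formula is the (timestamp + timedelay) - current-time rule stated above):

```python
def processing_wait_s(event_timestamp_s: float,
                      timedelay_secs: float,
                      now_s: float) -> float:
    """Seconds an event would be held back before a model sees it."""
    return max(0.0, (event_timestamp_s + timedelay_secs) - now_s)

# Event stamped 30 s in the past, 120 s timeout -> held about 90 s more.
print(processing_wait_s(1000.0, 120.0, 1030.0))   # 90.0
# Event stamped 130 s in the past -> already outside the window, no wait
# (with the default 1 s timeout, such a late event is what gets dropped).
print(processing_wait_s(1000.0, 120.0, 1130.0))   # 0.0
```

This makes the trade-off visible: a larger timedelay_secs tolerates later-arriving data, but fresh data is delayed by up to that same amount.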

ok, thanks. So I will go and test "ignore timestamps" selectively on the alarm inputs of the corresponding models where we have the problem

Hello,

we are facing a tricky challenge in the same analytics model: we would like to simply subtract input1 - input2. The challenge with an Expression block computing (input1 - input2), or even with the Difference block, is that the calculation only happens when new measurements for all inputs are received, while we need the difference between the latest measurements of both. For example: input1 has changed 10 times during the day while input2 has changed only once, and we need to calculate (measurement #10 of input1, the latest one) - (measurement #1 of input2, the latest one).

is there a way to do this using Analytics Builder? We tried a Latch block before the Expression block, so that we provide the latched value only at a specific time, namely when we trigger this model via a cron timer at the end of the day

last update: we are now testing the following idea: doing the calculation at a specific time (using a Gate enabled by a Cron Timer), plus using managed objects, because of their asynchronous nature, to store the synchronous measurements so that they can be fed into the main model later, once the model's main gate is opened by the cron timer