TimeBasedFilter synchronization

Hi,
I would like to understand a particular aspect of TimeBasedFilter. Suppose we have several DataWriters (A, B, C) and one DataReader (R) with a TimeBasedFilter with a minimum separation of 3 sec.

Will each DataWriter A, B, C write into R's queue every 3 sec, or will the DataReader be notified of new samples every 3 sec?

As an example, in the first case:

A writes at t=0,3,6,9
B writes at t=1,4,7,10
C writes at t=2,5,8,11

will R receive samples at t=0,1,2,3,4,5,6,7,8,9,10,11?

In the second case:

will R receive samples at t=0,3,6,9,12?

Which is the right behaviour?

Thanks

rip

TimeBasedFilter semantics work at both ends. 

T (your reader R) wants data with a minimum separation of 3 seconds.  Foo{app} indicates what the application is doing; Foo{dds} indicates what the middleware is doing.

A{app} writes at 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ... sec
A{dds} sends  at 0,       3,       6,       9,         12,     ...


B{app} writes at 0,    2,    4,    6,    8,    10,     12,     ... sec
B{dds} sends  at 0,          4,          8,            12,     ...


C{app} writes at    1,       4,       7,       10,         13, ... sec
C{dds} sends  at    1,       4,       7,       10,         13, ...

NOTE: Time filtering has happened at the writer side

T{dds} receives 0a, 0b, 1c, 3a, 4b, 4c, 6a, 7c, 8b, 9a, 10c, 12a, 12b, 13c ...

NOTE: the ordering within "0a, 0b", "4b, 4c" and "12a, 12b" is indeterminate.

T{app} receives  (Either 0a or 0b), 3a, 6a, 9a, (Either 12a or 12b), ...

NOTE: Time filtering ALSO happens at the reader side
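
To make that concrete, here is a minimal, self-contained C++ model of the example above. It models only the minimum-separation rule itself (not any vendor's implementation), applying the rule once per writer and once more on the merged stream at the reader, and it resolves the "0a or 0b" tie arbitrarily in favor of a:

// Standalone model of "minimum separation" filtering: a sample passes
// only if at least min_sep seconds elapsed since the last passed sample.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct Sample { int t; std::string src; };

static std::vector<Sample> filter(const std::vector<Sample>& in, int min_sep) {
    std::vector<Sample> out;
    int last = -min_sep;                 // so the first sample always passes
    for (const Sample& s : in) {
        if (s.t - last >= min_sep) { out.push_back(s); last = s.t; }
    }                                    // "too soon" samples are dropped
    return out;
}

int main() {
    const int min_sep = 3;
    std::vector<Sample> a, b, c;
    for (int t = 0; t <= 13; ++t)    a.push_back({t, "a"}); // A writes every 1 s
    for (int t = 0; t <= 13; t += 2) b.push_back({t, "b"}); // B writes every 2 s
    for (int t = 1; t <= 13; t += 3) c.push_back({t, "c"}); // C writes every 3 s

    // Writer-side filtering, then merge what actually hits the wire.
    std::vector<Sample> wire;
    for (const auto* w : {&a, &b, &c})
        for (const Sample& s : filter(*w, min_sep)) wire.push_back(s);
    std::stable_sort(wire.begin(), wire.end(),
                     [](const Sample& x, const Sample& y) { return x.t < y.t; });

    // Reader-side filtering on the merged stream.
    for (const Sample& s : filter(wire, min_sep))
        std::printf("%d%s ", s.t, s.src.c_str());  // prints: 0a 3a 6a 9a 12a
    std::printf("\n");
    return 0;
}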

More notes: due to "slack" (latency) within the system, the pattern won't stay this neat for long. This is because time-based filters enforce a "minimum separation time"; they are not rate-periodic or monotonic.

If you are using Reliable, and 6a has to be repaired AND that repair isn't completed until after 7c has arrived (and been delivered), then 6a will be dropped (and the pattern from then on will be 7c, 10c, 13c, ...).

In your example, the second case is closer to what will happen (R{app} will receive a sample approximately every 3 seconds), because the reader-side filter will drop the samples that arrive "too soon".  Your app has told the middleware that it wants samples with a minimum separation of 3 seconds, so that is what it receives.
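
As a rough sketch, requesting that contract looks something like this in the classic C++ DDS API (exact identifiers vary by vendor binding; 'subscriber' and 'topic' are assumed to have been created elsewhere):

// Request a 3-second minimum separation on the reader side.
DDS_DataReaderQos reader_qos;
subscriber->get_default_datareader_qos(reader_qos);
reader_qos.time_based_filter.minimum_separation.sec     = 3;
reader_qos.time_based_filter.minimum_separation.nanosec = 0;
DDSDataReader* reader =
    subscriber->create_datareader(topic, reader_qos,
                                  NULL /* no listener */, DDS_STATUS_MASK_NONE);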

Regards,

Rip


Thank you, very good explanation!

Gerardo Pardo

Hi,

Just one additional clarification. Rip's excellent explanation applies to the situation where the data type associated with the Topic either has no key, or, if it has a key, all the writes are for the same instance (key value).

Time-based filters apply per instance. That is, the algorithm described by Rip is applied separately to each key value. The TimeBasedFilter contract ensures that samples for each key value are received by the DataReader at most once per filter period.

For example, suppose you use a time-based filter of 3 seconds and the Topic is a 'CarTopic' used to update the position of cars, where the CarType associated with the CarTopic has an attribute licensePlate that is the key. In this case the DataReader will see the position of each car updated at most once every 3 seconds.
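
A minimal, self-contained C++ model of that per-key behaviour (the license-plate values below are made up for illustration):

// Per-instance time-based filtering: the 3-second minimum separation
// is tracked independently for each key value (here, 'licensePlate').
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    const int min_sep = 3; // seconds

    // (arrival time, licensePlate) samples reaching the reader
    std::vector<std::pair<int, std::string>> samples = {
        {0, "ABC-123"}, {1, "XYZ-999"}, {2, "ABC-123"},
        {3, "ABC-123"}, {4, "XYZ-999"}, {5, "XYZ-999"},
    };

    std::map<std::string, int> last_delivered; // filter state per key
    for (const auto& s : samples) {
        auto it = last_delivered.find(s.second);
        if (it == last_delivered.end() || s.first - it->second >= min_sep) {
            std::printf("t=%d deliver %s\n", s.first, s.second.c_str());
            last_delivered[s.second] = s.first;
        } // a sample arriving "too soon" for its key is filtered out
    }
    // Delivers: t=0 ABC-123, t=1 XYZ-999, t=3 ABC-123, t=4 XYZ-999
    return 0;
}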

Gerardo


Given the scenarios above, if the writers suddenly stop sending samples in the middle of a 3-second separation window, does that mean the reader will miss the last update? For example, if the last sample sent by {A,B,C} happens at 11 sec, will the reader miss it, so the last data it has is just 10c? If this is true, how can I use time-based filtering and still get the last update/sample?

A useful scenario is this: a car sends out its position every second, and a tracking device wants the car's position at most once per minute. If the car breaks down and stops sending its position, it makes sense for the tracking device to end up with the car's last position, not just the position from the start of the last minute.

Thanks,
-Kim


To clarify my previous post: in the case of instances (keys), I'm surprised to find that DDS drops samples instead of just updating the instances. Ideally, an instance would be updated with new samples while the minimum separation has not yet elapsed and then, once the separation is met, the latest sample would be available to the reader... i.e., the reader doesn't have to read every sample that comes in, but when it does read, it gets the latest sample. I understand the same effect can be achieved using polling, but it would be nice to achieve it using a callback.

Thanks,
-Kim

rip

Hi,

For performance reasons, we don't do that when time-based filtering is in effect.  Keep in mind that "performance" covers both CPU cycles and network bandwidth utilization.  By filtering on the writer side whenever possible, we keep unwanted samples off the network, which reduces the network load and, at the same time, reduces the CPU (interrupt) load on the receiver.

As you note, you can get the behavior you want ("it will have the latest sample") by not using a time-based filter and instead setting HISTORY to KEEP_LAST with depth 1, then polling at a 3-second interval.  Also, you say "the reader doesn't have to read every sample that comes in", but that is exactly what happens if you use a Listener.  If not every sample is required, then a polling/WaitSet implementation is the better option for this use case.
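
A rough sketch of that alternative, again in classic C++ DDS API style (exact names vary by vendor binding; 'subscriber' and 'topic' are assumed to exist, and the read loop is only indicated in comments):

// No TIME_BASED_FILTER; instead keep only the newest sample per instance
// and let the application poll at its own pace.
DDS_DataReaderQos reader_qos;
subscriber->get_default_datareader_qos(reader_qos);
reader_qos.history.kind  = DDS_KEEP_LAST_HISTORY_QOS;
reader_qos.history.depth = 1;   // the cache always holds the latest sample
DDSDataReader* reader =
    subscriber->create_datareader(topic, reader_qos,
                                  NULL /* no listener */, DDS_STATUS_MASK_NONE);

// Application loop (indicative only): every 3 s, call read()/take() on
// 'reader'. With depth 1, each poll sees the most recent value of each
// instance, including the very last update before a writer stops.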


Thanks for the explanation. I understand better what to expect with time-based filtering.

-Kim