RTI Connext Traditional C++ API
Version 5.2.3
Filter that allows a DDSDataReader to specify that it is interested only in (potentially) a subset of the values of the data.

Public Attributes

struct DDS_Duration_t minimum_separation
    The minimum separation duration between subsequent samples.
Filter that allows a DDSDataReader to specify that it is interested only in (potentially) a subset of the values of the data.
The filter states that the DDSDataReader does not want to receive more than one value per minimum_separation interval, regardless of how fast the changes occur.
You can use this QoS policy to reduce the amount of data received by a DDSDataReader. DDSDataWriter entities may send data faster than needed by a DDSDataReader. For example, a DDSDataReader of sensor data that is displayed to a human operator in a GUI application does not need to receive data updates faster than a user can reasonably perceive changes in data values. This is often measured in tenths (0.1) of a second up to several seconds. However, a DDSDataWriter of sensor information may have other DDSDataReader entities that are processing the sensor information to control parts of the system and thus need new data updates in measures of hundredths (0.01) or thousandths (0.001) of a second.
With this QoS policy, different DDSDataReader entities can set their own time-based filters, so that data published faster than the period set by each DDSDataReader will not be delivered to that DDSDataReader.
The TIME_BASED_FILTER also applies to each instance separately; that is, the constraint is that the DDSDataReader does not want to see more than one sample of each instance per minimum_separation period.
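In the Traditional C++ API, the filter is configured through the time_based_filter field of DDS_DataReaderQos before the reader is created. A minimal sketch, assuming a subscriber and topic have already been created elsewhere:

```cpp
#include "ndds/ndds_cpp.h"  // RTI Connext Traditional C++ API

// Sketch: create a DataReader that delivers at most one sample of each
// instance per second. The `subscriber` and `topic` are assumed to exist.
DDSDataReader* create_filtered_reader(DDSSubscriber* subscriber,
                                      DDSTopic* topic) {
    DDS_DataReaderQos reader_qos;
    subscriber->get_default_datareader_qos(reader_qos);

    // minimum_separation = 1 second
    reader_qos.time_based_filter.minimum_separation.sec = 1;
    reader_qos.time_based_filter.minimum_separation.nanosec = 0;

    return subscriber->create_datareader(
        topic, reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
}
```

This is a configuration sketch rather than a complete program; error checking on the returned pointer and the QoS call is omitted for brevity.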
This QoS policy allows you to optimize resource usage (CPU and possibly network bandwidth) by delivering only the required amount of data to each DDSDataReader. It accommodates the fact that, for rapidly changing data, different subscribers may have different requirements and constraints for how frequently they need, or can handle, being notified of the most current values. As such, it can also be used to protect applications running on a heterogeneous network where some nodes can generate data much faster than others can consume it.
For best effort data delivery, if the data type is unkeyed and the DDSDataWriter has an infinite DDS_LivelinessQosPolicy::lease_duration, RTI Connext will only send as many packets to a DDSDataReader as required by the TIME_BASED_FILTER, no matter how fast FooDataWriter::write is called.
For multicast data delivery to multiple DataReaders, the one with the lowest minimum_separation determines the DataWriter's send rate. For example, if a DDSDataWriter sends over multicast to two DataReaders, one with a minimum_separation of 2 seconds and one with a minimum_separation of 1 second, the DataWriter will send every 1 second.
In configurations where RTI Connext must send all the data published by the DDSDataWriter (for example, when the DDSDataWriter is reliable, when the data type is keyed, or when the DDSDataWriter has a finite DDS_LivelinessQosPolicy::lease_duration), only the data that passes the TIME_BASED_FILTER will be stored in the receive queue of the DDSDataReader. Extra data will be accepted but dropped. Note that filtering is only applied on alive samples (that is, samples that have not been disposed/unregistered).
It is inconsistent for a DDSDataReader to have a minimum_separation longer than its DEADLINE period.
However, it is important to be aware of certain edge cases that can occur when your publication rate, minimum separation, and deadline period align, and that can cause missed deadlines you may not expect. For example, suppose that you nominally publish samples every second, but that this rate can vary somewhat over time. You declare a minimum separation of 1 second to filter out rapid updates and set a deadline of 2 seconds so that you will be aware if the rate falls too low. Even if your update rate never wavers, you can still miss deadlines. Here's why:
Suppose you publish the first sample at time t=0 seconds. You then publish your next sample at t=1 seconds. Depending on how your operating system schedules the time-based filter execution relative to the publication, this second sample may be filtered. You then publish your third sample at t=2 seconds, and depending on how your OS schedules this publication in relation to the deadline check, you could miss the deadline.
This scenario demonstrates a couple of rules of thumb:

- Do not set minimum_separation to a value very close to your publication rate: you may filter more data than you intend to.
- Do not set minimum_separation to a value that is too close to your deadline period relative to your publication rate: you may miss deadlines.

See DDS_DeadlineQosPolicy for more information about the interactions between deadlines and time-based filters.
The setting of a TIME_BASED_FILTER (that is, the selection of a minimum_separation with a value greater than zero) is consistent with all settings of the HISTORY and RELIABILITY QoS. The TIME_BASED_FILTER specifies the samples that are of interest to the DDSDataReader. The HISTORY and RELIABILITY QoS affect the behavior of the middleware with respect to the samples that have been determined to be of interest to the DDSDataReader; that is, they apply after the TIME_BASED_FILTER has been applied.
In the case where the reliability QoS kind is DDS_RELIABLE_RELIABILITY_QOS, in steady state (defined as the situation where the DDSDataWriter does not write new samples for a period "long" compared to the minimum_separation), the system should guarantee delivery of the last sample to the DDSDataReader.
struct DDS_Duration_t DDS_TimeBasedFilterQosPolicy::minimum_separation
The minimum separation duration between subsequent samples.
[default] 0 (meaning the DDSDataReader is potentially interested in all values)
[range] [0,1 year], < DDS_DeadlineQosPolicy::period