RTI Connext Modern C++ API Version 7.2.0
dds::core::policy::TimeBasedFilter Class Reference

Allows a dds::sub::DataReader to indicate that it is not interested in all the sample updates that occur within a time period. More...

#include <dds/core/policy/CorePolicy.hpp>

Public Member Functions

 TimeBasedFilter ()
 Creates the default time-based filter. More...
 TimeBasedFilter (const dds::core::Duration &the_min_separation)
 Creates a policy with the specified minimum separation. More...
TimeBasedFilter & minimum_separation (const dds::core::Duration &min_separation)
 Sets the minimum separation between subsequent samples. More...
const dds::core::Duration minimum_separation () const
 Gets the minimum separation between subsequent samples. More...

Detailed Description

Allows a dds::sub::DataReader to indicate that it is not interested in all the sample updates that occur within a time period.

The filter states that the dds::sub::DataReader does not want to receive more than one value each minimum_separation, regardless of how fast the changes occur.

RxO = N/A
Changeable = YES


You can use this QoS policy to reduce the amount of data received by a dds::sub::DataReader. dds::pub::DataWriter entities may send data faster than needed by a dds::sub::DataReader. For example, a dds::sub::DataReader of sensor data that is displayed to a human operator in a GUI application does not need to receive data updates faster than a user can reasonably perceive changes in data values. This is often measured in tenths (0.1) of a second up to several seconds. However, a dds::pub::DataWriter of sensor information may have other dds::sub::DataReader entities that are processing the sensor information to control parts of the system and thus need new data updates in measures of hundredths (0.01) or thousandths (0.001) of a second.

With this QoS policy, different dds::sub::DataReader entities can set their own time-based filters, so that data published faster than the period set by each dds::sub::DataReader will not be delivered to that dds::sub::DataReader.
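For illustration, a DataReader might set its own filter period like this in the Modern C++ API (a minimal sketch; Foo is a placeholder for an application-defined topic type, and error handling is omitted):

```cpp
#include <dds/dds.hpp>  // RTI Connext Modern C++ API

// Sketch: each DataReader chooses its own minimum_separation.
// "Foo" is a placeholder for an application-defined IDL type.
dds::domain::DomainParticipant participant(0);
dds::topic::Topic<Foo> topic(participant, "Example Foo");
dds::sub::Subscriber subscriber(participant);

// Start from the Subscriber's default DataReader QoS and add the filter:
// deliver at most one sample per instance per second to this reader.
dds::sub::qos::DataReaderQos reader_qos = subscriber.default_datareader_qos();
reader_qos << dds::core::policy::TimeBasedFilter(dds::core::Duration(1, 0));

dds::sub::DataReader<Foo> reader(subscriber, topic, reader_qos);
```

Other DataReaders of the same Topic can use different (or no) minimum_separation values; the filter is applied per reader.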

The TIME_BASED_FILTER also applies to each instance separately; that is, the constraint is that the dds::sub::DataReader does not want to see more than one sample of each instance per minimum_separation period.
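The per-instance rule can be modeled in plain, standalone C++ (this is an illustrative sketch of the filtering semantics, not RTI Connext code; TimeBasedFilterSketch and accept are hypothetical names):

```cpp
// Sketch of the per-instance filtering rule: a sample for a given instance
// is delivered only if at least min_separation has elapsed since the last
// delivered sample of that same instance.
#include <chrono>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

class TimeBasedFilterSketch {
public:
    explicit TimeBasedFilterSketch(Clock::duration min_separation)
        : min_separation_(min_separation) {}

    // Returns true if a sample for `instance` arriving at `now` passes the filter.
    bool accept(const std::string& instance, Clock::time_point now) {
        auto it = last_delivered_.find(instance);
        if (it != last_delivered_.end() && now - it->second < min_separation_) {
            return false;  // too soon after the last delivered sample of this instance
        }
        last_delivered_[instance] = now;  // record the delivery time for this instance
        return true;
    }

private:
    Clock::duration min_separation_;
    std::map<std::string, Clock::time_point> last_delivered_;  // per-instance state
};
```

Note that samples of different instances are filtered independently: a burst of updates to instance A does not affect whether an update to instance B is delivered.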

This QoS policy allows you to optimize resource usage (CPU and possibly network bandwidth) by only delivering the required amount of data to each dds::sub::DataReader, accommodating the fact that, for rapidly-changing data, different subscribers may have different requirements and constraints as to how frequently they need or can handle being notified of the most current values. As such, it can also be used to protect applications that are running on a heterogeneous network where some nodes are capable of generating data much faster than others can consume it.

For best effort data delivery, if the data type is unkeyed and the dds::pub::DataWriter has an infinite dds::core::policy::Liveliness::lease_duration, RTI Connext will only send as many packets to a dds::sub::DataReader as required by the TIME_BASED_FILTER, no matter how fast dds::pub::DataWriter::write() is called.

For multicast data delivery to multiple DataReaders, the one with the lowest minimum_separation determines the DataWriter's send rate. For example, if a dds::pub::DataWriter sends over multicast to two DataReaders, one with a minimum_separation of 2 seconds and one with a minimum_separation of 1 second, the DataWriter will send one sample every second.

In configurations where RTI Connext must send all the data published by the dds::pub::DataWriter (for example, when the dds::pub::DataWriter is reliable, when the data type is keyed, or when the dds::pub::DataWriter has a finite dds::core::policy::Liveliness::lease_duration), only the data that passes the TIME_BASED_FILTER will be stored in the receive queue of the dds::sub::DataReader. Extra data will be accepted but dropped. Note that filtering is only applied on alive samples (that is, samples that have not been disposed/unregistered).


It is inconsistent for a dds::sub::DataReader to have a minimum_separation longer than its DEADLINE period.

However, it is important to be aware of certain edge cases that can occur when your publication rate, minimum separation, and deadline period align and that can cause missed deadlines that you may not expect. For example, suppose that you nominally publish samples every second but that this rate can vary somewhat over time. You declare a minimum separation of 1 second to filter out rapid updates and set a deadline of two seconds so that you will be aware if the rate falls too low. Even if your update rate never wavers, you can still miss deadlines! Here's why:

Suppose you publish the first sample at time t=0 seconds. You then publish your next sample at t=1 seconds. Depending on how your operating system schedules the time-based filter execution relative to the publication, this second sample may be filtered. You then publish your third sample at t=2 seconds, and depending on how your OS schedules this publication in relation to the deadline check, you could miss the deadline.

This scenario demonstrates a couple of rules of thumb:

  • Beware of setting your minimum_separation to a value very close to your publication rate: you may filter more data than you intend to.
  • Beware of setting your minimum_separation to a value that is too close to your deadline period relative to your publication rate. You may miss deadlines.
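Following these rules of thumb, a reader in the once-per-second scenario above might leave a clear margin on both sides (a sketch; the specific values are illustrative, not recommendations):

```cpp
// Sketch: for data nominally published once per second, keep the filter
// period well below the publication period and the deadline well above it.
dds::sub::qos::DataReaderQos reader_qos;
reader_qos << dds::core::policy::TimeBasedFilter(
                  dds::core::Duration::from_millisecs(250))  // well below the 1 s rate
           << dds::core::policy::Deadline(
                  dds::core::Duration(3, 0));                // well above the 1 s rate
```

With this margin, small scheduling jitter around the publication instant neither filters wanted samples nor triggers spurious missed deadlines.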

See dds::core::policy::Deadline for more information about the interactions between deadlines and time-based filters.

The setting of a TIME_BASED_FILTER – that is, the selection of a minimum_separation with a value greater than zero – is consistent with all settings of the HISTORY and RELIABILITY QoS. The TIME_BASED_FILTER specifies the samples that are of interest to the dds::sub::DataReader. The HISTORY and RELIABILITY QoS affect the behavior of the middleware with respect to the samples that have been determined to be of interest to the dds::sub::DataReader; that is, they apply after the TIME_BASED_FILTER has been applied.

In the case where the reliability QoS kind is dds::core::policy::ReliabilityKind_def::RELIABLE, in steady-state – defined as the situation where the dds::pub::DataWriter does not write new samples for a period "long" compared to the minimum_separation – the system should guarantee delivery of the last sample to the dds::sub::DataReader.

Constructor & Destructor Documentation

◆ TimeBasedFilter() [1/2]

dds::core::policy::TimeBasedFilter::TimeBasedFilter ( )

Creates the default time-based filter.

◆ TimeBasedFilter() [2/2]

dds::core::policy::TimeBasedFilter::TimeBasedFilter ( const dds::core::Duration & the_min_separation )

Creates a policy with the specified minimum separation.

Member Function Documentation

◆ minimum_separation() [1/2]

TimeBasedFilter & dds::core::policy::TimeBasedFilter::minimum_separation ( const dds::core::Duration & min_separation )

Sets the minimum separation between subsequent samples.

[default] 0 (meaning the dds::sub::DataReader is potentially interested in all values)

[range] [0,1 year], < dds::core::policy::Deadline::period
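A minimal sketch of the setter and getter in use (the setter returns the policy itself, so calls can be chained with other policy setters):

```cpp
// Default-constructed filter has minimum_separation 0: all samples pass.
dds::core::policy::TimeBasedFilter filter;

// Fluent setter: deliver at most one sample per instance per 500 ms.
filter.minimum_separation(dds::core::Duration::from_millisecs(500));

// Getter with the same name returns the current value.
const dds::core::Duration sep = filter.minimum_separation();
```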

◆ minimum_separation() [2/2]

const dds::core::Duration dds::core::policy::TimeBasedFilter::minimum_separation ( ) const

Gets the minimum separation between subsequent samples (see the setter with the same name).