Use WaitSets, Except When You Need Extreme Latency

An application can be notified that data has become available in a DataReader in several ways, depending on the application’s requirements. Among these options, WaitSets are the safest.

The application designer can choose from three mechanisms to be notified that data is available:

  1. Receive a notification in an application thread that data is available
  2. Receive a notification in the middleware thread that data is available
  3. Do not receive a notification that data is available – instead, poll for data.

These choices also apply to being notified of other events, such as a missed deadline, lost data, or incompatible QoS.

| | Listener (i.e., callback) | WaitSet (i.e., separate read thread) |
| --- | --- | --- |
| Throughput | Lower than WaitSet | Higher than Listener. You can also batch the received data by setting max_event_count in the WaitSet property to tune throughput. |
| Latency* (latency at a given throughput level) | Lower than WaitSet, i.e., faster data read | Higher than Listener, due to context switching between the waiting and reading threads |
| CPU Utilization** | Higher than WaitSet | Lower than Listener |
| Side Effects | Possible unless programmed carefully. Callbacks are executed by the middleware receive threads; blocking or excessive delays within a callback delay data arrival and have other undesired side effects (see ref. 1). | None. The application has control over the thread that is waiting for data. |
| Typical Use Case | Extreme latency performance | Safety- and mission-critical systems |

Table 1. DataReader using a Listener (callback) vs. a WaitSet (separate read thread)

Note: Long processing should never be done within a callback; only quick operations. That is also why certain operations are not allowed within a callback (see ref. 2).

Being notified in your thread using WaitSets:

The first option is the safest, because the application has control over the thread that is waiting for data.  This means that the thread can block or perform long tasks without affecting the performance of the middleware.

In addition, when you are in your own thread, there is no limitation on which APIs you can call.  (The restrictions that apply when you are called back in a middleware thread are described below.)

The downside to being notified about data availability in your own thread is a slight increase in latency due to thread context switching.  If your application needs the smallest possible latency, you may want to use a middleware thread to be notified that data is available. However, this may not matter in many use cases: on multicore systems, the delay introduced by a context switch is often on the order of microseconds.

To be notified about data in your own thread, you use a WaitSet object.  This object allows you to block your thread until some event becomes true.

Example code showing how to be notified that data is available in your own thread:

        // Create and configure the WaitSet
        WaitSet *waitset = new WaitSet();

        StatusCondition *condition = reader->get_statuscondition();

        // Configure the WaitSet to wake up the thread when data is available
        condition->set_enabled_statuses(
                        DDS_DATA_AVAILABLE_STATUS);
        waitset->attach_condition(condition);

        // ...

        // Block my thread until data is available or timeout
        ConditionSeq active_conditions_seq;
        DDS_Duration_t timeout = {1,0};
        DDS_ReturnCode_t retcode = waitset->wait(active_conditions_seq, 
                                                 timeout);


        // Simple example that uses only one condition.  Iterate over the active 
        // conditions sequence if there are multiple active conditions
        if (active_conditions_seq[0] == condition) 
        {

            HelloMsgSeq data_seq;
            SampleInfoSeq info_seq;
 
            // Access data using read() or take().  If you fail to do this
            // the condition will remain true, and the WaitSet will wake up  
            // immediately.
            retcode = reader->take(data_seq, info_seq);

            // ... process data normally and return loan
        }

If you decide to use Conditions and WaitSets to receive data, see this best practice to help decide when to use a ReadCondition or a StatusCondition:
http://community.rti.com/best-practices/use-statusconditions-instead-readconditions-ddsanyreadstate-ddsanyviewstate

An example of using a WaitSet to be notified that data is available is here: 
http://community.rti.com/examples/waitset-status-condition

Being notified in a middleware thread by using a Listener:

If you choose to be notified in the middleware thread, you do this by creating a listener and installing it on the DataReader.  The benefit of using a listener on the middleware’s thread is that you will get lower latency data than if you are accessing data using a WaitSet or by polling.

Caution: This listener will be called back from one of only a few middleware threads, which means that you must be careful not to block or do any long processing.  If you block in this thread, it will affect the performance of multiple DataReaders in the same application.  See this best practice for more information.

Also, if you use the middleware thread to be notified of events such as data availability, there are restrictions on which RTI API calls you can make, due to Exclusive Areas (EAs).  Exclusive Areas are the mechanism that RTI uses to prevent deadlocks in the middleware threads. Different EAs are defined at the level of the DomainParticipant, Subscriber/DataReader, and Publisher/DataWriter.  If you are being notified that data is available on the DataReader, you are in the Subscriber’s EA.  When you are in the Subscriber’s EA, you may call APIs on the Publisher/DataWriter, but not on the DomainParticipant.  This means you can write data, but you cannot create any entities in the listener callback.

Lastly, with listeners you do not take advantage of multicore systems, because a single RTI Connext receive thread executes all the callbacks. This is not the case with WaitSets, where each user thread does its own processing.
 
Example code using a listener:
class DataListener : public DataReaderListener {
public:
    // ...
    virtual void on_data_available(DataReader* reader);
};

void DataListener::on_data_available(DataReader* reader)
{
    HelloMsgDataReader *data_reader = NULL;
    HelloMsgSeq data_seq;
    SampleInfoSeq info_seq;

    data_reader = HelloMsgDataReader::narrow(reader);

    // Access data using read() or take()
    data_reader->take(data_seq, info_seq);

    // ... process data normally and return loan
}

// ... In the reader creation code, add the listener
DataListener *reader_listener = new DataListener();

reader = subscriber->create_datareader(
        topic, DDS_DATAREADER_QOS_DEFAULT, reader_listener,
        DDS_STATUS_MASK_ALL);
 
 
If you decide to use listeners to be notified that data is available, there is an example here:

Polling for data:

You can call read() or take() anywhere in your application, without being notified.  This is useful for an application that processes data periodically, such as a GUI application that periodically refreshes the screen.  This allows you to control the CPU usage of your HMI application by drawing at a certain frequency, instead of drawing whenever data is available.

Example code showing polling for data:
 while (!ShuttingDown())
{
    DDS_ReturnCode_t retcode;
    HelloMsgSeq data_seq;
    SampleInfoSeq info_seq;

    // Access data using read() or take()
    retcode = data_reader->take(data_seq, info_seq);

    if (retcode == DDS_RETCODE_OK)
    {
        // ... process data normally and return loan
    }

    NDDSUtility::sleep(receive_period);

}
   

A more thorough example of polling read is here:
http://community.rti.com/examples/polling-read

*The throughput and latency of your data also depend on several other factors, such as:

  • The latency of your network hardware
  • The congestion on your network
  • The priority of the threads that receive your data

**CPU utilization has two cases:

  1. If you are using a normal WaitSet, the CPU usage may be higher due to thread context switches
  2. If you are using a batched WaitSet, the CPU usage may be much lower, which increases throughput

--

References:

  1. http://community.rti.com/best-practices/never-block-listener-callbac
  2. http://community.rti.com/kb/how-can-i-prevent-deadlocks-while-invoking-rti-apis-listener
