Multi-threaded Request-Reply


Hi,

I am working on a communication adapter that uses the Request-Reply (RR) pattern heavily. To allow for parallel processing, I would like to use some number of threads N, each sending a Request and waiting for the corresponding Reply in its own context before starting over. I am not sure which approach is best for implementing this parallel processing. The alternatives I thought of are:

1)     Create N threads. Each thread holds its own Requester object. N Requester objects create significant overhead and inflate the design, which I am not especially fond of.

2)     Create N threads. Each thread uses the same communal Requester object. However, race conditions and deadlocks seem to be a problem. To prevent those, I am protecting all code between Requester.CreateRequestSample() and Requester.TakeReply() with a mutex. This approach hardly deserves the name parallel processing, as I am serializing the flow with a mutex that encompasses nearly all of the time-consuming code.

3)     Use only one thread and, instead of waiting for the reply (and being blocked), be notified via a handler. In this case, I could manage the Requests I am waiting for internally in my code, e.g. by allowing only a limited number of Replies to be pending.

Now, I was wondering:

Regarding 2): Is it possible to use one communal Requester object and enable the Exclusive Area QoS setting to prevent race conditions and deadlocks? If so, how would I define this QoS setting in USER_QOS_PROFILES.xml, and would this QoS setting alone be sufficient without changing the code?

Regarding 3): Is it possible for the Requester to be notified when the Reply has arrived? Basically, using a non-blocking operation on the Requester side of the communication.

Any helpful comments would be greatly appreciated.

Michael


Hi Michael,


Let me comment on your thoughts and see what the best alternatives are for the use case you described.


Proposal 1)

As you mentioned, this solution is the least efficient in terms of resources, at the application level as well as the network level. For this reason I would definitely discard this option, even though the implementation could be simple enough.

Proposal 2)

This solution represents the best architecture and provides the best resource-to-concurrency ratio. You have the right model in mind; I would just remove the need for the extra mutex. Both Requester.CreateRequestSample() and Requester.TakeReply() (and in general any Request-Reply API) are thread-safe operations, hence you can call them concurrently. The extra mutex would be needed only if your threads share other resources in your application (e.g. a request queue).

In addition, you could go a step further in efficiency and avoid creating a new reply sample for every single iteration. You can simply create one object at the beginning of the thread execution and reuse it each time the requester sends a request and waits for a reply. Consider the following Java code snippet:

// Thread entry point
void run() {
    // Create the request and reply samples once, before the loop,
    // and reuse them across iterations
    MyRequest request = myRequester.createRequestSample();
    MyReply reply = myRequester.createReplySample();

    while (moreRequests) {  // application-defined loop condition
        // populate the reused request (or copy one from a queue)
        // rather than creating a new sample each iteration
        myRequester.sendRequest(request);
        myRequester.waitForReplies();  // blocks until a reply is available
        myRequester.takeReply(reply);
        // process the reply
        // ...
    }
}

As a note, this model generates contention only in the sendRequest() and takeReply() calls, since they are protected by the same underlying Exclusive Area (EA). This should not represent a limitation, assuming that the bulk of the work is the actual processing of the received reply.


Proposal 3)

The only alternative I can see here is to install a DataReaderListener on the Requester's reply DataReader. The listener would handle the on_data_available() callback, which is invoked from the DataReader's receive thread context. Unfortunately, this solution is single-threaded, given that only one receive thread is created for the Subscriber and the Requester's reply DataReader. In addition, I would avoid any time-consuming processing within a DataReaderListener callback context, since that can slow down the underlying middleware processing.
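As an illustration, a minimal sketch of this approach could look as follows. I am assuming here that the Requester exposes its reply DataReader through a getReplyDataReader() accessor, that takeReply() indicates through its return value whether a reply was actually taken, and that pendingReplies is an application-defined hand-off queue drained by your own worker threads:

import java.util.concurrent.ConcurrentLinkedQueue;

import com.rti.dds.infrastructure.StatusKind;
import com.rti.dds.subscription.DataReader;
import com.rti.dds.subscription.DataReaderAdapter;

// Application-defined hand-off queue, consumed by worker threads
final ConcurrentLinkedQueue<MyReply> pendingReplies =
        new ConcurrentLinkedQueue<MyReply>();

DataReaderAdapter listener = new DataReaderAdapter() {
    @Override
    public void on_data_available(DataReader reader) {
        // Invoked from the middleware receive thread: keep this short.
        // Take the reply and hand it off instead of processing it here.
        MyReply reply = myRequester.createReplySample();
        if (myRequester.takeReply(reply)) {
            pendingReplies.offer(reply);
        }
    }
};

// Register for DATA_AVAILABLE notifications on the reply DataReader
myRequester.getReplyDataReader().set_listener(
        listener, StatusKind.DATA_AVAILABLE_STATUS);

The actual reply processing would then happen in the threads draining pendingReplies, outside the listener context.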

To address your questions:

 Is it possible to use one communal Requester object and enable the Exclusive Area QoS setting to prevent race conditions and deadlocks? If so, how would I define this QoS setting in USER_QOS_PROFILES.xml and would this QoS setting alone already be sufficient without changing the code?

EAs are always present in the middleware; they are the reason the RTI Connext API is thread-safe, so your application does not require an extra mutex to protect those calls. Multiple EAs are created by default to reduce contention to a minimum. The Exclusive Area QoS only lets you indicate whether a single EA should be created to protect all the RTI Connext APIs. This means fewer resources at the expense of more contention (i.e. all threads block on the same mutex).
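For reference, if you did want to experiment with it, the setting would look roughly like this in USER_QOS_PROFILES.xml (the library and profile names here are placeholders):

<qos_library name="MyLibrary">
  <qos_profile name="MyProfile" is_default_qos="true">
    <participant_qos>
      <!-- Use a single shared Exclusive Area: fewer resources,
           but all threads contend on the same mutex -->
      <exclusive_area>
        <use_shared_exclusive_area>true</use_shared_exclusive_area>
      </exclusive_area>
    </participant_qos>
  </qos_profile>
</qos_library>

As explained above, though, the default (multiple EAs) already gives you thread safety with less contention, so you would normally leave this setting alone.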

 Is it possible that the Requester is notified when the Reply has arrived? Basically using a non-blocking operation on the Requester side of the communication?

By installing a DataReaderListener on the Requester's reply DataReader you can get asynchronous notifications from the middleware on reception of data. The two main caveats are that there is only one receive thread and that heavy processing within the listener context may slow down the middleware processing.

- Antonio


Hello Antonio, 

thank you for this great explanation and for addressing all the questions I asked. It is very much appreciated.


Hi Michael,


Glad to help!

-Antonio


I think I figured out my problem. I first suspected a race condition to be the problem with approach 2), hence the question regarding parallel processing and the Request-Reply pattern.

Here is what was happening. Let’s say I use N threads and one Requester object, and assume the following sequence of events:

Thread 1 is preempted after Send_Request(), while waiting in Wait_For_Reply().

Thread 2 is preempted after Send_Request(), while waiting in Wait_For_Reply().

...

Thread N-1 is preempted after Send_Request(), while waiting in Wait_For_Reply().

Now Thread N keeps running, sending multiple requests and taking the corresponding replies. When the scheduler returns to any of the first N-1 threads, some of their Wait_For_Reply calls time out.

The reason was that the depth setting of KEEP_LAST_HISTORY_QOS was smaller than N, which caused some of the earlier Replies to be dropped from the DataReader's receive queue.
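In case it helps someone else, the fix in USER_QOS_PROFILES.xml looks roughly like this (the depth value is just an example; it has to be at least N):

<datareader_qos>
  <history>
    <kind>KEEP_LAST_HISTORY_QOS</kind>
    <!-- must be >= N, the number of requester threads,
         so no pending Reply is pushed out of the queue -->
    <depth>16</depth>
  </history>
</datareader_qos>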

For next time: is there a way to inspect the receive and send queues of the DataReader and DataWriter instances while the code is running?