Impact of queue depth with "best effort" reliability


Considering a data writer with "keep last" history QoS policy: I understand from this description that, when using "reliable" reliability, the queue depth determines how many samples the data writer keeps available to re-send if they are not received by the data reader.

My question is how (if at all) this queue is used by a "best effort" reliability data writer. I have understood from this description that there is a queue:

If RELIABILITY is BEST_EFFORT: When the number of DDS samples for an instance in the queue reaches the value of depth, a new DDS sample for the instance will replace the oldest DDS sample for the instance in the queue.

But I have not understood the implications, since with best effort reliability there is no need to re-send data to receivers.

Put another way: if I have a best effort data writer using keep last and a queue depth of 10, will it behave or perform differently from a best effort data writer using keep last and a queue depth of 1? Perhaps it is only relevant when using e.g. transient local durability?
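To make the comparison concrete, here is a rough sketch of the kind of configuration I mean, using the traditional Connext C++ API (the helper function is only illustrative, and 'publisher' and 'topic' are assumed to exist already):

    #include "ndds/ndds_cpp.h"  // RTI Connext traditional C++ API

    // Sketch: a best-effort, keep-last DataWriter with a configurable depth.
    DDSDataWriter* create_best_effort_writer(DDSPublisher* publisher,
                                             DDSTopic* topic,
                                             int depth) {
        DDS_DataWriterQos qos;
        publisher->get_default_datawriter_qos(qos);

        qos.reliability.kind = DDS_BEST_EFFORT_RELIABILITY_QOS;  // no repairs
        qos.history.kind     = DDS_KEEP_LAST_HISTORY_QOS;
        qos.history.depth    = depth;  // the value I am asking about: 1 vs 10
        // qos.durability.kind = DDS_TRANSIENT_LOCAL_DURABILITY_QOS;  // the case mentioned above

        return publisher->create_datawriter(topic, qos, NULL, DDS_STATUS_MASK_NONE);
    }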

Gerardo Pardo:

Hi,

The History Qos can be set on both the DataWriter and the DataReader. In both cases it controls what is maintained in the DDS cache, independently of whether the underlying protocol (RTPS) has sent, received, or acknowledged the sample.

In the case of the DataWriter history: depending on the DDS implementation (and perhaps the Qos), the DataWriter does not have to send the sample "synchronously" each time you call DataWriter::write(). It can use a different thread to send samples, doing so periodically, flow-controlled, or however it wants.

Therefore it is possible for the DataWriter to write multiple samples before any of them is sent. In this case the DataWriter history depth allows the DataWriter to replace old samples that may never have been sent.

This behavior is independent of the RELIABILITY setting. If the RELIABLE protocol is used, then that replacement can affect the samples available for repairs, and this will happen even if the DataWriter writes synchronously. But this is just a side-consequence of the more general behavior.
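As a rough sketch of that side-consequence, assuming the traditional Connext C++ API (illustrative only):

    #include "ndds/ndds_cpp.h"

    // Sketch: with RELIABLE + KEEP_LAST, the same 'depth' also bounds the samples
    // that remain available for repairs. A sample that was never acknowledged can
    // still be replaced once 'depth' newer samples for its instance are written.
    void configure_reliable_keep_last(DDS_DataWriterQos& qos, int depth) {
        qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;
        qos.history.kind     = DDS_KEEP_LAST_HISTORY_QOS;
        qos.history.depth    = depth;
    }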

There are other optimizations and heuristics that come into play. I would need to dig deeper into the implementations of Connext DDS and Connext Micro to see which situations could cause a history replacement of a sample for a best-effort DataWriter. But this is not part of the API contract and may change between releases. So it is best to think of those two (RELIABILITY and HISTORY) as independent, as I described.

Gerardo

Original poster:

Thanks for the clear and prompt reply. 

Considering, then, a "best effort" reliability data writer using "keep last" history, I now have the following understanding of (at least) one implication of the queue depth:

If the data writer is publishing with asynchronous publish mode and a queue depth of 1, then samples may be overwritten before they are ever sent by the other thread responsible for publishing them. This can happen if samples are written at a high frequency.
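As a rough sketch of the scenario I have in mind (traditional Connext C++ API; 'Foo'/'FooDataWriter' stand in for an rtiddsgen-generated type, and I'm assuming Connext's publish_mode extension for asynchronous publication):

    #include "ndds/ndds_cpp.h"

    // Best effort, keep last 1, asynchronous publication.
    DDS_DataWriterQos qos;
    publisher->get_default_datawriter_qos(qos);
    qos.reliability.kind  = DDS_BEST_EFFORT_RELIABILITY_QOS;
    qos.history.kind      = DDS_KEEP_LAST_HISTORY_QOS;
    qos.history.depth     = 1;
    qos.publish_mode.kind = DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS;  // a separate thread sends

    FooDataWriter* writer = FooDataWriter::narrow(
        publisher->create_datawriter(topic, qos, NULL, DDS_STATUS_MASK_NONE));

    // High-frequency write loop: each write() only places the sample in the
    // writer cache; with depth 1 a newer sample can replace an older one
    // before the asynchronous publishing thread has put it on the wire.
    Foo sample;
    for (int i = 0; i < 1000; ++i) {
        sample.value = i;   // 'value' is a placeholder field
        writer->write(sample, DDS_HANDLE_NIL);
    }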

Does that sound reasonable?

Gerardo Pardo:

Yes. That is indeed the case. 

Two more things to keep in mind:

(1) Connext DDS provides an explicit Qos setting to control whether data is sent synchronously or asynchronously. See ASYNCHRONOUS_PUBLISHER. Other DDS implementations may do this differently.

(2) The history is not really a single "queue", because HISTORY applies per Topic instance. So KEEP_LAST with depth=1 does not mean the DataWriter cache can hold only one sample; rather, it can keep one sample per instance (key). If you have many independent instances, the DataWriter will keep the last value for each instance. This is really one of the important values of HISTORY and what differentiates it from the max_samples setting in the RESOURCE_LIMITS Qos (see the sketch at the end of this post).

That said, AFAIK the ROS2 mapping to DDS currently does not use keys/instances, so this may not affect ROS2 systems. However, if ROS2 decided to change that, or if the DDS layer could somehow be configured to treat certain parts of the data as a key, then the HISTORY KEEP_LAST behavior would be impacted.
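To make point (2) above concrete, here is a rough sketch; the 'SensorReading' type and its key field are hypothetical, and the QoS field names are from the traditional Connext C++ API:

    // Hypothetical keyed IDL type (processed by rtiddsgen):
    //
    //   struct SensorReading {
    //       @key long sensor_id;   // key field: each sensor_id is a separate instance
    //       double value;
    //   };

    #include "ndds/ndds_cpp.h"

    // With KEEP_LAST depth=1 the DataWriter keeps the *last* sample for each
    // sensor_id (instance), not one sample in total. RESOURCE_LIMITS, by
    // contrast, caps the cache as a whole.
    DDS_DataWriterQos qos;
    publisher->get_default_datawriter_qos(qos);
    qos.history.kind                = DDS_KEEP_LAST_HISTORY_QOS;
    qos.history.depth               = 1;    // per instance (per sensor_id)
    qos.resource_limits.max_samples = 100;  // total samples across all instances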