QoS for "Sparse Sampling" DataReaders


We are having trouble finding the correct QoS settings to get our DDS application to operate as desired. The DataReaders are unable to read newly published data after max_total_instances has been reached, even though the QoS uses KEEP_LAST_HISTORY_QOS and BEST_EFFORT_RELIABILITY_QOS, with disable_positive_acks and push_on_write set, as well as the autopurge delays in the reader and writer data lifecycle settings.
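
For reference, here is a minimal sketch of the kind of configuration we are using (RTI Connext traditional C++ API; the subscriber/publisher variables and the numeric limits are illustrative rather than our exact values):

    // Sketch of the QoS described above (traditional C++ API).
    // "subscriber" and "publisher" are already-created entities.
    DDS_DataReaderQos reader_qos;
    subscriber->get_default_datareader_qos(reader_qos);
    reader_qos.history.kind = DDS_KEEP_LAST_HISTORY_QOS;
    reader_qos.history.depth = 1;
    reader_qos.reliability.kind = DDS_BEST_EFFORT_RELIABILITY_QOS;
    reader_qos.protocol.disable_positive_acks = DDS_BOOLEAN_TRUE;
    reader_qos.resource_limits.max_instances = 1000;               // illustrative
    reader_qos.reader_resource_limits.max_total_instances = 1000;  // the limit we hit
    // Autopurge in the reader data lifecycle (zero delay, illustrative):
    reader_qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.sec = 0;
    reader_qos.reader_data_lifecycle.autopurge_nowriter_samples_delay.nanosec = 0;
    reader_qos.reader_data_lifecycle.autopurge_disposed_samples_delay.sec = 0;
    reader_qos.reader_data_lifecycle.autopurge_disposed_samples_delay.nanosec = 0;

    DDS_DataWriterQos writer_qos;
    publisher->get_default_datawriter_qos(writer_qos);
    writer_qos.history.kind = DDS_KEEP_LAST_HISTORY_QOS;
    writer_qos.reliability.kind = DDS_BEST_EFFORT_RELIABILITY_QOS;
    writer_qos.protocol.disable_positive_acks = DDS_BOOLEAN_TRUE;
    writer_qos.protocol.push_on_write = DDS_BOOLEAN_TRUE;
    // Dispose automatically when instances are unregistered:
    writer_qos.writer_data_lifecycle.autodispose_unregistered_instances = DDS_BOOLEAN_TRUE;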

We're hoping to have each topic act as a FIFO of instances, where newly published instances/samples push out old instances even if they have not been read (since some of them might never be read).

Our application contains:

- data-writing processes that regularly publish new data (and new instances). There is only one DataWriter per topic, and each DataWriter will need to publish more than max_instances instances.

- data-reading processes that only require a subset of the data from a single DataWriter (sparse sampling). The read_instance function is used to obtain only the necessary samples (sometimes the same data needs to be read multiple times); see the sketch after this list.

- The data that each DataReader will need is not known when the data is published, so ContentFilteredTopics are not appropriate.
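
As an illustration of the sparse-sampling reads mentioned above, the readers do something along these lines (traditional C++ API; Foo is a placeholder for the IDL-generated type, and the key value and process() function are hypothetical):

    // Read (not take) the samples of one instance; read_instance leaves
    // the samples in the reader cache so they can be read again later.
    // "reader" is a FooDataReader*, narrowed from the generic DataReader.
    Foo key_holder;        // Foo: placeholder for the IDL-generated type
    key_holder.id = 42;    // hypothetical key field/value
    DDS_InstanceHandle_t handle = reader->lookup_instance(key_holder);

    FooSeq samples;
    DDS_SampleInfoSeq infos;
    DDS_ReturnCode_t retcode = reader->read_instance(
        samples, infos, DDS_LENGTH_UNLIMITED, handle,
        DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);
    if (retcode == DDS_RETCODE_OK) {
        for (int i = 0; i < samples.length(); ++i) {
            if (infos[i].valid_data) {
                process(samples[i]);  // hypothetical application function
            }
        }
        reader->return_loan(samples, infos);
    }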

 

There don't seem to be any problems with the DataWriters publishing (even after the limits have been reached), but the DataReaders are not able to flush their queues (I think?) to accept the new data, in most cases showing this error: "PRESCstReaderCollator_addInstanceEntry:unexpected error"


What QoS settings are needed to give newly published instances higher priority than old samples that will likely never be read? Or is the only way around this to "take" every piece of data and implement our own database services?

 

Thanks,

Tom

 


Hello Tom,

As explained in this post http://community.rti.com/content/forum-topic/instance-resources-dispose-and-unregister there is currently (in RTI Connext DDS 5.1) no mechanism that would allow older instances in the DataReader to be replaced in favor of new ones. In the current RTI Connext DDS, the resources used by instances in the DataReader are only reclaimed when either the DataWriter disposes the instance or when all the DataWriters that were writing the instance lose their liveliness.

As mentioned in an update to that post, there is a request for enhancement open to address this use case, so hopefully it will be included in a future release. In the meantime, I am afraid the only solution is to dispose/unregister instances on the DataWriter. Incidentally, using best-effort reliability also makes the problem worse because the dispose/unregister message could be lost...
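
For illustration, that workaround looks something like this on the writer side (traditional C++ API; the FIFO bookkeeping and names such as live_instances and MAX_LIVE_INSTANCES are application-level placeholders, not Connext APIs):

    // The application keeps its own FIFO of live instances, e.g. a
    // std::deque<std::pair<Foo, DDS_InstanceHandle_t> >, and retires the
    // oldest one before exceeding its budget so that DataReaders can
    // reclaim that instance's resources.
    if (live_instances.size() >= MAX_LIVE_INSTANCES) {
        std::pair<Foo, DDS_InstanceHandle_t> oldest = live_instances.front();
        live_instances.pop_front();

        // Dispose and unregister the instance. Note: with best-effort
        // reliability these messages themselves may be lost.
        writer->dispose(oldest.first, oldest.second);
        writer->unregister_instance(oldest.first, oldest.second);
    }

    DDS_InstanceHandle_t handle = writer->register_instance(new_sample);
    writer->write(new_sample, handle);
    live_instances.push_back(std::make_pair(new_sample, handle));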

Gerardo