Confusion about disposed instances

Offline
Last seen: 6 months 2 weeks ago
Joined: 10/09/2022
Posts: 13

Suppose the following IDL:

struct Data { @key string name; long id; }

Scenario 1:

Suppose two writers (KEEP_ALL_HISTORY_QOS + RELIABLE_RELIABILITY_QOS) on different nodes write samples of the same instance in the following order:
1. Writer 1 writes { "A", 1 }
2. Writer 2 writes { "A", 2 }
3. Writer 1 disposes "A"

Because of network delay, a reader deployed on the same node as Writer 1 receives Writer 1's sample { "A", 1 } and the dispose sample first.
What happens when Writer 2's sample { "A", 2 } arrives at the reader? Will the sample { "A", 2 } be delivered to the application?

Scenario 2:

Suppose there is a writer (KEEP_LAST_HISTORY_QOS with depth 1 + RELIABLE_RELIABILITY_QOS + TRANSIENT_LOCAL_DURABILITY_QOS) that publishes many Data instances and then disposes some of them.

Will late-joining readers (KEEP_LAST_HISTORY_QOS with depth 1 + RELIABLE_RELIABILITY_QOS + TRANSIENT_LOCAL_DURABILITY_QOS) receive the disposed instances (instance state NOT_ALIVE_DISPOSED)?

If so, is there any QoS setting that can prevent that?

 
Gerardo Pardo
Offline
Last seen: 3 weeks 4 hours ago
Joined: 06/02/2010
Posts: 601

Good questions. See below.

Scenario 1:

With the default QoS settings the answer is yes. If a sample for the instance with key name="A" arrives after the sample disposing the instance, the DataReader will consider that the instance has become "alive" again. You will get the new sample and you will also see an increment in the SampleInfo's disposed_generation_count.
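
For illustration, a minimal reader-side sketch (RTI Connext classic C++ API; it assumes the DataDataReader and DataSeq types that rtiddsgen generates from the IDL above, plus an already-created "reader", so those names are placeholders) showing where the revived instance state and the generation count appear:

// Sketch only: "reader" is an already-created DataDataReader.
DataSeq samples;
DDS_SampleInfoSeq infos;
DDS_ReturnCode_t retcode = reader->take(
    samples, infos, DDS_LENGTH_UNLIMITED,
    DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);
if (retcode == DDS_RETCODE_OK) {
    for (int i = 0; i < samples.length(); ++i) {
        const DDS_SampleInfo& info = infos[i];
        if (info.valid_data) {
            // After the late { "A", 2 } sample arrives, the instance is ALIVE
            // again and disposed_generation_count has been incremented.
            printf("name=%s id=%d state=%d disposed_generations=%d\n",
                   samples[i].name, (int) samples[i].id,
                   (int) info.instance_state,
                   (int) info.disposed_generation_count);
        }
    }
    reader->return_loan(samples, infos);
}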

To be robust to network timing/delays, you can do one of two things (or both); a code sketch follows the list:

  • Modify the default value of the DESTINATION_ORDER QoS Policy and select BY_SOURCE_TIMESTAMP. Of course this requires good synchronization between the clocks on all machines, which you can accomplish using NTP or a similar service.
  • Modify the default value of the OWNERSHIP QoS Policy and select EXCLUSIVE. You also have to set the OWNERSHIP_STRENGTH to select which writer is the primary source of information for the instance with key name="A". This has other side effects, in that each instance will only be updated by one writer (its owner) as long as that writer is considered "active".
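
For example, the two options would look roughly like this (RTI Connext classic C++ API; "writer_qos" and "reader_qos" are assumed to come from get_default_datawriter_qos()/get_default_datareader_qos(), and the strength value is just a placeholder):

// Option 1: order samples by the time they were written, not received.
// DESTINATION_ORDER is a request/offered policy, so set it on both sides.
writer_qos.destination_order.kind = DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS;
reader_qos.destination_order.kind = DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS;

// Option 2: let only the strongest writer update each instance.
writer_qos.ownership.kind = DDS_EXCLUSIVE_OWNERSHIP_QOS;
writer_qos.ownership_strength.value = 10;   // placeholder strength
reader_qos.ownership.kind = DDS_EXCLUSIVE_OWNERSHIP_QOS;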

Scenario 2:

Yes, late-joining readers will receive disposed instances. We realize this can be an issue for some systems and are looking into ways to better manage this in future releases. In the meantime there are some things you could do to avoid getting the disposed instances; essentially they amount to different ways of unregistering the instances in the DataWriter after some delay so that they do not stay there forever (a code sketch follows the list):

  • Call "unregister_instance" manually some time after disposing the instance. Once the instance is unregistered it will be removed from the DataWriter cache and not sent to late joiners.
  • Modify the default value of the WRITER_DATA_LIFECYCLE QoS Policy, setting autopurge_disposed_instances_delay to a finite value. This will cause the DataWriter to automatically unregister any instance that has been disposed after some time has passed.
  • Modify the default value of the RESOURCE_LIMITS QoS Policy to limit the maximum number of instances that the DataWriter will keep in its cache, and use DATA_WRITER_RESOURCE_LIMITS to prioritize retaining the "live" instances over the "disposed" ones (e.g. specifically setting the instance_replacement field to DISPOSED or DISPOSED_THEN_ALIVE).
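
A rough sketch of these options (RTI Connext classic C++ API; "writer", "sample", the 10-second delay, and the max_instances value are placeholders, and the instance_replacement kinds are only referenced in a comment):

// Option 1: explicitly remove the instance from the DataWriter cache some
// time after disposing it ("writer" is the rtiddsgen-generated DataDataWriter).
DDS_InstanceHandle_t handle = writer->register_instance(sample);
writer->write(sample, handle);
writer->dispose(sample, handle);
// ... later, once late joiners no longer need to see the disposal:
writer->unregister_instance(sample, handle);

// Option 2: let the DataWriter purge disposed instances automatically after
// a delay (placeholder value; see the follow-up below about which values
// are supported with durable writer queues).
writer_qos.writer_data_lifecycle.autopurge_disposed_instances_delay.sec = 10;
writer_qos.writer_data_lifecycle.autopurge_disposed_instances_delay.nanosec = 0;

// Option 3: bound the writer cache; the DATA_WRITER_RESOURCE_LIMITS
// instance_replacement field (e.g. DISPOSED or DISPOSED_THEN_ALIVE) then
// controls which instances get replaced first.
writer_qos.resource_limits.max_instances = 100;  // placeholder limit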
Offline
Last seen: 6 months 2 weeks ago
Joined: 10/09/2022
Posts: 13

Thanks for the kind answer!

From the doc: "autopurge_disposed_instances_delay is supported with durable DataWriter queues only for 0 and INFINITE values (finite values are not supported)."

Does that mean finite values (such as 10 s) are not supported for a non-volatile DataWriter?

If so, I have to set autopurge_disposed_instances_delay to 0.
