Sequence of events:
1. Publisher and Subscriber are online and have discovered each other.
2. Publisher publishes a sample for a topic.
3. Subscriber reads (not takes) the sample; instance_state is alive, as it should be.
4. Subscriber loses liveliness to the Publisher (due to a temporary network outage).
5. Subscriber is notified of the liveliness change (alive_count_change = -1).
6. Subscriber reads the sample again; instance_state is not-alive-no-writers, as it should be.
7. Subscriber regains liveliness to the Publisher.
8. Subscriber is notified of the liveliness change (alive_count_change = +1).
9. Subscriber reads the sample again; instance_state is still not-alive-no-writers.
In step 9, is not-alive-no-writers the expected outcome? And is there another way to tell that the sample's writer is back and communicating with the Subscriber, without a new sample having to be published?
Notes:
- QoS profile inherits from Generic.StrictReliable.
- Reader and Writer are reliable.
- Reader and Writer have keep-all history (same result with keep-last 1).
- Reader and Writer have transient-local durability.
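For reference, here is a minimal sketch of the reader-side logic behind the sequence above (Modern C++ DDS API; MyType is a placeholder for the actual IDL-generated type, and entity creation is assumed to happen elsewhere):

// Sketch only: MyType stands in for the real IDL-generated type.
#include <iostream>
#include <dds/dds.hpp>

class MonitorListener : public dds::sub::NoOpDataReaderListener<MyType> {
public:
    void on_liveliness_changed(
            dds::sub::DataReader<MyType>& reader,
            const dds::core::status::LivelinessChangedStatus& status) override
    {
        // alive_count_change() is -1 when liveliness is lost (step 5)
        // and +1 when it is regained (step 8).
        std::cout << "liveliness change: " << status.alive_count_change() << "\n";

        // Read (not take) and report each sample's instance state.
        dds::sub::LoanedSamples<MyType> samples = reader.read();
        for (const auto& sample : samples) {
            const auto state = sample.info().state().instance_state();
            if (state == dds::sub::status::InstanceState::alive()) {
                std::cout << "instance_state: alive\n";
            } else if (state == dds::sub::status::InstanceState::not_alive_no_writers()) {
                // Still reported even after the +1 notification (step 9).
                std::cout << "instance_state: not-alive-no-writers\n";
            }
        }
    }
};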
Hey seymour,
This seems to be a common problem (at least my project has encountered it in the past).
However, it doesn't seem to be a bug, just how things were meant to be.
An instance that "died" because it has no writers will not be revived on its own; I am not sure why this was determined to be the desired behavior.
Two options to overcome this are:
1. Keep a local map wherever you have a reader and keep the instances alive in your own map, even if the publisher and subscriber lose liveliness (see the first sketch below).
2. Use the on_publication_matched listener on the writer (its reader-side counterpart is on_subscription_matched) to re-send data after liveliness is lost and regained; this may cause a lot of extra network load if you have many one-to-one losses of liveliness (see the second sketch below).
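A rough sketch of option 1, continuing the Modern C++ example above (MyType is still a placeholder, and the integer key field id is hypothetical): the application keeps its own last-known value per instance and simply ignores the not-alive-no-writers transition.

#include <cstdint>
#include <map>
#include <dds/dds.hpp>

// Application-level cache: latest value per instance key. Entries are kept
// even while DDS reports the instance as not-alive-no-writers.
std::map<int32_t, MyType> last_known;

void update_cache(dds::sub::DataReader<MyType>& reader)
{
    dds::sub::LoanedSamples<MyType> samples = reader.read();
    for (const auto& sample : samples) {
        if (sample.info().valid()) {
            // Only valid samples carry data; store/overwrite the cached value.
            last_known[sample.data().id()] = sample.data();
        }
        // Invalid samples (e.g. a not-alive-no-writers notification) are
        // deliberately ignored, so the instance stays "alive" in the map.
    }
}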
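And a sketch of option 2 on the writer side (same assumptions; last_values is whatever per-instance bookkeeping the application already keeps): when a reader (re-)matches, the writer re-publishes the latest value of each instance, which moves the reader's instances back to alive. This re-send is also where the extra network load mentioned above comes from.

#include <vector>
#include <dds/dds.hpp>

class RepublishListener : public dds::pub::NoOpDataWriterListener<MyType> {
public:
    explicit RepublishListener(std::vector<MyType>& last_values)
        : last_values_(last_values) {}

    void on_publication_matched(
            dds::pub::DataWriter<MyType>& writer,
            const dds::core::status::PublicationMatchedStatus& status) override
    {
        if (status.current_count_change() > 0) {
            // A reader (re-)matched: write the latest sample of each instance
            // again so its instance state becomes alive on the reader side.
            for (const MyType& value : last_values_) {
                writer.write(value);
            }
        }
    }

private:
    std::vector<MyType>& last_values_;  // latest sample per instance
};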
It may be that there are more elegant solutions, but these are the ones I know of.
Good luck,
Roy.