We are planning a point-to-point transport between two Routing Service instances over a low-bandwidth communication path. Because of this, we want to disable all of the discovery traffic and as much of the "meta" traffic as possible. Given this, is there a way to configure reader QoS so that it detects liveliness based solely on receipt of instance samples? Put another way, is there a way to eliminate liveliness assertions and still have liveliness?
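One avenue worth noting (a sketch, not a confirmed answer for this transport setup): with MANUAL_BY_TOPIC liveliness, writing a sample itself asserts the writer's liveliness, so as long as instance samples arrive more often than the lease_duration, no separate liveliness assertion traffic is needed. An illustrative QoS profile along those lines (library and profile names are made up):

```xml
<dds>
  <qos_library name="LowBandwidthLibrary">
    <qos_profile name="LivelinessByData">
      <datawriter_qos>
        <liveliness>
          <!-- Writing a sample asserts liveliness; no automatic heartbeats -->
          <kind>MANUAL_BY_TOPIC_LIVELINESS_QOS</kind>
          <lease_duration>
            <sec>10</sec>
            <nanosec>0</nanosec>
          </lease_duration>
        </liveliness>
      </datawriter_qos>
      <datareader_qos>
        <liveliness>
          <!-- Reader's requested lease must be >= writer's offered lease -->
          <kind>MANUAL_BY_TOPIC_LIVELINESS_QOS</kind>
          <lease_duration>
            <sec>10</sec>
            <nanosec>0</nanosec>
          </lease_duration>
        </liveliness>
      </datareader_qos>
    </qos_profile>
  </qos_library>
</dds>
```

The lease_duration of 10 seconds is only a placeholder; it would need to exceed the longest expected gap between writes.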
Hello,
My scenario involves a DataWriter and a DataReader, each running on its own host and exchanging topic instances over the network.
I want the datareader side to be able to:
- Allocate resources (worker processes, for example) when instances are created, i.e., when the DataWriter writes keyed samples
- Free those resources when instances are destroyed, i.e., when the DataWriter unregisters or disposes (it doesn't matter which) keyed samples that were previously created
So far there has been no difficulty: the DataReader side can do this by reading the instance states.
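For what it's worth, whether an unregister also appears as a dispose on the reader side is governed by the DataWriter's WRITER_DATA_LIFECYCLE QoS. A fragment such as the following (illustrative only) keeps the two distinct, so an unregister yields NOT_ALIVE_NO_WRITERS rather than NOT_ALIVE_DISPOSED:

```xml
<datawriter_qos>
  <writer_data_lifecycle>
    <!-- false: unregistering an instance does NOT implicitly dispose it -->
    <autodispose_unregistered_instances>false</autodispose_unregistered_instances>
  </writer_data_lifecycle>
</datawriter_qos>
```

With the default of true, unregistering an instance disposes it as well, which may matter if the reader-side resource cleanup should distinguish the two cases.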
Hi,
I have a Reliable, TransientLocal keyed topic with one DataWriter and one DataReader. History is KeepAll on the DataWriter and KeepLast(2) on the DataReader. All other QoS settings are at their defaults. To test liveliness behavior, I ran the following protocol (using v5.2.0 on Linux):
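For reference, the QoS described above corresponds to a profile roughly like this (the profile name is illustrative):

```xml
<qos_profile name="LivelinessTest">
  <datawriter_qos>
    <reliability><kind>RELIABLE_RELIABILITY_QOS</kind></reliability>
    <durability><kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind></durability>
    <!-- Writer keeps every sample -->
    <history><kind>KEEP_ALL_HISTORY_QOS</kind></history>
  </datawriter_qos>
  <datareader_qos>
    <reliability><kind>RELIABLE_RELIABILITY_QOS</kind></reliability>
    <durability><kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind></durability>
    <!-- Reader keeps only the last 2 samples per instance -->
    <history>
      <kind>KEEP_LAST_HISTORY_QOS</kind>
      <depth>2</depth>
    </history>
  </datareader_qos>
</qos_profile>
```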
Hi,
I have a problem when using the LivelinessChangedStatus structure. In my project, I have one DataReader that listens to a topic on which several DataWriters are publishing. Every time one or more DataWriters go offline or come back online due to a network disconnect/reconnect, the on_liveliness_changed(...) callback is called and I get a LivelinessChangedStatus struct. However, this struct only gives me the instance handle of the last remote writer that changed its liveliness, and this causes a problem when several writers change their liveliness at the same time.
A simplified setup of our system is below:
Subsystem 1 (domain 0) --> Routing Service --> Rest of system (using Manual liveliness QoS settings, domain 1)
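As a point of reference, the "Manual liveliness" settings on the domain-1 side would typically look something like this in a QoS profile (the lease_duration value is only a placeholder):

```xml
<datawriter_qos>
  <liveliness>
    <!-- Liveliness must be asserted explicitly (or by writing),
         once per participant, within each lease period -->
    <kind>MANUAL_BY_PARTICIPANT_LIVELINESS_QOS</kind>
    <lease_duration>
      <sec>5</sec>
      <nanosec>0</nanosec>
    </lease_duration>
  </liveliness>
</datawriter_qos>
```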
Hi,
We are trying to solve a potential problem with the following scenario:
1. A writer is set up and sends 1 sample of a single instance
2. Persistence Service reads the sample and stores it.
3. The writer is closed.
4. Persistence Service crashes due to some problem (maybe the machine crashes for some reason).
5. All readers receive the on_liveliness_changed callback, and the instance state becomes NOT_ALIVE_NO_WRITERS.
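For context, the durability settings that put Persistence Service in the data path for this scenario would look roughly like this (a sketch; real profiles carry more detail):

```xml
<!-- Both endpoints request PERSISTENT durability so that
     Persistence Service stores and re-delivers the sample -->
<datawriter_qos>
  <durability><kind>PERSISTENT_DURABILITY_QOS</kind></durability>
</datawriter_qos>
<datareader_qos>
  <durability><kind>PERSISTENT_DURABILITY_QOS</kind></durability>
</datareader_qos>
```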
I am trying to find a way for my application to react to a participant going stale without using my own timers.
In the RTI Core Libraries and Utilities Manual, section 14.3.1, I found the following:
