DDS Persistence Service Over 2 Domains

Offline
Last seen: 4 years 9 months ago
Joined: 08/03/2017
Posts: 13
DDS Persistence Service Over 2 Domains

Hi everyone! 

We have a system with two main domains: one that primarily generates data and the other that primarily receives it, although there is two-way communication for the rare instances when something is changed via the other domain. The connection between the two is not always reliable, and we are looking for a way to backfill the data missed while the connection is down, once the connection resumes.

I began looking into the RTI Persistence Service and was wondering whether it could be applied as a solution to this problem. I found a good bit of information on setting it up in the CoreLibraries_UserManual, but not much on how the service actually works. I know its main functionality is supplying data to participants that come online late, but could it also be applied to updating data across separate domains?

Best Regards!

Offline
Last seen: 10 months 2 weeks ago
Joined: 02/11/2016
Posts: 144

Hey,

I am not sure what you're trying to do.

You say you are using some sort of "connection" to transfer data between domains?

And that "connection" is unstable?

RTI Persistence Service is a service that aims to provide persistence of data.

That means:

1. It will supply previously written data to late-joining readers

2. It will keep data "alive" when the original writers "die"

3. It will keep an on-disk backup of the data, so that the data can be recovered even if the service itself crashes (a minimal sketch of the writer-side QoS this implies follows this list)
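To make that a bit more concrete, here is a minimal sketch of a writer whose data Persistence Service could store and re-deliver. It assumes RTI Connext's Modern C++ (DDS-PSM-Cxx) API and its built-in string topic type; the domain ID, the topic name "TelemetryUpdate" and the history depth are placeholders, and a real system would normally use its own IDL-defined types.

// Sketch only: a writer whose samples Persistence Service can store, assuming
// RTI Connext's Modern C++ API. Persistence Service only stores data from
// endpoints whose DURABILITY kind is TRANSIENT or PERSISTENT.
#include <dds/dds.hpp>
#include <dds/core/BuiltinTopicTypes.hpp>

int main()
{
    // Domain 0 here stands in for the data-producing domain from the question.
    dds::domain::DomainParticipant participant(0);

    // The built-in string type keeps the sketch free of IDL-generated code.
    dds::topic::Topic<dds::core::StringTopicType> topic(participant, "TelemetryUpdate");

    // RELIABLE + PERSISTENT durability: samples published by this writer are
    // candidates for storage by a running Persistence Service and can be
    // re-delivered to readers that join later.
    dds::pub::qos::DataWriterQos writer_qos;
    writer_qos << dds::core::policy::Reliability::Reliable()
               << dds::core::policy::Durability::Persistent()
               << dds::core::policy::History::KeepLast(100);

    dds::pub::DataWriter<dds::core::StringTopicType> writer(
            dds::pub::Publisher(participant), topic, writer_qos);

    writer.write(dds::core::StringTopicType("example payload"));
    return 0;
}

The matching readers would also need a non-VOLATILE durability kind, and the service itself is configured separately through its own XML configuration, which is where the on-disk storage options live.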


For what you are describing it doesn't seem like a proper fit, but I do wonder about the architecture itself.

1. Why aren't you generating and receiving data on the same domain? RTI is pretty well suited for just that sort of scenario

2. Why are you using an external "connection" to transfer data between domains? (I can understand using an external connection to link different physical sites [although an RTI-based option such as Routing Service could work well in that scenario], but I can't see other scenarios where I would use an external connection to communicate between domains.)


I guess if you could explain the architecture and goals a bit more I may be able to give a better answer.


Good luck,

Roy.

Offline
Last seen: 4 years 9 months ago
Joined: 08/03/2017
Posts: 13

Hi Roy,

I'm probably doing a poor job of explaining the current layout. I don't understand it as well as some of the others who have worked on it longer, but we do indeed have a routing service between the two domains, as well as sending/receiving on both.

The "connection" I am referring to is the link between a group of machines on an internal network. That link is prone to periods of lost connectivity due to the operating scenario.

We have multiple domain 0s, each on an individual machine (think independent robots that move around), and a single domain 3 that houses the collection of all the data from all the machines, with a routing service in between. I know the routing service is there, but I'm not 100% sure how it works.

Offline
Last seen: 10 months 2 weeks ago
Joined: 02/11/2016
Posts: 144

Hey,

It would help if you could describe the system architecture in a bit more detail.

For now my understanding is:

We have multiple "networks" (machines).

Within a network/machine, communication is on domain X and it's relatively safe to say there are no losses.

Between networks/machines, communication is done on domain Y and it's less safe to assume that there are no losses.

For simplicity let's say you have two machines, each running multiple applications + a routing service (the routing services communicate with each other and bridge domain X and Y).

You want to guarantee delivery of some data (even if there was some connection issue).

Let me break this into two parts:
1. Reliability - you can configure Quality of Service to enable some form of reliability, which will protect you from some packet loss. However, it can be difficult to tune reliability to your exact needs.
It's also important to note that reliability is not (and never can be) "perfect", i.e. a guarantee that, no matter what, every sent sample is received by all intended readers and that writers never block.

2. Durability - you can configure Quality of Service to enable some form of durability, which will allow late-joining readers (readers created after a writer wrote data) to receive "historical" data. This again can be heavily configured and tuned to your needs (although it may be a bit difficult to do).
This ability, like reliability, is not uniformly perfect for all use cases, but it could help in some cases. A minimal sketch covering both QoS settings follows this list.
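As a rough illustration of the two points above, here is a minimal sketch combining strict reliability with TRANSIENT_LOCAL durability, again assuming RTI Connext's Modern C++ API; the topic name "RobotTelemetry", the history depth and the blocking time are placeholders to be tuned to your own data rates.

// Sketch only: reliable, TRANSIENT_LOCAL endpoints so a late-joining reader
// can be repaired with up to the last 100 samples still held by a live writer.
#include <dds/dds.hpp>
#include <dds/core/BuiltinTopicTypes.hpp>

int main()
{
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<dds::core::StringTopicType> topic(participant, "RobotTelemetry");

    // Writer: RELIABLE with a 5 s maximum time that write() may block when
    // reliability limits are hit, plus enough history to repair late joiners.
    dds::pub::qos::DataWriterQos writer_qos;
    writer_qos << dds::core::policy::Reliability::Reliable(dds::core::Duration::from_secs(5))
               << dds::core::policy::Durability::TransientLocal()
               << dds::core::policy::History::KeepLast(100);
    dds::pub::DataWriter<dds::core::StringTopicType> writer(
            dds::pub::Publisher(participant), topic, writer_qos);

    // Reader: must also be reliable and non-volatile to receive historical data.
    dds::sub::qos::DataReaderQos reader_qos;
    reader_qos << dds::core::policy::Reliability::Reliable()
               << dds::core::policy::Durability::TransientLocal()
               << dds::core::policy::History::KeepLast(100);
    dds::sub::DataReader<dds::core::StringTopicType> reader(
            dds::sub::Subscriber(participant), topic, reader_qos);

    writer.write(dds::core::StringTopicType("telemetry sample"));
    return 0;
}

Note that TRANSIENT_LOCAL only protects data for as long as the original writer is alive; surviving a writer restart is where TRANSIENT/PERSISTENT durability and Persistence Service come in.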

Lastly I will point out that:
1. When using reliability - if the connection between a writer and a reader is lossier than your QoS permits, the writer may consider the reader "inactive", in which case it will no longer send data to it (this can only be fixed by restarting the reader or the writer; one way to at least notice the situation from the reader side is sketched after this list)
2. When using durability + reliability to move data from one machine to the other, the routing services may pose a problem (that is, given that the routing services are always running, if a writer on one routing service gets "disconnected" from a reader on the other routing service [even at the instance level], some or all data may not reach readers on the target machine).
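Neither issue has a silver-bullet fix, but it helps to at least detect them. As a sketch (again assuming RTI Connext's Modern C++ API, with placeholder names and periods), a reader-side listener can watch standard statuses to notice when a matched writer disappears or data stops arriving, so the application can alert someone or recreate endpoints.

// Sketch only: a reader listener that reports when a matched writer goes away
// or when no sample arrives within the requested deadline, so the application
// can react (e.g. by recreating the reader, as suggested above).
#include <iostream>
#include <dds/dds.hpp>
#include <dds/core/BuiltinTopicTypes.hpp>

class LinkMonitor : public dds::sub::NoOpDataReaderListener<dds::core::StringTopicType> {
public:
    void on_subscription_matched(
            dds::sub::DataReader<dds::core::StringTopicType>&,
            const dds::core::status::SubscriptionMatchedStatus& status) override
    {
        if (status.current_count_change() < 0) {
            std::cout << "A matched writer went away" << std::endl;
        }
    }

    void on_requested_deadline_missed(
            dds::sub::DataReader<dds::core::StringTopicType>&,
            const dds::core::status::RequestedDeadlineMissedStatus& status) override
    {
        std::cout << "No sample within the deadline (total misses: "
                  << status.total_count() << ")" << std::endl;
    }
};

int main()
{
    // Domain 3 stands in for the collecting domain from the question.
    dds::domain::DomainParticipant participant(3);
    dds::topic::Topic<dds::core::StringTopicType> topic(participant, "RobotTelemetry");

    // The DEADLINE period is a placeholder; note that writers must offer a
    // deadline no longer than this one, or the endpoints will not match.
    dds::sub::qos::DataReaderQos reader_qos;
    reader_qos << dds::core::policy::Reliability::Reliable()
               << dds::core::policy::Deadline(dds::core::Duration::from_secs(10));

    LinkMonitor listener;
    dds::sub::DataReader<dds::core::StringTopicType> reader(
            dds::sub::Subscriber(participant), topic, reader_qos,
            &listener, dds::core::status::StatusMask::all());

    // ... the application would wait for data / events here ...
    return 0;
}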

The above-mentioned issues can be addressed by tuning durability and reliability to fit your network (or by improving your network to fit your QoS), but if you fail to do so you may experience a lot of issues.

That being said, I wish you good luck.
If you have any more questions feel free to ask,
Roy.