Fast publishing in different Partitions


Hi,

According to my app requirements, I need to send messages on a topic to a number of different groups of participants.

For instance, one app can send a message to different groups of apps at any time: App1 sends message M1 to Group1 or Group2, message M2 to Group3 or Group4. Then App1 sends M1 to Group1 again. And so on...

So I decided to implement different partitions, one for each of these groups of participants: Partition1, Partition2, ...

In my experiment, I created one Publisher and many Subscribers on the same topic. Each Subscriber stays attached to the same Partition for its entire life.

But the Publisher keeps switching between Partition i, i+1, ... by calling set_qos on the Publisher side before every write, similar to the RTI example https://community.rti.com/examples/partitions (*).
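
For reference, the switch I perform before each write looks roughly like this (a sketch in the classic C++ API; "Foo" stands in for my rtiddsgen-generated type and the entity names are placeholders):

    #include "ndds/ndds_cpp.h"

    // Re-point the Publisher at a new partition, then write immediately.
    // This is the pattern that misdelivers samples when done back-to-back.
    void switch_partition_and_write(DDSPublisher *publisher,
                                    FooDataWriter *writer,
                                    const Foo &sample,
                                    const char *new_partition)
    {
        DDS_PublisherQos pub_qos;
        publisher->get_qos(pub_qos);

        pub_qos.partition.name.ensure_length(1, 1);
        DDS_String_free(pub_qos.partition.name[0]);   // free the old name, if any
        pub_qos.partition.name[0] = DDS_String_dup(new_partition);

        publisher->set_qos(pub_qos);   // takes effect asynchronously!

        // The RTI example sleeps ~1 second here; my app cannot afford to.
        writer->write(sample, DDS_HANDLE_NIL);
    }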

The problem is that if I switch partition/QoS on the Publisher side within a very short period of time, messages get delivered to the wrong partitions.

The original RTI example above (*) sleeps for 1 second between a set_qos and the next write, and it works beautifully. But my app needs to call set_qos and write the next message immediately, and this partition scheme does not seem to work for me.

When I change the DataReader/DataWriter QoS from BEST_EFFORT to RELIABLE and/or add more participants to the domain, the time needed between set_qos and write for it to work correctly grows exponentially, which is totally unacceptable for my application (which has real-time constraints).

I read that, according to the Users Manual, set_qos does not take effect immediately.

But is there a way to guarantee that set_qos works the way I need when I have to switch quickly between partitions? Or what could I be missing here?

Thanks in advance!

Julio.

rip:

(this response uses pseudo-API, you'll need to translate into your language of choice)

The time the system takes to react to a set_qos is non-deterministic. 

(subject to correction on the internals by someone from RTI) <publisher>.set_qos(...) cannot be treated as atomic, since the actual work takes place on a database maintenance thread, not on the application thread.

You are also dealing with network latency as the new QoS is propagated throughout the system via the Discovery mechanism. But since you are changing the partition on the Publisher side, you shouldn't need to worry about the propagation delay: as long as the maintenance thread has had sufficient time to update the reader/writer endpoint connections on the Publisher side, the DataWriter shouldn't be sending to the non-Partition reader endpoints.

Sorry, treat "sufficient time" as falling under "it depends".  "Eventually" both the Publisher and Subscriber databases will know to prevent sample delivery for mismatched partitions.

Note that a writer can call get_matched_subscriptions(...) and then get_matched_subscription_data(..., handle) to get a struct that contains the remote DataReader's parent Subscriber's PARTITION QoS. If you know that a certain Reader is on the "FOO" partition, and you set_qos(...) to the "BAR" partition, then that matched subscription should drop out of the get_matched_subscriptions results (calling get_matched_subscription_data with the now-invalid handle will fail in this case, which may be quicker than calling get_matched_subscriptions over and over until you see a change).
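
A sketch of that polling, in the classic C++ API (assumes an already-created writer; depending on your Connext version you may need to explicitly initialize/finalize the builtin-topic-data struct):

    #include <string.h>
    #include "ndds/ndds_cpp.h"

    // Returns true while at least one matched reader is still on 'partition'.
    // Poll this after set_qos(); once it returns false, the writer-side
    // database has caught up and a write() should not reach that partition.
    bool still_matched_on(DDSDataWriter *writer, const char *partition)
    {
        DDS_InstanceHandleSeq handles;
        if (writer->get_matched_subscriptions(handles) != DDS_RETCODE_OK) {
            return false;
        }
        for (DDS_Long i = 0; i < handles.length(); ++i) {
            DDS_SubscriptionBuiltinTopicData data;
            if (writer->get_matched_subscription_data(data, handles[i])
                    != DDS_RETCODE_OK) {
                continue;   // the handle went stale between the two calls
            }
            const DDS_StringSeq &names = data.partition.name;
            for (DDS_Long j = 0; j < names.length(); ++j) {
                if (strcmp(names[j], partition) == 0) {
                    return true;   // a reader on the old partition still matches
                }
            }
        }
        return false;
    }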

Implementing an algorithm like that is clunky. Alternatively, you can write your readers' code defensively, under the assumption that they may sometimes receive off-partition samples that they must filter out rather than process.
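
A defensive-reader sketch, assuming your data type carries something like a target_group field (a reader cannot see which partition a sample traveled on, so the check has to be content-based; "Foo" and "target_group" are placeholders):

    #include <string.h>
    #include "ndds/ndds_cpp.h"

    // Take everything, drop anything not addressed to this reader's group.
    void process_defensively(FooDataReader *reader, const char *my_group)
    {
        FooSeq samples;
        DDS_SampleInfoSeq infos;
        if (reader->take(samples, infos, DDS_LENGTH_UNLIMITED,
                         DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
                         DDS_ANY_INSTANCE_STATE) != DDS_RETCODE_OK) {
            return;   // e.g. DDS_RETCODE_NO_DATA
        }
        for (DDS_Long i = 0; i < samples.length(); ++i) {
            if (!infos[i].valid_data) {
                continue;
            }
            if (strcmp(samples[i].target_group, my_group) != 0) {
                continue;   // off-partition sample slipped through: ignore it
            }
            // ... process samples[i] ...
        }
        reader->return_loan(samples, infos);
    }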

Gerardo Pardo:

Hi Julio,

Dynamic partitions are not the right mechanism to implement what you are trying to do. As you found out, it takes some time to switch between partitions because the information needs to be propagated via DDS discovery. Dynamic partitions are intended for more static situations, where the frequency of sending messages is much higher than the frequency of switching between partitions. To get the "fast" dispatch to different groups from message to message, I would try different approaches. See below:

(1) Static Partitions.

One approach would be to still use Partitions as you described, but create multiple DataWriters and Publishers, each on a different partition: Group1, Group2, etc. Similar to what you describe, DataReaders/Subscribers are also bound to a partition that does not change during their lifetime. Then, when you want to send to a specific Group, you use the DataWriter associated with the Publisher on that Partition/group.
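
A sketch of this approach in the classic C++ API (error handling omitted; "Foo" is a placeholder for your generated type):

    #include "ndds/ndds_cpp.h"

    // Create one Publisher + DataWriter per group, once, at startup.
    // Each Publisher is permanently bound to its group's partition.
    FooDataWriter *create_group_writer(DDSDomainParticipant *participant,
                                       DDSTopic *topic,
                                       const char *group)
    {
        DDS_PublisherQos pub_qos;
        participant->get_default_publisher_qos(pub_qos);
        pub_qos.partition.name.ensure_length(1, 1);
        pub_qos.partition.name[0] = DDS_String_dup(group);

        DDSPublisher *publisher = participant->create_publisher(
            pub_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);

        DDSDataWriter *writer = publisher->create_datawriter(
            topic, DDS_DATAWRITER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
        return FooDataWriter::narrow(writer);
    }

    // At startup, create one writer per group. Sending M1 to Group1 then
    // involves no set_qos and no waiting:
    //     group1_writer->write(sample, DDS_HANDLE_NIL);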

(2) Use content filters.

Another approach is to include a field within the data message that describes the target_group. The DataReaders would subscribe using a ContentFilteredTopic that specifies the group they belong to. For example: "target_group = 'Group1'"

When you want to send to a group, you set the target_group field within the message to the desired group, and DDS will deliver it to the DataReaders that have set the filter for that group. RTI Connext DDS filters on the DataWriter side, so the amount of traffic received by the DataReader is about the same as in option (1) (actually a bit more, because there are additional GAP messages).
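
For illustration, the reader side could look like this in the classic C++ API (topic/filter names are illustrative; it assumes a string field called target_group in your IDL):

    #include <string>
    #include "ndds/ndds_cpp.h"

    // Each reader subscribes through a ContentFilteredTopic that selects
    // only the samples addressed to its group.
    DDSDataReader *create_group_reader(DDSDomainParticipant *participant,
                                       DDSTopic *topic,
                                       DDSSubscriber *subscriber,
                                       const char *group)
    {
        DDS_StringSeq no_params;   // no runtime filter parameters needed
        std::string expression = std::string("target_group = '") + group + "'";
        std::string cft_name   = std::string("Filtered_") + group;

        DDSContentFilteredTopic *cft = participant->create_contentfilteredtopic(
            cft_name.c_str(), topic, expression.c_str(), no_params);

        return subscriber->create_datareader(
            cft, DDS_DATAREADER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    }

    // Writer side: a single DataWriter; set sample.target_group to the
    // desired group (e.g. "Group1") and write() as usual.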

The advantage of (2) versus (1) is that you need fewer resources (fewer DataWriters) and there are fewer things to discover. The disadvantage is that there is additional processing to compute the filters for each sample, a few more GAP messages, and, perhaps most significantly, the messages cannot be delivered via multicast.

If you can afford the extra memory resources, I would recommend option (1).

Gerardo

Julio:

Thanks a lot, rip and Gerardo!

I decided to use static Partitions, which did mean creating more Publishers/DataWriters, but now it works exactly as I expected.

Regards,

Julio.