Multicast and shared memory.

Last seen: 2 years 5 months ago
Joined: 02/08/2021
Posts: 21
Multicast and shared memory.
We have an R&D system which is a network of computers deployed by orchestration. People develop services that use DDS and deploy orchestrations. It is not fixed which service runs on which machine, so which computer hosts which topic's Reader and Writer is unknown and constantly changing.

As of now we use shared memory and unicast, and the processes are able to communicate (no explicit configuration on Readers/Writers). We want to enable multicast as a performance optimization while keeping shared memory for local topics, but what I have tried seems to break shared memory communication and bring performance down to an unacceptable level.

As of now we are setting "initial_peers" and "multicast_receive_addresses" on the DomainParticipant. This seems to work right for discovery. No problems here.
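For reference (in case it helps others reading this), a participant-level discovery setup like the one I described could be sketched in an XML QoS profile roughly like below. The profile name and addresses are placeholders, not our actual values, so treat this as an illustration only:

```xml
<qos_profile name="MulticastDiscoveryExample">
    <domain_participant_qos>
        <discovery>
            <!-- Placeholder peers; replace with your deployment's list -->
            <initial_peers>
                <element>239.255.0.1</element>
                <element>shmem://</element>
            </initial_peers>
            <multicast_receive_addresses>
                <element>239.255.0.1</element>
            </multicast_receive_addresses>
        </discovery>
    </domain_participant_qos>
</qos_profile>
```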

From what I read in the documentation, enabling multicast on topics can be done either on the "DataReader" or through a "TransportMulticastMapping" on the "DomainParticipant".

For "TransportMulticastMapping" on the "DomainParticipant" I can't find documentation for the constructor and where to use it. So I haven't tried.

I did investigate it, though, and I don't clearly see how it would coexist with shared memory:

-Grepping the sources I can find the constructor, but it has a "TransportMulticastMappingFunction" parameter whose constructor requires parameters named "the_dll" and "the_function_name". I have no idea what these are for.

-It seems that this interface expects IPs, not locators. At least it isn't clear.

I know the multicast address, so in theory it should be possible to set it on the DataReader. I could set a "TransportMulticastSettingsSeq" of two "TransportMulticastSettings" elements: one with "TransportBuiltin::UDPv4_ALIAS" and another with "TransportBuiltin::SHMEM_ALIAS". The problem is that the documentation doesn't make clear whether this should work, or how it would behave with regard to which address takes precedence, etc.

I tried anyway, setting the address on the SHMEM element to "#0", but it didn't work. I put shared memory as the first element of the sequence because I wanted it to have higher precedence.

All that aside, multicast over shared memory doesn't conceptually make a lot of sense, at least not intuitively.

Is there a way for multicast and shared memory to coexist?
PK_RTI
Last seen: 5 days 15 hours ago
Joined: 07/01/2021
Posts: 27

Hello Rafael,

To utilize UDP multicast, the TRANSPORT_MULTICAST QosPolicy chapter in the User's Manual is quite helpful. Nothing should be changed in your Shared Memory configuration.
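As a rough sketch, a reader-side TRANSPORT_MULTICAST policy in an XML QoS profile could look something like the following. The address and port here are placeholders, and you should check the exact element names against the manual for your version:

```xml
<datareader_qos>
    <multicast>
        <kind>AUTOMATIC_TRANSPORT_MULTICAST_QOS</kind>
        <value>
            <element>
                <!-- Placeholder address/port for illustration -->
                <receive_address>239.255.0.2</receive_address>
                <receive_port>7401</receive_port>
                <transports>
                    <element>udpv4</element>
                </transports>
            </element>
        </value>
    </multicast>
</datareader_qos>
```

Note that the shared memory transport is not listed here; leaving it out of the multicast settings lets local communication keep going over shared memory as before.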

Let me know if you need more assistance,


Last seen: 2 years 5 months ago
Joined: 02/08/2021
Posts: 21
OK, so in theory the code below should do it. However, enabling it as shown below on every Reader causes a performance degradation.
It seems I wrongly assumed that shared memory was broken: when I checked with "netstat --all --program" I didn't see any Unix sockets belonging to our programs. Now that I write it out I feel kinda dumb; you seem to be using real shared memory, not IPC sockets. The machines didn't have "strace", so I couldn't confirm that directly.

As for the cause of the performance degradation, I don't know. Could running out of automatically assignable ports for a participant cause trouble? I remember reading something along those lines somewhere, and we have a large number of topics. We still need to do some work to get profiling data on this system, so for now we are temporarily hip-shooting a lot.

void multicast_enable(dds::sub::qos::DataReaderQos& reader_qos,
                      char const* ip_cstr, int32_t port)
{
    // Notice that the default TTL is 1. TTL is not a Reader QoS
    // parameter but a DomainParticipant (transport) one.
    namespace rcp = ::rti::core::policy;
    using StrSeq = ::dds::core::StringSeq;
    using CfgSeq = ::rti::core::TransportMulticastSettingsSeq;

    const auto& udpv4_alias = rcp::TransportBuiltin::UDPv4_ALIAS;
    const auto auto_mode = rcp::TransportMulticastKind::AUTOMATIC;

    // Single UDPv4 entry only; shared memory keeps its defaults.
    CfgSeq cfg;
    cfg.emplace_back(StrSeq{udpv4_alias}, ip_cstr, port > 0 ? port : 0);

    // reader_qos is a "dds::sub::qos::DataReaderQos" instance
    reader_qos.policy(rcp::TransportMulticast(std::move(cfg), auto_mode));
}

Thanks for your help!