Receiving Messages From Other Multicast Topics on NIC


Hi,

We are using multicast to send and receive messages between applications on two separate topics.  If we publish messages on a single topic (TOPIC_A) and have separate listener applications (APP_A and APP_B) listening on two separate topics (TOPIC_A and TOPIC_B) respectively, we can see through Windows Resource Monitor that both applications are receiving the traffic from TOPIC_A.  However only APP_A is receiving messages on TOPIC_A.  We've configured all applications to use the same multicast address:  

<multicast>
   <value>
      <element>
         <receive_address>239.255.0.28</receive_address>
      </element>
   </value>
</multicast>

We've confirmed that IGMP snooping is enabled on our switch.  

Is this behavior expected? What configuration changes should we make to ensure that applications only receive data on topics that they have subscribed to?

 

Regards,

Rick


Hi Rick,

Have you checked that the traffic is actually data for TOPIC_A?
APP_B (listening on TOPIC_B) would still receive discovery data from your publishing application (on TOPIC_A) even though the topics do not match.
You could use Wireshark to verify whether the traffic is discovery data or TOPIC_A data.

Thanks,
Kyoungho


Yes, we have confirmed that both listeners are getting data for TOPIC_A. We know this because in Windows Resource Monitor, the network send rate for the publisher matches the receive rates for both listeners. Our application publishes at a high rate (> 150K messages/sec), so I don't think this could be discovery traffic.


Hi Rick,

I see the same problem on my side as well, and I am trying to understand why it happens.

To avoid the unwanted traffic, you could try using the transport multicast mapping QoS to assign different multicast addresses to different topics.
Please see the following link for how to configure it:

https://community.rti.com/kb/how-do-i-get-data-reader-receive-data-over-multicast

Thanks,
Kyoungho


Hi Kyoungho,

That is what we've done as a workaround. However, I would have assumed that RTI would create different multicast groups, each with a different port, for different topics even if a single multicast IP address was configured in receive_address.

Rick


Hi Rick,

According to the port assignment rule (https://community.rti.com/kb/what-network-port-numbers-does-rti-connext-use), the port number for user multicast traffic is differentiated only by domain_id.
That means that if your subscribing applications (in the same domain) run on the same machine with the same multicast address configured, they will all use the same multicast address and port.
To avoid this issue, you can assign different multicast addresses to different topics as I suggested above, or you can statically assign a different port number to each application, as follows:

<multicast>
   <value>
      <element>
         <receive_address>239.255.0.28</receive_address>
         <receive_port>58400</receive_port>
      </element>
   </value>
</multicast>
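For reference, the port assignment rule from that KB article can be sketched with the default RTPS well-known port parameters. This is a rough illustration (the variable names are my own); note that the user multicast port depends only on the domain ID, which is exactly why all subscribers in a domain share it:

```python
# Default RTPS well-known port parameters (from the RTI port-numbering KB article).
PB, DG, PG = 7400, 250, 2       # port base, domain ID gain, participant ID gain
D0, D1, D2, D3 = 0, 10, 1, 11   # offsets: discovery mcast/ucast, user mcast/ucast

def user_multicast_port(domain_id):
    # Depends only on the domain ID -- every DataReader in the domain shares it.
    return PB + DG * domain_id + D2

def user_unicast_port(domain_id, participant_id):
    # Unicast ports also include the participant ID, so they never collide.
    return PB + DG * domain_id + D3 + PG * participant_id

for domain in (0, 1):
    print(domain, user_multicast_port(domain))  # prints 0 7401, then 1 7651
```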

Thanks,
Kyoungho


Hello Rick and Kyoungho,

I think there is a way to get the "automatic" behavior that Rick is looking for, whereby different Topics are "automatically" assigned to different multicast groups, avoiding the situation where multicast packets sent on TOPIC_A are received by a machine that is subscribed only to a different TOPIC_B.

The way to use this is to configure the TransportMulticastMappingQosPolicy in the DomainParticipantQos.  Imagine you set the DDS_TransportMulticastMappingSeq to have a single element with:

addresses = RTI_String_dup("[239.255.100.1,239.255.100.100]");
topic_expression = RTI_String_dup("*");

Then RTI Connext DDS will automatically assign a multicast group address in the specified range (239.255.100.1 to 239.255.100.100) based on the name of the Topic. The assignment uses a hash function on the Topic name, so the chance that two different Topic names hash to the same multicast group address is small if you use a reasonably large number of addresses.
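As a toy sketch of the idea only (Connext's actual mapping function is internal to the product; the hash choice and names here are my own), hashing a Topic name into an address range could look like this:

```python
import hashlib

# Hypothetical illustration -- NOT Connext's real mapping function.
LOW, HIGH = (239, 255, 100, 1), (239, 255, 100, 100)

def topic_to_address(topic_name):
    low = int.from_bytes(bytes(LOW), "big")
    high = int.from_bytes(bytes(HIGH), "big")
    span = high - low + 1
    # Hash the Topic name deterministically into the configured range.
    h = int.from_bytes(hashlib.md5(topic_name.encode()).digest()[:4], "big")
    addr = low + (h % span)
    return ".".join(str(b) for b in addr.to_bytes(4, "big"))

print(topic_to_address("TOPIC_A"))
```

The key property is that every participant computing the same function on the same Topic name arrives at the same multicast address, with no coordination needed.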

The TransportMulticastMappingQosPolicy is quite flexible: you can have finer control over which Topic names get automatically mapped to multicast addresses and which addresses are used, and you can even plug in custom functions to do the mapping.

This setting needs to be done on the DomainParticipants that subscribe to the Topic. It is not necessary for the setting to be the same on all DomainParticipants for this to work, but it is highly recommended, so that everybody that subscribes to the same Topic name requests the same multicast address and each Topic is therefore sent just once on the network. This setting can be configured using the XML QoS profiles that the DomainParticipant loads.

Note that port numbers are not configured. This is because the multicast routing/filtering done by routers and switches looks only at multicast addresses, not port numbers.

Gerardo

 


Thanks Gerardo. This works as described. For reference, I needed to make the following changes to my QoS XML configuration file to make this work.

<datareader_qos>
   <multicast>
      <kind>AUTOMATIC_TRANSPORT_MULTICAST_QOS</kind>
      <value>
         <element>
            <receive_port>0</receive_port>
            <receive_address></receive_address>
         </element>
      </value>
   </multicast>
</datareader_qos>

<participant_qos>
   <multicast_mapping>
      <value>
         <element>
            <addresses>[239.255.1.1,239.255.2.255]</addresses>
            <topic_expression>*</topic_expression>
         </element>
      </value>
   </multicast_mapping>
</participant_qos>

Regards,
Rick