COMMENDFacade_canSampleBeSent:NOT SUPPORTED | Fragment data not supported by this writer


Hello Community Members,

We are getting COMMENDFacade_canSampleBeSent:NOT SUPPORTED | Fragment data not supported by this writer after upgrading to 6.1.2.13.

We are upgrading our RTI version to 6.1.2.13. Although fragmentation and all related settings are enabled in our QoS file, fragmentation does not seem to be happening. We get this error in our service logs while sending the data, and the data is not appearing in the RTI Admin Console:

COMMENDFacade_canSampleBeSent:NOT SUPPORTED | Fragment data not supported by this writer.
COMMENDAnonWriterService_checkFragmentationSupport:sample cannot be sent
COMMENDAnonWriterService_write:[Local Participant: 101e066 4e6dbce3 bbce8cee | Local Endpoint 100c2] check for local writer fragmentation support failed
PRESPsWriter_writeCommend:!anonw->write
PRESPsWriter_writeInternal:!failed to write sample in Commend

If we keep the setting below, we get the fragmentation error above:

          <message_size_max>1400</message_size_max>

Observations : 

If we increase message_size_max, the data goes out because fragmentation is not triggered, and the issue does not occur.

          <message_size_max>65507</message_size_max>

We cross-checked that all the sdds and security files are in the right paths.

NOTE: We need to use <message_size_max>1400</message_size_max> with fragmentation enabled, because all our services are configured this way. Increasing <message_size_max> to 65507 causes other services to fail to run and prevents the receiver from receiving the data.
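For context, a minimal sketch of where this setting typically lives in an XML QoS profile, assuming the builtin UDPv4 transport is being configured via the transport_builtin QoS (element names per the Connext 6.x XML schema; adjust to match your actual profile):

```xml
<domain_participant_qos>
  <transport_builtin>
    <udpv4>
      <!-- Limit UDP datagrams to 1400 bytes so DDS fragments
           samples instead of relying on IP fragmentation -->
      <message_size_max>1400</message_size_max>
    </udpv4>
  </transport_builtin>
</domain_participant_qos>
```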

 

Please reply to this post if any extra information is needed for a resolution.

Any support would be really helpful. Thanks in advance.

Hi, 

Based on the line 

PRESPsWriter_writeCommend:!anonw->write

it looks like it may be the builtin participant data writer that is failing to write, because the Data(p) messages used for participant discovery are too big. (This is a detail that you don't really need to know, but the 'anon' in that message stands for anonymous. We say that the participant writer and reader use an anonymous channel because it is not really best-effort or reliable communication, just broadcasting messages to anyone in its peer list. If you see 'anon' in any messages again, that can help.)

We do not support fragmentation on the participant channel, but one thing that usually helps is making the participant messages smaller by removing a set of properties that we propagate by default; they can add a lot to the size of the Data(p) messages. You can remove them by setting the 'propagate' tag to false:

<participant_qos>
  <property>
    <value>
      <element>
        <name>dds.sys_info.hostname</name>
        <value>"hostname will not be propagated"</value>
        <propagate>false</propagate>
      </element>
      <element>
        <name>dds.sys_info.process_id</name>
        <value>1234</value>
        <propagate>false</propagate>
      </element>
    </value>
  </property>
</participant_qos>

 The full list to disable is:

  • dds.sys_info.creation_timestamp
  • dds.sys_info.executable_filepath
  • dds.sys_info.execution_timestamp
  • dds.sys_info.hostname
  • dds.sys_info.target
  • dds.sys_info.process_id
  • dds.sys_info.username

It doesn't matter what the 'value' tag is set to. These properties are just used for debugging and informational purposes. 
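Putting the full list together, a sketch of the complete participant QoS block (the value strings below are placeholders, since the 'value' tag does not matter for this purpose):

```xml
<participant_qos>
  <property>
    <value>
      <!-- Disable propagation of all dds.sys_info.* properties
           to shrink the Data(p) discovery messages -->
      <element>
        <name>dds.sys_info.creation_timestamp</name>
        <value>placeholder</value>
        <propagate>false</propagate>
      </element>
      <element>
        <name>dds.sys_info.executable_filepath</name>
        <value>placeholder</value>
        <propagate>false</propagate>
      </element>
      <element>
        <name>dds.sys_info.execution_timestamp</name>
        <value>placeholder</value>
        <propagate>false</propagate>
      </element>
      <element>
        <name>dds.sys_info.hostname</name>
        <value>placeholder</value>
        <propagate>false</propagate>
      </element>
      <element>
        <name>dds.sys_info.target</name>
        <value>placeholder</value>
        <propagate>false</propagate>
      </element>
      <element>
        <name>dds.sys_info.process_id</name>
        <value>placeholder</value>
        <propagate>false</propagate>
      </element>
      <element>
        <name>dds.sys_info.username</name>
        <value>placeholder</value>
        <propagate>false</propagate>
      </element>
    </value>
  </property>
</participant_qos>
```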


Hi Soumya,

It seems that you should be a supported customer of ours.

Could you contact me at zklim@rti.com so that I can see how to help you further with this?

Thank you!

Regards,

Zhi Kai


Hello erin,

Thank you for your suggestion regarding the propagation of system properties in the participant discovery messages. I wanted to let you know that we have already disabled the propagation of the following properties as per your recommendation:

  • dds.sys_info.hostname
  • dds.sys_info.process_id
  • dds.sys_info.creation_timestamp
  • dds.sys_info.executable_filepath
  • dds.sys_info.execution_timestamp
  • dds.sys_info.target
  • dds.sys_info.username

Despite these changes, we are still encountering the issue, with the participant messages appearing too large during discovery. Could you kindly provide further guidance or suggest additional steps that we might take to resolve this?

I appreciate your continued support and look forward to your response.

Regards,

Soumya


Hi, 

Are you able to get a Wireshark capture? If so, as a test, can you increase the message_size_max to a larger value (or leave it unset so it takes the default), capture the RTPS traffic, and see what size the Data(p) messages are in the capture? You can filter to see only Data(p) messages with this filter:

rtps.sm.wrEntityId == 0x000100c2 

It would be helpful to see the length of those packets and then expand the frame to look at the serialized data, so we can see what is making them so large (I'm assuming either a lot of properties are being propagated or there are a lot of interfaces being announced). Specifically, I'm interested in expanding "Real-Time Publish Subscribe Wire Protocol" > "submessageId: Data (0x15)" > serializedData > serializedData.


Hi erin,

Thank you for your suggestions!

We followed the steps you provided by increasing message_size_max to a larger value (65507) and capturing the RTPS traffic. Upon analyzing "Real-Time Publish Subscribe Wire Protocol" > "submessageId: Data (0x15)" > serializedData > serializedData, we discovered that several interfaces were being announced, which contributed to the larger message size, with a frame size of 1724 bytes.

After removing the extra interfaces, we tested again with message_size_max set to 1400 and observed a reduction in the message size. Currently the frame size is down to 1108 bytes. As a result, the fragmentation-related errors we were experiencing earlier no longer occur.

However, we are now encountering the error below with message_size_max set to 1400, and the data is not appearing in the RTI Admin Console:

General,!DDS RTIOsapiUtility_getErrorString:!input buffer too small
General,!DDS NDDS_Transport_UDP_receive_rEA:OS recvfrom() failure, error 0X2738: Unknown error

We would appreciate any guidance on how to troubleshoot or resolve this issue.

Thanks again for your continued support!

Soumya

 


Hi, 

That's great that there's been some progress. The new error message looks to me like there may be another participant that is not setting message_size_max to 1400; there should be other log messages warning you about that error. You can try two things:

- Try to find the participant that is not using that setting and set its message_size_max to 1400.

- Set DomainParticipantQos.receiver_pool.buffer_size to 65507. From the documentation of that QoS:

You may want the value to be greater than the default if you try to limit the largest data packet that can be sent through the transport(s) in one application, but you still want to receive data from other applications that have not made the same change.

For example, to avoid IP fragmentation, you may want to set NDDS_Transport_Property_t::message_size_max for IP-based transports to a small value, such as 1400 bytes. However, you may not be able to apply this change to all the applications at the same time. To receive data from these other applications the buffer_size should be equal to the original NDDS_Transport_Property_t::message_size_max.