Hello,
I'm using version 5.0.0, which is supposed to solve the issue of large discovery packets being fragmented at the IP level by limiting packet sizes at the application level. I was wondering how exactly I should set the MTU in my application.
I've set the dds.transport.UDPv4.builtin.parent.message_size_max property to 1400 in the DomainParticipant QoS, and set publication_writer_publish_mode and subscription_writer_publish_mode to ASYNCHRONOUS since I'm using reliable transmission. Yet I still see large messages being sent (larger than the standard Ethernet MTU of 1500 bytes), and therefore IP fragmentation happening.
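For reference, this is roughly what those settings look like in my XML QoS profile (just a sketch of my configuration using the standard QoS profile schema, with the surrounding qos_library/qos_profile wrapper omitted):

<participant_qos>
  <!-- Cap the payload size used by the built-in UDPv4 transport -->
  <property>
    <value>
      <element>
        <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
        <value>1400</value>
      </element>
    </value>
  </property>
  <!-- Discovery writers publish asynchronously, since reliable writers need
       asynchronous publishing to send samples larger than message_size_max -->
  <discovery_config>
    <publication_writer_publish_mode>
      <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
    </publication_writer_publish_mode>
    <subscription_writer_publish_mode>
      <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
    </subscription_writer_publish_mode>
  </discovery_config>
</participant_qos>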
Are there any additional settings I need to apply?
Thanks,
Michael
Hello Michael,
I have not been able to reproduce your problem in my local environment. Setting the message_size_max parameter in the UDPv4 transport should definitely do the trick. Can you run RTI Monitor to check whether your application is loading the right QoS settings? You will need to enable the monitoring library to do this. Also, can you explain how you are checking that the message size is larger than 1400?

I have uploaded an example that configures all the QoS settings you need. Basically, you just need to set message_size_max in the transport properties, set the Publication and Subscription Writers' mode to ASYNCHRONOUS in the Discovery Config, and set the DataWriter's publish mode to ASYNCHRONOUS as well. I have also disabled all transports other than UDPv4, just for the example.
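For reference, the DataWriter and transport-selection parts of the example look roughly like this (only a sketch; these fragments go inside the same qos_profile as the participant-level property and discovery settings you already have):

<participant_qos>
  <!-- Use only the built-in UDPv4 transport in this example -->
  <transport_builtin>
    <mask>UDPv4</mask>
  </transport_builtin>
</participant_qos>

<datawriter_qos>
  <!-- User-data writers also publish asynchronously so that samples larger
       than message_size_max can be fragmented by the middleware -->
  <publish_mode>
    <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
  </publish_mode>
</datawriter_qos>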
I have tested this in my local environment and it works fine. To run the example you will need to:
Best,
Fernando.
Hi Fernando,
Thanks for the very informative reply. As it turns out, I had batching enabled, with the max_data_bytes field in BatchQosPolicy set to a much higher value than 1400. Each individual message was under 1400 bytes, but sending multiple messages as a batch caused IP fragmentation.

While going over BatchQosPolicy in the xsd, I encountered the field max_meta_data_bytes. Could you please explain what kind of metadata is being sent and what value I should put in that field? Is the metadata taken into account in the calculation of the cumulative length, so that max_data_bytes should remain 1400, or should I split the budget so that max_data_bytes + max_meta_data_bytes = 1400? In the latter case, what is the best practice for such a split? Sorry for being bothersome, but I've been unable to find any explanation of this field in the RTI Connext Java API documentation.

Thanks a lot,
Michael
Hi Michael,
You are right, max_meta_data_bytes is not explicitly documented in the RTI Connext DDS Java API documentation. However, the max_data_bytes documentation provides the information we need: each sample carries at least 8 bytes of metadata, containing information such as the timestamp or sequence number, and the metadata can be as large as 52 bytes for keyed topics and 20 bytes for unkeyed topics. So batching requires additional resources to store the metadata associated with the samples in the batch (see User's Manual Section 6.5.2.9). Note that you will also need to take the number of samples in the batch into account in your calculations.
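As a rough illustration only (the numbers below are assumptions, not a recommendation): for an unkeyed topic each sample adds about 20 bytes of metadata on top of its serialized data, so a batch configuration along these lines should keep the whole batch, plus RTPS and batch headers, under your 1400-byte message_size_max:

<datawriter_qos>
  <batch>
    <enable>true</enable>
    <!-- Up to 1024 bytes of serialized user data per batch. With, say,
         8 samples of ~128 bytes each, metadata adds roughly 8 x 20 = 160
         bytes (unkeyed topic), leaving headroom for headers below 1400. -->
    <max_data_bytes>1024</max_data_bytes>
    <!-- Upper bound on the metadata stored for one batch -->
    <max_meta_data_bytes>256</max_meta_data_bytes>
  </batch>
</datawriter_qos>

The right split depends on your typical sample size and on whether the topic is keyed, so treat those values only as a starting point.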
Best,
Fernando.
Hi Fernando,
If I understood you correctly, then setting the maximum batch size to 1400 bytes would be pretty meaningless, as the number of samples I could fit in such a batch would be very small, considering that a good part of the batch is allocated to metadata (I guess we're talking about a dozen samples at the very best). Is there a way to achieve more efficient batching while still avoiding IP fragmentation?
Thanks,
Michael
Hi Michael,
I just wanted to note that a dozen samples in a single batch will give you a very significant performance boost...
I would not be surprised if it gave you 10x the throughput or 1/10th of the CPU utilization compared to a non-batching configuration. The reason is that, when batching is enabled, writing each individual sample just copies it into a local memory cache, which is very efficient. The individual sample write should take less than a microsecond on Core i7-class CPUs, while the hard work of creating UDP messages, marking them with sequence numbers, sending them via UDP, receiving them, dispatching them up the middleware stack, and so on is done once per batch. That is much more work, taking on the order of 10 to 20 microseconds on Core i7-class CPUs.
Batches of even a few samples will also save significant bandwidth.
So setting maximum batch size to 1400 bytes is not meaningless as long as you can get a few samples (even 2 or 3) in there...
On a different subject: I am curious why you want to avoid IP fragmentation. Is it causing a problem on your network?
Note also that you can try configuring your computers to use Jumbo Frames (9000 bytes). In that case IP fragmentation should only occur around 8980 bytes, so you would be able to batch around 8900 bytes without fragmenting.
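Just to sketch what that could look like (illustrative numbers only, and assuming every hop in your network path really supports 9000-byte frames):

<participant_qos>
  <property>
    <value>
      <element>
        <!-- Larger transport message size for a jumbo-frame network -->
        <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
        <value>8900</value>
      </element>
    </value>
  </property>
</participant_qos>

<datawriter_qos>
  <batch>
    <enable>true</enable>
    <!-- Bigger batches now fit in a single frame; leave headroom for
         per-sample metadata and RTPS headers -->
    <max_data_bytes>8192</max_data_bytes>
  </batch>
</datawriter_qos>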
Gerardo