What do I need to send Large Data successfully?

What is Large Data?

"Large" is defined as data that cannot be sent as a single packet by a transport; that is, data larger than the transport's maximum transmission unit (MTU). For example, if you want to reliably send data larger than 63 KB over UDP/IP, consider following the recommendations in this article.

This table from Wikipedia (version of 10 April 2019) compares MTU values for different media.

How do I send large data?

Below are recommendations to consider when sending Large Data.

Asynchronous publisher

This feature sends user data from a separate, internal middleware thread, so the application's write() call can return without blocking on the network send. To configure asynchronous publishing in your application, set the following snippet in your QoS:
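A minimal sketch of the DataWriter QoS in XML (the enclosing QoS profile elements are omitted; field names follow the Connext XML QoS schema):

```xml
<datawriter_qos>
    <publish_mode>
        <!-- Send user data from a separate middleware thread -->
        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
    </publish_mode>
</datawriter_qos>
```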


Here is a knowledge base article with an example and more details regarding the asynchronous publisher.

For more information, check the User’s Manual, section 6.5.20, PUBLISH_MODE QosPolicy (DDS Extension).

DDS fragmentation

When sending Large Data, DDS samples must be fragmented into multiple packets to fit the network's MTU. This fragmentation can occur at either the DDS level or the IP level. If the fragmentation is done at the IP level, it can lead to issues such as samples never being received.

Here is a knowledge base article explaining IP fragmentation in more detail and how to avoid it by fragmenting the samples at the Connext DDS level (by adjusting the message_size_max property to the same size as the transport MTU).
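As a sketch, the property can be set on the built-in UDPv4 transport in the participant QoS. The value 1400 below is illustrative, chosen to fit a typical 1500-byte Ethernet MTU with headroom for protocol headers; use the MTU of your own network:

```xml
<participant_qos>
    <property>
        <value>
            <element>
                <!-- Limit Connext DDS message size so fragmentation
                     happens at the DDS level, not the IP level -->
                <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
                <value>1400</value>
            </element>
        </value>
    </property>
</participant_qos>
```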

Reliable Reliability

If you use Best Effort, the application will not try to recover lost fragments: since a lost fragment is never resent, the DataReader discards the entire sample. The larger the sample, the more fragments it has, and the more likely it is that one fragment (and therefore the whole sample) is lost. With Reliable Reliability, if a fragment is lost, Connext DDS will try to recover it.

You can set Reliable Reliability for one or more entities; at minimum set it in the datareader_qos:
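A minimal sketch of the DataReader QoS in XML (enclosing profile elements omitted; the matching DataWriter must also be Reliable for reliable communication to take place):

```xml
<datareader_qos>
    <reliability>
        <!-- Request repair of lost fragments/samples -->
        <kind>RELIABLE_RELIABILITY_QOS</kind>
    </reliability>
</datareader_qos>
```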


For more information regarding the Reliability policy, check the User’s Manual, section 6.5.21, RELIABILITY QosPolicy.

FlowController in the application

FlowControllers are used to shape the network traffic by controlling when an attached asynchronous DataWriter is allowed to write data.

When sending Large Data, a FlowController is necessary to avoid flooding the network with bursts of fragments.

Connext DDS provides three built-in FlowControllers:

- DDS_DEFAULT_FLOW_CONTROLLER_NAME: applies no shaping; samples are sent as soon as possible.
- DDS_FIXED_RATE_FLOW_CONTROLLER_NAME: sends accumulated samples at a fixed rate (once per second by default).
- DDS_ON_DEMAND_FLOW_CONTROLLER_NAME: sends samples only when the application explicitly triggers the flow.

To enable these, you need to set the flow_controller_name field in the publish_mode QoS policy:
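A sketch in XML, using the fixed-rate built-in controller as an example (a FlowController only applies to asynchronous DataWriters, so publish mode must be asynchronous as well):

```xml
<datawriter_qos>
    <publish_mode>
        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
        <!-- Attach one of the built-in FlowControllers -->
        <flow_controller_name>DDS_FIXED_RATE_FLOW_CONTROLLER_NAME</flow_controller_name>
    </publish_mode>
</datawriter_qos>
```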


It is also possible to create your own custom flow controller. Here is a knowledge base article on how to create one.
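As a sketch, a custom token-bucket FlowController can also be defined through the PropertyQosPolicy of the participant. The name `myCustomFlowController` and all numeric values below are illustrative assumptions; check the User's Manual for the token-bucket semantics that fit your network:

```xml
<participant_qos>
    <property>
        <value>
            <!-- Define a token-bucket flow controller named myCustomFlowController -->
            <element>
                <name>dds.flow_controller.token_bucket.myCustomFlowController.token_bucket.max_tokens</name>
                <value>100</value>
            </element>
            <element>
                <name>dds.flow_controller.token_bucket.myCustomFlowController.token_bucket.tokens_added_per_period</name>
                <value>100</value>
            </element>
            <element>
                <name>dds.flow_controller.token_bucket.myCustomFlowController.token_bucket.bytes_per_token</name>
                <value>1024</value>
            </element>
            <element>
                <name>dds.flow_controller.token_bucket.myCustomFlowController.token_bucket.period.sec</name>
                <value>0</value>
            </element>
            <element>
                <name>dds.flow_controller.token_bucket.myCustomFlowController.token_bucket.period.nanosec</name>
                <value>10000000</value>
            </element>
        </value>
    </property>
</participant_qos>
```

The DataWriter can then reference this controller by setting flow_controller_name to the full property prefix, `dds.flow_controller.token_bucket.myCustomFlowController`.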

For more information regarding FlowControllers, check the User’s Manual, section 6.6, FlowControllers (DDS Extension).

There are examples of FlowController settings in our RTI Perftest QoS, one for 10 Gbps and another for 1 Gbps. You can check them here and test their use in RTI Perftest.

Starting with Connext DDS 6.0.0, two new features for efficiently transferring large data are available: Zero Copy transfer over shared memory and the FlatData™ language binding. These examples from our Knowledge Base show how to use these features. For more information, see the User's Manual, Chapter 22, Sending Large Data.