Large Data Use Cases
[Programming How-To's]

Working with large data types.

Introduction

RTI Data Distribution Service supports data types whose size exceeds the maximum message size of the underlying transports. A DDSDataWriter will fragment data samples when required. Fragments are automatically reassembled at the receiving end.

Once all fragments of a sample have been received, the new sample is passed to the DDSDataReader, which can then make it available to the user. Note that at that point the sample is treated as a regular sample: its availability depends on standard QoS settings such as DDS_ResourceLimitsQosPolicy::max_samples and DDS_KEEP_LAST_HISTORY_QOS.
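
As an illustration, the sketch below creates a reader with enough History and ResourceLimits headroom for a few reassembled large samples. The helper name create_large_data_reader and the specific limit values are assumptions for this example; the QoS fields themselves are the standard ones.

    #include "ndds/ndds_cpp.h"

    // Sketch: a reassembled large sample counts as one regular sample, so the
    // usual History and ResourceLimits settings decide whether it can be stored.
    DDSDataReader* create_large_data_reader(DDSSubscriber *subscriber,
                                            DDSTopic *topic)
    {
        DDS_DataReaderQos reader_qos;
        if (subscriber->get_default_datareader_qos(reader_qos) != DDS_RETCODE_OK) {
            return NULL;
        }

        // Keep the last 5 fully reassembled samples per instance.
        reader_qos.history.kind = DDS_KEEP_LAST_HISTORY_QOS;
        reader_qos.history.depth = 5;

        // Overall cap on stored samples.
        reader_qos.resource_limits.max_samples = 32;

        return subscriber->create_datareader(
            topic, reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
    }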

The large data feature is fully supported by all DDS APIs, so its use is largely transparent. Some additional considerations apply, as explained below.

Writing Large Data

To use the large data feature with the DDS_RELIABLE_RELIABILITY_QOS setting, the DDSDataWriter must be configured as an asynchronous writer (DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS) with an associated DDSFlowController.
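
For illustration, the sketch below configures such a writer. The helper name create_large_data_writer is hypothetical, and the choice of the built-in fixed-rate flow controller (DDS_FIXED_RATE_FLOW_CONTROLLER_NAME) is an assumption for this example; a custom DDSFlowController can be referenced by name instead.

    #include "ndds/ndds_cpp.h"

    // Sketch: reliable writer that publishes asynchronously and paces its
    // fragments with the built-in fixed-rate flow controller.
    DDSDataWriter* create_large_data_writer(DDSPublisher *publisher,
                                            DDSTopic *topic)
    {
        DDS_DataWriterQos writer_qos;
        if (publisher->get_default_datawriter_qos(writer_qos) != DDS_RETCODE_OK) {
            return NULL;
        }

        writer_qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;

        // Reliable large data requires asynchronous publishing plus a flow controller.
        writer_qos.publish_mode.kind = DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS;
        writer_qos.publish_mode.flow_controller_name =
                DDS_String_dup(DDS_FIXED_RATE_FLOW_CONTROLLER_NAME);

        return publisher->create_datawriter(
            topic, writer_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
    }

With DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS, a call to write() queues the sample and returns; the fragments are sent afterwards by an internal thread at the pace dictated by the flow controller.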

While an asynchronous writer and flow controller are optional when using the DDS_BEST_EFFORT_RELIABILITY_QOS setting, most large data use cases still benefit from a flow controller, which prevents the network from being flooded when fragments are sent.
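
A custom flow controller can be created on the DDSDomainParticipant and referenced by name from the writer QoS. The sketch below throttles outgoing fragments with a token bucket; the controller name "CustomFlowController", the helper name create_paced_writer, and the numeric values are placeholders chosen for illustration.

    #include "ndds/ndds_cpp.h"

    // Sketch: meter fragments to roughly 8 KB every 100 ms (about 80 KB/s)
    // and attach the controller to a best-effort writer.
    DDSDataWriter* create_paced_writer(DDSDomainParticipant *participant,
                                       DDSPublisher *publisher,
                                       DDSTopic *topic)
    {
        DDS_FlowControllerProperty_t property;
        if (participant->get_default_flowcontroller_property(property)
                != DDS_RETCODE_OK) {
            return NULL;
        }

        // Token bucket: one 8 KB token added every 100 ms, at most two tokens saved up.
        property.token_bucket.max_tokens = 2;
        property.token_bucket.tokens_added_per_period = 1;
        property.token_bucket.tokens_leaked_per_period = 0;
        property.token_bucket.bytes_per_token = 8192;
        property.token_bucket.period.sec = 0;
        property.token_bucket.period.nanosec = 100000000; // 100 ms

        DDSFlowController *controller =
                participant->create_flowcontroller("CustomFlowController", property);
        if (controller == NULL) {
            return NULL;
        }

        DDS_DataWriterQos writer_qos;
        if (publisher->get_default_datawriter_qos(writer_qos) != DDS_RETCODE_OK) {
            return NULL;
        }

        // Best effort does not require asynchronous publishing, but using it
        // here lets the flow controller pace the fragments.
        writer_qos.reliability.kind = DDS_BEST_EFFORT_RELIABILITY_QOS;
        writer_qos.publish_mode.kind = DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS;
        writer_qos.publish_mode.flow_controller_name =
                DDS_String_dup("CustomFlowController");

        return publisher->create_datawriter(
            topic, writer_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
    }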

Receiving Large Data

Large data is supported by default; in most cases, no further changes are required.

The DDS_DataReaderResourceLimitsQosPolicy allows tuning the resources available to the DDSDataReader for reassembling fragmented large data.
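
For instance, the reassembly-related fields of this policy can be tuned as sketched below; the values are placeholders and should be sized to the expected sample sizes and number of remote writers.

    #include "ndds/ndds_cpp.h"

    // Sketch: bound the memory the reader may use while reassembling fragments.
    void tune_fragment_reassembly(DDS_DataReaderQos &reader_qos)
    {
        // At most 16 partially reassembled samples held at any time...
        reader_qos.reader_resource_limits.max_fragmented_samples = 16;

        // ...with at most 4 of them coming from any single remote writer.
        reader_qos.reader_resource_limits.max_fragmented_samples_per_remote_writer = 4;

        // Upper bound on how many fragments one sample may be split into.
        reader_qos.reader_resource_limits.max_fragments_per_sample = 512;

        // Allocate fragment buffers on demand rather than up front.
        reader_qos.reader_resource_limits.dynamically_allocate_fragmented_samples =
                DDS_BOOLEAN_TRUE;
    }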

