Once all fragments of a sample have been received, the reassembled sample is passed to the DDS::DataReader, which can then make it available to the user. Note that at that point the sample is treated like any regular sample, and its availability depends on standard QoS settings such as DDS::ResourceLimitsQosPolicy::max_samples and DDS::HistoryQosPolicyKind::KEEP_LAST_HISTORY_QOS.
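For example, a reader configured with KEEP_LAST history and a small depth may replace a freshly reassembled large sample before the application reads it, exactly as it would an ordinary sample. The following is a minimal sketch of the relevant reader QoS as an XML QoS profile fragment; the element names follow the common XML QoS schema, and the depth and limit values are purely illustrative:

```xml
<datareader_qos>
  <!-- KEEP_LAST with depth 10: an 11th sample per instance
       overwrites the oldest, including large reassembled ones -->
  <history>
    <kind>KEEP_LAST_HISTORY_QOS</kind>
    <depth>10</depth>
  </history>
  <!-- Overall cap on samples the reader may hold -->
  <resource_limits>
    <max_samples>32</max_samples>
  </resource_limits>
</datareader_qos>
```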
The large data feature is fully supported by all DDS APIs, so its use is mostly transparent. Some additional considerations apply, as explained below.
While an asynchronous writer and flow controller are optional when using the DDS::ReliabilityQosPolicyKind::BEST_EFFORT_RELIABILITY_QOS setting, most large data use cases benefit from a flow controller, which prevents flooding the network when a burst of fragments is sent.
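As a sketch, a writer can be switched to asynchronous publishing and paced by a flow controller through its QoS. The XML fragment below assumes an implementation that ships a built-in fixed-rate flow controller under the name shown; implementations also typically allow defining custom (e.g. token-bucket) controllers, whose configuration is not shown here:

```xml
<datawriter_qos>
  <publish_mode>
    <!-- Fragments are sent from a separate publishing thread... -->
    <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
    <!-- ...paced by a (assumed built-in) fixed-rate flow controller -->
    <flow_controller_name>DDS_FIXED_RATE_FLOW_CONTROLLER_NAME</flow_controller_name>
  </publish_mode>
</datawriter_qos>
```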
The DDS::DataReaderResourceLimitsQosPolicy allows tuning the resources available to the DDS::DataReader for reassembling fragmented large data.
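A hedged sketch of such tuning follows; the field names below appear in some DDS implementations' DataReaderResourceLimits policy, but availability and exact naming vary by vendor, and the values are illustrative only:

```xml
<datareader_qos>
  <reader_resource_limits>
    <!-- How many incomplete (still-fragmented) samples may be
         buffered concurrently while awaiting reassembly -->
    <max_fragmented_samples>256</max_fragmented_samples>
    <!-- Upper bound on fragments accepted for a single sample -->
    <max_fragments_per_sample>512</max_fragments_per_sample>
    <!-- false: preallocate reassembly buffers up front -->
    <dynamically_allocate_fragmented_samples>false</dynamically_allocate_fragmented_samples>
  </reader_resource_limits>
</datareader_qos>
```

Preallocating (rather than dynamically allocating) reassembly buffers trades memory footprint for predictable latency, which is often the right choice on resource-constrained or real-time systems.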