22.1.1 Memory Management without Batching

When the write() operation is called on a DataWriter that does not have batching enabled, the DataWriter serializes (marshals) the input DDS sample and stores it in the DataWriter's queue (see Figure 22.1: DataWriter Actions when Batching is Disabled). The queue initially holds initial_samples DDS samples and can grow up to max_samples; both fields are set in the 7.5.22 RESOURCE_LIMITS QosPolicy.

Figure 22.1: DataWriter Actions when Batching is Disabled
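The queue bounds described above can be configured in an XML QoS profile. The sketch below is a minimal, hypothetical example (the profile name and the numeric values are illustrative, not defaults):

```xml
<!-- Hypothetical profile: pre-allocate 32 queue slots, allow growth to 256 -->
<qos_profile name="ExampleWriterProfile">
  <datawriter_qos>
    <resource_limits>
      <initial_samples>32</initial_samples>
      <max_samples>256</max_samples>
    </resource_limits>
  </datawriter_qos>
</qos_profile>
```

With these values, write() fails with an out-of-resources error (or blocks, depending on the RELIABILITY and HISTORY settings) once 256 DDS samples are outstanding in the queue.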

Each DDS sample in the queue has an associated buffer into which the DataWriter serializes the sample. If the sample's serialized size is less than or equal to dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size, the buffer is obtained from a pre-allocated pool; otherwise, the buffer is dynamically allocated from the heap, sized to the sample's serialized size. See Table 22.1 DDS Sample-Data Memory Management Properties for DataWriters.

The default value of pool_buffer_max_size is -1 (UNLIMITED). In this case, all serialization buffers come from the pre-allocated pool, and each buffer is sized to the maximum serialized size of a DDS sample, as returned by the type plugin's get_serialized_sample_max_size() operation. The default is optimal for real-time applications where determinism and predictability are a must. The trade-off is higher memory usage, especially when the maximum serialized size of a DDS sample is large.
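For types whose maximum serialized size is large but whose typical samples are small, the pool threshold can be lowered so that only small samples draw from the pre-allocated pool. The fragment below is a sketch; the 32768-byte threshold is an illustrative value, not a recommendation:

```xml
<!-- Samples serializing to <= 32 KB use pooled buffers;
     larger samples are allocated from the heap on demand. -->
<datawriter_qos>
  <property>
    <value>
      <element>
        <name>dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size</name>
        <value>32768</value>
      </element>
    </value>
  </property>
</datawriter_qos>
```

This trades the determinism of all-pooled allocation for a smaller memory footprint: heap allocation on the write() path is less predictable, but pool buffers no longer need to accommodate the worst-case serialized size.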

Connext DDS cannot send arbitrarily large samples. For details on serialization limits see 3.10 Data Sample Serialization Limits.

© 2020 RTI