BATCH QosPolicy (DDS Extension)

This QosPolicy can be used to decrease the amount of communication overhead associated with the transmission and (in the case of reliable communication) acknowledgement of small DDS samples, in order to increase throughput.

It specifies and configures the mechanism that allows Connext DDS to collect multiple user data DDS samples to be sent in a single network packet, to take advantage of the efficiency of sending larger packets and thus increase effective throughput.

This QosPolicy can be used to increase effective throughput dramatically for small data DDS samples. Throughput for small DDS samples (size < 2048 bytes) is typically limited by CPU capacity and not by network bandwidth. Batching many smaller DDS samples to be sent in a single large packet will increase network utilization and thus throughput in terms of DDS samples per second.

It contains the members listed in DDS_BatchQosPolicy.



enable (DDS_Boolean)
    Enables/disables batching.

max_data_bytes (DDS_Long)
    Sets the maximum cumulative length of all serialized DDS samples in a batch.
    Before or when this limit is reached, the batch is automatically flushed.
    The size does not include the meta-data associated with the batch DDS samples.

max_samples (DDS_Long)
    Sets the maximum number of DDS samples in a batch.
    When this limit is reached, the batch is automatically flushed.

max_flush_delay (struct DDS_Duration_t)
    Sets the maximum flush delay.
    When this duration is reached, the batch is automatically flushed.
    The delay is measured from the time the first DDS sample in the batch is written by the application.

source_timestamp_resolution (struct DDS_Duration_t)
    Sets the batch source timestamp resolution.
    The value of this field determines how the source timestamp is associated with the DDS samples in a batch.
    A DDS sample written with timestamp 't' inherits the source timestamp 't2' associated with the previous DDS sample, unless ('t' - 't2') is greater than source_timestamp_resolution.
    If source_timestamp_resolution is DURATION_INFINITE, every DDS sample in the batch will share the source timestamp associated with the first DDS sample.
    If source_timestamp_resolution is zero, every DDS sample in the batch will contain its own source timestamp, corresponding to the moment when the DDS sample was written.
    The batching process performs best when source_timestamp_resolution is set to DURATION_INFINITE.

thread_safe_write (DDS_Boolean)
    Determines whether or not the write operation is thread-safe.
    If TRUE, multiple threads can call write on the DataWriter concurrently.
    A setting of FALSE can be used to increase batching throughput for batches with many small DDS samples.
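The source timestamp rule described above (see source_timestamp_resolution) can be modeled with a short self-contained sketch. The Sample struct and assign_source_timestamps function are illustrative, not part of the RTI API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Illustrative model only: a DDS sample written at time 't' inherits the
// previous sample's source timestamp 't2' unless (t - t2) exceeds
// source_timestamp_resolution.
struct Sample {
    int64_t write_time_ns;  // when the application wrote the sample
    int64_t source_ts_ns;   // timestamp actually recorded for the sample
};

const int64_t DURATION_INFINITE_NS = std::numeric_limits<int64_t>::max();

void assign_source_timestamps(std::vector<Sample>& batch, int64_t resolution_ns) {
    if (batch.empty()) return;
    // The first sample in the batch always carries its own timestamp.
    batch[0].source_ts_ns = batch[0].write_time_ns;
    for (std::size_t i = 1; i < batch.size(); ++i) {
        int64_t prev = batch[i - 1].source_ts_ns;
        if (batch[i].write_time_ns - prev > resolution_ns) {
            batch[i].source_ts_ns = batch[i].write_time_ns;  // new timestamp
        } else {
            batch[i].source_ts_ns = prev;                    // inherited
        }
    }
}
```

With resolution_ns set to DURATION_INFINITE_NS, every sample inherits the first sample's timestamp; with resolution zero, each sample written at a later instant carries its own.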

If batching is enabled (not the default), DDS samples are not immediately sent when they are written. Instead, they are collected into a "batch." A batch always contains a whole number of DDS samples; a DDS sample will never be fragmented across multiple batches.
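For illustration, batching might be enabled through an XML QoS profile along these lines; the library and profile names and the specific limit values are made-up examples, not recommendations:

```xml
<qos_library name="MyLibrary">
  <qos_profile name="BatchingProfile">
    <datawriter_qos>
      <batch>
        <enable>true</enable>
        <!-- flush after 1 KB of serialized samples or 100 samples,
             whichever comes first -->
        <max_data_bytes>1024</max_data_bytes>
        <max_samples>100</max_samples>
        <!-- flush at most 10 ms after the first sample is written;
             time-based flushing requires asynchronous publishing -->
        <max_flush_delay>
          <sec>0</sec>
          <nanosec>10000000</nanosec>
        </max_flush_delay>
      </batch>
    </datawriter_qos>
  </qos_profile>
</qos_library>
```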

A batch is sent on the network ("flushed") when one of the following things happens:

- The batch reaches its maximum size: either the cumulative serialized size reaches max_data_bytes or the number of DDS samples reaches max_samples.
- The maximum flush delay (max_flush_delay) elapses, measured from the time the first DDS sample in the batch was written.
- The application explicitly flushes the batch with the DataWriter's flush() operation.

Additional batching configuration takes place in the Publisher’s ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension).

The flush() operation is described in Flushing Batches of DDS Data Samples.

Synchronous and Asynchronous Flushing

Usually, a batch is flushed synchronously: when the batch reaches its size limit (max_data_bytes or max_samples), it is flushed immediately, in the context of the application thread whose write() call filled it.

However, some behavior is asynchronous: if max_flush_delay is set to a finite duration, the flush that occurs when the delay elapses is performed by the Publisher's asynchronous publishing thread, not by the thread that wrote the DDS samples.

Batching vs. Coalescing

Even when batching is disabled, Connext DDS will sometimes coalesce multiple DDS samples into a single network datagram. For example, DDS samples buffered by a FlowController or sent in response to a negative acknowledgement (NACK) may be coalesced. This behavior is distinct from DDS sample batching.

DDS samples that are sent individually (not part of a batch) are always treated as separate DDS samples by Connext DDS. Each DDS sample is accompanied by a complete RTPS header on the network (although DDS samples may share UDP and IP headers) and (in the case of reliable communication) a unique physical sequence number that must be positively or negatively acknowledged.

In contrast, batched DDS samples share an RTPS header, and an entire batch is acknowledged—positively or negatively—as a unit, potentially reducing the amount of meta-traffic on the network and the amount of processing per individual DDS sample.

Batching can also improve latency relative to simply coalescing. Consider two use cases:

  1. A DataWriter is configured to write asynchronously with a FlowController. Even if the FlowController's rules would allow it to publish a new DDS sample immediately, the send will always happen in the context of the asynchronous publishing thread. This context switch can add latency to the send path.
  2. A DataWriter is configured to write synchronously but with batching turned on. When the batch is full, it will be sent on the wire immediately, eliminating a thread context switch from the send path.

Batching and ContentFilteredTopics

When batching is enabled, content filtering is always done on the reader side.

Turbo Mode: Automatically Adjusting the Number of Bytes in a Batch—Experimental Feature

Turbo Mode is an experimental feature that automatically adjusts the number of bytes in a batch at run time according to current system conditions, such as write speed (or write frequency) and DDS sample size. This adaptivity lets it increase throughput at high message rates while avoiding a latency penalty at low message rates.

To enable Turbo Mode, set the DataWriter's property dds.data_writer.enable_turbo_mode to true. Turbo Mode is not enabled by default.

Note: If you explicitly enable batching by setting enable to TRUE in BatchQosPolicy, the value of the turbo mode property is ignored and turbo mode is not used.
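In an XML QoS profile, the property could be set roughly as follows (a sketch based on RTI's PROPERTY QosPolicy XML syntax; the surrounding profile elements are omitted):

```xml
<datawriter_qos>
  <property>
    <value>
      <element>
        <name>dds.data_writer.enable_turbo_mode</name>
        <value>true</value>
      </element>
    </value>
  </property>
</datawriter_qos>
```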

Performance Considerations

The purpose of batching is to increase throughput when writing small DDS samples at a high rate. In such cases, throughput can be increased several-fold, approaching much more closely the physical limitations of the underlying network transport.

However, collecting DDS samples into a batch implies that they are not sent on the network immediately when the application writes them; this can potentially increase latency. On the other hand, if the application sends data faster than the network can support, an increasing proportion of the network's available bandwidth is spent on acknowledgements and DDS sample resends. In that case, reducing this overhead by turning on batching can decrease latency while increasing throughput.

As a general rule, to improve batching throughput:

- Batch small DDS samples; batching provides little benefit once individual DDS samples approach the size of a network datagram.
- Set source_timestamp_resolution to DURATION_INFINITE, so that all DDS samples in a batch share the source timestamp of the first DDS sample.
- If only one application thread writes to the DataWriter, disable thread-safe writing (see thread_safe_write in DDS_BatchQosPolicy).

Batching affects how often piggyback heartbeats are sent; see heartbeats_per_max_samples in DDS_RtpsReliableWriterProtocol_t.

Maximum Transport Datagram Size

Batches cannot be fragmented. As a result, the maximum batch size (max_data_bytes) must be set no larger than the maximum transport datagram size. For example, a UDP datagram is limited to 64 KB, so any batches sent over UDP must be less than or equal to that size.
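As a rough sizing aid, the following self-contained sketch clamps a candidate max_data_bytes to the classic maximum UDP payload over IPv4 (65,507 bytes: 65,535 minus the 20-byte IP header and 8-byte UDP header) and estimates how many serialized DDS samples of a given size fit in one batch. The helper functions are illustrative, not RTI API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Maximum UDP payload over IPv4: 65535 bytes minus 20 (IP header)
// minus 8 (UDP header). Assumption: a plain UDPv4 transport.
const int32_t MAX_UDP_PAYLOAD = 65507;

// Clamp a desired batch size to what the transport can carry in one datagram.
int32_t usable_max_data_bytes(int32_t desired) {
    return std::min(desired, MAX_UDP_PAYLOAD);
}

// Estimate how many serialized samples of a given size fit in one batch
// (ignores per-sample meta-data, as max_data_bytes itself does).
int32_t samples_per_batch(int32_t max_data_bytes, int32_t sample_size) {
    return max_data_bytes / sample_size;
}
```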


This QosPolicy cannot be modified after the DataWriter is enabled.

Since it is only for DataWriters, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.

All batching configuration occurs on the publishing side. A subscribing application does not configure anything specific to receive batched DDS samples, and in many cases, it will be oblivious to whether the DDS samples it processes were received individually or as part of a batch.

Consistency rules:

- If max_flush_delay is a finite duration, the DataWriter's Publisher must be configured to flush batches asynchronously through its ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension), since time-based flushing is performed by the asynchronous publishing thread.

Related QosPolicies

To flush batches based on a time limit (max_flush_delay), enable asynchronous batch flushing in the ASYNCHRONOUS_PUBLISHER QosPolicy (DDS Extension) of the DataWriter's Publisher.

Be careful when configuring a DataWriter's LIFESPAN QoS Policy with a duration shorter than the batch flush period (max_flush_delay). If the batch does not fill up before the flush period elapses, the short duration will cause the DDS samples to be lost without being sent.
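Assuming both durations are normalized to nanoseconds, that caution reduces to a simple comparison one might apply when assembling QoS values; the helper below is illustrative, not RTI API:

```cpp
#include <cassert>
#include <cstdint>

// Returns true if a sample written at the start of a batch could expire
// (per the LIFESPAN QoS duration) before max_flush_delay forces the batch
// onto the wire. Both values are assumed to be in nanoseconds.
bool lifespan_may_drop_batched_samples(int64_t lifespan_ns,
                                       int64_t max_flush_delay_ns) {
    return lifespan_ns < max_flush_delay_ns;
}
```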

Do not configure the DataReader’s or DataWriter’s HISTORY QosPolicy to be shallower than the DataWriter's maximum batch size (max_samples). When the HISTORY QosPolicy is shallower on the DataWriter, some DDS samples may not be sent. When the HISTORY QosPolicy is shallower on the DataReader, DDS samples may be dropped before being provided to the application.

The initial and maximum numbers of batches that a DataWriter will manage are set in the DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension).

The maximum number of DDS samples that a DataWriter can store is determined by the value max_samples in the RESOURCE_LIMITS QosPolicy and max_batches in the DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension). The limit that is reached first is applied.

The amount of resources required for batching depends on the configuration of the RESOURCE_LIMITS QosPolicy and the DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension). See System Resource Considerations.

Applicable DDS Entities

DataWriters

System Resource Considerations

© 2018 RTI