47.5 DATA_WRITER_PROTOCOL QosPolicy (DDS Extension)

Connext uses a standard protocol for packet (user and meta data) exchange between applications. The DataWriterProtocol QosPolicy gives you control over configurable portions of the protocol, including the configuration of the reliable data delivery mechanism of the protocol on a per DataWriter basis.

These configuration parameters control timing and timeouts, and give you the ability to trade off between speed of data loss detection and repair, versus network and CPU bandwidth used to maintain reliability.

It is important to tune the reliability protocol on a per DataWriter basis to meet the requirements of the end-user application so that data can be sent between DataWriters and DataReaders in an efficient and optimal manner in the presence of data loss. You can also use this QosPolicy to control how Connext responds to "slow" reliable DataReaders or ones that disconnect or are otherwise lost.

This policy includes the members presented in Table 47.13 DDS_DataWriterProtocolQosPolicy and Table 47.14 DDS_RtpsReliableWriterProtocol_t. For defaults and valid ranges, please refer to the API Reference HTML documentation.

For details on the reliability protocol used by Connext, see Chapter 32 Reliability Models for Sending Data. See the 47.21 RELIABILITY QosPolicy for more information on per-DataReader/DataWriter reliability configuration. The 47.12 HISTORY QosPolicy and 47.22 RESOURCE_LIMITS QosPolicy also play important roles in the DDS reliability protocol.

Table 47.13 DDS_DataWriterProtocolQosPolicy

Type

Field Name

Description

DDS_GUID_t

virtual_guid

The virtual GUID (Global Unique Identifier) is used to uniquely identify the same DataWriter across multiple incarnations. In other words, this value allows Connext to remember information about a DataWriter that may be deleted and then recreated.

Connext uses the virtual GUID to associate a durable writer history to a DataWriter.

Persistence Service uses the virtual GUID to send DDS samples on behalf of the original DataWriter.

A DataReader persists its state based on the virtual GUIDs of matching remote DataWriters.

For more information, see 21.2 Durability and Persistence Based on Virtual GUIDs.

By default, Connext will assign a virtual GUID automatically. If you want to restore the state of the durable writer history after a restart, you can retrieve the value of the writer's virtual GUID using the DataWriter’s get_qos() operation, and set the virtual GUID of the restarted DataWriter to the same value.

DDS_UnsignedLong

rtps_object_id

Determines the DataWriter’s RTPS object ID, according to the DDS-RTPS Interoperability Wire Protocol.

Only the last 3 bytes are used; the most significant byte is ignored.

The rtps_host_id, rtps_app_id, and rtps_instance_id in the 44.10 WIRE_PROTOCOL QosPolicy (DDS Extension), together with the 3 least significant bytes of rtps_object_id and another byte assigned by Connext to identify the entity type, form the BuiltinTopicKey in PublicationBuiltinTopicData.

DDS_Boolean

push_on_write

Controls when a DDS sample is sent after write() is called on a DataWriter. If TRUE, the DDS sample is sent immediately; if FALSE, the DDS sample is kept in the queue until an ACK/NACK requesting it is received from a reliable DataReader.

DDS_Boolean

disable_positive_acks

Determines whether matching DataReaders send positive acknowledgements (ACKs) to the DataWriter.

When TRUE, the DataWriter will keep DDS samples in its queue for ACK-disabled readers for a minimum keep duration (see 47.5.3 Disabling Positive Acknowledgements).

When strict reliability is not required, setting this to TRUE reduces network traffic overhead.

DDS_Boolean

disable_inline_keyhash

Controls whether or not the key-hash is propagated on the wire with DDS samples.

This field only applies to keyed writers.

Connext associates a key-hash (an internal 16-byte representation) with each key.

When FALSE, the key-hash is sent on the wire with every data instance.

When TRUE, the key-hash is not sent on the wire (so the readers must compute the value using the received data).

If the reader is CPU bound, sending the key-hash on the wire may increase performance, because the reader does not have to get the key-hash from the data.

If the writer is CPU bound, sending the key-hash on the wire may decrease performance, because it requires more bandwidth (16 more bytes per DDS sample).

Setting disable_inline_keyhash to TRUE is not compatible with using RTI Database Integration Service or RTI Recording Service.

DDS_Boolean

serialize_key_with_dispose

Controls whether or not the serialized key is propagated on the wire with dispose notifications.

This field only applies to keyed writers.

By default, this field is set to FALSE.

RTI recommends setting this field to TRUE if there are DataReaders with propagate_dispose_of_unregistered_instances (in the 48.1 DATA_READER_PROTOCOL QosPolicy (DDS Extension)) also set to TRUE (which is done because you anticipate receiving a dispose meta-sample without previously having received a data sample for an instance).

When setting serialize_key_with_dispose to FALSE, only a key hash is included in the dispose meta-sample sent by a DataWriter for a dispose action. If a dispose meta-sample only includes the key hash, then DataReaders must have previously received an actual data sample for the instance being disposed, in order for a DataReader to map a key hash/instance handle to actual key values.

If an actual data sample was never received for an instance and serialize_key_with_dispose is set to FALSE, then the DataReader application will not be able to determine the value of the key that was disposed, since FooDataReader::get_key_value() will not be able to map an instance handle to actual key values.

By setting serialize_key_with_dispose to TRUE, the values of the key members of a data type will be sent in the dispose meta-sample for a dispose action by the DataWriter. This allows the DataReader to map an instance handle to the values of the key members even when receiving a dispose meta-sample without previously having received a data sample for the instance.

Important: When this field is TRUE, batching will not be compatible with RTI Data Distribution Service 4.3e, 4.4b, or 4.4c; the DataReaders will receive incorrect data and/or encounter deserialization errors.

DDS_Boolean

propagate_app_ack_with_no_response

Controls whether or not a DataWriter receives on_application_acknowledgment() notifications with an empty or invalid response.

When FALSE, on_application_acknowledgment() will not be invoked if the DDS sample being acknowledged has an empty or invalid response.

DDS_RtpsReliableWriterProtocol_t

rtps_reliable_writer

This structure includes the fields in Table 47.14 DDS_RtpsReliableWriterProtocol_t.

DDS_SequenceNumber_t

initial_virtual_sequence_number

Determines the initial virtual sequence number for this DataWriter.

By default, the virtual sequence number of the first sample published by a DataWriter is 1 for DataWriters that do not use durable writer history. For durable writers, the default virtual sequence number is the last sequence number they published in a previous execution, plus one. So, when a non-durable DataWriter is restarted and must continue communicating with the same DataReaders, its samples start over with sequence number 1. Durable DataWriters start over where the last sequence number left off, plus one.

This QoS setting allows overwriting the default initial virtual sequence number.

Normally, this parameter is not expected to be modified; however, in some scenarios when continuing communication after restarting, applications may require the DataWriter's virtual sequence number to start at something other than the value described above. An example would be to enable non-durable DataWriters to start at the last sequence number published, plus one, similar to the durable DataWriter. This property enables you to make such a configuration, if desired.

The virtual sequence number can be overwritten as well on a per sample basis by updating DDS_WriteParams_t::identity in FooDataWriter_write_w_params.

 

Table 47.14 DDS_RtpsReliableWriterProtocol_t

Type

Field Name

Description

DDS_Long

low_watermark

Queue levels that control when to switch between the regular and fast heartbeat rates (heartbeat_period and fast_heartbeat_period). See 47.5.1 High and Low Watermarks.

high_watermark

DDS_Duration_t

heartbeat_period

Rates at which to send heartbeats to DataReaders with unacknowledged DDS samples. See 47.5.2 Normal, Fast, and Late-Joiner Heartbeat Periods and 32.4.4.1 How Often Heartbeats are Resent (heartbeat_period).

fast_heartbeat_period

late_joiner_heartbeat_period

DDS_Duration_t

virtual_heartbeat_period

The rate at which a reliable DataWriter will send virtual heartbeats. A virtual heartbeat informs the reliable DataReader about the range of DDS samples currently present for each virtual GUID in the reliable writer's queue. See 47.5.6 Virtual Heartbeats.

DDS_Long

samples_per_virtual_heartbeat

The number of DDS samples that a reliable DataWriter must publish before sending a virtual heartbeat. See 47.5.6 Virtual Heartbeats.

DDS_Long

max_heartbeat_retries

Maximum number of periodic heartbeats sent without receiving an ACK/NACK packet before marking a DataReader ‘inactive.’

When a DataReader has not acknowledged all the DDS samples the reliable DataWriter has sent to it, and max_heartbeat_retries number of periodic heartbeats have been sent without receiving any ACK/NACK packets in return, the DataReader will be marked as inactive (not alive) and be ignored until it resumes sending ACK/NACKs.

Note that piggyback heartbeats do not count towards this value.

See 32.4.4.4 Controlling How Many Times Heartbeats are Resent (max_heartbeat_retries).

DDS_Boolean

inactivate_nonprogressing_readers

Allows the DataWriter to treat DataReaders that send successive non-progressing NACK packets as inactive.

See 32.4.4.5 Treating Non-Progressing Readers as Inactive Readers (inactivate_nonprogressing_readers).

DDS_Long

heartbeats_per_max_samples

When a DataWriter is configured with a fixed send window size (min_send_window_size is equal to the effective max_send_window_size), a piggyback heartbeat is sent every (effective max_send_window_size / heartbeats_per_max_samples) samples written. (See 47.5.4 Configuring the Send Window Size.)

Otherwise, the number of piggyback heartbeats sent is scaled according to the current size of the send window. For example, consider a heartbeats_per_max_samples of 50. If the current send window size is 100, a piggyback heartbeat will be sent every two samples. If the send window size grows to 150, a piggyback heartbeat will be sent every three samples, and so on. Additionally, when the send window size grows, a piggyback heartbeat is sent with the next sample. (If it weren't, the sending of that heartbeat could be delayed, since the heartbeat rate scales with the increasing window size.)

The effective max send window is calculated as follows:

Without batching, it is the lesser of max_samples in the 47.22 RESOURCE_LIMITS QosPolicy and max_send_window_size.

With batching, it is the lesser of max_batches in the 47.6 DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) and max_send_window_size.

If heartbeats_per_max_samples is set to zero, no piggyback heartbeat will be sent.

If the current send window size is LENGTH_UNLIMITED, 100 million is assumed as the effective max send window.

DDS_Boolean

disable_repair_piggyback_heartbeat

When samples are repaired, the DataWriter resends the number of bytes indicated in max_bytes_per_nack_response and a piggyback heartbeat with each message. You can configure the DataWriter not to send the piggyback heartbeat by setting this field to TRUE, instead relying on the late_joiner_heartbeat_period to control the throughput used to repair samples. This field is only mutable for the DataWriter QoS and not for the Discovery Config QoS of the DomainParticipant.

DDS_Duration_t

min_nack_response_delay

Minimum delay to respond to an ACK/NACK.

When a reliable DataWriter receives an ACK/NACK from a DataReader, the DataWriter can choose to delay a while before it sends repair DDS samples or a heartbeat. The response will be sent after a random delay between this value and max_nack_response_delay.

See 32.4.4.6 Coping with Redundant NACKs for Missing DDS Samples (nack_suppression_duration and min/max_nack_response_delay).

DDS_Duration_t

max_nack_response_delay

Maximum delay to respond to an ACK/NACK.

This sets the maximum delay between receiving an ACK/NACK and sending repair DDS samples or a heartbeat. The response will be sent after a random delay between min_nack_response_delay and this value.

A longer wait can help prevent storms of repair packets if many DataReaders send NACKs at the same time. However, it delays the repair, and hence increases the latency of the communication.

See 32.4.4.6 Coping with Redundant NACKs for Missing DDS Samples (nack_suppression_duration and min/max_nack_response_delay).

DDS_Duration_t

nack_suppression_duration

How long consecutive NACKs are suppressed.

When a reliable DataWriter receives consecutive NACKs within a short duration, this may trigger the DataWriter to send redundant repair messages. This value sets the duration during which consecutive NACKs are ignored, thus preventing redundant repairs from being sent.

DDS_Long

max_bytes_per_nack_response

Maximum bytes in a repair package.

When a reliable DataWriter resends DDS samples, the total package size is limited to this value. Note: The reliable DataWriter will always send at least one sample.

See 32.4.4.3 Controlling Packet Size for Resent DDS Samples (max_bytes_per_nack_response).

DDS_Duration_t

disable_positive_acks_min_sample_keep_duration

Minimum duration that a DDS sample will be kept in the DataWriter’s queue for ACK-disabled DataReaders.

See 47.5.3 Disabling Positive Acknowledgements and 32.4.4.7 Disabling Positive Acknowledgements (disable_positive_acks_min_sample_keep_duration).

disable_positive_acks_max_sample_keep_duration

Maximum duration that a DDS sample will be kept in the DataWriter’s queue for ACK-disabled readers.

DDS_Boolean

disable_positive_acks_enable_adaptive_sample_keep_duration

Enables automatic dynamic adjustment of the ‘keep duration’ in response to network congestion.

DDS_Long

disable_positive_acks_increase_sample_keep_duration

When the ‘keep duration’ is dynamically controlled, the lengthening of the ‘keep duration’ is controlled by this factor, which is expressed as a percentage.

When the adaptive algorithm determines that the keep duration should be increased, this factor is multiplied with the current keep duration to get the new longer keep duration. For example, if the current keep duration is 20 milliseconds, using the default factor of 150% would result in a new keep duration of 30 milliseconds.

disable_positive_acks_decrease_sample_keep_duration

When the ‘keep duration’ is dynamically controlled, the shortening of the ‘keep duration’ is controlled by this factor, which is expressed as a percentage.

When the adaptive algorithm determines that the keep duration should be decreased, this factor is multiplied with the current keep duration to get the new shorter keep duration. For example, if the current keep duration is 20 milliseconds, using the default factor of 95% would result in a new keep duration of 19 milliseconds.

DDS_Long

min_send_window_size

Minimum and maximum size for the window of outstanding DDS samples.

See 47.5.4 Configuring the Send Window Size.

max_send_window_size

DDS_Long

send_window_decrease_factor

Scales the current send-window size down by this percentage to decrease the effective send rate in response to a received negative acknowledgement.

See 47.5.4 Configuring the Send Window Size.

DDS_Boolean

enable_multicast_periodic_heartbeat

Controls whether or not periodic heartbeat messages are sent over multicast.

When enabled, if a reader has a multicast destination, the writer will send its periodic HEARTBEAT messages to that destination.

Otherwise, if not enabled or the reader does not have a multicast destination, the writer will send its periodic HEARTBEATs over unicast.

DDS_Long

multicast_resend_threshold

Sets the minimum number of requesting readers needed to trigger a multicast resend.

See 47.5.7 Resending Over Multicast.

DDS_Long

send_window_increase_factor

Scales the current send-window size up by this percentage to increase the effective send rate when a duration has passed without any received negative acknowledgements.

See 47.5.4 Configuring the Send Window Size.

DDS_Duration_t

send_window_update_period

Period at which a DataWriter checks for received negative acknowledgements and conditionally increases the send-window size when none have been received.

See 47.5.4 Configuring the Send Window Size.

47.5.1 High and Low Watermarks

When the number of unacknowledged DDS samples in the current send-window of a reliable DataWriter meets or exceeds high_watermark, the 31.6.8 RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) will be changed appropriately, a listener callback will be triggered, and the DataWriter will start heartbeating its matched DataReaders at fast_heartbeat_period.

When the number of DDS samples meets or falls below low_watermark, the 31.6.8 RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) will be changed appropriately, a listener callback will be triggered, and the heartbeat rate will return to the "normal" rate (heartbeat_period).

Having both high and low watermarks (instead of one) helps prevent rapid flickering between the rates, which could happen if the number of DDS samples hovers near the cut-off point.

Increasing the high and low watermarks will make the DataWriters less aggressive about seeking acknowledgments for sent data, decreasing the size of traffic spikes but slowing performance.

Decreasing the watermarks will make the DataWriters more aggressive, increasing both network utilization and performance.

If batching is used, high_watermark and low_watermark refer to batches, not DDS samples.

When min_send_window_size and max_send_window_size are not equal, the low and high watermarks are scaled down linearly to stay within the current send-window size. The value provided by configuration corresponds to the high and low watermarks for the max_send_window_size.
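For concreteness, here is a sketch of how the watermarks might be set in an XML QoS profile; the values are illustrative choices, not defaults:

```xml
<datawriter_qos>
  <protocol>
    <rtps_reliable_writer>
      <!-- Start fast heartbeating once 40 or more samples are unacknowledged -->
      <high_watermark>40</high_watermark>
      <!-- Return to the normal heartbeat rate at 10 or fewer -->
      <low_watermark>10</low_watermark>
    </rtps_reliable_writer>
  </protocol>
</datawriter_qos>
```

Keeping a gap between the two values, as shown, is what prevents the rate flickering described above.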

47.5.2 Normal, Fast, and Late-Joiner Heartbeat Periods

The normal heartbeat_period is used until the number of DDS samples in the reliable DataWriter’s queue meets or exceeds high_watermark; then fast_heartbeat_period is used. Once the number of DDS samples meets or drops below low_watermark, the normal rate (heartbeat_period) is used again.

Decreasing fast_heartbeat_period (sending heartbeats more often) speeds the detection and repair of lost DDS samples, but results in a larger surge of traffic when the DataWriter is waiting for acknowledgments.

Increasing heartbeat_period decreases the steady-state traffic on the wire, but may increase latency by slowing the repair of lost packets when the writer does not have many outstanding unacknowledged DDS samples.

Having two periodic heartbeat rates, and switching between them based on watermarks:

  • Ensures that all DataReaders receive all their data as quickly as possible (the sooner they receive a heartbeat, the sooner they can send a NACK, and the sooner the DataWriter can send repair DDS samples);
  • Helps prevent the DataWriter from overflowing its resource limits (as its queue starts to fill, the DataWriter sends heartbeats faster, prompting the DataReaders to acknowledge sooner, allowing the DataWriter to purge these acknowledged DDS samples from its queue);
  • Tunes the amount of network traffic. (Heartbeats and NACKs use up network bandwidth like any other traffic; decreasing the heartbeat rates, or increasing the threshold before the fast rate starts, can smooth network traffic—at the expense of discovery performance).

The late_joiner_heartbeat_period is used when a reliable DataReader joins after a reliable DataWriter (with non-volatile Durability) has begun publishing DDS samples. Once the late-joining DataReader has received all cached DDS samples, it will be serviced at the same rate as other reliable DataReaders.
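The three heartbeat periods can be configured together in an XML QoS profile. A sketch, with illustrative (not default) values:

```xml
<datawriter_qos>
  <protocol>
    <rtps_reliable_writer>
      <!-- Normal rate: one heartbeat per second -->
      <heartbeat_period>
        <sec>1</sec><nanosec>0</nanosec>
      </heartbeat_period>
      <!-- Fast rate, used while above high_watermark: 100 ms -->
      <fast_heartbeat_period>
        <sec>0</sec><nanosec>100000000</nanosec>
      </fast_heartbeat_period>
      <!-- Rate for catching up late-joining DataReaders: 250 ms -->
      <late_joiner_heartbeat_period>
        <sec>0</sec><nanosec>250000000</nanosec>
      </late_joiner_heartbeat_period>
    </rtps_reliable_writer>
  </protocol>
</datawriter_qos>
```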

47.5.3 Disabling Positive Acknowledgements

When strict reliable communication is not required, you can configure Connext so that it does not send positive acknowledgements (ACKs). In this case, reliability is maintained solely based on negative acknowledgements (NACKs). The removal of ACK traffic may improve middleware performance. For example, when sending DDS samples over multicast, ACK-storms that previously may have hindered DataWriters and consumed overhead network bandwidth are now precluded.

By default, DataWriters and DataReaders are configured with positive ACKs enabled. To disable ACKs, set disable_positive_acks to TRUE in this policy on the DataWriter, in the 48.1 DATA_READER_PROTOCOL QosPolicy (DDS Extension) on individual DataReaders, or in both.

If ACKs are disabled, instead of the DataWriter holding a DDS sample in its send queue until all of its DataReaders have ACKed it, the DataWriter will hold a DDS sample for a configurable duration. This "keep-duration" starts when a DDS sample is written. When this time elapses, the DDS sample is logically considered as acknowledged by its ACK-disabled readers.

The length of the "keep-duration" can be static or dynamic, depending on how rtps_reliable_writer.disable_positive_acks_enable_adaptive_sample_keep_duration is set.

  • When the length is static, the "keep-duration" is set to the minimum (rtps_reliable_writer.disable_positive_acks_min_sample_keep_duration).
  • When the length is dynamic, the "keep-duration" is dynamically adjusted between the minimum and maximum durations (rtps_reliable_writer.disable_positive_acks_min_sample_keep_duration and rtps_reliable_writer.disable_positive_acks_max_sample_keep_duration).

Dynamic adjustment maximizes throughput and reliability in response to current network conditions: when the network is congested, durations are increased to decrease the effective send rate and relieve the congestion; when the network is not congested, durations are decreased to increase the send rate and maximize throughput.

You should configure the minimum "keep-duration" to allow at least enough time for a possible NACK to be received and processed. When a DataWriter has both matching ACK-disabled and ACK-enabled DataReaders, it holds a DDS sample in its queue until all ACK-enabled DataReaders have ACKed it and the "keep-duration" has elapsed.

See also: 32.4.4.7 Disabling Positive Acknowledgements (disable_positive_acks_min_sample_keep_duration).
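Putting the above together, a sketch of an XML QoS profile that disables positive ACKs with an adaptive keep-duration (the specific durations are illustrative assumptions):

```xml
<!-- DataWriter side -->
<datawriter_qos>
  <protocol>
    <disable_positive_acks>true</disable_positive_acks>
    <rtps_reliable_writer>
      <!-- Let Connext adjust the keep-duration between min and max -->
      <disable_positive_acks_enable_adaptive_sample_keep_duration>true</disable_positive_acks_enable_adaptive_sample_keep_duration>
      <disable_positive_acks_min_sample_keep_duration>
        <sec>0</sec><nanosec>1000000</nanosec> <!-- 1 ms -->
      </disable_positive_acks_min_sample_keep_duration>
      <disable_positive_acks_max_sample_keep_duration>
        <sec>1</sec><nanosec>0</nanosec>
      </disable_positive_acks_max_sample_keep_duration>
    </rtps_reliable_writer>
  </protocol>
</datawriter_qos>

<!-- Matching DataReader side -->
<datareader_qos>
  <protocol>
    <disable_positive_acks>true</disable_positive_acks>
  </protocol>
</datareader_qos>
```

The minimum keep-duration here must still be long enough for a NACK to arrive and be processed, per the guidance above.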

47.5.4 Configuring the Send Window Size

When a reliable DataWriter writes a DDS sample, it keeps the DDS sample in its send queue until it has received acknowledgements from all of its subscribing DataReaders. The number of these outstanding DDS samples is referred to as the DataWriter's "send window." Once the number of outstanding DDS samples has reached the send window size, subsequent writes will block until an outstanding DDS sample is acknowledged. For more information about when a sample is considered acknowledged see 47.5.3 Disabling Positive Acknowledgements and 31.8.2 write() behavior with KEEP_LAST and KEEP_ALL (especially the Notes at the end of that section).

Configuration of the send window sets a minimum and maximum size, which may be unlimited. The send window size is initialized to the minimum size. The min and max send windows can be the same. When set differently, the send window will dynamically change in response to detected network congestion, as signaled by received negative acknowledgements. When NACKs are received, the DataWriter responds to the slowed reader by decreasing the send window by the send_window_decrease_factor to throttle down its effective send rate. The send window will not be decreased to less than the min_send_window_size. After a period (send_window_update_period) during which no NACKs are received, indicating that the reader is catching up, the DataWriter will increase the send window size to increase the effective send rate by the percentage specified by send_window_increase_factor. The send window will increase to no greater than the max_send_window_size.

When both min_send_window_size and max_send_window_size are unlimited, either the resource limits max_samples in 47.22 RESOURCE_LIMITS QosPolicy (for non-batching) or max_batches in 47.6 DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) (for batching) serves as the effective max_send_window_size.

When either max_samples (for non-batching) or max_batches (for batching) is less than max_send_window_size, it serves as the effective max_send_window_size. If it is also less than min_send_window_size, then effectively both min and max send-window sizes are equal to max_samples or max_batches.
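A sketch of a dynamic send-window configuration in an XML QoS profile (the window sizes, factors, and period are illustrative, not defaults):

```xml
<datawriter_qos>
  <protocol>
    <rtps_reliable_writer>
      <!-- Dynamic send window between 32 and 256 outstanding samples -->
      <min_send_window_size>32</min_send_window_size>
      <max_send_window_size>256</max_send_window_size>
      <!-- On a NACK, shrink the window to 70% of its current size -->
      <send_window_decrease_factor>70</send_window_decrease_factor>
      <!-- Grow the window by 10% after each NACK-free update period -->
      <send_window_increase_factor>110</send_window_increase_factor>
      <send_window_update_period>
        <sec>3</sec><nanosec>0</nanosec>
      </send_window_update_period>
    </rtps_reliable_writer>
  </protocol>
</datawriter_qos>
```

Setting min_send_window_size equal to max_send_window_size would instead give the fixed-window behavior described for heartbeats_per_max_samples in Table 47.14.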

47.5.5 Propagating Serialized Keys with Disposed-Instance Notifications

This section describes the interaction between two fields: the DataWriter's serialize_key_with_dispose (in this policy) and the DataReader's propagate_dispose_of_unregistered_instances (in the 48.1 DATA_READER_PROTOCOL QosPolicy (DDS Extension)).

RTI recommends setting serialize_key_with_dispose to TRUE if there are DataReaders with propagate_dispose_of_unregistered_instances also set to TRUE.

See also: 31.14.3 Disposing Instances.

Note: Persistence Service DataReaders ignore the serialized key propagated with dispose updates. Persistence Service DataWriters cannot propagate the serialized key with dispose, and therefore ignore the serialize_key_with_dispose setting on the DataWriter QoS.

47.5.6 Virtual Heartbeats

Virtual heartbeats announce the availability of DDS samples with the Collaborative DataWriters feature described in 48.1 DATA_READER_PROTOCOL QosPolicy (DDS Extension), where multiple DataWriters publish DDS samples from a common logical data-source (identified by a virtual GUID).

When 46.4 PRESENTATION QosPolicy access_scope is set to TOPIC or INSTANCE on the Publisher, the virtual heartbeat contains information about the DDS samples contained in the DataWriter queue.

When presentation access_scope is set to GROUP on the Publisher, the virtual heartbeat contains information about the DDS samples in the queues of all DataWriters that belong to the Publisher.

47.5.7 Resending Over Multicast

Given DataReaders with multicast destinations, when a DataReader sends a NACK requesting that DDS samples be resent, the DataWriter can resend them over either unicast or multicast. Although resending over multicast saves bandwidth and processing for the DataWriter, the potential problem is that DataReaders in the multicast group that did not request any resends would still have to process, and then drop, the resent DDS samples.

Thus, to make each multicast resend more efficient, multicast_resend_threshold sets the minimum number of DataReaders in the same multicast group from which the DataWriter must receive NACKs within a single response-delay duration. This allows the DataWriter to coalesce near-simultaneous unicast resends into a single multicast resend, and it requires the "vote" from DataReaders of a multicast group to meet the threshold before resending over multicast.

The multicast_resend_threshold must be set to a positive value. Note that a threshold of 1 means that all resends will be sent over multicast. Also, note that a DataWriter with a zero NACK response-delay (i.e., both min_nack_response_delay and max_nack_response_delay are zero) will resend over multicast only if the threshold is 1.
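As a sketch, the threshold and the (necessarily non-zero) NACK response delay might be configured together in an XML QoS profile; the values below are illustrative assumptions:

```xml
<datawriter_qos>
  <protocol>
    <rtps_reliable_writer>
      <!-- Resend over multicast only if at least 5 readers in the
           group NACK within one response-delay window; otherwise
           repair over unicast -->
      <multicast_resend_threshold>5</multicast_resend_threshold>
      <!-- A non-zero max response delay lets near-simultaneous NACKs
           accumulate toward the threshold -->
      <min_nack_response_delay>
        <sec>0</sec><nanosec>0</nanosec>
      </min_nack_response_delay>
      <max_nack_response_delay>
        <sec>0</sec><nanosec>10000000</nanosec> <!-- 10 ms -->
      </max_nack_response_delay>
    </rtps_reliable_writer>
  </protocol>
</datawriter_qos>
```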

47.5.8 Example

For information on how to use the fields in Table 47.14 DDS_RtpsReliableWriterProtocol_t, see 32.4.4 Controlling Heartbeats and Retries with DataWriterProtocol QosPolicy.

The following describes a use case for when to change push_on_write to DDS_BOOLEAN_FALSE. Suppose you have a system in which the data packets being sent are very small, the data must be sent reliably, and the latency between the time data is sent and the time it is received is not an issue; however, the total network bandwidth between the DataWriter and DataReader applications is limited.

If the DataWriter sends a burst of data at a high rate, it may overwhelm the limited bandwidth of the network. If you allocate enough space for the DataWriter to store the data burst being sent (see 47.22 RESOURCE_LIMITS QosPolicy), you can use the push_on_write parameter of the DATA_WRITER_PROTOCOL QosPolicy to delay sending the data until the reliable DataReader asks for it.

By setting push_on_write to DDS_BOOLEAN_FALSE, no data is actually sent when write() is called on the DataWriter. Instead, the data is stored in the DataWriter’s send queue. Periodically, Connext sends heartbeats informing the DataReader about the data that is available. So, every heartbeat period, the DataReader realizes that the DataWriter has new data and sends an ACK/NACK asking for it.

When the DataWriter receives the ACK/NACK packet, it puts together a package of data, up to the size set by max_bytes_per_nack_response, to send to the DataReader. This method not only self-throttles the send rate, but also uses network bandwidth more efficiently by eliminating redundant packet headers when combining several small packets into one larger one. Note that the DataWriter will always send at least one sample.
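The use case above might be expressed as an XML QoS profile sketch; the queue depth and repair-package size are illustrative assumptions sized to the expected burst:

```xml
<datawriter_qos>
  <protocol>
    <!-- Hold samples in the send queue instead of pushing on write() -->
    <push_on_write>false</push_on_write>
    <rtps_reliable_writer>
      <!-- Bound each repair package to limit bandwidth bursts -->
      <max_bytes_per_nack_response>32768</max_bytes_per_nack_response>
    </rtps_reliable_writer>
  </protocol>
  <!-- The send queue must be large enough to absorb the data burst -->
  <resource_limits>
    <max_samples>4096</max_samples>
  </resource_limits>
</datawriter_qos>
```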

47.5.9 Properties

This QosPolicy cannot be modified after the DataWriter is created.

Since it is only for DataWriters, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.

When setting the fields in this policy, the following rules apply. If any of these are false, Connext returns DDS_RETCODE_INCONSISTENT_POLICY:

  • min_nack_response_delay <= max_nack_response_delay
  • fast_heartbeat_period <= heartbeat_period
  • late_joiner_heartbeat_period <= heartbeat_period
  • low_watermark < high_watermark
  • If batching is disabled:
    • heartbeats_per_max_samples <= writer_qos.resource_limits.max_samples
  • If batching is enabled:
    • heartbeats_per_max_samples <= writer_qos.writer_resource_limits.max_batches

47.5.10 Related QosPolicies

47.5.11 Applicable DDS Entities

47.5.12 System Resource Considerations

A high max_bytes_per_nack_response may increase the instantaneous network bandwidth required to send a single burst of traffic for resending dropped packets.