DATA_WRITER_PROTOCOL QosPolicy (DDS Extension)

Connext DDS uses a standard protocol for packet (user and meta data) exchange between applications. The DataWriterProtocol QosPolicy gives you control over configurable portions of the protocol, including the configuration of the reliable data delivery mechanism of the protocol on a per DataWriter basis.

These configuration parameters control timing and timeouts, and give you the ability to trade off between speed of data loss detection and repair, versus network and CPU bandwidth used to maintain reliability.

It is important to tune the reliability protocol on a per DataWriter basis to meet the requirements of the end-user application so that data can be sent between DataWriters and DataReaders in an efficient and optimal manner in the presence of data loss. You can also use this QosPolicy to control how Connext DDS responds to "slow" reliable DataReaders or ones that disconnect or are otherwise lost.

This policy includes the members presented in DDS_DataWriterProtocolQosPolicy and DDS_RtpsReliableWriterProtocol_t. For defaults and valid ranges, please refer to the API Reference HTML documentation.

For details on the reliability protocol used by Connext DDS, see Reliable Communications. See the RELIABILITY QosPolicy for more information on per-DataReader/DataWriter reliability configuration. The HISTORY QosPolicy and RESOURCE_LIMITS QosPolicy also play important roles in the DDS reliability protocol.

DDS_DataWriterProtocolQosPolicy

Type

Field Name

Description

DDS_GUID_t

virtual_guid

The virtual GUID (Global Unique Identifier) is used to uniquely identify the same DataWriter across multiple incarnations. In other words, this value allows Connext DDS to remember information about a DataWriter that may be deleted and then recreated.

Connext DDS uses the virtual GUID to associate a durable writer history to a DataWriter.

Persistence Service uses the virtual GUID to send DDS samples on behalf of the original DataWriter. (Persistence Service is included with the Connext DDS Professional, Evaluation, and Basic package types. It saves DDS data samples so they can be delivered to subscribing applications that join the system at a later time; see Introduction to RTI Persistence Service.)

A DataReader persists its state based on the virtual GUIDs of matching remote DataWriters.

For more information, see Durability and Persistence Based on Virtual GUIDs.

By default, Connext DDS will assign a virtual GUID automatically. If you want to restore the state of the durable writer history after a restart, you can retrieve the value of the writer's virtual GUID using the DataWriter’s get_qos() operation, and set the virtual GUID of the restarted DataWriter to the same value.

DDS_UnsignedLong

rtps_object_id

Determines the DataWriter’s RTPS object ID, according to the DDS-RTPS Interoperability Wire Protocol.

Only the last 3 bytes are used; the most significant byte is ignored.

The rtps_host_id, rtps_app_id, and rtps_instance_id in the WIRE_PROTOCOL QosPolicy (DDS Extension), together with the 3 least significant bytes of rtps_object_id and another byte assigned by Connext DDS to identify the entity type, form the BuiltinTopicKey in PublicationBuiltinTopicData.

DDS_Boolean

push_on_write

Controls when a DDS sample is sent after write() is called on a DataWriter. If TRUE, the DDS sample is sent immediately; if FALSE, the DDS sample is put in a queue until an ACK/NACK is received from a reliable DataReader.

DDS_Boolean

disable_positive_acks

Determines whether matching DataReaders send positive acknowledgements (ACKs) to the DataWriter.

When TRUE, the DataWriter will keep DDS samples in its queue for ACK-disabled readers for a minimum keep duration (see Disabling Positive Acknowledgements).

When strict reliability is not required, setting this to TRUE reduces overhead network traffic.

DDS_Boolean

disable_inline_keyhash

Controls whether or not the key-hash is propagated on the wire with DDS samples.

This field only applies to keyed writers.

Connext DDS associates a key-hash (an internal 16-byte representation) with each key.

When FALSE, the key-hash is sent on the wire with every data instance.

When TRUE, the key-hash is not sent on the wire (so the readers must compute the value using the received data).

If the reader is CPU bound, sending the key-hash on the wire may increase performance, because the reader does not have to compute the key-hash from the received data.

If the system is bandwidth bound, sending the key-hash on the wire may decrease performance, because it requires more bandwidth (16 more bytes per DDS sample).

Setting disable_inline_keyhash to TRUE is not compatible with using RTI Database Integration Service or RTI Recording Service.

DDS_Boolean

serialize_key_with_dispose

Controls whether or not the serialized key is propagated on the wire with dispose notifications.

This field only applies to keyed writers.

RTI recommends setting this field to TRUE if there are DataReaders with propagate_dispose_of_unregistered_instances (in the DATA_READER_PROTOCOL QosPolicy (DDS Extension)) also set to TRUE.

Important: When this field is TRUE, batching will not be compatible with RTI Data Distribution Service 4.3e, 4.4b, or 4.4c: the DataReaders will receive incorrect data and/or encounter deserialization errors.

DDS_Boolean

propagate_app_ack_with_no_response

Controls whether or not a DataWriter receives on_application_acknowledgment() notifications with an empty or invalid response.

When FALSE, on_application_acknowledgment() will not be invoked if the DDS sample being acknowledged has an empty or invalid response.

DDS_RtpsReliableWriterProtocol_t

rtps_reliable_writer

This structure includes the fields in DDS_RtpsReliableWriterProtocol_t.

 

DDS_RtpsReliableWriterProtocol_t

Type

Field Name

Description

DDS_Long

low_watermark

Queue levels that control when to switch between the regular and fast heartbeat rates (heartbeat_period and fast_heartbeat_period). See High and Low Watermarks.

high_watermark

DDS_Duration_t

heartbeat_period

Rates at which to send heartbeats to DataReaders with unacknowledged DDS samples. See Normal, Fast, and Late-Joiner Heartbeat Periods and How Often Heartbeats are Resent (heartbeat_period).

fast_heartbeat_period

late_joiner_heartbeat_period

DDS_Duration_t

virtual_heartbeat_period

The rate at which a reliable DataWriter sends virtual heartbeats. A virtual heartbeat informs the reliable DataReader about the range of DDS samples currently present for each virtual GUID in the reliable writer's queue. See Virtual Heartbeats.

DDS_Long

samples_per_virtual_heartbeat

The number of DDS samples that a reliable DataWriter must publish before sending a virtual heartbeat. See Virtual Heartbeats.

DDS_Long

max_heartbeat_retries

Maximum number of periodic heartbeats sent without receiving an ACK/NACK packet before marking a DataReader ‘inactive.’

When a DataReader has not acknowledged all the DDS samples the reliable DataWriter has sent to it, and max_heartbeat_retries number of periodic heartbeats have been sent without receiving any ACK/NACK packets in return, the DataReader will be marked as inactive (not alive) and be ignored until it resumes sending ACK/NACKs.

Note that piggyback heartbeats do not count towards this value.

See Controlling How Many Times Heartbeats are Resent (max_heartbeat_retries).

DDS_Boolean

inactivate_nonprogressing_readers

Allows the DataWriter to treat DataReaders that send successive non-progressing NACK packets as inactive.

See Treating Non-Progressing Readers as Inactive Readers (inactivate_nonprogressing_readers).

DDS_Long

heartbeats_per_max_samples

A piggyback heartbeat is sent every [current send-window size/heartbeats_per_max_samples] number of DDS samples written.

If set to zero, no piggyback heartbeat will be sent.

If the current send-window size is LENGTH_UNLIMITED, 100 million is assumed as the value in the calculation.

See Configuring the Send Window Size.

DDS_Duration_t

min_nack_response_delay

Minimum delay to respond to an ACK/NACK.

When a reliable DataWriter receives an ACK/NACK from a DataReader, the DataWriter can choose to delay a while before it sends repair DDS samples or a heartbeat. This sets the value of the minimum delay.

See Coping with Redundant Requests for Missing DDS Samples (max_nack_response_delay).

DDS_Duration_t

max_nack_response_delay

Maximum delay to respond to an ACK/NACK.

This sets the maximum delay between receiving an ACK/NACK and sending repair DDS samples or a heartbeat.

A longer wait can help prevent storms of repair packets if many DataReaders send NACKs at the same time. However, it delays the repair, and hence increases the latency of the communication.

See Coping with Redundant Requests for Missing DDS Samples (max_nack_response_delay).

DDS_Duration_t

nack_suppression_duration

How long consecutive NACKs are suppressed.

When a reliable DataWriter receives consecutive NACKs within a short duration, this may trigger the DataWriter to send redundant repair messages. This value sets the duration during which consecutive NACKs are ignored, thus preventing redundant repairs from being sent.

DDS_Long

max_bytes_per_nack_response

Maximum bytes in a repair package.

When a reliable DataWriter resends DDS samples, the total package size is limited to this value. Note: The reliable DataWriter will always send at least one sample.

See Controlling Packet Size for Resent DDS Samples (max_bytes_per_nack_response).

DDS_Duration_t

disable_positive_acks_min_sample_keep_duration

Minimum duration that a DDS sample will be kept in the DataWriter’s queue for ACK-disabled DataReaders.

See Disabling Positive Acknowledgements and Disabling Positive Acknowledgements (disable_positive_acks_min_sample_keep_duration).

disable_positive_acks_max_sample_keep_duration

Maximum duration that a DDS sample will be kept in the DataWriter’s queue for ACK-disabled readers.

DDS_Boolean

disable_positive_acks_enable_adaptive_sample_keep_duration

Enables automatic dynamic adjustment of the ‘keep duration’ in response to network congestion.

DDS_Long

disable_positive_acks_increase_sample_keep_duration_factor

When the ‘keep duration’ is dynamically controlled, the lengthening of the ‘keep duration’ is controlled by this factor, which is expressed as a percentage.

When the adaptive algorithm determines that the keep duration should be increased, this factor is multiplied with the current keep duration to get the new longer keep duration. For example, if the current keep duration is 20 milliseconds, using the default factor of 150% would result in a new keep duration of 30 milliseconds.

disable_positive_acks_decrease_sample_keep_duration_factor

When the ‘keep duration’ is dynamically controlled, the shortening of the ‘keep duration’ is controlled by this factor, which is expressed as a percentage.

When the adaptive algorithm determines that the keep duration should be decreased, this factor is multiplied with the current keep duration to get the new shorter keep duration. For example, if the current keep duration is 20 milliseconds, using the default factor of 95% would result in a new keep duration of 19 milliseconds.

DDS_Long

min_send_window_size

Minimum and maximum size for the window of outstanding DDS samples.

See Configuring the Send Window Size.

max_send_window_size

DDS_Long

send_window_decrease_factor

Scales the current send-window size down by this percentage to decrease the effective send rate in response to a received negative acknowledgement.

See Configuring the Send Window Size.

DDS_Boolean

enable_multicast_periodic_heartbeat

Controls whether or not periodic heartbeat messages are sent over multicast.

When enabled, if a reader has a multicast destination, the writer will send its periodic HEARTBEAT messages to that destination.

Otherwise, if not enabled or the reader does not have a multicast destination, the writer will send its periodic HEARTBEATs over unicast.

DDS_Long

multicast_resend_threshold

Sets the minimum number of requesting readers needed to trigger a multicast resend.

See Resending Over Multicast.

DDS_Long

send_window_increase_factor

Scales the current send-window size up by this percentage to increase the effective send-rate when a duration has passed without any received negative acknowledgements.

See Configuring the Send Window Size.

DDS_Duration_t

send_window_update_period

Period at which a DataWriter checks for received negative acknowledgements and conditionally increases the send-window size when none have been received.

See Configuring the Send Window Size.

High and Low Watermarks

When the number of unacknowledged DDS samples in the current send-window of a reliable DataWriter meets or exceeds high_watermark, the RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) will be changed appropriately, a listener callback will be triggered, and the DataWriter will start heartbeating its matched DataReaders at fast_heartbeat_period.

When the number of DDS samples meets or falls below low_watermark, the RELIABLE_WRITER_CACHE_CHANGED Status (DDS Extension) will be changed appropriately, a listener callback will be triggered, and the heartbeat rate will return to the "normal" rate (heartbeat_period).

Having both high and low watermarks (instead of one) helps prevent rapid flickering between the rates, which could happen if the number of DDS samples hovers near the cut-off point.

Increasing the high and low watermarks will make the DataWriters less aggressive about seeking acknowledgments for sent data, decreasing the size of traffic spikes but slowing performance.

Decreasing the watermarks will make the DataWriters more aggressive, increasing both network utilization and performance.

If batching is used, high_watermark and low_watermark refer to batches, not DDS samples.

When min_send_window_size and max_send_window_size are not equal, the low and high watermarks are scaled down linearly to stay within the current send-window size. The value provided by configuration corresponds to the high and low watermarks for the max_send_window_size.
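The watermark-based rate switching can be sketched as a small model. This is illustrative Python, not the RTI API; the function and parameter names are hypothetical:

```python
def heartbeat_rate(unacked, current_rate, low_watermark, high_watermark,
                   heartbeat_period, fast_heartbeat_period):
    """Pick the heartbeat period for the current number of unacknowledged
    samples. The low/high watermark pair provides hysteresis, so the rate
    does not flicker when the queue level hovers near a single cut-off."""
    if unacked >= high_watermark:
        return fast_heartbeat_period   # queue filling up: heartbeat faster
    if unacked <= low_watermark:
        return heartbeat_period        # queue drained: back to the normal rate
    return current_rate                # between watermarks: keep the current rate

# With low=10 and high=40 (periods in seconds):
rate = heartbeat_rate(45, 3.0, 10, 40, 3.0, 0.25)   # crosses high: fast rate
rate = heartbeat_rate(25, rate, 10, 40, 3.0, 0.25)  # between: stays fast
rate = heartbeat_rate(8, rate, 10, 40, 3.0, 0.25)   # at/below low: normal rate
```

Note how the middle call keeps the fast rate even though the queue has dropped below high_watermark; that persistence is exactly the flicker prevention that having two watermarks buys.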

Normal, Fast, and Late-Joiner Heartbeat Periods

The normal heartbeat_period is used until the number of DDS samples in the reliable DataWriter’s queue meets or exceeds high_watermark; then fast_heartbeat_period is used. Once the number of DDS samples meets or drops below low_watermark, the normal rate (heartbeat_period) is used again.

Decreasing fast_heartbeat_period increases the speed with which lost DDS samples are detected and repaired, but results in a larger surge of traffic when the DataWriter is waiting for acknowledgments.

Increasing heartbeat_period decreases the steady-state traffic on the wire, but may increase latency by decreasing the speed of repairs for lost packets when the writer does not have very many outstanding unacknowledged DDS samples.

Having two periodic heartbeat rates, and switching between them based on watermarks, allows the DataWriter to keep steady-state heartbeat traffic low while still aggressively seeking acknowledgments when many DDS samples are outstanding.

The late_joiner_heartbeat_period is used when a reliable DataReader joins after a reliable DataWriter (with non-volatile Durability) has begun publishing DDS samples. Once the late-joining DataReader has received all cached DDS samples, it will be serviced at the same rate as other reliable DataReaders.

Disabling Positive Acknowledgements

When strict reliable communication is not required, you can configure Connext DDS so that it does not send positive acknowledgements (ACKs). In this case, reliability is maintained solely based on negative acknowledgements (NACKs). The removal of ACK traffic may improve middleware performance. For example, when sending DDS samples over multicast, ACK-storms that previously may have hindered DataWriters and consumed overhead network bandwidth are now precluded.

By default, DataWriters and DataReaders are configured with positive ACKs enabled. To disable ACKs, set disable_positive_acks to TRUE in the DataWriter's DATA_WRITER_PROTOCOL QosPolicy, in the DataReader's DATA_READER_PROTOCOL QosPolicy (DDS Extension), or in both.

If ACKs are disabled, instead of the DataWriter holding a DDS sample in its send queue until all of its DataReaders have ACKed it, the DataWriter will hold a DDS sample for a configurable duration. This "keep-duration" starts when a DDS sample is written. When this time elapses, the DDS sample is logically considered acknowledged by its ACK-disabled readers.

The length of the "keep-duration" can be static or dynamic, depending on how rtps_reliable_writer.disable_positive_acks_enable_adaptive_sample_keep_duration is set.

Dynamic adjustment maximizes throughput and reliability in response to current network conditions: when the network is congested, durations are increased to decrease the effective send rate and relieve the congestion; when the network is not congested, durations are decreased to increase the send rate and maximize throughput.
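The increase/decrease factors from the table above (expressed as percentages) drive this adjustment. A minimal sketch of the dynamic, in illustrative Python (not the RTI API; the clamping bounds stand in for the configured min/max keep durations):

```python
def adjust_keep_duration(current, congested,
                         increase_factor=150, decrease_factor=95,
                         min_keep=0.001, max_keep=1.0):
    """Lengthen the keep duration under congestion, shorten it otherwise.
    Factors are percentages, mirroring the
    disable_positive_acks_increase/decrease_sample_keep_duration_factor
    fields; the result is clamped to the min/max keep durations."""
    factor = increase_factor if congested else decrease_factor
    return min(max(current * factor / 100.0, min_keep), max_keep)

# A 20 ms keep duration grows by 150% under congestion (to about 30 ms)
# and shrinks by 95% otherwise (to about 19 ms), as in the table's examples.
longer = adjust_keep_duration(0.020, congested=True)
shorter = adjust_keep_duration(0.020, congested=False)
```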

You should configure the minimum "keep-duration" to allow at least enough time for a possible NACK to be received and processed. When a DataWriter has both matching ACK-disabled and ACK-enabled DataReaders, it holds a DDS sample in its queue until all ACK-enabled DataReaders have ACKed it and the "keep-duration" has elapsed.
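With mixed reader types, the condition for retiring a sample from the queue can be sketched as follows (illustrative Python, not the RTI API):

```python
def can_retire_sample(acks_received, ack_enabled_readers,
                      elapsed_since_write, keep_duration):
    """A DDS sample may leave the send queue only when every ACK-enabled
    reader has positively acknowledged it AND the keep duration for the
    ACK-disabled readers has elapsed."""
    return (acks_received >= ack_enabled_readers
            and elapsed_since_write >= keep_duration)

# Three ACK-enabled readers, 50 ms keep duration (times in seconds):
can_retire_sample(3, 3, 0.06, 0.05)  # all ACKed, duration elapsed: True
can_retire_sample(2, 3, 0.06, 0.05)  # one ACK still missing: False
can_retire_sample(3, 3, 0.02, 0.05)  # keep duration not yet elapsed: False
```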

See also: Disabling Positive Acknowledgements (disable_positive_acks_min_sample_keep_duration).

Configuring the Send Window Size

When a reliable DataWriter writes a DDS sample, it keeps the DDS sample in its queue until it has received acknowledgements from all of its subscribing DataReaders. The number of these outstanding DDS samples is referred to as the DataWriter's "send window." Once the number of outstanding DDS samples has reached the send window size, subsequent writes will block until an outstanding DDS sample is acknowledged.

Configuration of the send window sets a minimum and maximum size, which may be unlimited. The min and max send windows can be the same. When set differently, the send window will dynamically change in response to detected network congestion, as signaled by received negative acknowledgements. When NACKs are received, the DataWriter responds to the slowed reader by decreasing the send window by the send_window_decrease_factor to throttle down its effective send rate. The send window will not be decreased to less than the min_send_window_size. After a period (send_window_update_period) during which no NACKs are received, indicating that the reader is catching up, the DataWriter will increase the send window size to increase the effective send rate by the percentage specified by send_window_increase_factor. The send window will increase to no greater than the max_send_window_size.

When both min_send_window_size and max_send_window_size are unlimited, either the resource limits max_samples in RESOURCE_LIMITS QosPolicy (for non-batching) or max_batches in DATA_WRITER_RESOURCE_LIMITS QosPolicy (DDS Extension) (for batching) serves as the effective max_send_window_size.

When either max_samples (for non-batching) or max_batches (for batching) is less than max_send_window_size, it serves as the effective max_send_window_size. If it is also less than min_send_window_size, then effectively both min and max send-window sizes are equal to max_samples or max_batches.
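The send-window dynamics described above can be modeled in a few lines. This is illustrative Python, not the RTI API, and the factor values are arbitrary example percentages, not documented defaults:

```python
def update_send_window(window, nack_received,
                       decrease_factor=50, increase_factor=105,
                       min_window=32, max_window=256):
    """Shrink the send window by decrease_factor (a percentage) when a NACK
    arrives; grow it by increase_factor after an update period with no
    NACKs. Either move is clamped to the min/max send-window sizes."""
    factor = decrease_factor if nack_received else increase_factor
    return min(max(window * factor // 100, min_window), max_window)

w = update_send_window(256, nack_received=True)   # congestion: 256 -> 128
w = update_send_window(w, nack_received=True)     # still congested: 128 -> 64
w = update_send_window(w, nack_received=False)    # quiet period: 64 -> 67
```

Setting min_window equal to max_window in this model reproduces the static case, in which the window never moves regardless of NACK activity.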

Propagating Serialized Keys with Disposed-Instance Notifications

This section describes the interaction between two fields: serialize_key_with_dispose (in this QosPolicy) and propagate_dispose_of_unregistered_instances (in the DATA_READER_PROTOCOL QosPolicy (DDS Extension)).

RTI recommends setting serialize_key_with_dispose to TRUE if there are DataReaders with propagate_dispose_of_unregistered_instances also set to TRUE. However, it is permissible to set one to TRUE and the other to FALSE.

See also: Disposing of Data.

Virtual Heartbeats

Virtual heartbeats announce the availability of DDS samples with the Collaborative DataWriters feature described in DATA_READER_PROTOCOL QosPolicy (DDS Extension), where multiple DataWriters publish DDS samples from a common logical data-source (identified by a virtual GUID).

When PRESENTATION QosPolicy access_scope is set to TOPIC or INSTANCE on the Publisher, the virtual heartbeat contains information about the DDS samples contained in the DataWriter queue.

When presentation access_scope is set to GROUP on the Publisher, the virtual heartbeat contains information about the DDS samples in the queues of all DataWriters that belong to the Publisher.

Resending Over Multicast

Given DataReaders with multicast destinations, when a DataReader sends a NACK to request that DDS samples be resent, the DataWriter can resend them over either unicast or multicast. Although resending over multicast saves bandwidth and processing for the DataWriter, DataReaders in the multicast group that did not request any resends would still have to process, and then drop, the resent DDS samples.

Thus, to make each multicast resend more efficient, multicast_resend_threshold sets the minimum number of DataReaders in the same multicast group from which the DataWriter must receive NACKs within a single response-delay duration before resending over multicast. This allows the DataWriter to coalesce near-simultaneous unicast resends into a single multicast resend, and it requires the "votes" from DataReaders of a multicast group to exceed a threshold before resending over multicast.

The multicast_resend_threshold must be set to a positive value. Note that a threshold of 1 means that all resends will be sent over multicast. Also note that a DataWriter with a zero NACK response delay (i.e., both min_nack_response_delay and max_nack_response_delay are zero) will resend over multicast only if the threshold is 1.
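The resend decision can be summarized in a short sketch (illustrative Python, not the RTI API):

```python
def resend_destination(nack_count, has_multicast, threshold):
    """Choose the destination for a repair. nack_count is the number of
    readers in the same multicast group that NACKed within a single
    response-delay window; the resend goes over multicast only when that
    count reaches multicast_resend_threshold."""
    if has_multicast and nack_count >= threshold:
        return "multicast"
    return "unicast"

resend_destination(1, True, 1)   # threshold of 1: every resend is multicast
resend_destination(2, True, 3)   # not enough NACK "votes" yet: unicast
resend_destination(3, True, 3)   # threshold met: one multicast resend
resend_destination(5, False, 1)  # no multicast destination: always unicast
```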

Example

For information on how to use the fields in DDS_RtpsReliableWriterProtocol_t, see Controlling Heartbeats and Retries with DataWriterProtocol QosPolicy.

The following describes a use case in which you might change push_on_write to DDS_BOOLEAN_FALSE. Suppose you have a system in which the data packets being sent are very small. You want the data to be sent reliably, and the latency between the time data is sent and the time it is received is not an issue. However, the total network bandwidth between the DataWriter and DataReader applications is limited.

If the DataWriter sends a burst of data at a high rate, it is possible that it will overwhelm the limited bandwidth of the network. If you allocate enough space for the DataWriter to store the data burst being sent (see RESOURCE_LIMITS QosPolicy), then you can use the push_on_write parameter of the DATA_WRITER_PROTOCOL QosPolicy to delay sending the data until the reliable DataReader asks for it.

By setting push_on_write to DDS_BOOLEAN_FALSE, when write() is called on the DataWriter, no data is actually sent. Instead, the data is stored in the DataWriter’s send queue. Periodically, Connext DDS sends heartbeats informing the DataReader about the data that is available. So every heartbeat period, the DataReader realizes that the DataWriter has new data and sends an ACK/NACK asking for it.

When the DataWriter receives the ACK/NACK packet, it will put together a package of data, up to the size set by max_bytes_per_nack_response, to send to the DataReader. This method not only self-throttles the send rate, but also uses network bandwidth more efficiently by eliminating redundant packet headers when combining several small packets into one larger one. Note that the DataWriter will always send at least one sample.
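The packing of a repair response can be sketched as follows. This is an illustrative Python model of the documented size limit, not the RTI implementation:

```python
def pack_nack_response(sample_sizes, max_bytes):
    """Pack requested samples, in order, into one repair message without
    exceeding max_bytes (mirroring max_bytes_per_nack_response). At least
    one sample is always sent, even if it alone exceeds the limit."""
    packed, total = [], 0
    for size in sample_sizes:
        if packed and total + size > max_bytes:
            break  # adding this sample would exceed the budget
        packed.append(size)
        total += size
    return packed

pack_nack_response([400, 400, 400], 1000)  # only the first two fit: [400, 400]
pack_nack_response([2000], 1000)           # oversized single sample still sent: [2000]
```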

Properties

This QosPolicy cannot be modified after the DataWriter is created.

Since it is only for DataWriters, there are no compatibility restrictions for how it is set on the publishing and subscribing sides.

When setting the fields in this policy, internal consistency rules apply (for example, low_watermark must be less than high_watermark, and min_send_window_size must not exceed max_send_window_size). If any rule is violated, Connext DDS returns DDS_RETCODE_INCONSISTENT_POLICY.

Related QosPolicies

DATA_READER_PROTOCOL QosPolicy (DDS Extension)

HISTORY QosPolicy

RELIABILITY QosPolicy

RESOURCE_LIMITS QosPolicy

WIRE_PROTOCOL QosPolicy (DDS Extension)

Applicable DDS Entities

DataWriters

System Resource Considerations

A high max_bytes_per_nack_response may increase the instantaneous network bandwidth required to send a single burst of traffic for resending dropped packets.

© 2018 RTI