47.9 DURABILITY QosPolicy

Because the publish-subscribe paradigm is connectionless, applications can create publications and subscriptions in any way they choose. As soon as a matching pair of DataWriter and DataReader exists, the data published by the DataWriter will be delivered to the DataReader. However, a DataWriter may publish data before a DataReader has been created. For example, a magazine publishes issues before you ever subscribe to it.

The DURABILITY QosPolicy controls whether or not, and how, published DDS samples are stored by the DataWriter application for DataReaders that are found after the DDS samples were initially written. DataReaders use this QoS to request DDS samples that were published before the DataReaders were created. The analogy is a new magazine subscriber asking for back issues that were published in the past. These are known as ‘historical’ DDS data samples. (Reliable DataReaders may wait for these historical DDS samples; see 40.5 Checking DataReader Status and StatusConditions.)

This QosPolicy can be used to help ensure that DataReaders get all data that was sent by DataWriters, regardless of when it was sent. This QosPolicy can increase system tolerance to failure conditions.

The 47.12 HISTORY QosPolicy controls how many samples the DataWriter stores for repair to currently matched DataReaders. The DURABILITY QosPolicy controls how many samples the DataWriter stores for sending to late-joining DataReaders (DataReaders that are found after the samples were initially written). See Figure 47.1: History Depth and Durability Depth.

See also Chapter 21 Mechanisms for Achieving Information Durability and Persistence.

This QosPolicy includes the members in Table 47.19 DDS_DurabilityQosPolicy and Table 47.20 DDS_PersistentStorageSettings_t. For default settings, please refer to the API Reference HTML documentation.

Table 47.19 DDS_DurabilityQosPolicy

Type

Field Name

Description

DDS_DurabilityQosPolicyKind

kind

(default) DDS_VOLATILE_DURABILITY_QOS:

Do not save or deliver historical DDS samples.

DDS_TRANSIENT_LOCAL_DURABILITY_QOS:

Save and deliver historical DDS samples if the DataWriter still exists.

DDS_TRANSIENT_DURABILITY_QOS:

Save and deliver historical DDS samples using Persistence Service to store samples in volatile memory.

DDS_PERSISTENT_DURABILITY_QOS:

Save and deliver historical DDS samples using Persistence Service to store samples in non-volatile memory.

DDS_Long

writer_depth

How many DDS samples are stored per instance by the DataWriter application for sending to late-joining DataReaders (DataReaders that are found after the DDS samples were initially written).

The default value, AUTO, makes this parameter equal to the History depth in the 47.12 HISTORY QosPolicy when the History kind is KEEP_LAST, or unlimited when the History kind is KEEP_ALL.

The writer_depth must be less than or equal to the History depth in the 47.12 HISTORY QosPolicy if the History kind is KEEP_LAST.

writer_depth applies only to non-volatile DataWriters (those for which the kind is DDS_TRANSIENT_LOCAL_DURABILITY_QOS, DDS_TRANSIENT_DURABILITY_QOS, or DDS_PERSISTENT_DURABILITY_QOS).

writer_depth set on the DataReader side will be ignored.

DDS_Boolean

direct_communication

Whether or not a TRANSIENT or PERSISTENT DataReader should receive DDS samples directly from a TRANSIENT or PERSISTENT DataWriter.

When TRUE (the default value), a TRANSIENT or PERSISTENT DataReader will receive DDS samples directly from the original DataWriter. The DataReader may also receive DDS samples from Persistence Service, but the duplicates will be filtered by the middleware.

When FALSE, a TRANSIENT or PERSISTENT DataReader will receive DDS samples only from the DataWriter created by Persistence Service. This ‘relay communication’ pattern provides a way to guarantee eventual consistency.

See 21.5.1 RTI Persistence Service.

This field only applies to DataReaders.

DDS_PersistentStorageSettings

storage_settings

Configures durable writer history and durable reader state using the fields in Table 47.20 DDS_PersistentStorageSettings_t. See also Chapter 21 Mechanisms for Achieving Information Durability and Persistence for more information.

By default, durable writer history and durable reader state are disabled. This means that a DataWriter will not persist its historical cache and a DataReader will not persist its state. To enable durable writer history and durable reader state, set enable in Table 47.20 DDS_PersistentStorageSettings_t to TRUE.

Information durability can be combined with required subscriptions in order to guarantee that DDS samples are delivered to a set of required subscriptions. For additional details on required subscriptions see 31.13 Required Subscriptions and 47.1 AVAILABILITY QosPolicy (DDS Extension).

A DataWriter will keep at most History.depth samples per instance until they are fully acknowledged. Samples outside of the Durability.writer_depth for an instance will be removed once they are fully acknowledged. Only the most recent Durability.writer_depth samples per instance will be kept by the DataWriter for delivery to late-joining non-volatile DataReaders.

When writer_depth is used in combination with batching, it acts as a minimum, rather than a maximum, number of samples kept per instance. Any batch that contains at least one sample within the writer_depth of its instance will be sent to late-joining DataReaders. As a result, a sent batch may also carry samples (from the same or other instances) that fall outside the writer_depth of the instance to which they belong. For example, if writer_depth is set to 1 and a batch with two samples for the same instance is written, then when a late-joining DataReader is discovered, the DataWriter will send the batch containing both samples to that DataReader.
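The retention rule described above can be sketched with a small model. This is illustrative Python, not the Connext API; the (instance, value) sample representation and function names are invented for the example:

```python
# Illustrative model of DURABILITY writer_depth (not the Connext API).
# Samples are (instance, value) pairs, written oldest first.

def late_joiner_samples(written, writer_depth):
    """Keep only the most recent `writer_depth` samples per instance,
    which is what a late-joining non-volatile DataReader receives."""
    kept, count = [], {}
    for sample in reversed(written):           # walk newest first
        instance = sample[0]
        if count.get(instance, 0) < writer_depth:
            count[instance] = count.get(instance, 0) + 1
            kept.append(sample)
    return kept[::-1]                          # restore oldest-first order

def late_joiner_batches(batches, writer_depth):
    """With batching, a whole batch is sent when ANY of its samples is
    within its instance's writer_depth, so extras may be delivered."""
    count, keep = {}, set()
    flat = [(i, s) for i, batch in enumerate(batches) for s in batch]
    for batch_index, (instance, _) in reversed(flat):
        if count.get(instance, 0) < writer_depth:
            count[instance] = count.get(instance, 0) + 1
            keep.add(batch_index)
    return [batches[i] for i in sorted(keep)]
```

With writer_depth = 1, a late joiner receives one sample per instance, but a two-sample batch for one instance is still delivered whole, matching the example in the text.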

Table 47.20 DDS_PersistentStorageSettings_t

Type

Field Name

Description

DDS_Boolean

enable

Enables durable writer history in a DataWriter and durable reader state in a DataReader.

When this field is set to TRUE, the persistent storage configuration set in DDS_PersistentStorageSettings_t will take precedence over the configuration set in the deprecated dds.data_writer.history.odbc_plugin.builtin.* and dds.data_reader.state.* properties described in 21.3.2 How To Configure Durable Writer History and 21.4.4 How To Configure a DataReader for Durable Reader State.

Default: FALSE

char*

file_name

File name where the durable writer history or durable reader state will be stored.

Setting this field to a value other than NULL is mandatory when enabling durable writer history or durable reader state. Connext uses SQLite to store the durable writer history and durable reader state.

If the file does not exist, it will be created. If the file exists and restore is set to TRUE, the durable writer history or durable reader state will be restored from the file. Otherwise, the file will be overwritten.

Important: When the file exists, the virtual_guid fields in the 48.1 DATA_READER_PROTOCOL QosPolicy (DDS Extension) and 47.5 DATA_WRITER_PROTOCOL QosPolicy (DDS Extension) will be set by Connext based on the file content. Any value you set for these fields will be ignored.

Default: NULL

char*

trace_file_name

Stores the SQL statements executed when loading and storing the durable writer history or durable reader state.

Setting this field to a value other than NULL enables tracing of the SQL statements executed when loading and storing the durable writer history or durable reader state.

Important: Enabling tracing will have a negative impact on performance. Use this feature only for debugging purposes.

Default: NULL

DDS_PersistentJournalKind

journal_kind

Sets the journal mode of the persistent storage. The rollback journal is used in SQLite to store the state of the persistent storage before a transaction is committed.

DDS_DELETE_PERSISTENT_JOURNAL

Deletes the rollback journal at the conclusion of each transaction.

DDS_TRUNCATE_PERSISTENT_JOURNAL

Commits transactions by truncating the rollback journal to zero-length instead of deleting it.

DDS_PERSIST_PERSISTENT_JOURNAL

Prevents the rollback journal from being deleted at the end of each transaction. Instead, the header of the journal is overwritten with zeros.

DDS_MEMORY_PERSISTENT_JOURNAL

Stores the rollback journal in volatile RAM. This saves disk I/O.

(default) DDS_WAL_PERSISTENT_JOURNAL

Uses a write-ahead log instead of a rollback journal to implement transactions.

DDS_OFF_PERSISTENT_JOURNAL

Completely disables the rollback journal. If the application crashes in the middle of a transaction when the OFF journaling mode is set, the persistent storage will very likely be corrupted.

DDS_PersistentSynchronizationKind

synchronization_kind

Determines the level of synchronization with the physical disk.

(default) DDS_NORMAL_PERSISTENT_SYNCHRONIZATION

Data (e.g., new sample) is written to disk at critical moments.

DDS_FULL_PERSISTENT_SYNCHRONIZATION

Data (e.g., new sample) is written to physical disk immediately.

DDS_OFF_PERSISTENT_SYNCHRONIZATION

No synchronization is enforced. Data will be written to physical disk when the operating system flushes its buffers.

DDS_Boolean

vacuum

Sets the auto-vacuum status of the storage.

When auto-vacuum is TRUE, the storage files will be compacted automatically with every transaction. When auto-vacuum is FALSE, after data is deleted from the storage files, the files remain the same size.

Default: TRUE

DDS_Boolean

restore

Indicates if the persisted writer history or reader state must be restored.

For a DataWriter, this field indicates whether or not the persisted writer history must be restored once the DataWriter is restarted. For a DataReader, this field indicates whether or not the persisted reader state must be restored once the DataReader is restarted.

Default: TRUE
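The documented interplay of file_name and restore can be sketched with SQLite, which Connext uses for this storage. This is a simplified illustration, not Connext's actual schema; the writer_history table and function name are invented for the example:

```python
# Sketch of the file_name/restore semantics (not Connext internals):
# if the file exists and restore is TRUE, previously persisted state is
# reloaded; otherwise the file is (re)created empty.
import os
import sqlite3

def open_durable_store(file_name, restore=True):
    # restore=False discards any previously persisted state (overwrite).
    if os.path.exists(file_name) and not restore:
        os.remove(file_name)
    # sqlite3.connect creates the file if it does not exist.
    conn = sqlite3.connect(file_name)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS writer_history "
        "(seq INTEGER PRIMARY KEY, payload BLOB)")
    conn.commit()
    return conn
```

Reopening the same file with restore=True preserves the persisted rows; reopening with restore=False starts from an empty store.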

DDS_AllocationSettings_t

writer_instance_cache_allocation

Configures the resource limits associated with the instance durable writer history cache, using the DDS_AllocationSettings_t structure.

This field only applies to DataWriters. To minimize the number of accesses to the persisted storage, Connext uses an instance cache. Do not confuse this limit with the initial and maximum number of instances that can be maintained by a DataWriter in persistent storage. These resource limits are configured using the max_instances and initial_instances fields in the 47.22 RESOURCE_LIMITS QosPolicy.

If writer_memory_state is set to TRUE, then the value of max_count in the DDS_AllocationSettings_t structure is set to DDS_LENGTH_UNLIMITED, overwriting any value you set. The incremental_count in the DDS_AllocationSettings_t structure is ignored.

Range:

  • max_count in interval [1, INT_MAX], DDS_LENGTH_AUTO, or DDS_LENGTH_UNLIMITED
  • initial_count in interval [1, INT_MAX] or DDS_LENGTH_AUTO

DDS_LENGTH_AUTO means that the value will be set to the equivalent value of the 47.22 RESOURCE_LIMITS QosPolicy.

Default:

  • max_count = DDS_LENGTH_AUTO (= DDS_ResourceLimitsQosPolicy::max_instances)
  • initial_count = DDS_LENGTH_AUTO (= DDS_ResourceLimitsQosPolicy::initial_instances)

DDS_AllocationSettings_t

writer_sample_cache_allocation

Configures the resource limits associated with the sample durable writer history cache, using the DDS_AllocationSettings_t structure.

This field only applies to DataWriters. To minimize the number of accesses to the persisted storage, Connext uses a sample cache. Do not confuse this limit with the initial and maximum number of samples that can be maintained by a DataWriter in persistent storage. These resource limits are configured using the max_samples and initial_samples fields in the 47.22 RESOURCE_LIMITS QosPolicy. The incremental_count in the DDS_AllocationSettings_t structure is ignored.

Range:

  • max_count in interval [1, INT_MAX], DDS_LENGTH_AUTO, or DDS_LENGTH_UNLIMITED
  • initial_count in interval [1, INT_MAX], or DDS_LENGTH_AUTO

DDS_LENGTH_AUTO means that the value will be set to the equivalent value of the 47.22 RESOURCE_LIMITS QosPolicy.

Default:

  • max_count = 32
  • initial_count = 32

DDS_Boolean

writer_memory_state

Determines how much state will be kept in memory by the durable writer history in order to avoid accessing the persistent storage on disk.

This field only applies to DataWriters. If this field is set to TRUE, then max_count in the DDS_AllocationSettings_t in the writer_instance_cache_allocation is set to DDS_LENGTH_UNLIMITED, overwriting any value you set. In addition, the durable writer history will keep a fixed state overhead per sample in memory. This mode provides the best durable writer history performance. However, the restore operation will be slower, and the maximum number of samples that the durable writer history can manage is limited by the available physical memory.

If this field is set to FALSE, all the state will be kept in the underlying database. In this mode, the maximum number of samples in the durable writer history is not limited by the physical memory available.

This field is always set to FALSE when the DataWriter is configured with an acknowledgment_kind in the 47.21 RELIABILITY QosPolicy set to DDS_APPLICATION_AUTO_ACKNOWLEDGMENT_MODE or DDS_APPLICATION_EXPLICIT_ACKNOWLEDGMENT_MODE, or an enable_required_subscriptions in the 47.1 AVAILABILITY QosPolicy (DDS Extension) set to TRUE.

Default: TRUE

DDS_UnsignedLong

reader_checkpoint_frequency

Controls how often the reader state is stored into the database.

This field only applies to DataReaders. A value of N means the state is stored once every N received and processed samples. The circumstances under which a DDS sample is considered “processed by the application” depend on the DataReader configuration; see 21.4 Durable Reader State.

A high value will provide better performance. However, if the DataReader is restarted, it may receive some duplicate samples.

Range: [1, 1000000]

Default: 1
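The performance/duplicates tradeoff can be stated precisely: with a checkpoint every N processed samples, a restarted DataReader may re-receive at most N - 1 samples. A toy calculation (not the Connext implementation):

```python
# Toy calculation for reader_checkpoint_frequency (not the Connext
# implementation): state is persisted every N processed samples, so a
# restart rolls the reader back to its last checkpoint.

def duplicates_after_restart(processed, checkpoint_frequency):
    """Samples re-received after a restart: those processed since the
    last persisted checkpoint (at most checkpoint_frequency - 1)."""
    last_checkpoint = (processed // checkpoint_frequency) * checkpoint_frequency
    return processed - last_checkpoint
```

With the default frequency of 1, a checkpoint follows every sample and no duplicates are possible, at the cost of one store per sample.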

47.9.1 Example

Suppose you have a DataWriter that sends data sporadically and its DURABILITY kind is set to VOLATILE. If a new DataReader joins the system, it won’t see any data until the next time that write() is called on the DataWriter. If you want the DataReader to receive any data that is valid, old or new, both sides should set their DURABILITY kind to TRANSIENT_LOCAL. This will ensure that the DataReader gets some of the previous DDS samples immediately after it is enabled.
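The contrast in this example can be mimicked with a toy in-memory model. This is illustrative Python, not the Connext API; the ToyWriter class is invented for the example:

```python
# Toy model (not the Connext API) contrasting VOLATILE and
# TRANSIENT_LOCAL: under VOLATILE a late-joining reader sees nothing
# until the next write, while under TRANSIENT_LOCAL it immediately
# receives the stored history.

VOLATILE, TRANSIENT_LOCAL = "VOLATILE", "TRANSIENT_LOCAL"

class ToyWriter:
    def __init__(self, kind, depth=1):
        self.kind, self.depth = kind, depth
        self.history, self.readers = [], []

    def write(self, sample):
        # Keep at most `depth` historical samples.
        self.history = (self.history + [sample])[-self.depth:]
        for reader in self.readers:
            reader.append(sample)

    def match(self, reader):
        self.readers.append(reader)
        if self.kind == TRANSIENT_LOCAL:   # deliver historical samples
            reader.extend(self.history)
```

A reader matched after a write sees nothing from a VOLATILE writer until the next write() call, but immediately receives the stored sample from a TRANSIENT_LOCAL writer.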

47.9.2 Properties

This QosPolicy cannot be modified after the Entity has been created.

The DataWriter and DataReader must use compatible settings for this QosPolicy. To be compatible, the DataWriter and DataReader must use one of the valid combinations shown in Table 47.21 Valid Combinations of Durability ‘kind’.

If this QosPolicy is found to be incompatible, the ON_OFFERED_INCOMPATIBLE_QOS and ON_REQUESTED_INCOMPATIBLE_QOS statuses will be modified and the corresponding Listeners called for the DataWriter and DataReader respectively.

Table 47.21 Valid Combinations of Durability ‘kind’

                        DataReader requests:
 DataWriter offers:     VOLATILE      TRANSIENT_LOCAL   TRANSIENT     PERSISTENT
 VOLATILE               compatible    incompatible      incompatible  incompatible
 TRANSIENT_LOCAL        compatible    compatible        incompatible  incompatible
 TRANSIENT              compatible    compatible        compatible    incompatible
 PERSISTENT             compatible    compatible        compatible    compatible
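The pattern in Table 47.21 follows the standard DDS request/offer rule: the kinds are ordered VOLATILE < TRANSIENT_LOCAL < TRANSIENT < PERSISTENT, and the match is compatible when the offered kind is at least as strong as the requested kind. A sketch:

```python
# Request/offer rule behind Table 47.21: compatible when the DataWriter
# offers a durability kind at least as strong as the DataReader requests.

DURABILITY_ORDER = ["VOLATILE", "TRANSIENT_LOCAL", "TRANSIENT", "PERSISTENT"]

def durability_compatible(offered, requested):
    return DURABILITY_ORDER.index(offered) >= DURABILITY_ORDER.index(requested)
```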

47.9.3 Related QosPolicies

47.9.4 Applicable Entities

47.9.5 System Resource Considerations

Using this policy with a setting other than VOLATILE will cause Connext to use CPU and network bandwidth to send old DDS samples to matching, newly discovered DataReaders. The actual amount of resources depends on the total size of data that needs to be sent.

The maximum number of DDS samples that will be kept on the DataWriter’s queue for late-joiners and/or required subscriptions is determined by max_samples in RESOURCE_LIMITS Qos Policy.

System Resource Considerations With Required Subscriptions

By default, when TRANSIENT_LOCAL durability is used in combination with required subscriptions, a DataWriter configured with KEEP_ALL in the 47.12 HISTORY QosPolicy will keep the DDS samples in its cache until they are acknowledged by all the required subscriptions. (For additional details, see 31.13 Required Subscriptions.) After the DDS samples are acknowledged by the required subscriptions, they will be marked as reclaimable, but they will not be purged from the DataWriter’s queue until the DataWriter needs those resources for new DDS samples. This may lead to inefficient resource utilization, especially when max_samples is high or even UNLIMITED.

The DataWriter’s behavior can be changed to purge DDS samples after they have been acknowledged by all the active/matching DataReaders and all the required subscriptions configured on the DataWriter. To do so, set the dds.data_writer.history.purge_samples_after_acknowledgment property to 1 (see 47.19 PROPERTY QosPolicy (DDS Extension)).