48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension)

The DATA_READER_RESOURCE_LIMITS QosPolicy extends your control over the memory allocated by Connext for DataReaders beyond what is offered by the 47.22 RESOURCE_LIMITS QosPolicy. RESOURCE_LIMITS controls memory allocation with respect to the DataReader itself: the number of DDS samples that it can store in the receive queue and the number of instances that it can manage simultaneously. DATA_READER_RESOURCE_LIMITS controls memory allocation on a per matched-DataWriter basis. The two are orthogonal.

This policy includes the members in Table 48.3 DDS_DataReaderResourceLimitsQosPolicy. For defaults and valid ranges, please refer to the API Reference HTML documentation.

Table 48.3 DDS_DataReaderResourceLimitsQosPolicy

  • max_remote_writers (DDS_Long)
    Maximum number of DataWriters from which a DataReader may receive DDS data samples, among all instances.
    For unkeyed Topics, max_remote_writers must equal max_remote_writers_per_instance.

  • max_remote_writers_per_instance (DDS_Long)
    Maximum number of DataWriters from which a DataReader may receive DDS data samples for a single instance.
    For unkeyed Topics, max_remote_writers must equal max_remote_writers_per_instance.

  • max_samples_per_remote_writer (DDS_Long)
    Maximum number of DDS samples received out-of-order that a DataReader can store from a single reliable DataWriter.
    max_samples_per_remote_writer must be <= RESOURCE_LIMITS::max_samples.

  • max_infos (DDS_Long)
    Maximum number of DDS_SampleInfo structures that a DataReader can allocate.
    max_infos must be >= RESOURCE_LIMITS::max_samples.

  • initial_remote_writers (DDS_Long)
    Initial number of DataWriters from which a DataReader may receive DDS data samples, including all instances.
    For unkeyed Topics, initial_remote_writers must equal initial_remote_writers_per_instance.

  • initial_remote_writers_per_instance (DDS_Long)
    Initial number of DataWriters from which a DataReader may receive DDS data samples for a single instance.
    For unkeyed Topics, initial_remote_writers must equal initial_remote_writers_per_instance.

  • initial_infos (DDS_Long)
    Initial number of DDS_SampleInfo structures that a DataReader will allocate.

  • initial_outstanding_reads (DDS_Long)
    Initial number of times memory can be concurrently loaned via read/take calls without being returned with return_loan().

  • max_outstanding_reads (DDS_Long)
    Maximum number of times memory can be concurrently loaned via read/take calls without being returned with return_loan().

  • max_samples_per_read (DDS_Long)
    Maximum number of DDS samples that can be read/taken on a DataReader.

  • disable_fragmentation_support (DDS_Boolean)
    Determines whether the DataReader can receive fragmented DDS samples.
    When fragmentation support is not needed, disabling fragmentation support will save some memory resources.

  • max_fragmented_samples (DDS_Long)
    The maximum number of DDS samples for which the DataReader may store fragments at a given point in time.
    At any given time, a DataReader may store fragments for up to max_fragmented_samples DDS samples while waiting for the remaining fragments. These DDS samples need not have consecutive sequence numbers and may have been sent by different DataWriters. Once all fragments of a DDS sample have been received, the DDS sample is treated as a regular DDS sample and becomes subject to standard QoS settings, such as max_samples. Connext will drop fragments if the max_fragmented_samples limit has been reached.
    For best-effort communication, Connext will accept a fragment for a new DDS sample, but drop the oldest fragmented DDS sample from the same remote writer.
    For reliable communication, Connext will drop fragments for any new DDS samples until all fragments for at least one older DDS sample from that writer have been received.
    Only applies if disable_fragmentation_support is FALSE.

  • initial_fragmented_samples (DDS_Long)
    The initial number of DDS samples for which a DataReader may store fragments.
    Only applies if disable_fragmentation_support is FALSE.

  • max_fragmented_samples_per_remote_writer (DDS_Long)
    The maximum number of DDS samples per remote writer for which a DataReader may store fragments. This is a logical limit, so a single remote writer cannot consume all available resources.
    Only applies if disable_fragmentation_support is FALSE.

  • max_fragments_per_sample (DDS_Long)
    Maximum number of fragments for a single DDS sample.
    Only applies if disable_fragmentation_support is FALSE.

  • dynamically_allocate_fragmented_samples (DDS_Boolean)
    By default, the middleware does not allocate memory upfront, but instead allocates memory from the heap upon receiving the first fragment of a new sample. The amount of memory allocated equals the amount of memory needed to store all fragments in the sample. Once all fragments of a sample have been received, the sample is deserialized and stored in the regular receive queue. At that time, the dynamically allocated memory is freed again.
    This QoS setting is useful for large, but variable-sized data types where up-front memory allocation for multiple samples based on the maximum possible sample size may be expensive. The main disadvantage of not pre-allocating memory is that one can no longer guarantee the middleware will have sufficient resources at run time.
    If dynamically_allocate_fragmented_samples is FALSE, the middleware will allocate memory up-front for storing fragments for up to initial_fragmented_samples samples. This memory may grow up to max_fragmented_samples if needed.
    Only applies if disable_fragmentation_support is FALSE.

  • max_total_instances (DDS_Long)
    Maximum number of instances (attached plus detached instances) for which a DataReader will keep state. Only applicable if keep_minimum_state_for_instances is TRUE.
    See 48.2.1 max_total_instances and max_instances.

  • instance_replacement (DDS_DataReaderResourceLimitsInstanceReplacementSettings)
    Sets the kinds of instances allowed to be replaced for each instance state when a DataReader reaches max_instances in the 47.22 RESOURCE_LIMITS QosPolicy. See 48.2.3 Configuring DataReader Instance Replacement.

  • max_remote_virtual_writers (DDS_Long)
    The maximum number of virtual writers (identified by a virtual GUID) from which a DataReader may read, including all instances.
    When the Subscriber’s access_scope is GROUP, this value determines the maximum number of DataWriter groups supported by the Subscriber. Since the Subscriber may contain more than one DataReader, only the setting of the first applies.

  • initial_remote_virtual_writers (DDS_Long)
    The initial number of virtual writers from which a DataReader may read, including all instances.

  • max_remote_virtual_writers_per_instance (DDS_Long)
    Maximum number of virtual remote writers that can be associated with an instance.
    For unkeyed types, this value is ignored.
    The Durable Reader State and MultiChannel DataWriter features, as well as Persistence Service, require Connext to keep some internal state per virtual writer and instance; this state is used to filter duplicate DDS samples. These duplicate DDS samples could be coming from different DataWriter channels or from multiple executions of Persistence Service.
    Once an association between a remote virtual writer and an instance is established, it is permanent: it will not disappear even if the physical writer incarnating the virtual writer is destroyed.
    If max_remote_virtual_writers_per_instance is exceeded for an instance, Connext will not associate this instance with new virtual writers. Duplicate DDS samples coming from these virtual writers will not be filtered on the reader.
    If you are not using Durable Reader State, MultiChannel DataWriters, or Persistence Service, you can set this value to 1 to optimize resources.
    For additional information about virtual writers, see Chapter 21 Mechanisms for Achieving Information Durability and Persistence.

  • initial_remote_virtual_writers_per_instance (DDS_Long)
    Initial number of virtual remote writers per instance.
    For unkeyed types, this value is ignored.

  • max_remote_writers_per_sample (DDS_Long)
    Maximum number of remote writers that are allowed to write the same DDS sample.
    One scenario in which two DataWriters may write the same DDS sample is when using Persistence Service: the DataReader may receive the same DDS sample from the original DataWriter and from a Persistence Service DataWriter.

  • max_query_condition_filters (DDS_Long)
    This value determines the maximum number of unique query condition content filters that a reader may create.
    Each query condition content filter is composed of both its query_expression and query_parameters. Two query conditions that have the same query_expression will require unique query condition filters if their query_parameters differ. Query conditions that differ only in their state masks will share the same query condition filter.

  • max_app_ack_response_length (DDS_Long)
    The maximum length of response data in an application-level acknowledgment.
    When set to zero, no response data is sent with application-level acknowledgments.

  • keep_minimum_state_for_instances (DDS_Boolean)
    Determines whether the DataReader keeps a minimum instance state for up to max_total_instances. The minimum state is useful for filtering samples in certain scenarios. See 48.2.1 max_total_instances and max_instances.

  • initial_topic_queries (DDS_Long)
    The initial number of TopicQueries allocated by a DataReader.

  • max_topic_queries (DDS_Long)
    The maximum number of active TopicQueries that a DataReader can create. Once this limit is reached, a DataReader can create more TopicQueries only if it deletes some of the previously created ones.

  • shmem_ref_transfer_mode_attached_segment_allocation (DDS_AllocationSettings_t)
    Configures the allocation resource used to attach to different shared memory segments if you are using Zero Copy transfer over shared memory. See 34.1.5 Zero Copy Transfer Over Shared Memory.

A DataReader must allocate internal structures to handle: the maximum number of DataWriters that may connect to it; whether or not it handles data fragmentation and how many data fragments it may handle (for DDS data samples larger than the MTU of the underlying network transport); how many simultaneous outstanding loans of internal memory holding DDS data samples can be provided to user code; and other resources.

Most of these internal structures start at an initial size and, by default, will grow as needed by dynamically allocating additional memory. You may set fixed maximum sizes for these internal structures if you want to bound the amount of memory that can be used by a DataReader. Setting the initial size to the maximum size prevents Connext from dynamically allocating any memory after the DataReader is created.
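As a minimal sketch of that pattern in the traditional C++ API, the following preallocates all writer-tracking and SampleInfo resources by setting each initial value equal to its maximum. The numeric limits and the helper function name are illustrative only; the subscriber and topic are assumed to have been created elsewhere, and the DDS_DataReaderQos member holding this policy is assumed to be reader_resource_limits.

    #include "ndds/ndds_cpp.h"

    // Sketch: a DataReader whose writer-tracking and SampleInfo resources are
    // fully preallocated (initial == max), so no additional memory is
    // allocated after creation. 'subscriber' and 'topic' already exist.
    DDSDataReader* create_bounded_reader(DDSSubscriber* subscriber,
                                         DDSTopic* topic)
    {
        DDS_DataReaderQos reader_qos;
        if (subscriber->get_default_datareader_qos(reader_qos) != DDS_RETCODE_OK) {
            return NULL;
        }

        // Bound the receive queue itself (47.22 RESOURCE_LIMITS QosPolicy).
        reader_qos.resource_limits.initial_samples = 32;
        reader_qos.resource_limits.max_samples = 32;
        reader_qos.resource_limits.initial_instances = 8;
        reader_qos.resource_limits.max_instances = 8;
        reader_qos.resource_limits.max_samples_per_instance = 4;

        // Track at most 8 matched DataWriters; preallocate all of them.
        reader_qos.reader_resource_limits.initial_remote_writers = 8;
        reader_qos.reader_resource_limits.max_remote_writers = 8;
        reader_qos.reader_resource_limits.initial_remote_writers_per_instance = 8;
        reader_qos.reader_resource_limits.max_remote_writers_per_instance = 8;

        // Keep each reliable DataWriter within the shared receive queue.
        reader_qos.reader_resource_limits.max_samples_per_remote_writer = 32;

        // SampleInfo structures: max_infos must be >= max_samples.
        reader_qos.reader_resource_limits.initial_infos = 32;
        reader_qos.reader_resource_limits.max_infos = 32;

        // With initial == max everywhere, Connext will not grow these
        // structures over the DataReader's lifetime.
        return subscriber->create_datareader(
                topic, reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
    }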

This policy also controls how the allocated internal data structures may be used. For example, DataReaders need data structures to keep track of all of the DataWriters that may be sending them DDS data samples. The total number of DataWriters that a DataReader can keep track of is set by the initial_remote_writers and max_remote_writers values. For keyed Topics, initial_remote_writers_per_instance and max_remote_writers_per_instance control the number of DataWriters that are allowed to modify the value of a single instance.

By setting max_remote_writers_per_instance to be less than max_remote_writers, you can prevent instances with many DataWriters from using up the resources and starving other instances. Once the resources for keeping track of DataWriters are used up, the DataReader will not be able to accept “connections” from new DataWriters: data from new matching DataWriters will be ignored.

In the reliable protocol used by Connext to support a RELIABLE setting for the 47.21 RELIABILITY QosPolicy, the DataReader must temporarily store DDS data samples that have been received out-of-order from a reliable DataWriter. The storage of out-of-order DDS samples is allocated from the DataReader’s receive queue and shared among all reliable DataWriters. The parameter max_samples_per_remote_writer controls the maximum number of out-of-order DDS data samples that the DataReader is allowed to store for a single DataWriter. This value must not exceed the max_samples value set in the 47.22 RESOURCE_LIMITS QosPolicy.

max_samples_per_remote_writer allows Connext to share the limited resources of the DataReader equitably so that a single DataWriter is unable to use up all of the storage of the DataReader while missing DDS data samples are being resent.

When setting the values of the members, the following rules apply:

  • max_remote_writers >= initial_remote_writers
  • max_remote_writers_per_instance >= initial_remote_writers_per_instance
  • max_remote_writers_per_instance <= max_remote_writers
  • max_infos >= initial_infos
  • max_infos >= RESOURCE_LIMITS::max_samples
  • max_outstanding_reads >= initial_outstanding_reads
  • max_samples_per_remote_writer <= RESOURCE_LIMITS::max_samples

If any of the above are false, Connext returns the error code DDS_RETCODE_INCONSISTENT_POLICY when setting the DataReader’s QoS.
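As a sketch of how such an inconsistency surfaces in the traditional C++ API (the values are chosen only to trip the max_infos rule, and the helper function name is illustrative), DataReader creation fails when the policy values are not self-consistent:

    #include "ndds/ndds_cpp.h"
    #include <cstdio>

    // Sketch: a deliberately inconsistent combination is rejected at
    // DataReader creation time. 'subscriber' and 'topic' already exist.
    void show_inconsistent_policy(DDSSubscriber* subscriber, DDSTopic* topic)
    {
        DDS_DataReaderQos reader_qos;
        subscriber->get_default_datareader_qos(reader_qos);

        reader_qos.resource_limits.max_samples = 64;
        reader_qos.resource_limits.max_samples_per_instance = 64;
        reader_qos.reader_resource_limits.max_samples_per_remote_writer = 64;
        reader_qos.reader_resource_limits.max_infos = 32;  // violates max_infos >= max_samples

        DDSDataReader* reader = subscriber->create_datareader(
                topic, reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
        if (reader == NULL) {
            // Creation failed because the policy values are not self-consistent
            // (DDS_RETCODE_INCONSISTENT_POLICY).
            std::printf("DataReader creation rejected: inconsistent QoS\n");
        }
    }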

48.2.1 max_total_instances and max_instances

The features 21.4 Durable Reader State, Chapter 36 Multi-Channel DataWriters for High-Performance Filtering, and Persistence Service (Part 12: RTI Persistence Service) require Connext to keep some minimum internal state even for instances that have no DataWriters, have no DDS samples in the DataReader’s queue, or have been purged due to a dispose. Instances for which only this minimum state is kept are called detached instances. The additional state is used to filter duplicate DDS samples that could be coming from different DataWriter channels or from multiple executions of Persistence Service. The total maximum number of instances that will be managed by the middleware, attached plus detached instances, is determined by max_total_instances. This additional state is kept for up to max_total_instances only if keep_minimum_state_for_instances is TRUE; otherwise, the additional state is not kept for any instances. The minimum state includes information such as the source timestamp of the last sample received for the instance and the last sequence number received from a virtual GUID. See also 40.8.7 Active State and Minimum State.

48.2.2 keep_minimum_state_for_instances

There are important implications of the minimum state setting.

When a DataReader is exposed to an unbounded number of instances over its lifetime (for example, if the key for an instance is a UUID and the application cycles through an unlimited number of such UUIDs over time) and the DataReader does keep its minimum state, the set of minimum state data will grow with the total number of instances (unique keys) the DataReader has been exposed to, until max_total_instances is reached.

max_total_instances by default gets its value from max_instances. If max_instances is set to its default value, which is unbounded, the DataReader’s memory will grow slowly but without bound until the DataReader itself is deleted. As a rule of thumb, when instances are used only once in a system and are never used again after being disposed or unregistered, set max_instances and max_total_instances to finite values or bound the lifetime of the DataReader (see 15.1 Creating and Deleting DDS Entities). If neither of these options is practical, it may help to set keep_minimum_state_for_instances to FALSE.
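A sketch of the first option in the traditional C++ API follows; the limits are illustrative, the helper function name is hypothetical, and the subscriber and topic are assumed to exist already.

    #include "ndds/ndds_cpp.h"

    // Sketch: bound both the resident-instance limit and the total
    // (attached + detached) instance limit so minimum-state memory cannot
    // grow without bound. 'subscriber' and 'topic' already exist.
    DDSDataReader* create_instance_bounded_reader(DDSSubscriber* subscriber,
                                                  DDSTopic* topic)
    {
        DDS_DataReaderQos reader_qos;
        subscriber->get_default_datareader_qos(reader_qos);

        // At most 1000 instances resident in the DataReader queue.
        reader_qos.resource_limits.max_instances = 1000;

        // At most 2000 instances (attached + detached) for which any state,
        // including the minimum state, is kept.
        reader_qos.reader_resource_limits.max_total_instances = 2000;

        // If neither finite limit is practical, the alternative is to keep
        // no minimum state at all for detached instances:
        // reader_qos.reader_resource_limits.keep_minimum_state_for_instances =
        //         DDS_BOOLEAN_FALSE;

        return subscriber->create_datareader(
                topic, reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
    }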

If a DataReader does not retain this minimum state, there may be correctness implications if the DataReader is exposed to an instance again after it has been removed from the DataReader cache. For example, because the last source timestamp is not preserved, eventual consistency cannot be assured (even if destination order is by source timestamp). Samples that had already been received by the DataReader may be re-delivered and provided to the application again as if for the first time (especially when using redundant Routing Service routes, Persistence Service, or Collaborative DataWriters). As a rule of thumb, when instances have complex lifecycles (especially involving multiple DataWriters modifying the instance), in which an instance can become not alive and later come alive again, set keep_minimum_state_for_instances to TRUE.

48.2.3 Configuring DataReader Instance Replacement

When the max_instances limit in the 47.22 RESOURCE_LIMITS QosPolicy is reached, a DataReader will try to make space for a new instance by replacing an existing instance according to the instance replacement kind set in instance_replacement in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension). If it cannot make space for the new instance, the sample for the new instance will be lost with the reason LOST_BY_INSTANCES_LIMIT (see 40.7.7 SAMPLE_LOST Status).

The instance_replacement field is useful for managing large volumes of instances that come and go. Setting an upper limit on the resources used for instances helps an application avoid degraded performance and, potentially, exhausting system resources. The instance_replacement setting provides that bound by letting DataReaders make room for new instances by replacing older ones. For example, a hospital may have 100 beds. Many patients (instances) come and go, so at any given time you only need resources for 100 instances, but over time you will see an unbounded number of instances. An instance replacement policy can help manage this flow.

For each instance state (see 40.8 Accessing and Managing Instances (Working with Keyed Data Types)), you can set the following removal kinds:

  • The alive_instance_removal kind sets a removal policy for ALIVE instances (default: DDS_NO_INSTANCE_REMOVAL).
  • The disposed_instance_removal kind sets a removal policy for NOT_ALIVE_DISPOSED instances (default: DDS_EMPTY_INSTANCE_REMOVAL).
  • The no_writers_instance_removal kind sets a removal policy for NOT_ALIVE_NO_WRITERS instances (default: DDS_EMPTY_INSTANCE_REMOVAL).

For each instance state, you can choose among the following replacement kinds:

  • DDS_NO_INSTANCE_REMOVAL: Instances in the associated state cannot be replaced.
  • DDS_EMPTY_INSTANCE_REMOVAL: Instances in the associated state can be replaced only if they are empty (all samples have been taken or removed from the DataReader queue due to QoS settings such as, but not limited to, the 47.14 LIFESPAN QoS Policy or sample purging due to the 48.3 READER_DATA_LIFECYCLE QoS Policy), and there are no outstanding loans on any of the instance's samples.
  • DDS_FULLY_PROCESSED_INSTANCE_REMOVAL: Instances in the associated state can be replaced only if every sample has been processed by the application. A sample is considered processed by the application based on the Reliability kind:
    • If the Reliability kind is RELIABLE, a sample is considered processed by the application based on the ApplicationAcknowledgementKind (see 31.12.1 Application Acknowledgment Kinds):
      • PROTOCOL_ACKNOWLEDGMENT_MODE or APPLICATION_AUTO_ACKNOWLEDGMENT_MODE: The sample is considered processed when it has been read or taken by the application and return_loan has been called.
      • APPLICATION_EXPLICIT_ACKNOWLEDGMENT_MODE: The sample is considered processed when the subscribing application has explicitly acknowledged the sample by calling either the DataReader’s acknowledge_sample() or acknowledge_all() operations, the AppAckConf message has been received, and the application has called return_loan.
    • If the Reliability kind is BEST_EFFORT, a sample is considered processed by the application when it has been read or taken by the application and return_loan has been called.
  • DDS_ANY_INSTANCE_REMOVAL: Instances in the associated state can be replaced regardless of whether the subscribing application has processed all of the samples. Samples that have not been processed will be dropped and accounted for by the total_samples_dropped_by_instance_replacement statistic in the 40.7.2 DATA_READER_CACHE_STATUS.

For all kinds, instance replacement starts with the least-recently-updated (LRU) instance that matches the allowed criteria. For example, if alive_instance_removal is set to DDS_EMPTY_INSTANCE_REMOVAL: when the max_instances limit is reached, the least-recently-updated, empty, ALIVE instance will be replaced to make room for the new instance. An instance is considered updated when a valid sample or dispose sample for the instance is received and accepted by the DataReader. An instance is not considered updated in the following cases:

  • When using EXCLUSIVE ownership, samples received from DataWriters that do not own the instance do not count as updates; only the owner of an instance can update the instance.
  • A sample that is filtered out due to content filtering does not count as updating the instance.
  • Unregister messages do not count as an update to the instance because the unregister message conveys information about the DataWriter (that it is finished updating the instance), as opposed to any change to the instance itself.

There is no preference among the instance states as far as which instance is replaced first; instance replacement relies only on the LRU. For example, imagine if Connext were to prefer disposed_instance_removal over alive_instance_removal. It doesn't, but if it did, the application might never see disposed instances, yet have very old alive instances in its queue. The same is true for the replacement criteria options. If you choose DDS_FULLY_PROCESSED_INSTANCE_REMOVAL (for example), Connext will not look for empty instances first and then fully processed instances; the LRU instance that is considered fully-processed will be replaced.

If no replaceable instance exists after the instance replacement kinds above have been applied, the sample for the new instance will be considered lost with the reason LOST_BY_INSTANCES_LIMIT in the 40.7.7 SAMPLE_LOST Status; the instance will not be inserted into the DataReader queue.
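A minimal configuration sketch for the hospital-bed example above, using the traditional C++ API (the limit values and helper function name are illustrative, and the subscriber and topic are assumed to exist already):

    #include "ndds/ndds_cpp.h"

    // Sketch: cap the DataReader at 100 instances ("beds") and choose which
    // instances may be replaced when that limit is reached.
    // 'subscriber' and 'topic' already exist.
    DDSDataReader* create_replacement_reader(DDSSubscriber* subscriber,
                                             DDSTopic* topic)
    {
        DDS_DataReaderQos reader_qos;
        subscriber->get_default_datareader_qos(reader_qos);

        // The RESOURCE_LIMITS limit that triggers instance replacement.
        reader_qos.resource_limits.max_instances = 100;

        DDS_DataReaderResourceLimitsInstanceReplacementSettings& repl =
                reader_qos.reader_resource_limits.instance_replacement;

        // ALIVE instances may be replaced only after the application has
        // processed every sample for them.
        repl.alive_instance_removal = DDS_FULLY_PROCESSED_INSTANCE_REMOVAL;

        // NOT_ALIVE instances may be replaced once they are empty
        // (the documented defaults, repeated here for clarity).
        repl.disposed_instance_removal = DDS_EMPTY_INSTANCE_REMOVAL;
        repl.no_writers_instance_removal = DDS_EMPTY_INSTANCE_REMOVAL;

        return subscriber->create_datareader(
                topic, reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
    }

With this configuration, a new instance arriving while all 100 are in use replaces the least-recently-updated instance that satisfies its state's criterion; if none qualifies, the new sample is lost with reason LOST_BY_INSTANCES_LIMIT, as described above.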

48.2.4 Example

The max_samples_per_remote_writer value affects sharing and starvation. max_samples_per_remote_writer can be set to less than the RESOURCE_LIMITS QosPolicy’s max_samples to prevent a single DataWriter from starving others. This control is especially important for Topics that have their 47.17 OWNERSHIP QosPolicy set to SHARED.

In the case of EXCLUSIVE ownership, a lower-strength remote DataWriter can "starve" a higher-strength remote DataWriter by making use of more of the DataReader's resources, an undesirable condition. In the case of SHARED ownership, a remote DataWriter may starve another remote DataWriter, making the sharing not really equal.
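A sketch of this configuration in the traditional C++ API (values and helper function name are illustrative; the subscriber and topic are assumed to exist already): with a shared receive queue of 256 samples, capping each reliable DataWriter at 64 out-of-order samples keeps any one writer from monopolizing the queue while its missing samples are being repaired.

    #include "ndds/ndds_cpp.h"

    // Sketch: equitable sharing of the receive queue among matched
    // DataWriters under SHARED ownership. 'subscriber' and 'topic' already
    // exist.
    DDSDataReader* create_shared_ownership_reader(DDSSubscriber* subscriber,
                                                  DDSTopic* topic)
    {
        DDS_DataReaderQos reader_qos;
        subscriber->get_default_datareader_qos(reader_qos);

        reader_qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;
        reader_qos.ownership.kind = DDS_SHARED_OWNERSHIP_QOS;

        // Receive queue shared by all matched DataWriters.
        reader_qos.resource_limits.max_samples = 256;
        reader_qos.resource_limits.max_samples_per_instance = 256;

        // No single reliable DataWriter may hold more than 64 out-of-order
        // samples (must be <= RESOURCE_LIMITS::max_samples).
        reader_qos.reader_resource_limits.max_samples_per_remote_writer = 64;

        return subscriber->create_datareader(
                topic, reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
    }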

48.2.5 Properties

This QosPolicy cannot be modified after the DataReader is created.

It applies only to DataReaders; since there is no corresponding policy on the DataWriter, there are no compatibility restrictions.

48.2.6 Related QosPolicies

  • 47.22 RESOURCE_LIMITS QosPolicy
  • 47.21 RELIABILITY QosPolicy
  • 47.17 OWNERSHIP QosPolicy
  • 48.3 READER_DATA_LIFECYCLE QoS Policy

48.2.7 Applicable DDS Entities

DataReaders.

48.2.8 System Resource Considerations

Increasing any of the “initial” values in this policy will increase the amount of memory allocated by Connext when a new DataReader is created. Increasing any of the “max” values will not affect the initial memory allocated for a new DataReader, but will affect how much additional memory may be allocated as needed over the DataReader’s lifetime.

Setting a max value greater than an initial value thus allows your application to use memory more dynamically and efficiently when its resource needs are not well known ahead of time. However, Connext may then dynamically allocate memory at run time in response to network communications.