The DATA_READER_RESOURCE_LIMITS QosPolicy extends your control over the memory allocated by Connext DDS for DataReaders beyond what is offered by the RESOURCE_LIMITS QosPolicy. RESOURCE_LIMITS controls memory allocation with respect to the DataReader itself: the number of DDS samples that it can store in the receive queue and the number of instances that it can manage simultaneously. DATA_READER_RESOURCE_LIMITS controls memory allocation on a per matched-DataWriter basis. The two are orthogonal.
This policy includes the members in the table below. For defaults and valid ranges, please refer to the API Reference HTML documentation.
Type | Field Name | Description
DDS_Long | max_remote_writers | Maximum number of DataWriters from which a DataReader may receive DDS data samples, among all instances. For unkeyed Topics: max_remote_writers must equal max_remote_writers_per_instance.
DDS_Long | max_remote_writers_per_instance | Maximum number of DataWriters from which a DataReader may receive DDS data samples for a single instance. For unkeyed Topics: max_remote_writers must equal max_remote_writers_per_instance.
DDS_Long | max_samples_per_remote_writer | Maximum number of DDS samples received out of order that a DataReader can store from a single reliable DataWriter. max_samples_per_remote_writer must be <= RESOURCE_LIMITS::max_samples.
DDS_Long | max_infos | Maximum number of DDS_SampleInfo structures that a DataReader can allocate. max_infos must be >= RESOURCE_LIMITS::max_samples.
DDS_Long | initial_remote_writers | Initial number of DataWriters from which a DataReader may receive DDS data samples, including all instances. For unkeyed Topics: initial_remote_writers must equal initial_remote_writers_per_instance.
DDS_Long | initial_remote_writers_per_instance | Initial number of DataWriters from which a DataReader may receive DDS data samples for a single instance. For unkeyed Topics: initial_remote_writers must equal initial_remote_writers_per_instance.
DDS_Long | initial_infos | Initial number of DDS_SampleInfo structures that a DataReader will allocate.
DDS_Long | initial_outstanding_reads | Initial number of times memory can be concurrently loaned via read/take calls without being returned with return_loan().
DDS_Long | max_outstanding_reads | Maximum number of times memory can be concurrently loaned via read/take calls without being returned with return_loan().
DDS_Long | max_samples_per_read | Maximum number of DDS samples that can be read/taken on a DataReader in a single call.
DDS_Boolean | disable_fragmentation_support | Determines whether the DataReader can receive fragmented DDS samples. When fragmentation support is not needed, disabling it saves some memory resources.
DDS_Long | max_fragmented_samples | The maximum number of DDS samples for which the DataReader may store fragments at a given point in time. At any given time, a DataReader may store fragments for up to max_fragmented_samples DDS samples while waiting for the remaining fragments. These DDS samples need not have consecutive sequence numbers and may have been sent by different DataWriters. Once all fragments of a DDS sample have been received, the DDS sample is treated as a regular DDS sample and becomes subject to standard QoS settings, such as max_samples. Connext DDS will drop fragments if the max_fragmented_samples limit has been reached. For best-effort communication, Connext DDS will accept a fragment for a new DDS sample, but drop the oldest fragmented DDS sample from the same remote writer. For reliable communication, Connext DDS will drop fragments for any new DDS samples until all fragments for at least one older DDS sample from that writer have been received. Only applies if disable_fragmentation_support is FALSE.
DDS_Long | initial_fragmented_samples | The initial number of DDS samples for which a DataReader may store fragments. Only applies if disable_fragmentation_support is FALSE.
DDS_Long | max_fragmented_samples_per_remote_writer | The maximum number of DDS samples per remote writer for which a DataReader may store fragments. This is a logical limit, so that a single remote writer cannot consume all available resources. Only applies if disable_fragmentation_support is FALSE.
DDS_Long | max_fragments_per_sample | Maximum number of fragments for a single DDS sample. Only applies if disable_fragmentation_support is FALSE.
DDS_Boolean | dynamically_allocate_fragmented_samples | By default, the middleware does not allocate memory upfront, but instead allocates memory from the heap upon receiving the first fragment of a new sample. The amount of memory allocated equals the amount of memory needed to store all fragments in the sample. Once all fragments of a sample have been received, the sample is deserialized and stored in the regular receive queue. At that time, the dynamically allocated memory is freed again. This QoS setting is useful for large but variable-sized data types, where up-front memory allocation for multiple samples based on the maximum possible sample size may be expensive. The main disadvantage of not pre-allocating memory is that one can no longer guarantee the middleware will have sufficient resources at run time. If dynamically_allocate_fragmented_samples is FALSE, the middleware will allocate memory up front for storing fragments for up to initial_fragmented_samples samples. This memory may grow up to max_fragmented_samples if needed. Only applies if disable_fragmentation_support is FALSE.
DDS_Long | max_total_instances | Maximum number of instances for which a DataReader will keep state.
DDS_Long | max_remote_virtual_writers | The maximum number of virtual writers (identified by a virtual GUID) from which a DataReader may read, including all instances. When the Subscriber's access_scope is GROUP, this value determines the maximum number of DataWriter groups supported by the Subscriber. Since the Subscriber may contain more than one DataReader, only the setting of the first applies.
DDS_Long | initial_remote_virtual_writers | The initial number of virtual writers from which a DataReader may read, including all instances.
DDS_Long | max_remote_virtual_writers_per_instance | Maximum number of virtual remote writers that can be associated with an instance. For unkeyed types, this value is ignored. The features of Durable Reader State and MultiChannel DataWriters, as well as Persistence Service (included with the Connext DDS Professional, Evaluation, and Basic package types; it saves DDS data samples so they can be delivered to subscribing applications that join the system at a later time, see Introduction to RTI Persistence Service), require Connext DDS to keep some internal state per virtual writer and instance that is used to filter duplicate DDS samples. These duplicate DDS samples could be coming from different DataWriter channels or from multiple executions of Persistence Service. Once an association between a remote virtual writer and an instance is established, it is permanent: it will not disappear even if the physical writer incarnating the virtual writer is destroyed. If max_remote_virtual_writers_per_instance is exceeded for an instance, Connext DDS will not associate this instance with new virtual writers. Duplicate DDS samples coming from these virtual writers will not be filtered on the reader. If you are not using Durable Reader State, MultiChannel DataWriters, or Persistence Service, you can set this property to 1 to optimize resources. For additional information about virtual writers, see Mechanisms for Achieving Information Durability and Persistence.
DDS_Long | initial_remote_virtual_writers_per_instance | Initial number of virtual remote writers per instance. For unkeyed types, this value is ignored.
DDS_Long | max_remote_writers_per_sample | Maximum number of remote writers that are allowed to write the same DDS sample. One scenario in which two DataWriters may write the same DDS sample is when using Persistence Service. The DataReader may receive the same DDS sample from the original DataWriter and from a Persistence Service DataWriter.
DDS_Long | max_query_condition_filters | This value determines the maximum number of unique query condition content filters that a reader may create. Each query condition content filter is comprised of both its query_expression and query_parameters. Two query conditions that have the same query_expression will require unique query condition filters if their query_parameters differ. Query conditions that differ only in their state masks will share the same query condition filter.
DDS_Long | max_app_ack_response_length | The maximum length of response data in an application-level acknowledgment. When set to zero, no response data is sent with application-level acknowledgments.
DDS_Boolean | keep_minimum_state_for_instances | Determines whether the DataReader keeps a minimum instance state for up to max_total_instances. The minimum state is useful for filtering samples in certain scenarios. See max_total_instances and max_instances.
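As an illustrative sketch of how the fragmentation-related members might be configured, the following XML QoS profile fragment pre-allocates and bounds fragment storage (the library and profile names, and all numeric values, are placeholders chosen for this example):

```xml
<dds>
  <qos_library name="ExampleLibrary">
    <qos_profile name="BoundedFragmentsProfile">
      <datareader_qos>
        <reader_resource_limits>
          <!-- Keep fragmentation support enabled -->
          <disable_fragmentation_support>false</disable_fragmentation_support>
          <!-- Pre-allocate fragment storage instead of allocating from the heap
               upon receiving the first fragment of a new sample -->
          <dynamically_allocate_fragmented_samples>false</dynamically_allocate_fragmented_samples>
          <initial_fragmented_samples>4</initial_fragmented_samples>
          <max_fragmented_samples>32</max_fragmented_samples>
          <!-- Logical per-writer limit, so one writer cannot consume all fragment storage -->
          <max_fragmented_samples_per_remote_writer>8</max_fragmented_samples_per_remote_writer>
          <max_fragments_per_sample>512</max_fragments_per_sample>
        </reader_resource_limits>
      </datareader_qos>
    </qos_profile>
  </qos_library>
</dds>
```

With these settings, memory for fragment reassembly is bounded at creation time rather than grown on demand, at the cost of sizing it for the worst case.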
A DataReader must allocate internal structures to handle: the maximum number of DataWriters that may connect to it; whether or not it handles data fragmentation and how many data fragments it may handle (for DDS data samples larger than the MTU of the underlying network transport); how many simultaneous outstanding loans of internal memory holding DDS data samples can be provided to user code; and more.
Most of these internal structures start at an initial size and, by default, will grow as needed by dynamically allocating additional memory. You may set fixed, maximum sizes for these internal structures if you want to bound the amount of memory that can be used by a DataReader. Setting the initial size to the maximum size will prevent Connext DDS from dynamically allocating any memory after the DataReader is created.
This policy also controls how the allocated internal data structure may be used. For example, DataReaders need data structures to keep track of all of the DataWriters that may be sending it DDS data samples. The total number of DataWriters that it can keep track of is set by the initial_remote_writers and max_remote_writers values. For keyed Topics, initial_remote_writers_per_instance and max_remote_writers_per_instance control the number of DataWriters allowed by the DataReader to modify the value of a single instance.
By setting max_remote_writers_per_instance to be less than max_remote_writers, you can prevent instances with many DataWriters from using up the resources and starving other instances. Once the resources for keeping track of DataWriters are used up, the DataReader will not be able to accept “connections” from new DataWriters; data sent by new matching DataWriters will be ignored.
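A sketch of bounding writer-tracking resources in an XML QoS profile (values are illustrative only, and the per-instance limit is deliberately smaller than the total):

```xml
<datareader_qos>
  <reader_resource_limits>
    <initial_remote_writers>4</initial_remote_writers>
    <max_remote_writers>64</max_remote_writers>
    <initial_remote_writers_per_instance>2</initial_remote_writers_per_instance>
    <!-- Less than max_remote_writers, so an instance with many
         DataWriters cannot starve the other instances -->
    <max_remote_writers_per_instance>8</max_remote_writers_per_instance>
  </reader_resource_limits>
</datareader_qos>
```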
In the reliable protocol used by Connext DDS to support a RELIABLE setting for the RELIABILITY QosPolicy, the DataReader must temporarily store DDS data samples that have been received out of order from a reliable DataWriter. The storage of out-of-order DDS samples is allocated from the DataReader’s receive queue and shared among all reliable DataWriters. The parameter max_samples_per_remote_writer controls the maximum number of out-of-order DDS data samples that the DataReader is allowed to store for a single DataWriter. This value must be no greater than the max_samples value set in the RESOURCE_LIMITS QosPolicy.
max_samples_per_remote_writer allows Connext DDS to share the limited resources of the DataReader equitably so that a single DataWriter is unable to use up all of the storage of the DataReader while missing DDS data samples are being resent.
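A minimal sketch of this relationship in an XML QoS profile, assuming a reliable DataReader (the numbers are placeholders; the per-writer value must not exceed RESOURCE_LIMITS::max_samples):

```xml
<datareader_qos>
  <reliability>
    <kind>RELIABLE_RELIABILITY_QOS</kind>
  </reliability>
  <resource_limits>
    <!-- Total receive-queue capacity shared by all reliable DataWriters -->
    <max_samples>256</max_samples>
  </resource_limits>
  <reader_resource_limits>
    <!-- Out-of-order storage any single reliable DataWriter may consume;
         must be <= RESOURCE_LIMITS::max_samples -->
    <max_samples_per_remote_writer>64</max_samples_per_remote_writer>
  </reader_resource_limits>
</datareader_qos>
```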
When setting the values of the members, the following rules apply:
- max_remote_writers >= max_remote_writers_per_instance
- RESOURCE_LIMITS::max_samples >= max_samples_per_remote_writer
- max_infos >= RESOURCE_LIMITS::max_samples
If any of the above are false, Connext DDS returns the error code DDS_RETCODE_INCONSISTENT_POLICY when setting the DataReader’s QoS.
The features Durable Reader State, Multi-channel DataWriters, and Persistence Service (Part 6: RTI Persistence Service) require Connext DDS to keep some internal state even for instances without DataWriters or DDS samples in the DataReader’s queue or that have been purged due to a dispose. The additional state is used to filter duplicate DDS samples that could be coming from different DataWriter channels or from multiple executions of Persistence Service. The total maximum number of instances that will be managed by the middleware, including instances without associated DataWriters or DDS samples or that have been purged due to a dispose, is determined by max_total_instances. This additional state will only be kept for up to max_total_instances if keep_minimum_state_for_instances is TRUE, otherwise the additional state will not be kept for any instances.
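As a hedged example of the instance-state members described above, this XML fragment keeps minimum state for a bounded number of instances and, assuming none of Durable Reader State, MultiChannel DataWriters, or Persistence Service is in use, reduces per-instance virtual-writer state to the minimum (all values are illustrative):

```xml
<datareader_qos>
  <reader_resource_limits>
    <!-- Keep minimum state (used for duplicate filtering)
         for up to max_total_instances instances -->
    <keep_minimum_state_for_instances>true</keep_minimum_state_for_instances>
    <max_total_instances>1024</max_total_instances>
    <!-- Safe to set to 1 only when Durable Reader State, MultiChannel
         DataWriters, and Persistence Service are not used -->
    <max_remote_virtual_writers_per_instance>1</max_remote_virtual_writers_per_instance>
  </reader_resource_limits>
</datareader_qos>
```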
The max_samples_per_remote_writer value affects sharing and starvation. max_samples_per_remote_writer can be set to less than the RESOURCE_LIMITS QosPolicy’s max_samples to prevent a single DataWriter from starving others. This control is especially important for Topics that have their OWNERSHIP QosPolicy set to SHARED.
In the case of EXCLUSIVE ownership, a lower-strength remote DataWriter can "starve" a higher-strength remote DataWriter by making use of more of the DataReader's resources, an undesirable condition. In the case of SHARED ownership, a remote DataWriter may starve another remote DataWriter, making the sharing not really equal.
This QosPolicy cannot be modified after the DataReader is created.
Since this policy applies only to DataReaders, there are no compatibility restrictions with respect to matching DataWriters.
Increasing any of the “initial” values in this policy will increase the amount of memory allocated by Connext DDS when a new DataReader is created. Increasing any of the “max” values will not affect the initial memory allocated for a new DataReader, but will affect how much additional memory may be allocated as needed over the DataReader’s lifetime.
Setting a max value greater than an initial value thus allows your application to use memory more dynamically and efficiently when the scale of the application is not well known ahead of time. The trade-off is that Connext DDS may dynamically allocate memory at run time in response to network communications.
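Conversely, to guarantee that no memory is allocated after the DataReader is created, each initial value can be set equal to its corresponding max. A sketch, with placeholder values:

```xml
<datareader_qos>
  <reader_resource_limits>
    <!-- initial == max: everything is allocated at creation time,
         and nothing grows afterwards -->
    <initial_remote_writers>16</initial_remote_writers>
    <max_remote_writers>16</max_remote_writers>
    <initial_infos>256</initial_infos>
    <max_infos>256</max_infos>
    <initial_outstanding_reads>2</initial_outstanding_reads>
    <max_outstanding_reads>2</max_outstanding_reads>
  </reader_resource_limits>
</datareader_qos>
```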
© 2016 RTI