RTI Connext C API
Version 6.1.1
Various settings that configure how a DDS_DataReader allocates and uses physical memory for internal resources.
Data Fields

DDS_Long max_remote_writers
    The maximum number of remote writers from which a DDS_DataReader may read, including all instances.
DDS_Long max_remote_writers_per_instance
    The maximum number of remote writers from which a DDS_DataReader may read a single instance.
DDS_Long max_samples_per_remote_writer
    The maximum number of out-of-order samples from a given remote DDS_DataWriter that a DDS_DataReader may store when maintaining a reliable connection to the DDS_DataWriter.
DDS_Long max_infos
    The maximum number of info units that a DDS_DataReader can use to store DDS_SampleInfo.
DDS_Long initial_remote_writers
    The initial number of remote writers from which a DDS_DataReader may read, including all instances.
DDS_Long initial_remote_writers_per_instance
    The initial number of remote writers from which a DDS_DataReader may read a single instance.
DDS_Long initial_infos
    The initial number of info units that a DDS_DataReader can have, which are used to store DDS_SampleInfo.
DDS_Long initial_outstanding_reads
    The initial number of outstanding calls to read/take (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
DDS_Long max_outstanding_reads
    The maximum number of outstanding read/take calls (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
DDS_Long max_samples_per_read
    The maximum number of data samples that the application can receive from the middleware in a single call to FooDataReader_read or FooDataReader_take. If more data exists in the middleware, the application will need to issue multiple read/take calls.
DDS_Boolean disable_fragmentation_support
    Determines whether the DDS_DataReader can receive fragmented samples.
DDS_Long max_fragmented_samples
    The maximum number of samples for which the DDS_DataReader may store fragments at a given point in time.
DDS_Long initial_fragmented_samples
    The initial number of samples for which a DDS_DataReader may store fragments.
DDS_Long max_fragmented_samples_per_remote_writer
    The maximum number of samples per remote writer for which a DDS_DataReader may store fragments.
DDS_Long max_fragments_per_sample
    Maximum number of fragments for a single sample.
DDS_Boolean dynamically_allocate_fragmented_samples
    Determines whether the DDS_DataReader pre-allocates storage for storing fragmented samples.
DDS_Long max_total_instances
    Maximum number of instances for which a DataReader will keep state.
DDS_Long max_remote_virtual_writers
    The maximum number of remote virtual writers from which a DDS_DataReader may read, including all instances.
DDS_Long initial_remote_virtual_writers
    The initial number of remote virtual writers from which a DDS_DataReader may read, including all instances.
DDS_Long max_remote_virtual_writers_per_instance
    The maximum number of virtual remote writers that can be associated with an instance.
DDS_Long initial_remote_virtual_writers_per_instance
    The initial number of virtual remote writers per instance.
DDS_Long max_remote_writers_per_sample
    The maximum number of remote writers allowed to write the same sample.
DDS_Long max_query_condition_filters
    The maximum number of query condition filters a reader is allowed.
DDS_Long max_app_ack_response_length
    Maximum length of application-level acknowledgment response data.
DDS_Boolean keep_minimum_state_for_instances
    Whether or not to keep a minimum instance state for up to DDS_DataReaderResourceLimitsQosPolicy::max_total_instances instances.
DDS_Long initial_topic_queries
    The initial number of TopicQueries allocated by a DDS_DataReader.
DDS_Long max_topic_queries
    The maximum number of active TopicQueries that a DDS_DataReader can create.
struct DDS_AllocationSettings_t shmem_ref_transfer_mode_attached_segment_allocation
    Allocation resource for the shared memory segments attached by the DDS_DataReader.
struct DDS_DataReaderResourceLimitsInstanceReplacementSettings instance_replacement
    Sets the kind of instances allowed to be replaced for each instance state (DDS_InstanceStateKind) when a DataReader reaches DDS_ResourceLimitsQosPolicy::max_instances.
Various settings that configure how a DDS_DataReader allocates and uses physical memory for internal resources.
A DDS_DataReader must allocate internal structures to handle: the maximum number of DataWriters that may connect to it; whether or not the DDS_DataReader handles data fragmentation and, if so, how many data fragments it may handle (for data samples larger than the MTU of the underlying network transport); how many simultaneous outstanding loans of internal memory holding data samples can be provided to user code; and other resources.
Most of these internal structures start at an initial size and, by default, will grow as needed by dynamically allocating additional memory. You may set fixed maximum sizes for these internal structures if you want to bound the amount of memory that can be used by a DDS_DataReader. By setting the initial size to the maximum size, you prevent RTI Connext from dynamically allocating any memory after the creation of the DDS_DataReader.
This QoS policy is an extension to the DDS standard.
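For example, to bound the memory a DDS_DataReader can use, set the initial and maximum values of the relevant limits to the same finite value before creating the reader. The following is a minimal sketch of that pattern in the C API; the variables subscriber and topic and all numeric values are placeholders chosen for illustration, not recommended settings.

    /* Sketch: bound DataReader memory by fixing initial == max resource limits.
     * The numeric values are illustrative only. */
    struct DDS_DataReaderQos reader_qos = DDS_DataReaderQos_INITIALIZER;
    DDS_DataReader *reader = NULL;

    if (DDS_Subscriber_get_default_datareader_qos(subscriber, &reader_qos) !=
            DDS_RETCODE_OK) {
        /* handle error */
    }

    /* Fix the number of remote writers the reader will track. */
    reader_qos.reader_resource_limits.initial_remote_writers = 16;
    reader_qos.reader_resource_limits.max_remote_writers = 16;
    reader_qos.reader_resource_limits.initial_remote_writers_per_instance = 16;
    reader_qos.reader_resource_limits.max_remote_writers_per_instance = 16;

    /* Bound the sample queue itself (DDS_ResourceLimitsQosPolicy). */
    reader_qos.resource_limits.initial_samples = 64;
    reader_qos.resource_limits.max_samples = 64;

    /* Fix the number of DDS_SampleInfo units and outstanding read/take loans. */
    reader_qos.reader_resource_limits.initial_infos = 64;
    reader_qos.reader_resource_limits.max_infos = 64;
    reader_qos.reader_resource_limits.initial_outstanding_reads = 2;
    reader_qos.reader_resource_limits.max_outstanding_reads = 2;

    reader = DDS_Subscriber_create_datareader(
            subscriber, DDS_Topic_as_topicdescription(topic), &reader_qos,
            NULL /* listener */, DDS_STATUS_MASK_NONE);
    /* check reader != NULL, then finalize the local QoS copy */
    DDS_DataReaderQos_finalize(&reader_qos);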
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers
The maximum number of remote writers from which a DDS_DataReader may read, including all instances.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, >= initial_remote_writers, >= max_remote_writers_per_instance
For unkeyed types, this value has to be equal to max_remote_writers_per_instance if max_remote_writers_per_instance is not equal to DDS_LENGTH_UNLIMITED.
Note: For efficiency, set max_remote_writers >= DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers_per_instance.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers_per_instance
The maximum number of remote writers from which a DDS_DataReader may read a single instance.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1024] or DDS_LENGTH_UNLIMITED, <= max_remote_writers or DDS_LENGTH_UNLIMITED, >= initial_remote_writers_per_instance
For unkeyed types, this value has to be equal to max_remote_writers if it is not DDS_LENGTH_UNLIMITED.
Note: For efficiency, set max_remote_writers_per_instance <= DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_samples_per_remote_writer
The maximum number of out-of-order samples from a given remote DDS_DataWriter that a DDS_DataReader may store when maintaining a reliable connection to the DDS_DataWriter.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 100 million] or DDS_LENGTH_UNLIMITED, <= DDS_ResourceLimitsQosPolicy::max_samples
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_infos
The maximum number of info units that a DDS_DataReader can use to store DDS_SampleInfo.
When read/take is called on a DataReader, the DataReader passes a sequence of data samples and an associated sample info sequence. The sample info sequence contains additional information for each data sample.
max_infos determines the resources allocated for storing sample info. This memory is loaned to the application when passing a sample info sequence.
Note that sample info is a snapshot, generated when read/take is called.
max_infos should not be less than max_samples.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, >= initial_infos
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers
The initial number of remote writers from which a DDS_DataReader may read, including all instances.
[default] 2
[range] [1, 1 million], <= max_remote_writers
For unkeyed types, this value has to be equal to initial_remote_writers_per_instance.
Note: For efficiency, set initial_remote_writers >= DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers_per_instance.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers_per_instance
The initial number of remote writers from which a DDS_DataReader may read a single instance.
[default] 2
[range] [1, 1024], <= max_remote_writers_per_instance
For unkeyed types, this value has to be equal to initial_remote_writers.
Note: For efficiency, set initial_remote_writers_per_instance <= DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_infos
The initial number of info units that a DDS_DataReader can have, which are used to store DDS_SampleInfo.
[default] 32
[range] [1, 1 million], <= max_infos
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_outstanding_reads
The initial number of outstanding calls to read/take (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
[default] 2
[range] [1, 65536], <= max_outstanding_reads
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_outstanding_reads
The maximum number of outstanding read/take calls (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 65536] or DDS_LENGTH_UNLIMITED, >= initial_outstanding_reads
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_samples_per_read
The maximum number of data samples that the application can receive from the middleware in a single call to FooDataReader_read or FooDataReader_take. If more data exists in the middleware, the application will need to issue multiple read/take calls.
When reading data using listeners, the expected number of samples available for delivery in a single take call is typically small: usually just one, in the case of unbatched data, or the number of samples in a single batch, in the case of batched data. (See DDS_BatchQosPolicy for more information about this feature.) When polling for data or using a DDS_WaitSet, however, multiple samples (or batches) could be retrieved at once, depending on the data rate.
A larger value for this parameter makes the API simpler to use at the expense of some additional memory consumption.
[default] 1024
[range] [1, 65536]
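If more samples can be queued than a single call may return, the application can simply loop until the middleware reports that no data is left. The following is a minimal sketch assuming an IDL type Foo with its generated FooSeq/FooDataReader support and an existing DDS_DataReader named reader; error handling is abbreviated.

    /* Sketch: drain the DataReader queue across multiple take calls.
     * Each call returns at most max_samples_per_read samples. */
    FooDataReader *foo_reader = FooDataReader_narrow(reader);
    struct FooSeq data_seq = DDS_SEQUENCE_INITIALIZER;
    struct DDS_SampleInfoSeq info_seq = DDS_SEQUENCE_INITIALIZER;
    DDS_ReturnCode_t retcode;

    do {
        retcode = FooDataReader_take(
                foo_reader, &data_seq, &info_seq, DDS_LENGTH_UNLIMITED,
                DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);
        if (retcode != DDS_RETCODE_OK) {
            break;  /* DDS_RETCODE_NO_DATA means the queue is empty */
        }

        /* ... process FooSeq_get_length(&data_seq) samples ... */

        /* Return the loan; outstanding loans count against max_outstanding_reads. */
        FooDataReader_return_loan(foo_reader, &data_seq, &info_seq);
    } while (retcode == DDS_RETCODE_OK);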
DDS_Boolean DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support
Determines whether the DDS_DataReader can receive fragmented samples.
When fragmentation support is not needed, disabling fragmentation support will save some memory resources.
[default] DDS_BOOLEAN_FALSE
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_fragmented_samples
The maximum number of samples for which the DDS_DataReader may store fragments at a given point in time.
At any given time, a DDS_DataReader may store fragments for up to max_fragmented_samples samples while waiting for the remaining fragments. These samples need not have consecutive sequence numbers and may have been sent by different DDS_DataWriter instances.
Once all fragments of a sample have been received, the sample is treated as a regular sample and becomes subject to standard QoS settings such as DDS_ResourceLimitsQosPolicy::max_samples.
The middleware will drop fragments if the max_fragmented_samples limit has been reached. For best-effort communication, the middleware will accept a fragment for a new sample, but drop the oldest fragmented sample from the same remote writer. For reliable communication, the middleware will drop fragments for any new samples until all fragments for at least one older sample from that writer have been received.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] 1024
[range] [1, 1 million]
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_fragmented_samples
The initial number of samples for which a DDS_DataReader may store fragments.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] 4
[range] [1, 1024], <= max_fragmented_samples
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_fragmented_samples_per_remote_writer
The maximum number of samples per remote writer for which a DDS_DataReader may store fragments.
This is a logical limit that prevents a single remote writer from consuming all available resources.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] 256
[range] [1, 1 million], <= max_fragmented_samples
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_fragments_per_sample
Maximum number of fragments for a single sample.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED
DDS_Boolean DDS_DataReaderResourceLimitsQosPolicy::dynamically_allocate_fragmented_samples
Determines whether the DDS_DataReader pre-allocates storage for storing fragmented samples.
By default, the middleware does not allocate memory upfront, but instead allocates memory from the heap upon receiving the first fragment of a new sample. The amount of memory allocated equals the amount of memory needed to store all fragments in the sample. Once all fragments of a sample have been received, the sample is deserialized and stored in the regular receive queue. At that time, the dynamically allocated memory is freed again.
This QoS setting is useful for large, but variable-sized data types where upfront memory allocation for multiple samples based on the maximum possible sample size may be expensive. The main disadvantage of not pre-allocating memory is that one can no longer guarantee the middleware will have sufficient resources at run-time.
If dynamically_allocate_fragmented_samples is set to DDS_BOOLEAN_FALSE, the middleware will allocate memory upfront for storing fragments for up to DDS_DataReaderResourceLimitsQosPolicy::initial_fragmented_samples samples. This memory may grow up to DDS_DataReaderResourceLimitsQosPolicy::max_fragmented_samples if needed.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] DDS_BOOLEAN_TRUE
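As a sketch of how these fragmentation limits fit together for a reader of large samples, the snippet below pre-allocates bounded reassembly resources; the numeric values are illustrative, and reader_qos is assumed to be a struct DDS_DataReaderQos that was already initialized (for example, with DDS_Subscriber_get_default_datareader_qos).

    /* Sketch: bounded fragment reassembly for large samples (values illustrative). */
    struct DDS_DataReaderResourceLimitsQosPolicy *rrl =
            &reader_qos.reader_resource_limits;

    rrl->disable_fragmentation_support = DDS_BOOLEAN_FALSE;      /* default */

    /* Reassemble at most 32 samples at a time, at most 8 per remote writer. */
    rrl->initial_fragmented_samples = 8;
    rrl->max_fragmented_samples = 32;
    rrl->max_fragmented_samples_per_remote_writer = 8;

    /* Cap the fragments per sample instead of leaving it unlimited. */
    rrl->max_fragments_per_sample = 512;

    /* Pre-allocate reassembly buffers up front instead of on the first fragment. */
    rrl->dynamically_allocate_fragmented_samples = DDS_BOOLEAN_FALSE;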
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_total_instances
Maximum number of instances for which a DataReader will keep state.
The maximum number of instances actively managed by a DataReader is determined by DDS_ResourceLimitsQosPolicy::max_instances.
These instances have associated DataWriters or samples in the DataReader's queue and are visible to the user through operations such as FooDataReader_take, FooDataReader_read, and FooDataReader_get_key_value.
The features Durable Reader State, MultiChannel DataWriters and RTI Persistence Service require RTI Connext to keep some internal state even for instances without DataWriters or samples in the DataReader's queue. The additional state is used to filter duplicate samples that could be coming from different DataWriter channels or from multiple executions of RTI Persistence Service.
The total maximum number of instances that will be managed by the middleware, including instances without associated DataWriters or samples, is determined by max_total_instances.
When a new instance is received, RTI Connext will check the resource limit DDS_ResourceLimitsQosPolicy::max_instances. If the limit is exceeded, RTI Connext will drop the sample with the reason LOST_BY_INSTANCES_LIMIT. If the limit is not exceeded, RTI Connext will check max_total_instances. If max_total_instances is exceeded, RTI Connext will replace an existing instance without DataWriters and samples with the new one. The application could receive duplicate samples for the replaced instance if it becomes alive again.
The max_total_instances limit is not used if DDS_DataReaderResourceLimitsQosPolicy::keep_minimum_state_for_instances is false, and in that case should be left at the default value.
[default] DDS_AUTO_MAX_TOTAL_INSTANCES
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED or DDS_AUTO_MAX_TOTAL_INSTANCES, >= DDS_ResourceLimitsQosPolicy::max_instances
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_virtual_writers
The maximum number of remote virtual writers from which a DDS_DataReader may read, including all instances.
When DDS_PresentationQosPolicy::access_scope is set to DDS_GROUP_PRESENTATION_QOS, this value determines the maximum number of DataWriter groups that can be managed by the DDS_Subscriber containing this DDS_DataReader.
Since the DDS_Subscriber may contain more than one DDS_DataReader, only the setting of the first applies.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, >= initial_remote_virtual_writers, >= max_remote_virtual_writers_per_instance
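The sketch below shows how these limits come into play when the DDS_Subscriber uses GROUP access scope; participant, topic, and the numeric limits are placeholders chosen for illustration.

    /* Sketch: GROUP access scope; the first DataReader created in the
     * Subscriber sizes the DataWriter-group resources for all of them. */
    struct DDS_SubscriberQos subscriber_qos = DDS_SubscriberQos_INITIALIZER;
    struct DDS_DataReaderQos reader_qos = DDS_DataReaderQos_INITIALIZER;
    DDS_Subscriber *subscriber = NULL;
    DDS_DataReader *reader = NULL;

    DDS_DomainParticipant_get_default_subscriber_qos(participant, &subscriber_qos);
    subscriber_qos.presentation.access_scope = DDS_GROUP_PRESENTATION_QOS;
    subscriber_qos.presentation.coherent_access = DDS_BOOLEAN_TRUE;
    subscriber = DDS_DomainParticipant_create_subscriber(
            participant, &subscriber_qos, NULL, DDS_STATUS_MASK_NONE);

    DDS_Subscriber_get_default_datareader_qos(subscriber, &reader_qos);
    /* Bound the number of DataWriter groups the Subscriber can manage. */
    reader_qos.reader_resource_limits.initial_remote_virtual_writers = 8;
    reader_qos.reader_resource_limits.max_remote_virtual_writers = 64;
    reader = DDS_Subscriber_create_datareader(
            subscriber, DDS_Topic_as_topicdescription(topic), &reader_qos,
            NULL, DDS_STATUS_MASK_NONE);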
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_virtual_writers
The initial number of remote virtual writers from which a DDS_DataReader may read, including all instances.
[default] 2
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, <= max_remote_virtual_writers
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_virtual_writers_per_instance
The maximum number of virtual remote writers that can be associated with an instance.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1024] or DDS_LENGTH_UNLIMITED, >= initial_remote_virtual_writers_per_instance
For unkeyed types, this value is ignored.
The Durable Reader State and MultiChannel DataWriters features, as well as RTI Persistence Service, require RTI Connext to keep some internal state per virtual writer and instance; this state is used to filter duplicate samples. These duplicate samples could be coming from different DataWriter channels or from multiple executions of RTI Persistence Service.
Once an association between a remote virtual writer and an instance is established, it is permanent: it will not disappear even if the physical writer incarnating the virtual writer is destroyed.
If max_remote_virtual_writers_per_instance is exceeded for an instance, RTI Connext will not associate this instance with new virtual writers. Duplicate samples from these virtual writers will not be filtered on the reader.
If you are not using Durable Reader State, MultiChannel DataWriters or RTI Persistence Service in your system, you can set this property to 1 to optimize resources.
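For instance, a reader in such a system might be configured as follows; this is a sketch, not a required setting, and reader_qos is assumed to be an already-initialized struct DDS_DataReaderQos.

    /* Sketch: no Durable Reader State, MultiChannel DataWriters, or
     * Persistence Service, so one virtual writer per instance suffices. */
    reader_qos.reader_resource_limits.initial_remote_virtual_writers_per_instance = 1;
    reader_qos.reader_resource_limits.max_remote_virtual_writers_per_instance = 1;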
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_virtual_writers_per_instance
The initial number of virtual remote writers per instance.
[default] 2
[range] [1, 1024], <= max_remote_virtual_writers_per_instance
For unkeyed types, this value is ignored.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers_per_sample
The maximum number of remote writers allowed to write the same sample.
One scenario in which two DataWriters may write the same sample is Persistence Service. The DataReader may receive the same sample coming from the original DataWriter and from a Persistence Service DataWriter.
[default] 3
[range] [1, 1024]
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_query_condition_filters
The maximum number of query condition filters a reader is allowed.
[default] 4
[range] [0, 32]
This value determines the maximum number of unique query condition content filters that a reader may create.
Each query condition content filter is composed of both its query_expression and query_parameters. Two query conditions that have the same query_expression will require unique query condition filters if their query_parameters differ. Query conditions that differ only in their state masks will share the same query condition filter.
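To illustrate, the sketch below creates two query conditions that share a single content filter because they differ only in their sample-state masks; the field name count in the expression and the parameter value are assumptions made for the example, and reader is an existing DDS_DataReader.

    /* Sketch: two QueryConditions, one content filter slot consumed. */
    struct DDS_StringSeq params = DDS_SEQUENCE_INITIALIZER;
    DDS_QueryCondition *not_read_cond = NULL;
    DDS_QueryCondition *any_state_cond = NULL;

    DDS_StringSeq_ensure_length(&params, 1, 1);
    *DDS_StringSeq_get_reference(&params, 0) = DDS_String_dup("10");

    /* Same query_expression and query_parameters, different state masks:
     * both conditions share one of the max_query_condition_filters slots. */
    not_read_cond = DDS_DataReader_create_querycondition(
            reader, DDS_NOT_READ_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
            DDS_ANY_INSTANCE_STATE, "count > %0", &params);
    any_state_cond = DDS_DataReader_create_querycondition(
            reader, DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
            DDS_ANY_INSTANCE_STATE, "count > %0", &params);
    /* A condition with different query_parameters (e.g. "20") would consume
     * another filter slot. */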
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_app_ack_response_length
Maximum length of application-level acknowledgment response data.
The maximum length of response data in an application-level acknowledgment.
When set to zero, no response data is sent with application-level acknowledgments.
[default] 1
[range] [0, 65536]
DDS_Boolean DDS_DataReaderResourceLimitsQosPolicy::keep_minimum_state_for_instances
Whether or not to keep a minimum instance state for up to DDS_DataReaderResourceLimitsQosPolicy::max_total_instances instances.
The features Durable Reader State, multi-channel DataWriters, and Persistence Service require RTI Connext to keep some minimal internal state even for instances without DataWriters or DDS samples in the DataReader's queue, or that have been purged due to a dispose. The additional state is used to filter duplicate DDS samples that could be coming from different DataWriter channels or from multiple executions of Persistence Service. The total maximum number of instances that will be managed by the middleware, including instances without associated DataWriters or DDS samples or that have been purged due to a dispose, is determined by DDS_DataReaderResourceLimitsQosPolicy::max_total_instances.
This additional state will only be kept for up to max_total_instances if this field is set to DDS_BOOLEAN_TRUE; otherwise, the additional state will not be kept for any instances.
The minimum state includes information such as the source timestamp of the last sample received by the instance and the last sequence number received from a virtual GUID.
[default] DDS_BOOLEAN_TRUE
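If none of those features is used in your system, the minimum state can be disabled entirely, as in this sketch (reader_qos is an already-initialized struct DDS_DataReaderQos):

    /* Sketch: skip the minimum instance state when Durable Reader State,
     * MultiChannel DataWriters, and Persistence Service are not used. */
    reader_qos.reader_resource_limits.keep_minimum_state_for_instances = DDS_BOOLEAN_FALSE;
    /* max_total_instances is then unused and left at DDS_AUTO_MAX_TOTAL_INSTANCES. */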
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_topic_queries
The initial number of TopicQueries allocated by a DDS_DataReader.
[default] 1
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_topic_queries
The maximum number of active TopicQueries that a DDS_DataReader can create.
Once this limit is reached, a DDS_DataReader can create more TopicQueries only if it deletes some of the previously created ones.
[default] DDS_LENGTH_UNLIMITED
struct DDS_AllocationSettings_t DDS_DataReaderResourceLimitsQosPolicy::shmem_ref_transfer_mode_attached_segment_allocation
Allocation resource for the shared memory segments attached by the DDS_DataReader.
The max_count does not limit the total number of shared memory segments used by the DDS_DataReader. When this limit is hit, the DDS_DataReader will try to detach from a segment that doesn't contain any loaned samples and attach to a new segment. If samples are loaned from all attached segments, then the DDS_DataReader will fail to attach to the new segment, resulting in sample loss.
[default] initial_count = DDS_AUTO_COUNT (DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers); max_count = DDS_AUTO_COUNT (DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers); incremental_count = DDS_AUTO_COUNT (0 if initial_count = max_count; -1 otherwise);
[range] See allowed ranges in struct DDS_AllocationSettings_t
struct DDS_DataReaderResourceLimitsInstanceReplacementSettings DDS_DataReaderResourceLimitsQosPolicy::instance_replacement
Sets the kind of instances allowed to be replaced for each instance state (DDS_InstanceStateKind) when a DataReader reaches DDS_ResourceLimitsQosPolicy::max_instances.
When DDS_ResourceLimitsQosPolicy::max_instances is reached, a DDS_DataReader will try to make room for a new instance by attempting to reclaim an existing instance based on the instance replacement kinds specified by this field.
A DataReader can choose what kinds of instances can be replaced for each DDS_InstanceStateKind separately. This means, for example, that a DataReader can choose to not allow replacing alive (DDS_ALIVE_INSTANCE_STATE) instances but allow replacement of empty disposed (DDS_NOT_ALIVE_DISPOSED_INSTANCE_STATE) instances.
Only instances whose states match the specified kinds are eligible to be replaced. In addition, there must be no outstanding loans on any of the samples belonging to the instance for it to be considered for replacement.
For all kinds, a DDS_DataReader will replace the least-recently-updated instance satisfying that kind. An instance is considered 'updated' when a valid sample or dispose sample is received and accepted for that instance. When using DDS_EXCLUSIVE_OWNERSHIP_QOS, only samples that are received from the owner of the instance will cause the instance to be considered updated. An instance is not considered updated when an unregister sample is received because the unregister message simply indicates that there is one less writer that has updates for the instance, not that the instance itself was updated.
If no replaceable instance exists, the sample for the new instance will be considered lost with lost reason DDS_LOST_BY_INSTANCES_LIMIT and the instance will not be asserted into the DataReader queue.
[default]