RTI Connext C API
Version 5.0.0
Various settings that configure how a DDS_DataReader allocates and uses physical memory for internal resources. More...
Data Fields

DDS_Long max_remote_writers
    The maximum number of remote writers from which a DDS_DataReader may read, including all instances.
DDS_Long max_remote_writers_per_instance
    The maximum number of remote writers from which a DDS_DataReader may read a single instance.
DDS_Long max_samples_per_remote_writer
    The maximum number of out-of-order samples from a given remote DDS_DataWriter that a DDS_DataReader may store when maintaining a reliable connection to the DDS_DataWriter.
DDS_Long max_infos
    The maximum number of info units that a DDS_DataReader can use to store DDS_SampleInfo.
DDS_Long initial_remote_writers
    The initial number of remote writers from which a DDS_DataReader may read, including all instances.
DDS_Long initial_remote_writers_per_instance
    The initial number of remote writers from which a DDS_DataReader may read a single instance.
DDS_Long initial_infos
    The initial number of info units that a DDS_DataReader can have, which are used to store DDS_SampleInfo.
DDS_Long initial_outstanding_reads
    The initial number of outstanding calls to read/take (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
DDS_Long max_outstanding_reads
    The maximum number of outstanding read/take calls (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
DDS_Long max_samples_per_read
    The maximum number of data samples that the application can receive from the middleware in a single call to FooDataReader_read or FooDataReader_take. If more data exists in the middleware, the application will need to issue multiple read/take calls.
DDS_Boolean disable_fragmentation_support
    Determines whether the DDS_DataReader can receive fragmented samples.
DDS_Long max_fragmented_samples
    The maximum number of samples for which the DDS_DataReader may store fragments at a given point in time.
DDS_Long initial_fragmented_samples
    The initial number of samples for which a DDS_DataReader may store fragments.
DDS_Long max_fragmented_samples_per_remote_writer
    The maximum number of samples per remote writer for which a DDS_DataReader may store fragments.
DDS_Long max_fragments_per_sample
    Maximum number of fragments for a single sample.
DDS_Boolean dynamically_allocate_fragmented_samples
    Determines whether the DDS_DataReader pre-allocates storage for storing fragmented samples.
DDS_Long max_total_instances
    Maximum number of instances for which a DataReader will keep state.
DDS_Long max_remote_virtual_writers
    The maximum number of remote virtual writers from which a DDS_DataReader may read, including all instances.
DDS_Long initial_remote_virtual_writers
    The initial number of remote virtual writers from which a DDS_DataReader may read, including all instances.
DDS_Long max_remote_virtual_writers_per_instance
    The maximum number of virtual remote writers that can be associated with an instance.
DDS_Long initial_remote_virtual_writers_per_instance
    The initial number of virtual remote writers per instance.
DDS_Long max_remote_writers_per_sample
    The maximum number of remote writers allowed to write the same sample.
DDS_Long max_query_condition_filters
    The maximum number of query condition filters a reader is allowed.
Various settings that configure how a DDS_DataReader allocates and uses physical memory for internal resources.
A DDS_DataReader must allocate internal structures to handle, among other things: the maximum number of DataWriters that may connect to it; whether or not the DDS_DataReader handles data fragmentation and how many data fragments it may handle (for data samples larger than the MTU of the underlying network transport); and how many simultaneous outstanding loans of internal memory holding data samples can be provided to user code.
Most of these internal structures start at an initial size and, by default, will grow as needed by dynamically allocating additional memory. You may set fixed, maximum sizes for these internal structures if you want to bound the amount of memory that can be used by a DDS_DataReader. By setting the initial size to the maximum size, you will prevent RTI Connext from dynamically allocating any memory after the creation of the DDS_DataReader.
This QoS policy is an extension to the DDS standard.
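For example, the following C sketch pins a few of the initial/max pairs to the same value so that all allocation happens when the DDS_DataReader is created. The function name, the pre-existing subscriber and topic, and the specific sizes are illustrative assumptions, not values taken from this documentation.

    #include "ndds/ndds_c.h"

    /* A sketch only: "subscriber" and "topic" are assumed to exist, and the
     * sizes below are illustrative, not recommendations. */
    void create_bounded_reader(DDS_Subscriber *subscriber, DDS_Topic *topic)
    {
        struct DDS_DataReaderQos reader_qos = DDS_DataReaderQos_INITIALIZER;
        struct DDS_DataReaderResourceLimitsQosPolicy *limits = NULL;
        DDS_DataReader *reader = NULL;

        if (DDS_Subscriber_get_default_datareader_qos(subscriber, &reader_qos)
                != DDS_RETCODE_OK) {
            return; /* handle error */
        }
        limits = &reader_qos.reader_resource_limits;

        /* Setting initial == max fixes each structure's size: everything is
         * allocated when the DDS_DataReader is created and never grows. The
         * same pattern applies to the other initial/max pairs in this policy
         * and in DDS_ResourceLimitsQosPolicy. */
        limits->initial_remote_writers = 16;
        limits->max_remote_writers = 16;
        limits->initial_remote_writers_per_instance = 16;
        limits->max_remote_writers_per_instance = 16;
        limits->initial_outstanding_reads = 2;
        limits->max_outstanding_reads = 2;

        reader = DDS_Subscriber_create_datareader(
                subscriber, DDS_Topic_as_topicdescription(topic),
                &reader_qos, NULL /* listener */, DDS_STATUS_MASK_NONE);
        if (reader == NULL) {
            /* handle error */
        }
        DDS_DataReaderQos_finalize(&reader_qos);
    }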
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers
The maximum number of remote writers from which a DDS_DataReader may read, including all instances.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, >= initial_remote_writers, >= max_remote_writers_per_instance
For unkeyed types, this value has to be equal to max_remote_writers_per_instance if max_remote_writers_per_instance is not equal to DDS_LENGTH_UNLIMITED.
Note: For efficiency, set max_remote_writers >= DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers_per_instance.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers_per_instance
The maximum number of remote writers from which a DDS_DataReader may read a single instance.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1024] or DDS_LENGTH_UNLIMITED, <= max_remote_writers or DDS_LENGTH_UNLIMITED, >= initial_remote_writers_per_instance
For unkeyed types, this value has to be equal to max_remote_writers if it is not DDS_LENGTH_UNLIMITED.
Note: For efficiency, set max_remote_writers_per_instance <= DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_samples_per_remote_writer
The maximum number of out-of-order samples from a given remote DDS_DataWriter that a DDS_DataReader may store when maintaining a reliable connection to the DDS_DataWriter.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 100 million] or DDS_LENGTH_UNLIMITED, <= DDS_ResourceLimitsQosPolicy::max_samples
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_infos
The maximum number of info units that a DDS_DataReader can use to store DDS_SampleInfo.
When read/take is called on a DataReader, the DataReader passes a sequence of data samples and an associated sample info sequence. The sample info sequence contains additional information for each data sample.
max_infos determines the resources allocated for storing sample info. This memory is loaned to the application when passing a sample info sequence.
Note that sample info is a snapshot, generated when read/take is called.
max_infos should not be less than DDS_ResourceLimitsQosPolicy::max_samples.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, >= initial_infos
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers
The initial number of remote writers from which a DDS_DataReader may read, including all instances.
[default] 2
[range] [1, 1 million], <= max_remote_writers
For unkeyed types, this value has to be equal to initial_remote_writers_per_instance.
Note: For efficiency, set initial_remote_writers >= DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers_per_instance.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers_per_instance
The initial number of remote writers from which a DDS_DataReader may read a single instance.
[default] 2
[range] [1, 1024], <= max_remote_writers_per_instance
For unkeyed types, this value has to be equal to initial_remote_writers.
Note: For efficiency, set initial_remote_writers_per_instance <= DDS_DataReaderResourceLimitsQosPolicy::initial_remote_writers.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_infos
The initial number of info units that a DDS_DataReader can have, which are used to store DDS_SampleInfo.
[default] 32
[range] [1, 1 million], <= max_infos
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_outstanding_reads
The initial number of outstanding calls to read/take (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
[default] 2
[range] [1, 65536], <= max_outstanding_reads
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_outstanding_reads
The maximum number of outstanding read/take calls (or one of their variants) on the same DDS_DataReader for which memory has not been returned by calling FooDataReader_return_loan.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 65536] or DDS_LENGTH_UNLIMITED, >= initial_outstanding_reads
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_samples_per_read
The maximum number of data samples that the application can receive from the middleware in a single call to FooDataReader_read or FooDataReader_take. If more data exists in the middleware, the application will need to issue multiple read/take calls.
When reading data using listeners, the expected number of samples available for delivery in a single take call is typically small: usually just one, in the case of unbatched data, or the number of samples in a single batch, in the case of batched data. (See DDS_BatchQosPolicy for more information about this feature.) When polling for data or using a DDS_WaitSet, however, multiple samples (or batches) could be retrieved at once, depending on the data rate.
A larger value for this parameter makes the API simpler to use at the expense of some additional memory consumption.
[default] 1024
[range] [1, 65536]
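The following sketch illustrates the resulting read pattern: a single FooDataReader_take call returns at most max_samples_per_read samples, so the application loops until the middleware reports no more data, returning each loan so outstanding reads do not accumulate. "Foo" stands for a type generated by rtiddsgen; the function name, the reader argument, and the generated header name are assumptions for this example.

    #include "ndds/ndds_c.h"
    #include "FooSupport.h"  /* assumed rtiddsgen-generated support code for type Foo */

    void drain_reader(DDS_DataReader *untyped_reader)
    {
        FooDataReader *reader = FooDataReader_narrow(untyped_reader);
        struct FooSeq data_seq = DDS_SEQUENCE_INITIALIZER;
        struct DDS_SampleInfoSeq info_seq = DDS_SEQUENCE_INITIALIZER;
        DDS_ReturnCode_t retcode;
        DDS_Long i;

        do {
            /* Even with DDS_LENGTH_UNLIMITED requested here, each call returns
             * at most reader_resource_limits.max_samples_per_read samples. */
            retcode = FooDataReader_take(
                    reader, &data_seq, &info_seq, DDS_LENGTH_UNLIMITED,
                    DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
                    DDS_ANY_INSTANCE_STATE);
            if (retcode == DDS_RETCODE_NO_DATA) {
                break;                  /* nothing left in the middleware */
            } else if (retcode != DDS_RETCODE_OK) {
                break;                  /* handle error */
            }

            for (i = 0; i < FooSeq_get_length(&data_seq); ++i) {
                if (DDS_SampleInfoSeq_get_reference(&info_seq, i)->valid_data) {
                    /* process FooSeq_get_reference(&data_seq, i) */
                }
            }

            /* Returning the loan releases one of the outstanding read/take
             * calls counted against max_outstanding_reads. */
            retcode = FooDataReader_return_loan(reader, &data_seq, &info_seq);
        } while (retcode == DDS_RETCODE_OK);
    }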
DDS_Boolean DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support
Determines whether the DDS_DataReader can receive fragmented samples.
When fragmentation support is not needed, disabling fragmentation support will save some memory resources.
[default] DDS_BOOLEAN_FALSE
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_fragmented_samples
The maximum number of samples for which the DDS_DataReader may store fragments at a given point in time.
At any given time, a DDS_DataReader may store fragments for up to max_fragmented_samples samples while waiting for the remaining fragments. These samples need not have consecutive sequence numbers and may have been sent by different DDS_DataWriter instances.
Once all fragments of a sample have been received, the sample is treated as a regular sample and becomes subject to standard QoS settings such as DDS_ResourceLimitsQosPolicy::max_samples.
The middleware will drop fragments if the max_fragmented_samples limit has been reached. For best-effort communication, the middleware will accept a fragment for a new sample, but drop the oldest fragmented sample from the same remote writer. For reliable communication, the middleware will drop fragments for any new samples until all fragments for at least one older sample from that writer have been received.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] 1024
[range] [1, 1 million]
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_fragmented_samples
The initial number of samples for which a DDS_DataReader may store fragments.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] 4
[range] [1, 1024], <= max_fragmented_samples
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_fragmented_samples_per_remote_writer
The maximum number of samples per remote writer for which a DDS_DataReader may store fragments.
This is a logical limit that prevents a single remote writer from consuming all available resources.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] 256
[range] [1, 1 million], <= max_fragmented_samples
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_fragments_per_sample
Maximum number of fragments for a single sample.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] 512
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED
DDS_Boolean DDS_DataReaderResourceLimitsQosPolicy::dynamically_allocate_fragmented_samples
Determines whether the DDS_DataReader pre-allocates storage for storing fragmented samples.
By default, the middleware will allocate memory upfront for storing fragments for up to DDS_DataReaderResourceLimitsQosPolicy::initial_fragmented_samples samples. This memory may grow up to DDS_DataReaderResourceLimitsQosPolicy::max_fragmented_samples if needed.
If dynamically_allocate_fragmented_samples is set to DDS_BOOLEAN_TRUE, the middleware does not allocate memory upfront, but instead allocates memory from the heap upon receiving the first fragment of a new sample. The amount of memory allocated equals the amount of memory needed to store all fragments in the sample. Once all fragments of a sample have been received, the sample is deserialized and stored in the regular receive queue. At that time, the dynamically allocated memory is freed again.
This QoS setting may be useful for large, but variable-sized data types where upfront memory allocation for multiple samples based on the maximum possible sample size may be expensive. The main disadvantage of not pre-allocating memory is that one can no longer guarantee the middleware will have sufficient resources at run-time.
Only applies if DDS_DataReaderResourceLimitsQosPolicy::disable_fragmentation_support is DDS_BOOLEAN_FALSE.
[default] DDS_BOOLEAN_FALSE
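As a sketch of how the fragmentation-related fields in this policy might be set together, assuming a DDS_DataReaderQos already obtained with DDS_Subscriber_get_default_datareader_qos (the function name and the numbers below are illustrative only, not recommendations):

    #include "ndds/ndds_c.h"

    void configure_fragmentation(struct DDS_DataReaderQos *reader_qos)
    {
        struct DDS_DataReaderResourceLimitsQosPolicy *limits =
                &reader_qos->reader_resource_limits;

        /* Keep fragmentation support enabled (the default) so samples larger
         * than the transport MTU can be reassembled. */
        limits->disable_fragmentation_support = DDS_BOOLEAN_FALSE;

        /* Bound how many partially received samples may be held at once,
         * overall and per remote writer. */
        limits->initial_fragmented_samples = 4;
        limits->max_fragmented_samples = 64;
        limits->max_fragmented_samples_per_remote_writer = 16;
        limits->max_fragments_per_sample = 512;

        /* For large, variable-sized types, allocate per sample on the first
         * fragment instead of pre-allocating for the worst case. */
        limits->dynamically_allocate_fragmented_samples = DDS_BOOLEAN_TRUE;
    }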
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_total_instances
Maximum number of instances for which a DataReader will keep state.
The maximum number of instances actively managed by a DataReader is determined by DDS_ResourceLimitsQosPolicy::max_instances.
These instances have associated DataWriters or samples in the DataReader's queue and are visible to the user through operations such as FooDataReader_take, FooDataReader_read, and FooDataReader_get_key_value.
The features Durable Reader State, MultiChannel DataWriters and RTI Persistence Service require RTI Connext to keep some internal state even for instances without DataWriters or samples in the DataReader's queue. The additional state is used to filter duplicate samples that could be coming from different DataWriter channels or from multiple executions of RTI Persistence Service.
The total maximum number of instances that will be managed by the middleware, including instances without associated DataWriters or samples, is determined by max_total_instances.
When a new instance is received, RTI Connext will check the resource limit DDS_ResourceLimitsQosPolicy::max_instances. If the limit is exceeded, RTI Connext will drop the sample and report it as lost and rejected. If the limit is not exceeded, RTI Connext will check max_total_instances. If max_total_instances is exceeded, RTI Connext will replace an existing instance without DataWriters and samples with the new one. The application could receive duplicate samples for the replaced instance if it becomes alive again.
[default] DDS_AUTO_MAX_TOTAL_INSTANCES
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED or DDS_AUTO_MAX_TOTAL_INSTANCES, >= DDS_ResourceLimitsQosPolicy::max_instances
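As an illustrative sketch only (assuming a DDS_DataReaderQos obtained with DDS_Subscriber_get_default_datareader_qos; the function name and numbers are not from this documentation), the two limits might be sized together, keeping max_total_instances at or above max_instances:

    #include "ndds/ndds_c.h"

    /* Illustrative sizing only: actively manage up to 1000 instances, while
     * keeping duplicate-filtering state for up to 4000 instances in total. */
    void size_instance_state(struct DDS_DataReaderQos *reader_qos)
    {
        reader_qos->resource_limits.max_instances = 1000;
        reader_qos->reader_resource_limits.max_total_instances = 4000;
    }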
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_virtual_writers
The maximum number of remote virtual writers from which a DDS_DataReader may read, including all instances.
When DDS_PresentationQosPolicy::access_scope is set to DDS_GROUP_PRESENTATION_QOS, this value determines the maximum number of DataWriter groups that can be managed by the DDS_Subscriber containing this DDS_DataReader.
Since the DDS_Subscriber may contain more than one DDS_DataReader, only the setting of the first DDS_DataReader created in it applies.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, >= initial_remote_virtual_writers, >= max_remote_virtual_writers_per_instance
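A sketch of this configuration: a DDS_Subscriber created with GROUP access scope, with the limit set on the first DDS_DataReader created in it. The function name is hypothetical, "participant" and "topic" are assumed to exist, and the sizes are illustrative.

    #include "ndds/ndds_c.h"

    void create_group_ordered_reader(DDS_DomainParticipant *participant,
                                     DDS_Topic *topic)
    {
        struct DDS_SubscriberQos sub_qos = DDS_SubscriberQos_INITIALIZER;
        struct DDS_DataReaderQos reader_qos = DDS_DataReaderQos_INITIALIZER;
        DDS_Subscriber *subscriber = NULL;
        DDS_DataReader *reader = NULL;

        /* GROUP access scope: samples are presented per DataWriter group. */
        DDS_DomainParticipant_get_default_subscriber_qos(participant, &sub_qos);
        sub_qos.presentation.access_scope = DDS_GROUP_PRESENTATION_QOS;
        sub_qos.presentation.ordered_access = DDS_BOOLEAN_TRUE;
        subscriber = DDS_DomainParticipant_create_subscriber(
                participant, &sub_qos, NULL, DDS_STATUS_MASK_NONE);
        if (subscriber == NULL) {
            return; /* handle error */
        }

        /* The limit configured on the first DataReader created in this
         * Subscriber bounds the number of DataWriter groups it can manage. */
        DDS_Subscriber_get_default_datareader_qos(subscriber, &reader_qos);
        reader_qos.reader_resource_limits.max_remote_virtual_writers = 32;
        reader_qos.reader_resource_limits.max_remote_virtual_writers_per_instance = 32;
        reader = DDS_Subscriber_create_datareader(
                subscriber, DDS_Topic_as_topicdescription(topic),
                &reader_qos, NULL, DDS_STATUS_MASK_NONE);
        if (reader == NULL) {
            /* handle error */
        }

        DDS_DataReaderQos_finalize(&reader_qos);
        DDS_SubscriberQos_finalize(&sub_qos);
    }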
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_virtual_writers
The initial number of remote virtual writers from which a DDS_DataReader may read, including all instances.
[default] 2
[range] [1, 1 million] or DDS_LENGTH_UNLIMITED, <= max_remote_virtual_writers
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_virtual_writers_per_instance
The maximum number of virtual remote writers that can be associated with an instance.
[default] DDS_LENGTH_UNLIMITED
[range] [1, 1024] or DDS_LENGTH_UNLIMITED, >= initial_remote_virtual_writers_per_instance
For unkeyed types, this value is ignored.
The Durable Reader State, MultiChannel DataWriters, and RTI Persistence Service features require RTI Connext to keep some internal state per virtual writer and instance; this state is used to filter duplicate samples. These duplicate samples could be coming from different DataWriter channels or from multiple executions of RTI Persistence Service.
Once an association between a remote virtual writer and an instance is established, it is permanent: it will not disappear even if the physical writer incarnating the virtual writer is destroyed.
If max_remote_virtual_writers_per_instance is exceeded for an instance, RTI Connext will not associate this instance with new virtual writers. Duplicate samples from these virtual writers will not be filtered on the reader.
If you are not using Durable Reader State, MultiChannel DataWriters or RTI Persistence Service in your system, you can set this property to 1 to optimize resources.
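For example, a minimal sketch of that optimization, assuming a DDS_DataReaderQos obtained with DDS_Subscriber_get_default_datareader_qos (the function name is hypothetical):

    #include "ndds/ndds_c.h"

    /* Illustrative only: when none of Durable Reader State, MultiChannel
     * DataWriters, or RTI Persistence Service is used, the per-instance
     * virtual-writer state can be reduced to the minimum. */
    void minimize_virtual_writer_state(struct DDS_DataReaderQos *reader_qos)
    {
        reader_qos->reader_resource_limits
                .initial_remote_virtual_writers_per_instance = 1;
        reader_qos->reader_resource_limits
                .max_remote_virtual_writers_per_instance = 1;
    }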
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::initial_remote_virtual_writers_per_instance
The initial number of virtual remote writers per instance.
[default] 2
[range] [1, 1024], <= max_remote_virtual_writers_per_instance
For unkeyed types, this value is ignored.
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_remote_writers_per_sample
The maximum number of remote writers allowed to write the same sample.
One scenario in which two DataWriters may write the same sample involves RTI Persistence Service: the DataReader may receive the same sample coming from the original DataWriter and from a Persistence Service DataWriter.
[default] 3
[range] [1, 1024]
DDS_Long DDS_DataReaderResourceLimitsQosPolicy::max_query_condition_filters
The maximum number of query condition filters a reader is allowed.
[default] 4
[range] [0, 32]
This value determines the maximum number of unique query condition content filters that a reader may create.
Each query condition content filter consists of its query_expression and query_parameters. Two query conditions that have the same query_expression will require unique query condition filters if their query_parameters differ. Query conditions that differ only in their state masks will share the same query condition filter.