RTI Connext Modern C++ API
Version 7.0.0
Controls the memory usage of a dds::pub::DataWriter or a dds::sub::DataReader. More...
#include <dds/core/policy/CorePolicy.hpp>
Public Member Functions
ResourceLimits ()
 Creates the default policy. More...
ResourceLimits (int32_t the_max_samples, int32_t the_max_instances, int32_t the_max_samples_per_instance)
 Creates an instance with the specified max_samples, max_instances, and max_samples_per_instance, and default values for the rest of the parameters. More...
ResourceLimits & max_samples (int32_t the_max_samples)
 Sets the maximum number of data samples that a DataWriter or a DataReader can manage across all instances. More...
int32_t max_samples () const
 Getter (see setter with the same name) More...
ResourceLimits & max_instances (int32_t the_max_instances)
 Sets the maximum number of instances that a DataWriter or a DataReader can manage. More...
int32_t max_instances () const
 Getter (see setter with the same name) More...
ResourceLimits & max_samples_per_instance (int32_t the_max_samples_per_instance)
 Sets the maximum number of data samples per instance that a DataWriter or a DataReader can manage. More...
int32_t max_samples_per_instance () const
 Getter (see setter with the same name) More...
dds::core::policy::ResourceLimits & initial_samples (int32_t the_initial_samples)
 <<extension>> Sets the number of samples that a DataReader or a DataWriter will preallocate. More...
int32_t initial_samples () const
 Getter (see setter with the same name) More...
dds::core::policy::ResourceLimits & initial_instances (int32_t the_initial_instances)
 <<extension>> Sets the number of instances that a DataReader or a DataWriter will preallocate. More...
int32_t initial_instances () const
 Getter (see setter with the same name) More...
dds::core::policy::ResourceLimits & instance_hash_buckets (int32_t the_instance_hash_buckets)
 <<extension>> Sets the number of hash buckets for looking up instances. More...
int32_t instance_hash_buckets () const
 Getter (see setter with the same name) More...
Controls the memory usage of a dds::pub::DataWriter or a dds::sub::DataReader.
This policy controls the resources that RTI Connext can use to meet the requirements imposed by the application and other QoS settings.
For the reliability protocol (and dds::core::policy::Durability), this QoS policy determines the actual maximum queue size when the dds::core::policy::History is set to dds::core::policy::HistoryKind::KEEP_ALL.
In general, this QoS policy is used to limit the amount of system memory that RTI Connext can allocate. For embedded real-time systems and safety-critical systems, pre-determination of maximum memory usage is often required. In addition, dynamic memory allocation could introduce non-deterministic latencies in time-critical paths.
This QoS policy can be set such that an entity does not dynamically allocate any more memory after its initialization phase.
If dds::pub::DataWriter objects are communicating samples faster than they are ultimately taken by the dds::sub::DataReader objects, the middleware will eventually hit some of the QoS-imposed resource limits. Note that this may occur when just a single dds::sub::DataReader cannot keep up with its corresponding dds::pub::DataWriter. The behavior in this case depends on the setting of the RELIABILITY QoS policy. If the reliability is dds::core::policy::ReliabilityKind_def::BEST_EFFORT, then RTI Connext is allowed to drop samples. If the reliability is dds::core::policy::ReliabilityKind_def::RELIABLE, RTI Connext will block the dds::pub::DataWriter or discard the sample at the dds::sub::DataReader in order not to lose existing samples.
The constant dds::core::LENGTH_UNLIMITED may be used to indicate the absence of a particular limit. For example, setting dds::core::policy::ResourceLimits::max_samples_per_instance to dds::core::LENGTH_UNLIMITED will cause RTI Connext not to enforce this particular limit.
If these resource limits are not set sufficiently, under certain circumstances the dds::pub::DataWriter may block on a write() call even though the dds::core::policy::History is dds::core::policy::HistoryKind::KEEP_LAST. To guarantee the writer does not block for dds::core::policy::HistoryKind::KEEP_LAST, make sure the resource limits are set such that:
The setting of dds::core::policy::ResourceLimits::max_samples must be consistent with dds::core::policy::ResourceLimits::max_samples_per_instance. For these two values to be consistent, it must be true that dds::core::policy::ResourceLimits::max_samples >= dds::core::policy::ResourceLimits::max_samples_per_instance. As described above, this limit will not be enforced if dds::core::policy::ResourceLimits::max_samples_per_instance is set to dds::core::LENGTH_UNLIMITED.
The setting of dds::core::policy::ResourceLimits::max_samples_per_instance must be consistent with dds::core::policy::History::depth. For these two QoS to be consistent, it must be true that depth <= max_samples_per_instance.
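As an illustration only (the IDL-generated type Foo, the topic name, and the specific limit values below are assumptions, not taken from this page), a DataWriter whose memory is bounded at creation time might be configured as follows, keeping max_samples >= max_samples_per_instance and History depth <= max_samples_per_instance:

    #include <dds/dds.hpp>
    #include "Foo.hpp"  // hypothetical IDL-generated type; any topic type works the same way

    void configure_bounded_writer()
    {
        dds::domain::DomainParticipant participant(0);
        dds::topic::Topic<Foo> topic(participant, "Example Foo");

        // Bound the writer's memory up front and keep the limits consistent:
        //   max_samples (2000) >= max_samples_per_instance (20)
        //   History depth (10) <= max_samples_per_instance (20)
        dds::pub::qos::DataWriterQos writer_qos;
        writer_qos << dds::core::policy::ResourceLimits()
                          .max_instances(100)
                          .max_samples_per_instance(20)
                          .max_samples(2000)
                   << dds::core::policy::History::KeepLast(10);

        dds::pub::Publisher publisher(participant);
        dds::pub::DataWriter<Foo> writer(publisher, topic, writer_qos);
    }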
ResourceLimits ( ) [inline]
Creates the default policy.
ResourceLimits ( int32_t the_max_samples, int32_t the_max_instances, int32_t the_max_samples_per_instance ) [inline]
Creates an instance with the specified max_samples, max_instances, and max_samples_per_instance, and default values for the rest of the parameters.
ResourceLimits & max_samples ( int32_t the_max_samples ) [inline]
Sets the maximum number of data samples that a DataWriter or a DataReader can manage across all instances.
Specifies the maximum number of data samples a dds::pub::DataWriter (or dds::sub::DataReader) can manage across all the instances associated with it.
For unkeyed types, this value has to be equal to max_samples_per_instance if max_samples_per_instance is not equal to dds::core::LENGTH_UNLIMITED.
When batching is enabled, the maximum number of data samples a dds::pub::DataWriter can manage will also be limited by rti::core::policy::DataWriterResourceLimits::max_batches.
[default] dds::core::LENGTH_UNLIMITED
[range] [1, 100 million] or dds::core::LENGTH_UNLIMITED, >= initial_samples, >= max_samples_per_instance, >= rti::core::policy::DataReaderResourceLimits::max_samples_per_remote_writer or >= rti::core::RtpsReliableWriterProtocol::heartbeats_per_max_samples
For dds::pub::qos::DataWriterQos, max_samples must be >= rti::core::RtpsReliableWriterProtocol::heartbeats_per_max_samples in rti::core::policy::DataWriterProtocol::rtps_reliable_writer if batching is disabled.
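A brief sketch of that last constraint (the values, the writer_qos object, and the exact fluent setter names on the extension classes are assumptions; only the member names are confirmed by this page): a reliable, non-batching DataWriter with a finite max_samples keeps heartbeats_per_max_samples within that limit.

    // Assumed: writer_qos is a dds::pub::qos::DataWriterQos being configured.
    rti::core::RtpsReliableWriterProtocol reliable_writer;
    reliable_writer.heartbeats_per_max_samples(8);   // must not exceed max_samples

    rti::core::policy::DataWriterProtocol writer_protocol;
    writer_protocol.rtps_reliable_writer(reliable_writer);

    writer_qos << dds::core::policy::Reliability::Reliable()
               << dds::core::policy::ResourceLimits().max_samples(2000)
               << writer_protocol;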
int32_t max_samples ( ) const [inline]
Getter (see setter with the same name)
ResourceLimits & max_instances ( int32_t the_max_instances ) [inline]
Sets the maximum number of instances that a DataWriter or a DataReader can manage.
[default] dds::core::LENGTH_UNLIMITED
[range] [1, 1 million] or dds::core::LENGTH_UNLIMITED, >= initial_instances
int32_t max_instances ( ) const [inline]
Getter (see setter with the same name)
ResourceLimits & max_samples_per_instance ( int32_t the_max_samples_per_instance ) [inline]
Sets the maximum number of data samples per instance that a DataWriter or a DataReader can manage.
Since an unkeyed type is logically considered a single instance, for unkeyed types this value has to be equal to max_samples or dds::core::LENGTH_UNLIMITED.
[default] dds::core::LENGTH_UNLIMITED
[range] [1, 100 million] or dds::core::LENGTH_UNLIMITED, <= max_samples or dds::core::LENGTH_UNLIMITED, >= dds::core::policy::History::depth
int32_t max_samples_per_instance ( ) const [inline]
Getter (see setter with the same name)
dds::core::policy::ResourceLimits & initial_samples ( int32_t the_initial_samples )
<<extension>> Sets the number of samples that a DataReader or a DataWriter will preallocate.
Specifies the initial number of data samples a dds::pub::DataWriter (or dds::sub::DataReader) will manage across all the instances associated with it.
[default] 32
[range] [1,100 million], <= max_samples
int32_t initial_samples ( ) const
Getter (see setter with the same name)
dds::core::policy::ResourceLimits & initial_instances ( int32_t the_initial_instances )
<<extension>> Sets the number of instances that a DataReader or a DataWriter will preallocate.
[default] 32
[range] [1,1 million], <= max_instances
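For illustration (reader-side, with assumed values not taken from this page), preallocating all sample and instance storage at initialization, so that no further memory is allocated afterwards, might look like this:

    dds::sub::qos::DataReaderQos reader_qos;

    // max_samples, max_instances, max_samples_per_instance
    dds::core::policy::ResourceLimits limits(2000, 100, 20);
    limits.initial_samples(2000)      // <= max_samples
          .initial_instances(100);    // <= max_instances

    reader_qos << limits;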
int32_t initial_instances ( ) const
Getter (see setter with the same name)
dds::core::policy::ResourceLimits & instance_hash_buckets ( int32_t the_instance_hash_buckets )
<<extension>> Sets the number of hash buckets for looking up instances.
The instance hash table facilitates instance lookup. A higher number of buckets decreases instance lookup time but increases the memory usage.
[default] 1
[range] [1,1 million]
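For example (values are illustrative assumptions), a DataReader expected to manage a large number of instances might trade some memory for faster instance lookup:

    dds::sub::qos::DataReaderQos reader_qos;
    reader_qos << dds::core::policy::ResourceLimits()
                      .max_instances(50000)
                      .instance_hash_buckets(4096);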
int32_t instance_hash_buckets ( ) const
Getter (see setter with the same name)