For the reliability protocol (and the 47.9 DURABILITY QosPolicy), this QosPolicy determines the actual maximum queue size when the 47.12 HISTORY QosPolicy is set to KEEP_ALL.

In general, this QosPolicy is used to limit the amount of system memory that Connext can allocate. For embedded real-time systems and safety-critical systems, pre-determination of maximum memory usage is often required. In addition, dynamic memory allocation could introduce non-deterministic latencies in time-critical paths.

It includes the members in Table 47.41 DDS_ResourceLimitsQosPolicy. For defaults and valid ranges, please refer to the API Reference HTML documentation.

Table 47.41 DDS_ResourceLimitsQosPolicy

max_samples

Maximum number of live DDS samples that Connext can store for a DataWriter/DataReader. This is a physical limit.

max_instances

Maximum number of active instances that can be managed by a DataWriter/DataReader. (See 40.8.7 Active State and Minimum State.)

For DataReaders, max_instances must be <= max_total_instances in the 48.2 DATA_READER_RESOURCE_LIMITS QosPolicy (DDS Extension).

See also: 47.22.2 Example.

max_samples_per_instance

On a DataWriter, this resource limit represents the maximum number of DDS samples of any one instance that Connext will store for a DataWriter.

On a DataReader, this resource limit represents the maximum number of DDS samples of any one instance that are stored in the DataReader output queue—that is, the queue from which the application takes/reads samples.

For keyed types and DataReaders, this value only applies to DDS samples with an instance state of DDS_ALIVE_INSTANCE_STATE.

If a keyed Topic is not used, then max_samples_per_instance must equal max_samples.

How this property behaves depends on your HISTORY and RELIABILITY QoS configurations. See 47.12 HISTORY QosPolicy.

initial_samples

Initial number of DDS samples that Connext will store for a DataWriter/DataReader. (DDS extension)

initial_instances

Initial number of instances that can be managed by a DataWriter/DataReader. (DDS extension)

instance_hash_buckets

Number of hash buckets, which are used by Connext to facilitate instance lookup. (DDS extension)

One of the most important fields is max_samples, which sets the size and causes memory to be allocated for the send or receive queues. For information on how this policy affects reliability, see 32.4.2 Tuning Queue Sizes and Other Resource Limits.

When a DataWriter or DataReader is created, the initial_instances and initial_samples parameters determine the amount of memory first allocated for those Entities. As the application executes, if more space is needed in the send/receive queues to store DDS samples or as more instances are created, then Connext will automatically allocate memory until the limits of max_instances and max_samples are reached.

You may set initial_instances = max_instances and initial_samples = max_samples if you do not want Connext to dynamically allocate memory after initialization.
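As a sketch of that static-allocation approach (the element names follow the usual Connext XML QoS profile schema; the numeric values are purely illustrative, not recommendations), a DataWriter could pre-allocate everything at creation time by setting each initial_* field equal to its max_* counterpart:

```xml
<!-- Illustrative sketch: values are placeholders chosen so that
     max_samples = max_instances * max_samples_per_instance (64 * 8 = 512). -->
<datawriter_qos>
  <resource_limits>
    <!-- Equal initial_* and max_* values: all memory is allocated up front,
         so no dynamic allocation occurs after initialization. -->
    <max_samples>512</max_samples>
    <initial_samples>512</initial_samples>
    <max_instances>64</max_instances>
    <initial_instances>64</initial_instances>
    <max_samples_per_instance>8</max_samples_per_instance>
  </resource_limits>
</datawriter_qos>
```

This trades higher initial memory usage for deterministic behavior on time-critical paths, which is typically the right choice for the embedded and safety-critical systems mentioned above.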

For keyed Topics, the max_samples_per_instance field in this policy represents the maximum number of DDS samples with the same key that are allowed to be stored by a DataWriter (in the DataWriter's queue) or by a DataReader (in the DataReader's output queue—that is, the queue from which the application takes/reads samples). The max_samples_per_instance field is a logical limit; the hard physical limit is determined by max_samples. Because the theoretical number of instances may be quite large (as set by max_instances), you may not want Connext to allocate enough memory to hold the maximum number of DDS samples per instance for all possible instances (max_samples_per_instance * max_instances): during normal operation, the application will never need to hold that much data for the Entity.

So it is possible that an Entity will hit the physical limit max_samples before it hits the max_samples_per_instance limit for a particular instance. However, Connext must be able to store max_samples_per_instance for at least one instance. Therefore, max_samples_per_instance must be <= max_samples.

If a keyed data type is not used, there is only a single instance of the Topic, so max_samples_per_instance must equal max_samples.

Once a physical or logical limit is reached, how Connext handles new DDS samples being sent or received by a DataWriter or DataReader is determined by the 47.12 HISTORY QosPolicy. With the DDS_KEEP_ALL_HISTORY_QOS setting, that behavior is closely tied to whether or not a reliable connection is being maintained.

Although you can set the RESOURCE_LIMITS QosPolicy on Topics, its value can only be used to initialize the RESOURCE_LIMITS QosPolicies of a DataWriter or DataReader. It does not directly affect the operation of Connext; see 18.1.3 Setting Topic QosPolicies.

47.22.1 Configuring Resource Limits for Asynchronous DataWriters

When using an asynchronous Publisher, a call to write() that is blocked due to a resource limit remains blocked until the timeout period expires, which prevents other threads from freeing the resource. To avoid this situation, make sure that the DomainParticipant's outstanding_asynchronous_sample_allocation in the 44.4 DOMAIN_PARTICIPANT_RESOURCE_LIMITS QosPolicy (DDS Extension) is always greater than the sum of all asynchronous DataWriters' max_samples.
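As a hedged sketch (the XML placement of the field is assumed to follow the usual Connext DomainParticipant QoS schema; the values are illustrative), for an application with two asynchronous DataWriters whose max_samples are 256 and 128:

```xml
<!-- Illustrative sketch only. Two asynchronous DataWriters with
     max_samples of 256 and 128 give a sum of 384, so the allocation
     below is chosen to be strictly greater than that. -->
<domain_participant_qos>
  <resource_limits>
    <outstanding_asynchronous_sample_allocation>512</outstanding_asynchronous_sample_allocation>
  </resource_limits>
</domain_participant_qos>
```

The key point is the inequality, not the particular numbers: keep the allocation above the sum of all asynchronous DataWriters' max_samples so that write() cannot block on this resource.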

47.22.2 Example

If you want to be able to store max_samples_per_instance for every instance, then you should set

max_samples >= max_instances * max_samples_per_instance

But if you want to save memory and you do not expect the running application to ever reach max_instances active instances, then you may use a smaller value for max_samples.

In any case, there is a lower limit for max_samples:

max_samples >= max_samples_per_instance

If the 47.12 HISTORY QosPolicy’s kind is set to KEEP_LAST, then you should set:

max_samples_per_instance = HISTORY.depth
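Putting these formulas together with concrete, purely illustrative numbers (element names follow the usual Connext XML QoS profile schema): suppose max_instances = 100 and the HISTORY kind is KEEP_LAST with depth = 8, so max_samples_per_instance = 8. Storing the full history of every instance then requires max_samples >= 100 * 8 = 800, while the absolute lower limit is only max_samples >= 8:

```xml
<!-- Worked sketch of 47.22.2 with illustrative numbers. -->
<datareader_qos>
  <history>
    <kind>KEEP_LAST_HISTORY_QOS</kind>
    <depth>8</depth>
  </history>
  <resource_limits>
    <max_instances>100</max_instances>
    <!-- Matches HISTORY.depth, per 47.22.2. -->
    <max_samples_per_instance>8</max_samples_per_instance>
    <!-- 100 * 8 = 800 guarantees room for every instance's full history.
         Any value in [8, 800) saves memory but risks hitting the physical
         max_samples limit before max_samples_per_instance. -->
    <max_samples>800</max_samples>
  </resource_limits>
</datareader_qos>
```

Choosing a max_samples between the two bounds is the memory-saving trade-off described above: the Entity may hit the physical limit before any single instance reaches its logical limit.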

47.22.3 Properties

This QosPolicy cannot be modified after the Entity is enabled.

There are no requirements that the publishing and subscribing sides use compatible values.

47.22.4 Related QosPolicies

47.22.5 Applicable Entities

47.22.6 System Resource Considerations

Larger initial_* numbers will increase the initial system memory usage. Larger max_* numbers will increase the worst-case system memory usage.

Increasing instance_hash_buckets speeds up instance-lookup time but also increases memory usage.