Configuring Resource Limits in Connext DDS Micro

Unlike Connext DDS Professional, Connext DDS Micro only supports finite resource limits to ensure that memory growth is bounded. Configuring these limits is therefore an important part of the system design. Here are some considerations to help you determine the optimal resource limits:

  • How many DomainParticipants are needed in the system

  • How many DataReaders and DataWriters are needed (for each DomainParticipant and across the whole system)

  • How many DataWriters each DataReader needs to receive samples from

  • How many DataReaders each DataWriter needs to communicate with

  • How many instances are published or received by a given DataWriter or DataReader

  • The rate at which DataWriters publish samples

  • The rate at which DataReaders process samples

 

This article provides an explanation of some of the resource limits and how they affect the behavior of different DDS entities. Note that unless highlighted otherwise, these resource limit QoS settings are not specific to Connext DDS Micro. For more details, please refer to the section “Configuring Resource Limits” in the corresponding API reference manual (for example, the C API Reference Manual for Connext DDS Micro 3.0.3).

 

DomainParticipantFactory

max_participants (Connext DDS Micro only): Limits the number of local DomainParticipants that can be created.
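As a sketch, this limit can be set on the DomainParticipantFactory QoS before creating any participants. The field names below are taken from the Micro C API as the author understands it and should be verified against the API reference for your version:

```c
/* Sketch: bound the number of local DomainParticipants.
 * Assumes the Connext DDS Micro C API; check field names against
 * the API reference for your Micro version. */
DDS_DomainParticipantFactory *factory =
    DDS_DomainParticipantFactory_get_instance();
struct DDS_DomainParticipantFactoryQos factory_qos =
    DDS_DomainParticipantFactoryQos_INITIALIZER;

DDS_DomainParticipantFactory_get_qos(factory, &factory_qos);

/* Only this many local DomainParticipants can be created */
factory_qos.resource_limits.max_participants = 2;

DDS_DomainParticipantFactory_set_qos(factory, &factory_qos);
```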

 

DomainParticipant

The DomainParticipant resource limits affect the discovery process as well as local entity creation.

  • local_<entity>_allocation: Limits the number of entities (Topics, Publishers, Subscribers, DataReaders and DataWriters) that can be created within the participant.

  • remote_<entity>_allocation: Limits the number of entities that can be discovered by the participant. Note that “remote” does not refer to other machines or even other applications but simply entities that need to be discovered, i.e., the ones belonging to other domain participants.

Both local_<entity>_allocation and remote_<entity>_allocation have a significant impact on memory allocation.

For remote_participant_allocation, it is important to take into account that if a remote participant loses connection, its resources can only be reused once the liveliness loss is detected. This may prevent the discovery of a new participant for some time if this resource limit is too restrictive.

  • matching_writer_reader_pair_allocation and matching_reader_writer_pair_allocation: Limit the number of matches between a local DataWriter and (remote or local) DataReaders, and between a local DataReader and (remote or local) DataWriters, respectively.
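The DomainParticipant resource limits above can be sized from a worst-case count of local entities, discoverable remote entities, and endpoint matches. The following is a sketch, assuming the Micro C API field names (verify them against the API reference for your version):

```c
/* Sketch: sizing DomainParticipant resource limits before creation.
 * All field names assumed from the Connext DDS Micro C API. */
struct DDS_DomainParticipantQos dp_qos =
    DDS_DomainParticipantQos_INITIALIZER;

DDS_DomainParticipantFactory_get_default_participant_qos(factory, &dp_qos);

/* Local entities this participant will create */
dp_qos.resource_limits.local_topic_allocation      = 4;
dp_qos.resource_limits.local_publisher_allocation  = 1;
dp_qos.resource_limits.local_subscriber_allocation = 1;
dp_qos.resource_limits.local_writer_allocation     = 4;
dp_qos.resource_limits.local_reader_allocation     = 4;

/* Remote entities that must be discoverable. Leave headroom for
 * remote_participant_allocation: a restarted participant cannot be
 * rediscovered until liveliness loss frees the old entry. */
dp_qos.resource_limits.remote_participant_allocation = 8;
dp_qos.resource_limits.remote_writer_allocation      = 16;
dp_qos.resource_limits.remote_reader_allocation      = 16;

/* Worst-case matches between local and (remote or local) endpoints */
dp_qos.resource_limits.matching_writer_reader_pair_allocation = 32;
dp_qos.resource_limits.matching_reader_writer_pair_allocation = 32;

DDS_DomainParticipant *participant =
    DDS_DomainParticipantFactory_create_participant(
        factory, 0 /* domain id */, &dp_qos, NULL, DDS_STATUS_MASK_NONE);
```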

 

DataReaders and DataWriters

When these entities are created, resources are allocated to maintain states for instances and for matched DataReaders or DataWriters, and to accommodate the DataWriter and DataReader history queues. Some of the resource limits restrict the amount of memory that is allocated while others determine how the resources are distributed among entities and instances.

  • max_samples: Restricts the number of samples that can be kept in any one DataReader or DataWriter queue.

  • max_instances: Sets the maximum number of instances that can be managed. For a DataWriter, this is the maximum number of instances it can publish while for a DataReader it is the maximum number of instances it can receive samples for (even if the matched DataWriters publish samples for more instances).

  • max_samples_per_instance: Limits the number of samples that can be kept in the queue per instance (less than or equal to max_samples). This resource limit does not affect memory allocation (unlike max_samples) but rather determines how the resources for the DataReader or DataWriter queues are distributed among the different instances.

  • max_remote_writers (DataReader only): Limits the number of DataWriters that a given DataReader can receive data from. If there are more DataWriters than max_remote_writers, the DataReader will not receive samples from all of them.

  • max_remote_readers (DataWriter only): Limits the number of DataReaders a DataWriter can communicate with. If there are more DataReaders than max_remote_readers, the DataWriter will not send samples to all of them.

  • max_remote_writers_per_instance (DataReader only): Limits the number of remote DataWriters that a DataReader maintains state for per instance. If all max_remote_writers_per_instance DataWriters tracked by the DataReader stop updating an instance, the instance state transitions to NOT_ALIVE_NO_WRITERS even if other DataWriters (not tracked by the DataReader at that time) are still updating the instance. Once a new sample is received, the instance state transitions back to ALIVE.

  • max_samples_per_remote_writer (DataReader only): Limits the maximum number of samples that can be cached per remote DataWriter. This limit is useful for preventing one DataWriter from filling up the queue and starving other DataWriters.
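Putting the endpoint limits together, a DataReader configuration might look like the sketch below. Field names are assumed from the Micro C API (resource_limits for the shared queue limits, reader_resource_limits and writer_resource_limits for the reader- and writer-specific ones) and should be checked against the API reference:

```c
/* Sketch: DataReader and DataWriter queue sizing.
 * Field names assumed from the Connext DDS Micro C API. */
struct DDS_DataReaderQos dr_qos = DDS_DataReaderQos_INITIALIZER;
DDS_Subscriber_get_default_datareader_qos(subscriber, &dr_qos);

dr_qos.resource_limits.max_samples             = 64;
dr_qos.resource_limits.max_instances           = 8;
dr_qos.resource_limits.max_samples_per_instance = 8;  /* <= max_samples */

/* Reader-specific limits */
dr_qos.reader_resource_limits.max_remote_writers              = 4;
dr_qos.reader_resource_limits.max_remote_writers_per_instance = 2;
/* Prevent one DataWriter from filling the queue and starving others */
dr_qos.reader_resource_limits.max_samples_per_remote_writer   = 32;

/* Writer side: bound how many DataReaders one DataWriter serves */
struct DDS_DataWriterQos dw_qos = DDS_DataWriterQos_INITIALIZER;
DDS_Publisher_get_default_datawriter_qos(publisher, &dw_qos);
dw_qos.writer_resource_limits.max_remote_readers = 8;
```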

 

Further aspects to consider when configuring resource limits:

  • Tools like Admin Console create DDS entities that need to be accounted for in the resource limits of the Connext DDS Micro applications. More information about integrating RTI Connext DDS Micro 3.0.3 with RTI tools is available on the RTI Community website.

  • The resource limits affecting the DataReader and DataWriter queues interact with the History and Reliability QoS settings. For best-effort communication, resources can be reused as soon as the write call returns (DataWriter) or the loan is returned (DataReader). For reliable communication, max_samples and max_samples_per_instance determine how many unacknowledged samples can be kept in the DataWriter queue (per instance). When this limit is reached, keep-last history replaces the oldest (possibly unacknowledged) sample, while keep-all history does not replace a sample until it has been acknowledged by all known DataReaders. The same distinction applies on the reader side: when a reliable DataReader reaches its history depth or max_samples_per_instance, keep-last history replaces the oldest sample, whereas keep-all history rejects the new one.

  • A DataReader's max_samples limit can be hit even with best-effort reliability. This can happen if a WaitSet or polling is used and loaned samples are returned more slowly than new samples arrive. In that case, new samples cannot replace older ones in the queue until the loan is returned.
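The take/return-loan pattern that avoids this looks roughly as follows. The type-specific names (FooSeq, FooDataReader_take, and so on) are placeholders generated from a hypothetical IDL type named Foo:

```c
/* Sketch: return loans promptly so DataReader queue slots can be
 * reused. FooSeq/FooDataReader_* are placeholders for the code
 * generated from your IDL type. */
struct FooSeq samples          = DDS_SEQUENCE_INITIALIZER;
struct DDS_SampleInfoSeq infos = DDS_SEQUENCE_INITIALIZER;
DDS_ReturnCode_t retcode;

retcode = FooDataReader_take(reader, &samples, &infos,
                             DDS_LENGTH_UNLIMITED,
                             DDS_ANY_SAMPLE_STATE,
                             DDS_ANY_VIEW_STATE,
                             DDS_ANY_INSTANCE_STATE);
if (retcode == DDS_RETCODE_OK)
{
    DDS_Long i;
    for (i = 0; i < FooSeq_get_length(&samples); i++)
    {
        /* Process sample i. Keep this fast: while the loan is
         * outstanding, these queue slots cannot hold new samples. */
    }
    /* Return the loan as soon as processing is done */
    FooDataReader_return_loan(reader, &samples, &infos);
}
```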

  • A DataWriter writing large samples with asynchronous publish mode (supported in Connext DDS Micro 3.0.0 and above) may run out of resources if the write() API is called faster than the samples are sent. This is because resources are only freed once the complete sample has been sent.