How much shared memory is required by RTI Connext applications?

Note: Applies to RTI Connext 4.x and above.

By default, every time you create a DDS participant with SHMEM transport enabled, the middleware creates two shared memory segments: one to receive discovery traffic and one to receive user traffic.

Each shared memory segment contains a queue of messages whose size is configured using two transport parameters:

  • received_message_count_max: Defines the maximum number of messages that the queue can hold
  • receive_buffer_size: Defines the maximum number of bytes (across messages) that the queue can hold

If a DataWriter writes samples into the shared memory queue faster than the DataReader can read them, new messages will be dropped. If the middleware is configured in BEST_EFFORT mode, this will result in sample loss. If the middleware is configured in RELIABLE mode, this will result in repair traffic.

Notice that the above behavior is similar to that of the receive and send socket buffers in the UDPv4 transport. If your socket buffer sizes are small, messages may be dropped before being delivered to the middleware. With UDPv4, you can use command-line tools such as netstat to determine whether UDPv4 packets are being dropped. With the shared memory transport, a log message is written instead:

NDDS_Transport_Shmem_send:failed to add data. shmem queue for port 0x1cf4 is full (received_message_count_max=32, receive_buffer_size=73728). Try to increase queue resource limits.

Prior to Connext 5.0.0, this message was printed at the Informational verbosity level. In Connext 5.0.0, the verbosity was changed to Warning to facilitate debugging. However, when a DataReader is stopped ungracefully, the DataWriter may fill up the queue, and the above message may be printed repeatedly until the DataWriter detects, through the liveliness mechanism, that the DataReader is gone. Therefore, after Connext 5.0.x the verbosity was changed back to Informational.

Note that the behavior of the shared memory transport has not changed; only the verbosity of the NDDS_Transport_Shmem_send failure message has.

In general, if you want to reduce the number of samples that are dropped by the shared memory transport you have multiple options:

  1. Increase the size of the shared memory queue by adjusting received_message_count_max and receive_buffer_size. Notice that, by default, the size of the shared memory segment is computed automatically based on the value of receive_buffer_size according to the following formula:
    size = receive_buffer_size + message_size_max + received_message_count_max * fixedOverhead
    In many cases, you will only have to adjust received_message_count_max.
  2. For reliable communications, you can also throttle a DataWriter using min_send_window_size and max_send_window_size.
  3. You can use a combination of 1 and 2.
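As a sketch of option 2, the send window limits can be set in the DataWriter's DATA_WRITER_PROTOCOL QoS policy. The window sizes below are example values only; appropriate values depend on your sample sizes and rates:

```xml
<!-- Illustrative sketch: throttling a reliable DataWriter by bounding its
     send window. Window sizes shown are example values. -->
<datawriter_qos>
  <protocol>
    <rtps_reliable_writer>
      <!-- The writer blocks when this many unacknowledged samples are
           outstanding, pacing it to the readers -->
      <min_send_window_size>20</min_send_window_size>
      <max_send_window_size>40</max_send_window_size>
    </rtps_reliable_writer>
  </protocol>
</datawriter_qos>
```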

The memory sizes of these segments are defined in the RTI Connext Core Libraries and Utilities User's Manual, in the table, Properties for the Builtin Shared Memory Transport (receive_buffer_size property).

In addition, if a DataReader specifies a new unicast input port via the TRANSPORT_UNICAST QoS policy (ReaderQos.unicast[0].port), another shared memory segment will be created, sized as described above for participant initialization.
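For illustration, a DataReader-specific unicast receive port can be set in XML as sketched below. The port number is an arbitrary example; using it causes an additional shared memory segment to be created for that reader:

```xml
<!-- Illustrative sketch: assigning a dedicated unicast receive port to a
     DataReader via the TRANSPORT_UNICAST QoS policy. Port 7411 is an
     example value only. -->
<datareader_qos>
  <unicast>
    <value>
      <element>
        <receive_port>7411</receive_port>
      </element>
    </value>
  </unicast>
</datareader_qos>
```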