Which QoS parameters are important to tune for throughput testing?

Note: Applies to RTI Connext 4.x and above.

An example throughput test is provided in NDDSHOME/example/[CPP or Java]/performance/throughput. 

The most common problem encountered in performance testing is a publisher that is sending too fast for the receiver to process. Symptoms of this include a publisher losing the subscription, poor performance numbers, or lost samples. These problems can be addressed by tuning some of the default QoS parameters. 

In this discussion, we assume that you are testing for reliable throughput, not best-effort. 

You need to configure the writer so that it does not completely fill the reader's queues and buffers before the writer's own send queue becomes full. If this is not done, the writer can get very far ahead of the reader, to the point where the reader has a full receive queue to process while the writer is already blocked on its own full queue. When this happens, if the reader takes too long to process its queue, and/or the fast_heartbeat_period is longer than about half of the writer's max_blocking_time, the writer will time out the subscription. 

For good performance, the QoS settings should be such that: 

max_blocking_time >= time to go from full to low water mark
max_samples * message_size_max <= receive_buffer_size

You should also make sure that heartbeats_per_max_samples is set so that: 

(max_samples / heartbeats_per_max_samples) * message_size_max < receive_buffer_size / 2

To make the writer block for a sufficient period, increase the writer's reliability.max_blocking_time so that it is at least as long as the time the send queue needs to drain from the high watermark down to the low watermark. 

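The right value depends on how quickly your readers can acknowledge samples; as a sketch (the 1-second value below is an assumption, not a recommendation):

```cpp
// Assumed example value: choose a blocking time longer than the
// expected time to drain from the high to the low watermark.
datawriter_qos.reliability.max_blocking_time.sec = 1; 
datawriter_qos.reliability.max_blocking_time.nanosec = 0; 
```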
The default value for heartbeats_per_max_samples (8) is typically a good number, provided that the above inequalities are met. 

Suppose you calculated that max_samples can be 64. Then to match the above criteria, you should set the following QoS, assuming an unkeyed topic: 

datawriter_qos.resource_limits.max_samples = 64; 
datawriter_qos.resource_limits.max_instances = 1; 
datawriter_qos.resource_limits.max_samples_per_instance = 64; 
datawriter_qos.resource_limits.initial_samples = 64; 
datawriter_qos.resource_limits.initial_instances = 1; 

The high_watermark and fast_heartbeat_period should be set in such a way that the system has an opportunity to repair lost packets before the writer times out. Given that max_samples is set to 64, some good settings are: 

datawriter_qos.protocol.rtps_reliable_writer.high_watermark = 40; 
datawriter_qos.protocol.rtps_reliable_writer.low_watermark = 20; 
datawriter_qos.protocol.rtps_reliable_writer.fast_heartbeat_period.sec = 0; 
datawriter_qos.protocol.rtps_reliable_writer.fast_heartbeat_period.nanosec = 10000000; //10ms 
datawriter_qos.protocol.rtps_reliable_writer.heartbeats_per_max_samples = 8;

On the DataWriter, it is also good to minimize the delay in responding to NACKs, so that lost samples are repaired sooner:

datawriter_qos.protocol.rtps_reliable_writer.min_nack_response_delay.sec = 0; 
datawriter_qos.protocol.rtps_reliable_writer.min_nack_response_delay.nanosec = 0; 
datawriter_qos.protocol.rtps_reliable_writer.max_nack_response_delay.sec = 0; 
datawriter_qos.protocol.rtps_reliable_writer.max_nack_response_delay.nanosec = 0;

On the DataReader, it is also good to minimize the delay in responding to heartbeats, so that lost samples are requested sooner:

datareader_qos.protocol.rtps_reliable_reader.min_heartbeat_response_delay.sec = 0; 
datareader_qos.protocol.rtps_reliable_reader.min_heartbeat_response_delay.nanosec = 0; 
datareader_qos.protocol.rtps_reliable_reader.max_heartbeat_response_delay.sec = 0; 
datareader_qos.protocol.rtps_reliable_reader.max_heartbeat_response_delay.nanosec = 0;