Data transfer time.

Last seen: 2 years 8 months ago · Joined: 10/11/2017 · Posts: 9

I'm using Connext DDS Pro 5.3.0 on Linux (Red Hat, gcc 4.4.7) and VS2013.
On the same Linux PC I run two RTI components: one as a provider (writing 30 MB of data every 66 ms and 10 KB of data every 33 ms simultaneously) and the second as a subscriber to both data streams.
I'm using the "BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow" generic profile (my profile is attached to this topic). The subscriber receives data through a WaitSet.
With a sequence of 500 frames (30 MB) and 1000 frames (10 KB), checking the RTI source_timestamp and reception_timestamp stored in the DDS_SampleInfoSeq struct at each frame reception, I get the following delta times (reception - source):

  • For 30 MB data: average = 49 ms, with 42 ms min and 62 ms max.
  • For 10 KB data: average = 30 ms, with 0.2 ms min and 58 ms max.
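For reference, this is how such a (reception - source) delta can be computed in milliseconds. This is a standalone sketch: `TimeStamp` and `deltaMillis` are hypothetical stand-ins mirroring the sec/nanosec fields that DDS_Time_t carries, not part of the Connext API.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for DDS_Time_t (same sec / nanosec layout).
struct TimeStamp {
    long sec;
    unsigned long nanosec;
};

// Latency in milliseconds from timestamp 'from' to timestamp 'to'.
// Each field is converted to double before subtracting, so an
// unsigned nanosec wrap-around cannot occur.
inline double deltaMillis(const TimeStamp& from, const TimeStamp& to) {
    double sec  = static_cast<double>(to.sec) - static_cast<double>(from.sec);
    double nsec = static_cast<double>(to.nanosec) - static_cast<double>(from.nanosec);
    return sec * 1000.0 + nsec / 1000000.0;
}
```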

Does that look normal to you?

In addition, at each data reception I store the current time (just after the DDSDataReader::take command) and compute the delta between this time and reception_timestamp; I get the following delta times:

  • For 30 MB data: average = 26 ms, with 16 ms min and 29 ms max.
  • For 10 KB data: average = 0.4 ms, with 0.2 ms min and 18 ms max.

Does this delay between reception_timestamp and the WaitSet notification look normal to you?

In fact, I'm a little surprised by those values. How can I proceed to improve those timings?



File attachment: qos_profiles.xml (15.69 KB)
Last seen: 2 months 3 weeks ago · Joined: 02/11/2016 · Posts: 143

Hey Patrice,


To make sure:

1. Which QoS profile are you using? The file you attached has multiple profiles listed in it.

2. Can you share the code you use to make the measurements? If not, here are some questions about how you are testing:

2.1. Are you sending a burst of many messages every 33 milliseconds, or a single message that contains some sequence?

2.2. Do you have any monitoring of CPU/RAM while you are running the tests?

2.3. It is implied, but are you using the C API?

2.4. Is it possible for you to use a newer Linux version? I believe 4.4.7 is no longer formally supported by RTI.


My interest is also piqued!


Good luck,


Last seen: 2 years 8 months ago · Joined: 10/11/2017 · Posts: 9

Hey Roy,

Thanks for the reply.

1. The QoS I use:

  • For the "Publisher" QoS I use:
    • <datawriter_qos topic_filter="SequenceData_*" base_name="DataWriter_1k5"/> for the 10 KB data sent every 33 ms.
    • <datawriter_qos topic_filter="ProcessedPixels30MB_*" base_name="DataWriter_30MB"/> for the 30 MB data sent every 66 ms.
  • For the "Subscriber" QoS I use:
    • <datareader_qos topic_filter="SequenceData_*" base_name="DataReader_1k5"/> for the 10 KB data received every 33 ms.
    • <datareader_qos topic_filter="ProcessedPixels30MB_*" base_name="DataReader_30MB"/> for the 30 MB data received every 66 ms.
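For context, entries like these sit inside a profile in qos_profiles.xml roughly as follows. This is only a sketch of the usual Connext XML layout: the library and profile names are placeholders, and only the topic_filter/base_name values come from my setup.

```xml
<qos_library name="MyLib">
  <qos_profile name="MyLargeDataProfile"
               base_name="BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow">
    <!-- 10 KB stream: applies to topics matching SequenceData_* -->
    <datawriter_qos topic_filter="SequenceData_*" base_name="DataWriter_1k5"/>
    <datareader_qos topic_filter="SequenceData_*" base_name="DataReader_1k5"/>
    <!-- 30 MB stream: applies to topics matching ProcessedPixels30MB_* -->
    <datawriter_qos topic_filter="ProcessedPixels30MB_*" base_name="DataWriter_30MB"/>
    <datareader_qos topic_filter="ProcessedPixels30MB_*" base_name="DataReader_30MB"/>
  </qos_profile>
</qos_library>
```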

2. To make the measurements, I save the time at different steps:

  • Subscriber:
    • As data reception in my code can be done through either a WaitSet or a DDSDataReaderListener, and DDSDomainParticipant::get_current_time() fails when called from a callback, I get the current time using gettimeofday as follows:

inline DDS_Time_t getDDSTimeofday() {
    static const DDS_UnsignedLong USEC_to_NANOSEC = 1000UL;

    struct timeval tv;
    if (gettimeofday(&tv, NULL) != 0) return DDS_TIME_ZERO;
    DDS_Time_t crtTime;
    crtTime.sec = tv.tv_sec;
    crtTime.nanosec = tv.tv_usec * USEC_to_NANOSEC;
    return crtTime;
}
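One caveat with gettimeofday is that it is limited to microsecond resolution. If finer timestamps would help the analysis, POSIX clock_gettime(CLOCK_REALTIME) gives nanoseconds. This is a standalone sketch, not tied to the DDS types; on older glibc (such as the RHEL 6 toolchain used here) you may need to link with -lrt.

```cpp
#include <ctime>
#include <cstdint>

// Current wall-clock time in nanoseconds since the epoch, using
// clock_gettime (nanosecond resolution, vs. microseconds for
// gettimeofday). Returns 0 on failure.
inline int64_t nowNanos() {
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) return 0;
    return static_cast<int64_t>(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
}
```

Since both processes run on the same PC, CLOCK_MONOTONIC would also work for pure delta measurements and is immune to wall-clock adjustments; but DDS timestamps are normally wall-clock based, so comparisons against reception_timestamp need CLOCK_REALTIME.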


    • So here is the code of the on_data_available() function where I save the step times:

template <typename Foo>
void DataReader<Foo>::on_data_available(DDSDataReader* reader)
{
    typename Foo::DataReader* dataReader = Foo::DataReader::narrow(reader);
    if (dataReader == NULL) {
        LOG_ERROR(Topic<Foo>::_topicName << ": DataReader narrow error");
        return;
    }

    typename Foo::Seq dataSeq;
    DDS_SampleInfoSeq infoSeq;
    DDS_ReturnCode_t retCode = dataReader->take(
        dataSeq, infoSeq, DDS_LENGTH_UNLIMITED,
        DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);
    if (retCode == DDS_RETCODE_NO_DATA) {
        LOG_ERROR(Topic<Foo>::_topicName << ": dataReader->take error=" << retCode);
        return;
    }
    else if (retCode != DDS_RETCODE_OK) {
        LOG_ERROR(Topic<Foo>::_topicName << ": take error " << retCode);
        return;
    }

    for (int i = 0; i < dataSeq.length(); ++i) {
        if (_seqDataLengthMax < dataSeq.length()) {
            _seqDataLengthMax = dataSeq.length();
            LOG_INFO(Topic<Foo>::_topicName << ": _seqDataLengthMax=" << _seqDataLengthMax
                     << " dataSeq.maximum=" << dataSeq.maximum());
        }

        if (infoSeq[i].valid_data) {
            if (infoSeq[i].view_state == DDS_NEW_VIEW_STATE) {
                LOG_INFO(Topic<Foo>::_topicName << ": Found new instance; _patientName = "
                         << dataSeq[i]._patientName);
            }

            // Get current time
            // ----------------
            DDS_Time_t crtTime = getDDSTimeofday();

            SavedData savedData;
            savedData._cbTS = DDS_Time_t::from_nanos(dataSeq[i]._header._cbNanoSec);
            savedData._wrTS = DDS_Time_t::from_nanos(dataSeq[i]._header._writeNanoSec);
            savedData._txTS = infoSeq[i].source_timestamp;
            savedData._rxTS = infoSeq[i].reception_timestamp;
            savedData._rdTS = crtTime;
        }
    }

    dataReader->return_loan(dataSeq, infoSeq);
}

    • For each received sample I compute the different delta times (based on the savedData structure above). I attached a results file to this topic, with the following meaning for the computed times:
      • Cb = Send callback time (DDSDomainParticipant::get_current_time() called on the Publisher side).
      • Wr = Write time, taken just before the FooDataWriter::write() call (DDSDomainParticipant::get_current_time() called on the Publisher side).
      • Tx = DDS_SampleInfo::source_timestamp => RTI middleware time.
      • Rx = DDS_SampleInfo::reception_timestamp => RTI middleware time.
      • Rd = Read time, taken just after FooDataReader::take (see above).
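A small accumulator like the following can produce the min/max/average figures from those deltas. This is a self-contained sketch; LatencyStats is a hypothetical helper, not part of the Connext API.

```cpp
#include <algorithm>
#include <limits>

// Running min / max / average over latency samples, in milliseconds.
struct LatencyStats {
    double minMs;
    double maxMs;
    double sumMs;
    unsigned long count;

    LatencyStats()
        : minMs(std::numeric_limits<double>::max()),
          maxMs(-std::numeric_limits<double>::max()),
          sumMs(0.0),
          count(0) {}

    // Fold one latency sample into the running statistics.
    void add(double ms) {
        minMs = std::min(minMs, ms);
        maxMs = std::max(maxMs, ms);
        sumMs += ms;
        ++count;
    }

    double average() const { return count ? sumMs / count : 0.0; }
};
```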

2.1: Data is sent periodically: every 33 ms for the 10 KB and every 66 ms for the 30 MB.

2.2: No

2.3: I'm using the C++ API.


  • I ran the tests (results attached to this topic) on an HP Z820, with "Red Hat Enterprise Workstation release 6.7 (Santiago)".
  • The software is compiled on a VMware Helios 6.7 VM (Red Hat 6), with "TARGET_ARCH = x64Linux2.6gcc4.4.5" selected for rtiddsgen (gcc version: gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16)).

Hope I'm clear enough.



Last seen: 2 months 3 weeks ago · Joined: 02/11/2016 · Posts: 143

Hey Patrice,


I'm not an expert on flow controllers, but I think the flow controller you set up is slowing you down.

Perhaps you could give some builtin QoS profiles a try, for example:

  • Generic.StrictReliable.LargeData.FastFlow
  • Generic.KeepLastReliable.LargeData.FastFlow
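A minimal profile that inherits everything from a builtin, with no overrides, can give a clean baseline to compare against. A sketch (library and profile names here are only illustrative):

```xml
<qos_library name="TestLib">
  <!-- Inherit the builtin large-data profile unchanged, and make it
       the default so no code changes are needed for the test run. -->
  <qos_profile name="BaselineLargeData"
               base_name="BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow"
               is_default_qos="true"/>
</qos_library>
```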


Last seen: 2 years 8 months ago · Joined: 10/11/2017 · Posts: 9

Hi Roy,

Neither am I. I'm already using the "Generic.StrictReliable.LargeData.FastFlow" builtin QoS. I did some tests with different flow controllers, but without real success.