push_on_write FALSE with multi-channel datawriters issue


Hello,

I tried to use multi-channel DataWriters with the protocol QoS setting push_on_write set to FALSE (in order to get some kind of batching).

This does not seem to work, and it leads to erratic behavior on the receiving side when I configure several channels for a DataWriter.

When I switch push_on_write to TRUE, everything goes fine.

Are there any limitations on this configuration setting?
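
For reference, here is roughly the shape of the configuration I am describing (a simplified sketch: the filter field "id" and the multicast addresses are placeholders, not my real values):

<datawriter_qos>
    <protocol>
        <!-- do not push data on write; data flows via HB/NACK repairs -->
        <push_on_write>false</push_on_write>
    </protocol>
    <multi_channel>
        <channels>
            <!-- two channels, split on a key field of the type -->
            <element>
                <filter_expression>id &lt; 100</filter_expression>
                <multicast_settings>
                    <element>
                        <receive_address>239.255.100.1</receive_address>
                    </element>
                </multicast_settings>
            </element>
            <element>
                <filter_expression>id &gt;= 100</filter_expression>
                <multicast_settings>
                    <element>
                        <receive_address>239.255.100.2</receive_address>
                    </element>
                </multicast_settings>
            </element>
        </channels>
    </multi_channel>
</datawriter_qos>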

Boris.

Attachment: qos.xml (2.47 KB)
Howard:

If you disable push_on_write, performance is determined entirely by the reliability protocol's ability to send NACKs and request that the data be sent (the mechanism normally used to repair lost data).

So, to tune the performance, you'll have to understand the DDS reliability protocol and the QoS configuration parameters that control the frequency of heartbeats (HBs) and thus of NACKs being sent.

You can read all about Reliable communications here:

https://community.rti.com/static/documentation/connext-dds/6.0.1/doc/manuals/connext_dds/html_files/RTI_ConnextDDS_CoreLibraries_UsersManual/index.htm#UsersManual/reliable.htm#reliable_1394042328_873873%3FTocPath%3DPart%25203%253A%2520Advanced%2520Concepts%7C11.%2520Reliable%2520Communications%7C_____0
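
To give a concrete starting point, the relevant knobs live under the DataWriter's protocol QoS; a sketch (the values are illustrative only, not a recommendation):

<datawriter_qos>
    <protocol>
        <push_on_write>false</push_on_write>
        <rtps_reliable_writer>
            <!-- periodic HB rate while samples remain unacknowledged -->
            <heartbeat_period>
                <sec>0</sec>
                <nanosec>100000000</nanosec><!-- 100 ms -->
            </heartbeat_period>
            <!-- HB rate once unacked samples reach high_watermark -->
            <fast_heartbeat_period>
                <sec>0</sec>
                <nanosec>5000000</nanosec><!-- 5 ms -->
            </fast_heartbeat_period>
            <!-- how often an HB is piggybacked onto outgoing data -->
            <heartbeats_per_max_samples>100</heartbeats_per_max_samples>
            <!-- cap on bytes sent in a single NACK repair response -->
            <max_bytes_per_nack_response>10000</max_bytes_per_nack_response>
        </rtps_reliable_writer>
    </protocol>
</datawriter_qos>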

 

Howard:

So, I just noticed that you attached the XML file that I assume you're using to configure your QoS.

I note that the "SignalSampleInstances" profile, which I guess is the one that you're using to define your DataWriters/DataReaders for the push_on_write=false use case, has the following settings:

<history>
    <kind>KEEP_LAST_HISTORY_QOS</kind>
    <depth>1</depth>
</history>

So, is your datatype "keyed"?  If so, how many instances (unique key values) of your data do you expect to be sending at any one time?  Will you send more than one data sample per instance at a time?  When I say at a time, I mean do you expect a "batch" of data to include multiple samples for a single instance?

If your datatype is not keyed, i.e., there is no "@key" annotation for the data structure definition in your IDL file...then having a history depth of 1 will prevent DDS from actually keeping and thus sending more than 1 data sample at a time.  There is no "batching" effect that you would be able to see using "push_on_write=false" in this case.

There would only be 1 data sample stored (per instance) that would ever be repaired on a NACK.
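
For a keyed type where you do want several samples per instance to accumulate between NACKs, the writer history has to hold them; for example (the depth of 16 is just an illustration):

<history>
    <kind>KEEP_LAST_HISTORY_QOS</kind>
    <!-- keep multiple samples per instance so a NACK repair
         can carry more than one sample -->
    <depth>16</depth>
</history>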

<max_bytes_per_nack_response>10000</max_bytes_per_nack_response>

How big is a data sample?  The setting above limits the size of a repair packet to 10,000 bytes.  So if your data samples are around 10,000 bytes each, only one data sample would be sent per UDP packet being repaired in response to a NACK.  And thus, again, there would be no "batching" effect.

<heartbeats_per_max_samples>100</heartbeats_per_max_samples>

So, because you are setting neither <min/max_send_window_size> nor <max_samples> in ResourceLimits, DDS calculates the number of samples sent per piggyback HB as

100,000,000 / 100 (heartbeats_per_max_samples) = 1 million

So you would have to send 1 million samples before the protocol piggybacks an HB on a data sample.  Is this what you expected?
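
If you set the send window explicitly, the piggyback HB frequency is computed against the window instead; a sketch with illustrative numbers:

<rtps_reliable_writer>
    <!-- finite send window: an HB is piggybacked every
         32 / 8 = 4 samples -->
    <min_send_window_size>32</min_send_window_size>
    <max_send_window_size>32</max_send_window_size>
    <heartbeats_per_max_samples>8</heartbeats_per_max_samples>
</rtps_reliable_writer>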

<high_watermark>1</high_watermark>
<low_watermark>0</low_watermark>

<fast_heartbeat_period>
    <sec>0</sec>
    <nanosec>5000000</nanosec>
</fast_heartbeat_period>

So with the settings above, after you send a single sample, the protocol will start sending HBs at the fast_heartbeat_period, which is set to 5 ms.

5 milliseconds is a LONG time.  Is getting data every 5 milliseconds going to meet your performance requirements?

On Linux, you can set this value to as low as 1 ms.  On Windows...it's tricky...you would have to experiment and see if you can get Windows to wake up at 1 ms periods accurately.
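
For example, to try a 1 ms fast HB period (whether the OS actually wakes up that precisely is something you'd have to verify):

<fast_heartbeat_period>
    <sec>0</sec>
    <nanosec>1000000</nanosec><!-- 1 ms -->
</fast_heartbeat_period>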

 

Boris:

Thank you, Howard, for your detailed answer.

I am going to add more details:

    My topic is keyed.
    A typical send rate is ~50 different keys every 25 ms.
    A data sample is around 100 bytes.
    Getting data with a latency of 5 ms meets my requirements.

But my main issue was that I got erratic behavior when using MultiChannel together with TopicQueries and push_on_write set to FALSE.

After much testing, I understood that the issue was linked to the combined use of TopicQueries and MultiChannel (I still do not understand why...).

So I abandoned TopicQueries, and everything works well now, with only MultiChannel and push_on_write false.

Howard:

Well, I'm not sure that anyone has actually tested TopicQueries with push_on_write=false, nor Multi-Channel DataWriters with push_on_write=false, let alone TopicQueries + push_on_write=false + Multi-Channel DataWriters...

You can certainly file a bug with RTI's support team if you have a configuration that shows the problem and can be reproduced easily.