DDS memory limits for sequence of structures

I'm trying to send a large amount of data. The IDL file looks like:


const long MAX_BUFFER_SIZE = 1024 * 1024;

module vertexCloud {
    struct vertex {
        float x;
        float y;
        float z;
    };

    struct data {
        sequence<vertex, MAX_BUFFER_SIZE> vertexData;
    };
};
When I write the instance data, the publisher only gets on_subscription_matched and on_liveliness_changed, each fired just once.
Calling maximum() on vertexData in the instance data returns the MAX_LENGTH value (defined as 1024 * 1024, i.e. MAX_BUFFER_SIZE).
I'm also calling ensure_length(MAX_LENGTH, MAX_LENGTH);
But with MAX_LENGTH = 1024 * 64, I get on_data_available without problems (with a correspondingly smaller amount of data, of course).
For context: I'm using the standard hello_world implementation with new data fields, and I didn't apply any extra settings for sending large data. Now I'm not sure whether such settings are needed: on the one hand they should be required for more than 64 kB of data, yet it somehow works for a larger amount (we have 65536 items, each containing three floats).
jmorales:

Hi Zomg,

What QoS settings are you using?

My initial guess is that it has to be related to that. If the sample size is 1048576 * sizeof(vertex), that is far greater than ~64 kB. At that size you need to use asynchronous publishing (for reliable data), so when the app starts without that QoS set, Connext DDS should detect it and print an error message. However, if you are somehow using Best Effort, you will not get that error. In that case the sample will be fragmented into ~64 kB chunks and sent; if any of the fragments is lost, the sample never reaches the destination, which is very likely when the sample is big enough.

Does it seem like a possible scenario to you?
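For reference, the asynchronous publishing mentioned above is enabled on the DataWriter via the PUBLISH_MODE QoS policy, for example in an XML QoS profile (a sketch of just the relevant setting; the profile name is a placeholder, and in a real file this sits inside a qos_library element):

```xml
<qos_profile name="LargeDataProfile">
    <datawriter_qos>
        <publish_mode>
            <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
        </publish_mode>
    </datawriter_qos>
</qos_profile>
```

With reliable delivery, asynchronous publishing lets the writer's flow controller pace the fragments instead of blocking the write() call.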