Hi,
I'm trying to send a large amount of data. The IDL file looks like:

const long MAX_BUFFER_SIZE = 1024 * 1024;

module vertexCloud {
    struct vertex {
        float x;
        float y;
        float z;
    };

    struct data {
        sequence<vertex, MAX_BUFFER_SIZE> vertexData;
    };
};
Hi Zomg,
What QoS settings are you using?
My initial guess is that it has to be related to that. If the sample size is 1048576 * sizeof(vertex), it will be far greater than ~64 KB. At that size you need to use asynchronous publishing (for reliable data), so if you start the app without that QoS set, Connext DDS will detect it and print an error message. However, if you are somehow using Best Effort, you will not get that error. In that case the sample is fragmented into 64 KB chunks and sent; if any one of the fragments is lost, the sample will never reach the destination, which is very likely when the sample is that big.
Does it seem like a possible scenario to you?
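For reference, asynchronous publishing can be enabled on the DataWriter through an XML QoS profile; here is a minimal sketch (the library and profile names are hypothetical, the PUBLISH_MODE kind is the relevant part):

```xml
<!-- Hypothetical profile names; load this via USER_QOS_PROFILES.xml. -->
<qos_library name="VertexCloudLibrary">
    <qos_profile name="LargeDataProfile" is_default_qos="true">
        <datawriter_qos>
            <publish_mode>
                <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
            </publish_mode>
        </datawriter_qos>
    </qos_profile>
</qos_library>
```

With reliable data this large you would typically also attach a flow controller to pace the fragments, but the publish mode alone is what clears the startup error.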