Dynamic Type QoS for large data


I have an application that uses the DDS DynamicData APIs to register a type and publish a vector of data.  The application works fine when the vector is small (~2,000 uint16 elements).  However, when the size is increased to about 1,000,000 elements, I receive the error below.  I have tried adjusting various QoS policies, but none of them corrects the problem.  Can you recommend the proper QoS parameter to adjust?

Thanks.

Mark.

=======================================

DDS_DynamicData_set_ushort_array:buffer full trying to add field Vidhandle (id=0)
DDS_DynamicData_to_stream:internal error trying to add required members
PRESWriterHistoryDriver_initializeSample:!serialize
WriterHistoryMemoryPlugin_addEntryToSessions:!initialize sample
WriterHistoryMemoryPlugin_getEntry:!add virtual sample to sessions
WriterHistoryMemoryPlugin_addSample:!get entry
PRESWriterHistoryDriver_addWrite:!add_sample
PRESPsWriter_writeInternal:!collator addWrite

=====================================================

These are the QoS settings I tried changing:

    <participant_user_data_max_length>1843200</participant_user_data_max_length>
    <topic_data_max_length>1843200</topic_data_max_length>
    <publisher_group_data_max_length>1843200</publisher_group_data_max_length>
    <subscriber_group_data_max_length>1843200</subscriber_group_data_max_length>
    <writer_user_data_max_length>1843200</writer_user_data_max_length>
    <reader_user_data_max_length>1843200</reader_user_data_max_length>

 

rip

Hi,

None of those do what you think they do.  The "group_data", "user_data", and "topic_data" fields carry application-defined data that DDS itself never interprets (non-DDS, non-QoS, etc.: purely "application defined data").  Although they are set through QoS, their purpose is to let Discovery propagate system-level information around for purposes /other/ than DDS.  For example, in an information-assurance scenario where you need to pass mutual-authentication keys or certs, you can do so using the _data QoS fields.
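Just to illustrate what those fields are actually for (this won't help with your problem), here is a sketch of attaching application bytes to a participant in an XML profile.  The payload is purely hypothetical, and the comma-separated-octet syntax is my assumption; the exact format an XML QoS parser accepts for octet sequences is version-dependent, so check the XML QoS reference for your release:

    <participant_qos>
        <!-- Hypothetical application-defined payload (e.g. a key
             fingerprint), written as comma-separated octets. DDS only
             carries these bytes during Discovery; it never reads them. -->
        <user_data>
            <value>0x4d,0x59,0x4b,0x45,0x59</value>
        </user_data>
    </participant_qos>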

What you want is the transport message_size_max setting (see the documentation).  Keep in mind, however, that message_size_max and its related structures (start here) are set per transport, AND you are limited to the smallest value across the transports you use.  For UDPv4 this is 64K, which is a UDP-imposed limit, not an RTI limit.  If you try to set the UDPv4 transport's message_size_max to something larger than that, it won't work; and if you are using shmem AND UDPv4, you are still limited to 64K on either.  The out-of-the-box default through 5.0.0 is 9k, which is probably why your ~2,000-element vector (roughly 4 KB serialized) goes through while anything much larger hits the limit.
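For reference, the built-in transports' message_size_max can be raised through participant properties in an XML profile.  A minimal sketch, using the documented built-in transport property names; 65507 is just the theoretical UDP payload ceiling, so treat the value as illustrative:

    <participant_qos>
        <property>
            <value>
                <!-- Raise the UDPv4 limit toward the UDP maximum. -->
                <element>
                    <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
                    <value>65507</value>
                </element>
                <!-- If shared memory is also enabled, keep its limit
                     consistent, since the smallest transport wins. -->
                <element>
                    <name>dds.transport.shmem.builtin.parent.message_size_max</name>
                    <value>65507</value>
                </element>
            </value>
        </property>
    </participant_qos>

When you raise message_size_max, the documentation also tells you to grow the related socket buffer sizes (send_socket_buffer_size / recv_socket_buffer_size) to match.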

For 1M-element vectors you will /also/ need to enable asynchronous publishing, because a roughly 2 MB sample won't fit in a 64K UDP datagram; the asynchronous publisher fragments the sample and sends the pieces for you.
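That is a DataWriter QoS setting.  A minimal sketch, relying on the built-in default flow controller:

    <datawriter_qos>
        <!-- Send samples from the asynchronous publisher thread so that
             samples larger than message_size_max can be fragmented. -->
        <publish_mode>
            <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
        </publish_mode>
    </datawriter_qos>

With fragmented data, RELIABLE reliability is generally advisable: under best effort, losing any one fragment costs you the whole sample.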

Note the above is all full Connext DDS.  Micro has different constraints (no asynchronous publisher, for example).  If you are on Micro, your application will have to chunk the vector into pieces that a Micro DataWriter can handle, with reassembly also done at the application level on the subscriber side (this is what asynchronous writes do for you in full Connext).
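If you do end up chunking by hand, one hypothetical shape for the wire type, sketched in the XML type representation.  The names and sizes are illustrative, not from any RTI example, and the primitive-type spellings (uint32/uint16 vs. unsignedLong/unsignedShort) vary by Connext version, so check the type-representation reference for yours:

    <types>
        <struct name="VectorChunk">
            <!-- Identifies which logical vector this chunk belongs to. -->
            <member name="vector_id" type="uint32" key="true"/>
            <!-- Position of this chunk plus the total count, so the
                 subscriber can reassemble the vector in order. -->
            <member name="chunk_index" type="uint32"/>
            <member name="chunk_count" type="uint32"/>
            <!-- Payload sized to fit one UDP datagram with headroom. -->
            <member name="data" type="uint16" sequenceMaxLength="16000"/>
        </struct>
    </types>

The subscriber keys reassembly off vector_id and delivers the full vector once all chunk_count pieces have arrived.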

Regards,

Rip