Problem recording and subscribing to large data topic on Admin Console

Bey
Offline
Last seen: 3 years 4 months ago
Joined: 11/10/2020
Posts: 3

Hi everyone, I started exploring RTI DDS this month and currently have the issue below. I am trying to subscribe to and record image data (640 by 480 resolution) from a publisher through the DDS Admin Console and Recording Service, but it is not working. Does anyone have experience or advice on how I can get this to work? I understand that this is because UDP packets only support up to 64 KB. Below is some info on my current settings; is there anything that I am missing? Thank you.

Below is my current IDL:

typedef sequence<octet,3> widthBGR[640];

struct _topic_ReqImg_t {
    sequence<widthBGR,480> image;    //@ID 0
};

 

I have tried the xml setup below:

<qos_profile name="DefaultProfile" base_name="BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow" is_default_qos="true">
     
            <datawriter_qos>
                 <publish_mode>
                     <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
                    <flow_controller_name>DDS_FIXED_RATE_FLOW_CONTROLLER_NAME</flow_controller_name>
                </publish_mode>
            </datawriter_qos>

            <datareader_qos>
                <reliability>
                    <kind>RELIABLE_RELIABILITY_QOS</kind>
                </reliability>
                <history>
                    <kind>KEEP_ALL_HISTORY_QOS</kind>
                </history>
            </datareader_qos>


            <participant_qos>
                <receiver_pool>
                    <buffer_size>65530</buffer_size>
                </receiver_pool>

                <discovery_config>
                    <publication_writer_publish_mode>
                        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>         
                    </publication_writer_publish_mode>
                    <subscription_writer_publish_mode>
                        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
                    </subscription_writer_publish_mode>  
                </discovery_config>

                <property>
                    <value>
                        <element>
                            <name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
                            <value>9991072</value>
                        </element>
                        <element>
                            <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
                            <value>65530</value>
                        </element>
                        <element>
                            <name>dds.transport.UDPv4.builtin.send_socket_buffer_size</name>
                            <value>65530</value>
                        </element>
                    </value>
                </property>
                <transport_builtin>
                    <mask>UDPV4 | SHMEM</mask>
                </transport_builtin>
            </participant_qos>
        </qos_profile>

 

Howard
Offline
Last seen: 13 hours 42 min ago
Joined: 11/29/2012
Posts: 565

Hi Bey,

A few things:

1) Your IDL is using sequences, which are variable-sized arrays.  Is that what you need?  Or are you trying to send/receive fixed-size 640x480 images, with each pixel being 3 bytes?

What you have is a variable-sized array (sequence) of up to 480 elements, where each element is a fixed array of 640 variable-sized pixels, each pixel being a sequence of up to 3 bytes.

That's unusual.  If you want a fixed-size datatype, it would be better written as

@nested
struct pixel {
    octet value[3];    // fixed 3-byte BGR pixel
};

struct _topic_ReqImg_t {
    pixel image[640][480];
};

 

2) Connext DDS will internally fragment and reassemble any data larger than the UDP maximum packet size, but to send the data reliably you must configure your DataWriter with ASYNCHRONOUS publish mode, as your app does.  Deriving your QoS profile from "BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow" is the right thing to do.
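To see why fragmentation is unavoidable here, a quick back-of-the-envelope calculation (plain Python arithmetic, not RTI code; 65,507 bytes is the theoretical maximum UDP/IPv4 payload):

```python
# Size of one uncompressed 640x480 BGR frame vs. the UDP payload limit.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 640, 480, 3
MAX_UDP_PAYLOAD = 65507  # theoretical UDP/IPv4 maximum payload

frame_size = WIDTH * HEIGHT * BYTES_PER_PIXEL
fragments = -(-frame_size // MAX_UDP_PAYLOAD)  # ceiling division

print(frame_size)  # 921600 bytes per frame
print(fragments)   # at least 15 datagrams per frame
```

So every sample is roughly fourteen times the UDP limit, and Connext must split it into a minimum of 15 datagrams, all of which must arrive for the sample to be reassembled.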

 

3) Your XML QOS config has some issues:

Since you defined your QoS profile to be derived from

"BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow"

then you don't need to set these QoS policies, as they are already set for you...

            <datawriter_qos>
                 <publish_mode>
                     <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
                    <flow_controller_name>DDS_FIXED_RATE_FLOW_CONTROLLER_NAME</flow_controller_name>
                </publish_mode>
            </datawriter_qos>

            <datareader_qos>
                <reliability>
                    <kind>RELIABLE_RELIABILITY_QOS</kind>
                </reliability>
                <history>
                    <kind>KEEP_ALL_HISTORY_QOS</kind>
                </history>
            </datareader_qos>

In addition

                <receiver_pool>
                    <buffer_size>65530</buffer_size>
                </receiver_pool>

is not necessary.  By default, the size will be automatically determined.

                <discovery_config>
                    <publication_writer_publish_mode>
                        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>         
                    </publication_writer_publish_mode>
                    <subscription_writer_publish_mode>
                        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
                    </subscription_writer_publish_mode>  
                </discovery_config>

is usually not necessary unless your data structures are very complex.  This setting only affects discovery traffic and has nothing to do with the size of the user data.

                        <element>
                            <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
                            <value>65530</value>
                        </element>

is not necessary; the default value set by Connext DDS should be used.

                        <element>
                            <name>dds.transport.UDPv4.builtin.send_socket_buffer_size</name>
                            <value>65530</value>
                        </element>

is too small.  It should be at least 2x bigger; the default size of 128 KB set by Connext DDS should be fine.

                        <element>
                            <name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
                            <value>9991072</value>
                        </element>

This is fine...but a strange number: almost, but not quite, 10 MB.

                <transport_builtin>
                    <mask>UDPV4 | SHMEM</mask>
                </transport_builtin>

is not needed, as that's the default value.

 

So fundamentally, other than setting a larger recv_socket_buffer_size, I would recommend deleting all of the other settings (as long as you're deriving from the "BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow" QoS profile).
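Putting that together, the trimmed profile keeps only the derivation and the larger receive socket buffer (every element here already appears in your original config):

```xml
<qos_profile name="DefaultProfile"
             base_name="BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow"
             is_default_qos="true">
    <participant_qos>
        <property>
            <value>
                <element>
                    <name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
                    <value>9991072</value>
                </element>
            </value>
        </property>
    </participant_qos>
</qos_profile>
```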

 

4) Now to your exact problem...

I'm not sure what use you have for Admin Console subscribing to an image; it can only display the raw values.  Also, Admin Console is written in Java/Eclipse, so it's likely to be the bottleneck among all of the applications that may be subscribing to the image stream.

For saving data to disk, you definitely will want to use RTI Recording Service.  What is not working?  Are there any error messages being printed?  What behavior do you observe?

In any case, just as you configured the QOS profile of your app to send large data, you'll want to configure RTI Recording Service to subscribe to your topic using the large data QOS profile.

So in your RTI Recording Service XML configuration file, you'll want to add

  <datareader_qos base_name="BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow"/>

to the <topic> or <topic_group> in a <session>.
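For example (a sketch only; the session and group names are placeholders, and the "*" filter records every matching topic, which you'd narrow to your image topic name):

```xml
<session name="DefaultSession">
    <topic_group name="RecordLargeData">
        <!-- Placeholder filter: adjust to match your topic name -->
        <allow_topic_name_filter>*</allow_topic_name_filter>
        <datareader_qos base_name="BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow"/>
    </topic_group>
</session>
```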

You may also want to add

            <participant_qos>
                <property>
                    <value>
                        <element>
                            <name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
                            <value>9991072</value>
                        </element>
                    </value>
                </property>
            </participant_qos>

to the <domain_participant> in the XML configuration file to increase performance.

Bey

Hi Howard,

Thank you very much for the detailed explanation. I am mainly using the Admin Console to subscribe to the topic to see how well it is publishing. I made the amendments suggested above; however, it gets stuck at publishing the data, and after a while I receive this error message: "Segmentation fault (core dumped)", which means the program is trying to access memory it is not authorized to? Also, if I use a smaller image (around 100+ by 100+ pixels), it works fine. Are there any other suggestions or anything that I am missing?

Also, can I check whether it is feasible to publish and record such large data at 25-30 fps, together with some other smaller data such as IMU readings, reliably in real time?

Once again, thanks for the assistance!

Howard

So, since Admin Console is just a DDS application, the modification to the QOS also needs to be done for the Participant and DataReader created by Admin Console to deal with large data.

So, you're saying that Admin Console is producing a segfault when you subscribe to your large data object?

Well, that shouldn't happen...but you should change the QOS of the Admin Console for both the participant as well as the DataReader used for subscription.

"View" -> "Preferences" for the participant.

and

"Advanced Settings" when you subscribe to data.

Having said that, for data of this size, it's a better idea to create your own dummy app that subscribes to the data if all you're doing is checking that the publisher is publishing.  Admin Console does a huge amount of internal processing/memory allocation to let you see every byte of the image, so lots of unnecessary memory and CPU are consumed.

 

Recording Service should be able to receive and record data at the rates that you need, with the caveat that the disk must be able to handle the required write speed (but with only a 640*480*3-byte image at 30 Hz, that should be no problem even with spinning hard drives).
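For reference, the sustained write rate this implies is modest (plain Python arithmetic):

```python
# Throughput needed to record uncompressed 640x480x3 frames at 30 Hz.
FRAME_BYTES = 640 * 480 * 3   # 921,600 bytes per frame
FPS = 30

bytes_per_second = FRAME_BYTES * FPS
megabytes_per_second = bytes_per_second / 1_000_000

print(bytes_per_second)        # 27648000
print(megabytes_per_second)    # ~27.6 MB/s
```

Even consumer spinning disks sustain sequential writes several times that rate.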

Bey

Okay, noted. I will create a dummy script to subscribe to the data instead. However, the segfault occurs when I run my script to publish the large image data to DDS. Do I have to make some modifications or adjustments to the resource limits in the XML?

Thanks for the clarification on the possibility of receiving and recording the image data at the rates stated.

Howard

Sorry, I misunderstood.  I thought that RTI Admin Console was seg faulting, but I think the situation is that your own publishing application is seg faulting.

If that's the case, I would use a debugger to figure out where/why it's seg faulting.  It's likely in your own programming logic somewhere.  Are you testing return codes for errors?  Are you stomping on your own memory, or using an uninitialized pointer?  In any case, a debugger should do the trick.