File datawriter


Dear all,

I was just wondering how I can publish an entire file over DDS. Most likely I will not exceed any file size constraint. It would be great if you could give me an example of a Java implementation.

 

regards,

rip

Hi,

I'm wondering why you think it would be any different from sending any other data? 

Also, there is no "file size constraint". DDS isn't file-based in any way, so there's a bit of "wait, what?" involved here when I read your question.

Open the file for reading as a stream (google: java file stream reader), read the stream in 62k chunks, and copy each chunk into a series of samples of the form

struct FileChunkType {
    string<1024> filename; //@key
    sequence<octet, 63308> data; 
};

(1k for the filename + 62k of data = 63k, plus some RTPS overhead, which keeps each sample under the 64k UDP message size limit)
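
For example, here is a minimal Java sketch of the sending side. The DDS plumbing (the rtiddsgen-generated FileChunkType sample and its DataWriter) is assumed and hidden behind a hypothetical publishChunk() helper; only the stream-reading and chunking logic is shown.

import java.io.FileInputStream;
import java.io.IOException;

public class FileChunkPublisher {

    // 62k of payload per sample, leaving headroom for the filename and RTPS overhead
    private static final int CHUNK_SIZE = 62 * 1024;

    public static void publishFile(String filename) throws IOException {
        byte[] buffer = new byte[CHUNK_SIZE];
        try (FileInputStream in = new FileInputStream(filename)) {
            int bytesRead;
            while ((bytesRead = in.read(buffer)) > 0) {
                // Hypothetical helper: copies 'bytesRead' bytes of 'buffer' into the
                // 'data' sequence of a FileChunkType sample and writes it with the
                // FileChunkType DataWriter.
                publishChunk(filename, buffer, bytesRead);
            }
        }
    }

    private static void publishChunk(String filename, byte[] buffer, int length) {
        // Placeholder for the DDS write; the real code would populate a sample
        // (filename key + data sequence) and call write() on the DataWriter.
    }
}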

On the receiving side, simply open a file stream (google: java file stream writer), write each 'data' segment out to it, and close it at the end. Not very elegant, but it gets the job done.
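
The receiving side, sketched in the same spirit (the DDS delivery of each FileChunkType sample is assumed, e.g. a DataReader listener calling the hypothetical onChunk() method below):

import java.io.FileOutputStream;
import java.io.IOException;

public class FileChunkSubscriber {

    private FileOutputStream out;

    // Assumed to be called once per received FileChunkType sample,
    // e.g. from a DataReaderListener's on_data_available() callback.
    public void onChunk(String filename, byte[] data) throws IOException {
        if (out == null) {
            out = new FileOutputStream(filename);  // filename is the sample key
        }
        out.write(data);  // append this chunk to the file
    }

    // Call when the last chunk of the file has arrived.
    public void close() throws IOException {
        if (out != null) {
            out.close();
            out = null;
        }
    }
}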

I'm either misunderstanding your question, or you're assuming DDS is more complicated than it is.

In any case, I would more likely use DDS as the command channel and use some tool that is specifically tuned to file transfer, like ftp or tftp, as the transfer agent -- i.e., use DDS to publish that I need a specific file and where I'd like it to be, and let some service use tftp to put it there for my use.

Regards,

Rip

 


Thanks, rip,

First of all, I was trying to publish a very long string and I got a serialization error message (I think it is complaining about the string length; I'm new to DDS). So if I have 1M of data, do I have to partition the message at the application level, or can I use some QoS so that the middleware will take care of it?

rip

Chunking and reassembly can be done either at the application level (i.e., in your code) or in the middleware.

If you are going to do it in the application, you should first change the UDPv4 builtin parent message_size_max value from the default 9k to the max allowed by UDPv4, which is 65535. Note that if you change the UDPv4 transport to that value, you should also change the SHMEM builtin parent message_size_max to the same value, if you are also using SHMEM. A good pattern is here.
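
If you would rather set this in code than in an XML QoS profile, a sketch along these lines should work with the RTI Connext Java API; the dds.transport.* property names are the documented builtin-transport properties, but double-check them against your version's documentation.

import com.rti.dds.domain.DomainParticipant;
import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.domain.DomainParticipantQos;
import com.rti.dds.infrastructure.PropertyQosPolicyHelper;
import com.rti.dds.infrastructure.StatusKind;

public class TransportConfig {
    public static DomainParticipant createParticipant(int domainId) {
        DomainParticipantQos qos = new DomainParticipantQos();
        DomainParticipantFactory.TheParticipantFactory.get_default_participant_qos(qos);

        // Raise the UDPv4 builtin transport's message_size_max from the 9k default
        // to the UDPv4 maximum of 65535 bytes.
        PropertyQosPolicyHelper.add_property(qos.property,
                "dds.transport.UDPv4.builtin.parent.message_size_max", "65535", false);

        // Keep the SHMEM builtin transport consistent if it is also enabled.
        PropertyQosPolicyHelper.add_property(qos.property,
                "dds.transport.shmem.builtin.parent.message_size_max", "65535", false);

        return DomainParticipantFactory.TheParticipantFactory.create_participant(
                domainId, qos, null /* listener */, StatusKind.STATUS_MASK_NONE);
    }
}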

If you want the middleware to chunk and reassemble an instance that is greater than 64k, you need to enable asynchronous publishing. This is done via the Publisher QoS (and, see this hint), and in the DataWriter it's the PUBLISH_MODE QoS, which should be explicitly set to DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS in order to use the publisher's async writer.
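
In the Java API the DataWriter side of that looks roughly like the following sketch; ASYNCHRONOUS_PUBLISH_MODE_QOS is the Java spelling of DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS, and the default flow controller is assumed.

import com.rti.dds.infrastructure.StatusKind;
import com.rti.dds.publication.DataWriter;
import com.rti.dds.publication.DataWriterQos;
import com.rti.dds.publication.PublishModeQosPolicyKind;
import com.rti.dds.publication.Publisher;
import com.rti.dds.topic.Topic;

public class AsyncWriterFactory {
    public static DataWriter createAsyncWriter(Publisher publisher, Topic topic) {
        DataWriterQos writerQos = new DataWriterQos();
        publisher.get_default_datawriter_qos(writerQos);

        // Let the middleware fragment and reassemble samples that are larger
        // than message_size_max by publishing them asynchronously.
        writerQos.publish_mode.kind =
                PublishModeQosPolicyKind.ASYNCHRONOUS_PUBLISH_MODE_QOS;

        return publisher.create_datawriter(
                topic, writerQos, null /* listener */, StatusKind.STATUS_MASK_NONE);
    }
}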

struct MaxFileTxSize { 
    string<1024> filename; //@key 
    sequence<octet, 1048576> payload; 
};

Note that you can use this type to send a single file under 1M in size, or you can use application-level chunking and reassembly if the file is > 1M (you still need to enable asynchronous publish_mode in the datawriter).
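
A sketch of the single-sample case, again with the DDS write hidden behind a hypothetical publishSample() helper (the real code would populate the payload sequence of an rtiddsgen-generated MaxFileTxSize sample and write it with an asynchronous DataWriter):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WholeFilePublisher {

    private static final int MAX_PAYLOAD = 1048576;  // matches the IDL sequence bound

    public static void publishFile(String filename) throws IOException {
        Path path = Paths.get(filename);
        byte[] contents = Files.readAllBytes(path);

        if (contents.length > MAX_PAYLOAD) {
            // Bigger than the sequence bound: fall back to application-level
            // chunking (see the FileChunkType sketch earlier in the thread).
            throw new IOException("File exceeds the 1M payload bound: " + filename);
        }

        // Hypothetical helper: copies 'contents' into the 'payload' sequence of a
        // MaxFileTxSize sample and writes it.
        publishSample(filename, contents);
    }

    private static void publishSample(String filename, byte[] payload) {
        // Placeholder for the DDS write via the MaxFileTxSize DataWriter.
    }
}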

The problem is that there are memory constraints I haven't talked about, so using a sequence of 1M octets can be problematic if your History is set to anything greater than 2. Depending on the "average" size of a file to be sent, you can reduce the sequence length to something closer to that average and save some memory.

 
