Write overhead when no subscribers attached

Last seen: 18 hours 13 min ago
Joined: 01/07/2021
Posts: 7


When profiling our DDS application, it looks like a write() call performs serialization even when no one is subscribed to the topic. Is that observation correct? And if so, is there any way around it, so that we can reduce the overhead of write() calls on unsubscribed topics?

Our use case is logging of fairly large data types. Logging is disabled by default, but for troubleshooting purposes we want to be able to turn it on at runtime.



Last seen: 4 hours 47 min ago
Joined: 11/29/2012
Posts: 211

In DDS, data is serialized on write(), not on send. Data is stored in the DataWriter's send queue in serialized form. There are several reasons for this:

1) You don't want to have to serialize the data separately for each DataReader. DataReaders may be discovered at different times.

2) The same data may need to be sent multiple times to the same DataReader... when the connection is RELIABLE but the network is losing packets.

That said, there is a way to avoid serializing the data on the call to write(): use the FlatData "language binding", which stores the data in serialized form from the start, so that when you call write(), no additional serialization is required.

However, it does require you to declare the FlatData language binding on the data type in IDL, and the application has to use the FlatData API to manipulate the data structure. It's a tradeoff between complexity and performance.

It can certainly help with large/complex data types, but it's probably not as useful for smaller data.
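As a rough sketch of what the IDL side looks like (the type and member names here are hypothetical, not from the original thread), the FlatData binding is requested with an annotation on the type, and FlatData types are typically declared @final:

```
// Hypothetical IDL sketch: enabling FlatData for a large logging type.
// @language_binding(FLAT_DATA) asks the code generator to produce the
// FlatData API for this type instead of the plain-object binding.
@final
@language_binding(FLAT_DATA)
struct LogRecord {
    int64 timestamp;
    sequence<octet, 1048576> payload;  // up to 1 MiB of log data
};
```

The application then builds samples directly in their serialized layout through the generated FlatData builder API, rather than filling in a regular C++ object.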




Of course, the alternative is to not call write() for your logged data unless there are subscribers. You can do that by first checking whether any DataReaders are currently matched with the DataWriter.
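A minimal sketch of that check, assuming the DDS Modern C++ API (the LogRecord type and the log_sample() helper are hypothetical names for illustration):

```cpp
// Sketch: skip write() entirely when no DataReaders are matched, so the
// serialization cost is only paid while someone is actually listening.
// Assumes a FlatData-free, plain data type called LogRecord.
#include <dds/dds.hpp>

void log_sample(dds::pub::DataWriter<LogRecord>& writer,
                const LogRecord& sample)
{
    // publication_matched_status().current_count() reports the number of
    // DataReaders currently matched with this DataWriter.
    if (writer.publication_matched_status().current_count() > 0) {
        writer.write(sample);  // serialization happens inside write()
    }
}
```

Note that the matched count can change between the check and the write(), so this is an optimization for the common no-subscriber case, not a guarantee that every write() reaches a reader.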

Last seen: 18 hours 13 min ago
Joined: 01/07/2021
Posts: 7

Yeah, FlatData could be a good solution for the big-data writers. Will look into that, thanks!