Minimizing stored sample data


Hello-

As we gradually add new types to our system, we are starting to reach the limits of available memory. We have one or two types that are particularly large.

Suppose we have a single type that contains an array of data totalling 100 MB. Assume we want strict reliability and have no need at present to queue up multiple reads. Consider it an error if we cannot process the current read before the next write is posted to our endpoint. Assume the topic is not keyed and that there is one remote writer.

I am finding that under no circumstances can I reduce the memory used for this single topic below three samples' worth. I've worked through many of the QoS settings. It seems that at least one sample is required for reassembly (initial_fragmented_samples and max_fragmented_samples) and one sample is required to cache a completely reassembled sample in the receive queue for handing to the application.
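
For context, here is roughly the kind of DataReader QoS I've been trying (a sketch against the classic C++ API; the subscriber and topic variables and the exact limit values are illustrative):

    // Sketch only: clamp the reader's preallocated pools as far as they go.
    DDS_DataReaderQos reader_qos;
    subscriber->get_default_datareader_qos(reader_qos);

    // Strict reliability; keep only the most recent sample.
    reader_qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;
    reader_qos.history.kind = DDS_KEEP_LAST_HISTORY_QOS;
    reader_qos.history.depth = 1;

    // Ask for one preallocated sample in each pool.
    reader_qos.resource_limits.initial_samples = 1;
    reader_qos.resource_limits.max_samples = 1;
    reader_qos.reader_resource_limits.initial_fragmented_samples = 1;
    reader_qos.reader_resource_limits.max_fragmented_samples = 1;

    DDSDataReader *reader = subscriber->create_datareader(
        topic, reader_qos, NULL /* no listener */, DDS_STATUS_MASK_NONE);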

I'm not exactly sure what to expect with respect to the minimum amount of memory for such a topic, type, and associated DataReader. I've gone through the documentation, but I suspect there is something to learn from the experts here.

Can you summarize the minimum set of samples I could expect to cache?

Also, per the documentation, I understand VxWorks may support a zero-copy buffer. The scenario tested above is on MS Windows.

Sincerely-

Todd Moody

Hi Todd,
 
For unkeyed topics with DDS_ResourceLimitsQosPolicy::initial_samples set to 1 and DDS_DataReaderResourceLimitsQosPolicy::initial_fragmented_samples set to 1, the middleware allocates three samples of the maximum size for each DataReader:
 
- The first sample is the initial sample for incoming data
- The second sample is used for NOT_ALIVE_NO_WRITERS samples
- The third sample is used to manage fragmentation
 
You can set DDS_DataReaderResourceLimitsQosPolicy::dynamically_allocate_fragmented_samples to TRUE so that no sample is preallocated for fragmentation. With that setting, the number of preallocated samples drops to two. Unfortunately, the middleware cannot be configured to allocate fewer than two samples.
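
For example (a sketch against the classic C++ API; the subscriber variable is illustrative):

    DDS_DataReaderQos reader_qos;
    subscriber->get_default_datareader_qos(reader_qos);

    reader_qos.resource_limits.initial_samples = 1;
    reader_qos.reader_resource_limits.initial_fragmented_samples = 1;

    // Allocate fragmented-sample storage on demand instead of up front;
    // this reduces the preallocation from three samples to two.
    reader_qos.reader_resource_limits.dynamically_allocate_fragmented_samples =
        DDS_BOOLEAN_TRUE;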
 
The rationale behind preallocating samples in the middleware was to provide determinism in real-time applications. However, this can have a significant impact on the memory usage of your Connext application. In future releases, we want to provide more flexible sample-memory allocation schemes, where you will be able to choose between preallocating samples and allocating memory dynamically when needed.
 
We already support this functionality on the DataWriter side through the property:
 
dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size
 
See "chapter 20 Sample-Data Memory Management" in the Core Library and Utilities user's manual for further details.
 
Regards,
    Fernando

Hi Fernando,


Is there an update on the dynamic memory allocation feature on the DataReader side? This feature would make our lives much easier at the moment.
I have read some posts in this forum about how to get that feature, but the solutions required modifying the code generated from the IDL files, which I really don't want to get into.

Thanks, Ulrich