[DDS 5.1.0] Memory footprint different from one platform to another

Offline
Last seen: 3 years 1 week ago
Joined: 07/04/2016
Posts: 6

Hello,

I have been struggling for some months now to understand the memory allocation and consumption of the DDS layer.

Let's start with the context:

Four main topics are used, called "Alarm", "State", "Measure", and "GenericData", all containing basic information such as boolean values, timestamps, and integers.

On the production platform, about 50 computers publish these data through DataWriters, and one server listens to these topics, with one subscriber per publishing computer. Each subscriber contains one listener instance for each of the described types.

The process running on the listening server consumes all of the memory available on the server (the server has 12 GB of RAM, and the DDS listening process takes all of it).

On the development platform, I have the same listening server with the exact same DDS configuration file. Instead of 50 publishing computers, I have only one, containing one participant but the same number of publishers (~50) and the exact same amount of published data. The listening server on this platform consumes only 2 to 3 GB of memory (it also has 12 GB of RAM, just like the production platform).

So my question is:

Why does the production server consume all of the available memory, whereas my development server only consumes 3 GB, with the same amount of data and publishers? Does the number of participants add this much memory consumption? Or the number of publishing servers?

Thank you for your help,

Mathieu


Offline
Last seen: 3 years 1 week ago
Joined: 07/04/2016
Posts: 6

By the way, the listening server runs on Windows Server 2008 R2, with the .NET 4.0 framework and DDS 4.1.0. The project is written in C#, and the C# assembly classes are loaded in a Panorama application ( https://codra.net/fr/ )

jmorales
Offline
Last seen: 8 months 2 weeks ago
Joined: 08/28/2013
Posts: 60

I would think this is related to the resource limits of your entities:

You mentioned that your listener thread is the one allocating all this memory. What are you doing in your listener? Are you returning the loan on the samples you are reading? Are you doing any kind of processing in the listener? Also, it might be interesting to see your resource-limits QoS settings.
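For reference, DataReader resource limits are bounded in the XML QoS profile. A sketch of what that could look like is below; the library/profile names are taken from this thread, but the values are purely illustrative, not a recommendation:

```xml
<qos_library name="libAGSCommunMiddlewareDDS">
  <qos_profile name="Transient">
    <datareader_qos>
      <!-- Illustrative bounds only: without explicit limits, a reliable or
           durable reader's sample cache can keep growing as history
           accumulates for every matched writer. -->
      <resource_limits>
        <max_samples>1000</max_samples>
        <max_instances>100</max_instances>
        <max_samples_per_instance>10</max_samples_per_instance>
      </resource_limits>
    </datareader_qos>
  </qos_profile>
</qos_library>
```

With 50 subscribers each holding a listener per topic, even moderate per-reader limits multiply across all readers, so the bounds should be sized per reader, not per process.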

Not sure if this applies to your case, but it might help: https://community.rti.com/best-practices/never-block-listener-callback

In addition to that, have you tried running a profiler to see exactly which function is allocating the memory?

Offline
Last seen: 3 years 1 week ago
Joined: 07/04/2016
Posts: 6

Hello,

Thanks for the answer.

In the listening server:

- The listener does not block the listening thread when a sample is notified; the processing is quite simple, without any loops.

- The code does return the loan on the samples I'm reading (but only in the on_data_available function; the other callbacks are empty)
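For context, the take-and-return-loan pattern in on_data_available with RTI Connext's classic .NET API typically looks like the sketch below. The `AlarmData` type and the generated `AlarmDataDataReader`/`AlarmDataSeq` class names are assumptions based on the topic names in this thread, and the snippet depends on the RTI Connext .NET assemblies, so it is a sketch rather than a drop-in implementation:

```csharp
// Sketch only: assumes a hypothetical IDL type "AlarmData" and the
// RTI Connext classic .NET API (DDS namespace from nddsdotnet).
class AlarmListener : DDS.DataReaderListener
{
    public override void on_data_available(DDS.DataReader reader)
    {
        AlarmDataDataReader typedReader = (AlarmDataDataReader)reader;
        AlarmDataSeq dataSeq = new AlarmDataSeq();
        DDS.SampleInfoSeq infoSeq = new DDS.SampleInfoSeq();
        try
        {
            // take() removes samples from the reader cache; read() would
            // leave them there, which can grow memory under KEEP_ALL history.
            typedReader.take(dataSeq, infoSeq,
                DDS.ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
                DDS.SampleStateKind.ANY_SAMPLE_STATE,
                DDS.ViewStateKind.ANY_VIEW_STATE,
                DDS.InstanceStateKind.ANY_INSTANCE_STATE);

            for (int i = 0; i < dataSeq.length; ++i)
            {
                if (infoSeq.get_at(i).valid_data)
                {
                    // Keep processing short: never block the listener thread.
                }
            }
        }
        catch (DDS.Retcode_NoData)
        {
            // Nothing to take; nothing to do.
        }
        finally
        {
            // Returning the loan lets the middleware reuse these buffers.
            typedReader.return_loan(dataSeq, infoSeq);
        }
    }
}
```

The `finally` block matters: if an exception in the processing loop skips `return_loan`, the loaned buffers are never reclaimed.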

One more piece of information: on the listening server there are 50 instances of objects, each containing a subscriber and listeners for each of the described topics.

The QoS settings are attached to this post. For all four topics, I'm using the following library/profile described in the QoS file:

libAGSCommunMiddlewareDDS::Transient

We looked into the generated UMDH log files; the only conclusion is that about 10 stacks are allocated through nddscore.dll, each of them taking about 500 MB. These logs were generated on the production listening server when the process had reached 5 GB of memory.
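For anyone wanting to reproduce this, the UMDH workflow is roughly the following (run from an elevated prompt with the Debugging Tools for Windows installed; the image name is a placeholder and `<PID>` must be replaced by the actual process ID):

```
:: Enable user-mode stack trace collection for the target image
:: (takes effect the next time the process starts)
gflags /i MyListener.exe +ust

:: Take a baseline snapshot, then a later snapshot of the process heaps
umdh -p:<PID> -f:snap1.log
umdh -p:<PID> -f:snap2.log

:: Compare the two snapshots to get the allocation delta per stack
umdh snap1.log snap2.log -f:delta.log
```

The delta file groups leaked allocations by call stack, which is how the nddscore.dll stacks above were identified.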

I tried analyzing dump files and .NET memory, without any success (i.e. most of the memory is not allocated to .NET objects).

Many thanks for your help,

Mathieu

Offline
Last seen: 3 years 1 week ago
Joined: 07/04/2016
Posts: 6

Hello,

I got UMDH files from the production platform which show the following:

- Despite the process exceeding 10 GB, the delta of these files shows 3.65 GB (the first UMDH log was captured while the process was using only 350 to 400 MB; at the second, the process was over 10 GB)

- 2.96 GB corresponds to stacks related to DDS (4447 stacks out of 9776)

On my development platform, I did the same exercise, with these results:

- The delta log file shows 1.3 GB, with 565 MB related to DDS (4177 stacks out of 7091)

I'm a bit confused, as I think my development/test platform is representative, yet I don't see the same memory consumption.

Thank you,

Mathieu