Datareader memory leak using random keys

Offline
Last seen: 2 years 9 months ago
Joined: 01/27/2014
Posts: 22

Hello,

I noticed different memory consumption on the reader side depending on whether I use random instance keys or not.

Try that:

  • Create a DataWriter and a DataReader on the same topic, each in a different process (not tried in the same process)
  • On the reader side, just take the samples and do nothing with them.
  • On the writer side, execute in a loop:
    • Generate 50,000 random keys from the range a 128-bit integer can offer
    • Write 50,000 instances of a topic, each instance having its own random key
    • Wait a while for the DataReader to take all the instances
    • Release the instances
    • Wait a while for the DataReader to release all the instances

You will see the memory grow in huge steps (it takes a few iterations).

Now, try the same without random keys (use the same set of 50,000 distinct keys in each loop):

  • No memory growth

 

I am on Windows VS2017-64, using RTI DDS 5.3.1

I think the issue is linked to the constantly increasing number of hash buckets allocated by the DataReader to store instances.

Can you reproduce the issue?

Is there a way to limit the number of hash buckets? I can see in the doc that the default value of instance_hash_buckets is 1. What does this value mean? Is it a limit?
In a next try, I will set a value of 10,000 to see the results.

 

I attached my QoS XML files. I use the profile SIG_CodecSignal.



Attachment: qos.zip (5.01 KB)
Offline
Last seen: 1 year 2 months ago
Joined: 08/26/2020
Posts: 9

Hello,

First, to answer your question about instance_hash_buckets: by default the number of hash buckets for instance lookup is 1, but this can be increased (up to 1.1 million buckets) for faster lookup. So, allocating the buckets up front may help you get consistent results with random keys.
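In an XML QoS profile, that setting sits in the DataReader's standard RESOURCE_LIMITS policy; it could look roughly like this (a sketch only — the value 10,000 is just the number you mentioned wanting to try):

```xml
<datareader_qos>
  <resource_limits>
    <!-- RTI extension: number of hash buckets used for instance lookup -->
    <instance_hash_buckets>10000</instance_hash_buckets>
  </resource_limits>
</datareader_qos>
```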

The documentation you linked is for the .NET API; is this the language binding you are using in your application? The 5.3.1 version of this documentation can be found here: https://community.rti.com/static/documentation/connext-dds/5.3.1/doc/api/connext_dds/api_dotnet/structDDS_1_1ResourceLimitsQosPolicy.html#a02b7f28349f3727a14ad0b044e79973c

Unfortunately, Google search tends to rank older versions of the API documentation a bit higher than the newer versions!

Maxx

Offline
Last seen: 2 years 9 months ago
Joined: 01/27/2014
Posts: 22

Hello, thanks for your fast reply.

Indeed, I had already seen this documentation, but it is not very clear to me. Or perhaps it is not the solution to my issue.

Let me explain:

I use the default value of 1 for instance_hash_buckets. With this minimum value, I see the memory constantly increasing, up to several GB if I let the loop iterate 50 times.

I cannot decrease instance_hash_buckets; 0 is not allowed.

What value should I choose to set a limit on memory consumption?

Offline
Last seen: 19 hours 22 min ago
Joined: 11/29/2012
Posts: 618

Hi Boris,

First, since your project has a support contract with RTI, questions such as this can be submitted to our support team to help resolve. Our professional support team can usually address questions like this quickly and efficiently... as well as in your time zone. Questions submitted to this forum are only addressed ad hoc, so there are no guarantees on when or if your question will be resolved.

For your specific question, what do you mean by "release"?

  • Release the instances
  • Wait a while for the DataReader to release all the instances

Do you mean "unregister" by the DataWriter? Do you dispose the instances as well? Do you dispose first and then unregister, or vice versa?

What do you mean by "the datareader release all the instances"? If you are waiting for Connext DDS to do something, how do you know it's been done?

 

With regards to memory growth: by default, DataReaders will keep some memory of ALL instances they have ever seen, even those that have been disposed/unregistered by DataWriters.

Please see this documentation for Connext 6.1: 8.3.8.6 Instance Resource Limits and Memory Management. Read specifically about Active versus Minimum instance states with respect to Attached and Detached instances.

It also applies to Connext 5.3.1, but was not very well documented at that time; it was just briefly described under "max_total_instances and max_instances" here:

https://community.rti.com/static/documentation/connext-dds/5.3.1/doc/manuals/connext_dds/html_files/RTI_ConnextDDS_CoreLibraries_UsersManual/index.htm#UsersManual/DATA_READER_RESOURCE_LIMITS_Qos.htm#receiving_2076951295_330118%3FTocPath%3DPart%25202%253A%2520Core%2520Concepts%7CReceiving%2520Data%7CDataReader%2520QosPolicies%7C_____2

 

By default, there are no limits set on the number of instances (max_instances) or the total number of instances (max_total_instances, which includes instances that have been detached) that a DataReader will be able to "store"... and thus, as you generate more and more unique keys, more and more memory will be used.
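The effect can be sketched with a standalone simulation (plain Python, not RTI code — the reader model here is a simplifying assumption): a reader that retains a minimum-state record for every key it has ever seen grows without bound under random keys, but stays flat when the same key set is reused each loop.

```python
import random

def simulate(iterations, keys_per_iter, random_keys):
    """Model (assumption, not RTI code): a DataReader that keeps a
    minimum-state record for every instance key it has ever seen,
    even after the instances are unregistered/disposed."""
    ever_seen = set()   # minimum instance state retained across loops
    sizes = []
    for _ in range(iterations):
        if random_keys:
            # fresh random 128-bit keys each loop, as in the test above
            keys = {random.getrandbits(128) for _ in range(keys_per_iter)}
        else:
            # the same fixed key set every loop
            keys = set(range(keys_per_iter))
        ever_seen |= keys            # writing registers the instances
        # unregister/dispose frees the samples, but the key record stays
        sizes.append(len(ever_seen))
    return sizes

print(simulate(5, 50_000, random_keys=True))   # grows every iteration
print(simulate(5, 50_000, random_keys=False))  # stays flat at 50,000
```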

You can constrain this memory, and even prevent DataReaders from keeping a minimum state for instances (in cases where an instance key value is used once and never used again after the instance is disposed and unregistered), by configuring the following QoS settings:

https://community.rti.com/static/documentation/connext-dds/5.3.1/doc/api/connext_dds/api_dotnet/structDDS_1_1DataReaderResourceLimitsQosPolicy.html#a58204a359b598d5b7d980e680019d753

https://community.rti.com/static/documentation/connext-dds/5.3.1/doc/api/connext_dds/api_dotnet/structDDS_1_1DataReaderResourceLimitsQosPolicy.html#a1d244479c2f9e22add97287395f9f9fe
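In an XML QoS profile, that could look roughly like this (a sketch only — the numeric limits are illustrative values, not recommendations; max_total_instances and keep_minimum_state_for_instances are in the RTI-extension DATA_READER_RESOURCE_LIMITS policy, while max_instances is in the standard RESOURCE_LIMITS policy):

```xml
<datareader_qos>
  <resource_limits>
    <!-- cap the number of attached (active) instances -->
    <max_instances>50000</max_instances>
  </resource_limits>
  <reader_resource_limits>
    <!-- cap attached + detached instances together -->
    <max_total_instances>60000</max_total_instances>
    <!-- do not keep minimum state for unregistered/disposed instances -->
    <keep_minimum_state_for_instances>false</keep_minimum_state_for_instances>
  </reader_resource_limits>
</datareader_qos>
```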

 

Also, disposing/unregistering instances by the DataWriter does not, by default, automatically cause DataReaders to purge those instances. These QoS settings control the timing of that process:

https://community.rti.com/static/documentation/connext-dds/5.3.1/doc/api/connext_dds/api_dotnet/structDDS_1_1ReaderDataLifecycleQosPolicy.html

By default, these duration settings are set to infinite, which means no purging takes place...
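In an XML QoS profile, the finite delays could look roughly like this (again a sketch — the 10-second values are illustrative, not recommendations):

```xml
<datareader_qos>
  <reader_data_lifecycle>
    <!-- purge samples of unregistered instances after 10 s (default: infinite) -->
    <autopurge_nowriter_samples_delay>
      <sec>10</sec>
      <nanosec>0</nanosec>
    </autopurge_nowriter_samples_delay>
    <!-- purge samples of disposed instances after 10 s (default: infinite) -->
    <autopurge_disposed_samples_delay>
      <sec>10</sec>
      <nanosec>0</nanosec>
    </autopurge_disposed_samples_delay>
  </reader_data_lifecycle>
</datareader_qos>
```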

 

 

 

Offline
Last seen: 2 years 9 months ago
Joined: 01/27/2014
Posts: 22

Hello, for your information: RTI support (Maxx) resolved my issue.

Since in my case I use neither a multi-channel DataWriter nor the Persistence Service, I had to add this QoS:

      <datareader_qos>
        <reader_resource_limits>
          <keep_minimum_state_for_instances>false</keep_minimum_state_for_instances>
        </reader_resource_limits>
      </datareader_qos>

Without this QoS:

I/O (in blue) monitors the I/O transfer to the DataReader. Large slots indicate instance creation, short ones instance unregistering.

Memory of the receiver process (in yellow) normally increases when creating and decreases when unregistering (small waves).

But there are large jumps at loops 1, 2, 3, 6, and 11 during instance creation, and it continues like this, less and less frequently but scaled up by ~2 each time.

The experiment takes on the order of 10 minutes.

 

With this QoS:

Memory (in yellow) normally increases when creating and decreases when unregistering (small waves), and there are no memory jumps. Note the memory scale.

Thanks for your answers.

Boris.

Offline
Last seen: 1 year 2 months ago
Joined: 08/26/2020
Posts: 9

Glad it's performing, and that I could be of help! :)