DTO memory release JAVA


I'm told that DDS maintains a reference to DTOs created when using the example code here:

code

public void on_data_available(DataReader reader) {
    HelloWorldDataReader helloWorldReader = (HelloWorldDataReader) reader;

    try {
        helloWorldReader.take(
            _dataSeq, _infoSeq,
            ResourceLimitsQosPolicy.LENGTH_UNLIMITED,
            SampleStateKind.ANY_SAMPLE_STATE,
            ViewStateKind.ANY_VIEW_STATE,
            InstanceStateKind.ANY_INSTANCE_STATE);

        for (int i = 0; i < _dataSeq.size(); ++i) {
            SampleInfo info = (SampleInfo) _infoSeq.get(i);

            if (info.valid_data) {
                System.out.println(
                    ((HelloWorld) _dataSeq.get(i)).toString("Received", 0));
            }
        }
    } catch (RETCODE_NO_DATA noData) {
        // No data to process
    } finally {
        // The sequences are loaned by the middleware and must be returned
        helloWorldReader.return_loan(_dataSeq, _infoSeq);
    }
}

 

Do I need to use the copy_from method prior to passing the DTO to the rest of my code?  The explanation given was that RTI maintains a reference to the DTO and will simply update its fields when new data is received, instead of creating a new HelloWorld DTO.  It also stated that the DTOs wouldn't operate properly unless they were explicitly released by first performing a copy (so I'm not holding a reference to the original) and then calling return_loan on the reader.  This doesn't make sense to me, as I'd expect the references to be cleared once the sequence is cleared.

Is this accurate?  Do I actually need to copy every single DTO that comes in?  I'd prefer to minimize overhead if possible, since these DTOs will be sent at a relatively high rate.


Hey,

It's true that in this API RTI passes its own instances to the user. This allows the best performance when a copy of the instance isn't needed (say, if your application holds its own object and quickly updates its fields from the sample during on_data_available).

In this sense RTI is trying to give you better performance by not forcing a copy.
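To see why holding on to a loaned sample is dangerous, here is a minimal self-contained sketch. HelloWorld below is a hypothetical POJO standing in for the generated type (not the real DDS API): the middleware owns the object and may overwrite its fields for the next sample, so a reference you kept silently changes under you.

```java
// Hypothetical stand-in for the rtiddsgen-generated type (NOT the real API)
class HelloWorld {
    int count;
    HelloWorld(int count) { this.count = count; }
}

public class LoanAliasingDemo {
    public static void main(String[] args) {
        HelloWorld loaned = new HelloWorld(1); // sample owned by the middleware
        HelloWorld kept = loaned;              // application keeps the reference

        loaned.count = 2;                      // middleware reuses the same
                                               // object for the next sample

        // The "old" sample the application kept now shows the new data:
        System.out.println(kept.count);        // prints 2, not 1
    }
}
```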

Your options are basically:

1. Use the data from new samples to update your own "state" instead of keeping the samples themselves. This may mean rethinking part of your system architecture, but it can prove easy to do in some use cases.
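A minimal sketch of option 1, assuming a hypothetical HelloWorld with an id and a message field (names invented for illustration): copy just the fields you need into an application-owned state object inside on_data_available, so no reference to the loaned sample survives the callback.

```java
// Hypothetical sample type; the real one is generated by rtiddsgen
class HelloWorld {
    int id;
    String message;
}

// Application-owned state, updated from each incoming sample
class AppState {
    int lastId;
    String lastMessage;

    void update(HelloWorld sample) {
        this.lastId = sample.id;
        // String is immutable in Java, so keeping this reference is safe
        this.lastMessage = sample.message;
    }
}
```

Inside the for loop you would call something like state.update((HelloWorld) _dataSeq.get(i)) before return_loan runs in the finally block.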

2. Use one of the other APIs (other than take) that, if I recall correctly, gives you a copy of the data (for example read_next_sample/take_next_sample, which fill in a sample you provide). This is simple, but behind the scenes it performs a copy, so if you are worried about throughput you should test it. Personally, I wouldn't be too worried, though.

3. Hold on to the loan and have the application return it when it can, if you only need to keep the objects "temporarily". This is risky: memory can explode if someone forgets to return the loan, or decides they need to hold on to everything for a long period of time, and with the high throughput you expect, managing this will become hard.

4. Copy each sample you receive. This can be relatively inefficient but may still be good enough; people sometimes overestimate their throughput (or how long instance creation and sample copying actually take). This is basically equivalent to option 2, but gives you more flexibility in how you copy and how you take/read samples.
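For option 4, a sketch of the copy, again using a hypothetical HelloWorld POJO: the rtiddsgen-generated types implement Copyable and expose a copy_from method, and the version below only mimics that shape for illustration.

```java
// Hypothetical sample type mimicking the copy_from style of generated types
class HelloWorld {
    int id;
    String message;

    // Deep-copies src into this object and returns it, in the style of
    // the generated type's copy_from
    HelloWorld copy_from(HelloWorld src) {
        this.id = src.id;
        this.message = src.message; // String is immutable, safe to share
        return this;
    }
}

public class CopyDemo {
    public static void main(String[] args) {
        HelloWorld loaned = new HelloWorld();
        loaned.id = 1;
        loaned.message = "first";

        // Copy before the loan is returned; 'copy' is now ours to keep
        HelloWorld copy = new HelloWorld().copy_from(loaned);

        loaned.id = 2; // middleware reuses the loaned sample
        System.out.println(copy.id); // prints 1: the copy is unaffected
    }
}
```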

 

At any rate, as a best practice I think you should aim for option 1. If that seems too forced for your system architecture (or if you've already done a lot of architecture work and just want to get it over with), you can resort to option 2 or 4 (2 is a bit "simpler", I would say).

 

Good luck,

Roy.