DataReader's memory keeps growing when multiple DataWriters come up and go away in the system
In a scenario where multiple DataWriters come up and go away in the system, a matching DataReader application will experience continual memory growth. This can be mistaken for a memory leak.
This behavior occurs because DataReader entities must maintain information about discovered DataWriters, even after those DataWriters have been disposed.
The reason for maintaining this reader state is that several DataWriters may share the same Virtual Writer GUID. In that scenario the DataReader needs information about all of those writers, including the disposed ones, so that it can, for example, detect and filter duplicate samples.
However, this behavior can be modified using the following property:
<property>
    <value>
        <element>
            <name>dds.data_reader.state.filter_redundant_samples</name>
            <value>0</value>
        </element>
    </value>
</property>
When filter_redundant_samples is set to 0, the reader state is not maintained, so the memory growth described above stops, but Connext DDS does not filter duplicate samples that may come from the same virtual writer. To enable durable reader state, this property must be set to 1 (which is the default).
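For reference, this is roughly how the property block can be placed inside the DataReader QoS of an XML QoS profile. The library and profile names below (MyQosLibrary, MyQosProfile) are placeholders; only the property name and value come from the article:

<dds>
    <qos_library name="MyQosLibrary">
        <qos_profile name="MyQosProfile" is_default_qos="true">
            <datareader_qos>
                <!-- Disable reader state maintenance so memory does not grow
                     as DataWriters come and go; duplicate samples from the
                     same virtual writer will no longer be filtered. -->
                <property>
                    <value>
                        <element>
                            <name>dds.data_reader.state.filter_redundant_samples</name>
                            <value>0</value>
                        </element>
                    </value>
                </property>
            </datareader_qos>
        </qos_profile>
    </qos_library>
</dds>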