6.6.4. Fixes Related to Filtering and TopicQuery
6.6.4.1. [Critical] Connext application using filtering feature may have crashed after running out of memory
In release 6.1.1.2, a Connext application using filtering features (that is, ContentFilteredTopic, QueryConditions, or TopicQuery) may have crashed after running out of memory. This problem has been resolved.
[RTI Issue ID CORE-12661]
6.6.4.2. [Critical] Creation of a ContentFilteredTopic or reception of TopicQuery samples may have taken a long time for complex types
The creation of a ContentFilteredTopic or reception of TopicQuery samples may have taken a long time for complex types. This issue has been resolved.
[RTI Issue ID CORE-12179]
6.6.4.3. [Critical] rti::topic::find_registered_content_filters led to infinite recursion
The function rti::topic::find_registered_content_filters() was incorrectly implemented and would lead to infinite recursion and a stack overflow in any application that called it. This problem has been resolved. This function returns the names of previously registered custom content filters; it is a little-used feature and does not affect the commonly used SQL content filter.
[RTI Issue ID CORE-12512]
6.6.4.4. [Critical] Incorrect results for Unions when using DynamicData or Content Filters
When using a DynamicDataReader, samples containing a union may have had incorrect or invalid data after deserialization if the DataReader’s type contained members that were not present in the DataWriter’s type and those members had non-zero default values.
When using content filters, the filter results may have been incorrect if the filter expression referenced fields within a union that were present in the DataReader’s type but not in the DataWriter’s type, and those members had non-zero default values.
For example, see this DataWriterType:
struct innerStructPub {
    short shortMember;
};

@mutable
union ComplexUnionTypePub switch(long) {
    case 0:
        long longMember;
    case 1:
        innerStructPub structMember;
};
and this DataReaderType:
struct innerStructSub {
    short shortMember;
    @default(5) long longMemberWithDefault;
};

@mutable
union ComplexUnionTypeSub switch(long) {
    case 0:
        long longMember;
    case 1:
        innerStructSub structMember;
};
In the above types, the member longMemberWithDefault is only present in the DataReader’s type and has a default value of 5, so any sample that is received from the DataWriter should have this value set to 5 when read from the DataReader’s queue. Instead, the value was incorrectly 0 when using DynamicData.
In addition, if this member was used as part of a content filter expression, a DataReader always used the value 0 instead of 5 when evaluating a sample from a DataWriter using the DataWriterType, which could lead to incorrect filter results. These issues have been fixed.
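The expected defaulting behavior can be sketched with a small model (plain Python; the dictionaries and the fill_defaults helper are illustrative stand-ins, not the Connext DynamicData API): members that exist in the DataReader’s type but were not serialized by the DataWriter must take the type’s declared @default value rather than being zero-filled.

```python
# Illustrative model of XTypes member defaulting, NOT the Connext API.
# A reader "type" is modeled as a dict of member name -> default value;
# a received sample is a dict holding only the members the DataWriter
# actually serialized on the wire.

def fill_defaults(reader_type_defaults, received_members):
    """Return the sample as seen by the DataReader: received members win;
    members the writer did not send take the reader type's default."""
    sample = dict(reader_type_defaults)   # start from declared defaults
    sample.update(received_members)       # overlay what was on the wire
    return sample

# Reader's innerStructSub: shortMember (implicit default 0) and
# longMemberWithDefault annotated @default(5).
reader_defaults = {"shortMember": 0, "longMemberWithDefault": 5}

# The writer serialized innerStructPub, which has no longMemberWithDefault.
received = {"shortMember": 7}

sample = fill_defaults(reader_defaults, received)
print(sample["longMemberWithDefault"])  # correct behavior: 5, not 0
```

Before the fix, the equivalent of this step effectively zero-filled the missing member, which is what produced both the bad DynamicData samples and the bad filter results.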
[RTI Issue ID CORE-12517]
6.6.4.5. [Major] Unnecessary repair traffic for DataWriters using TopicQueries and asynchronous publishing
Samples that are sent in response to a TopicQuery are directed to the DataReader that created that TopicQuery. This means that those samples are only sent to the DataReader that made the request and have that DataReader’s GUID attached to each sample in the sample’s metadata. All other DataReaders receive GAP protocol messages, indicating to them that a given sequence number or set of sequence numbers is not meant for them.
Due to a defect, when a DataReader sent a NACK message requesting that some TopicQuery samples be repaired, and the requested sequence numbers included samples that were meant for a different DataReader, the DataWriter did not filter out those samples and send a GAP message. Instead, the DataWriter sent the DataReader samples that were not meant for it, and the DataReader had to filter those samples out itself. As a result, DataReaders may have received samples that should have been filtered out on the DataWriter side, leading to an increase in network traffic.
The problem only affected repair traffic. When a sample was filtered out by the DataWriter because it was directed to a different DataReader, the DataWriter sent a GAP protocol message to the DataReader. If the GAP message was lost, the DataReader NACKed for the sample; instead of sending a new GAP message, the DataWriter sent the sample. This problem has been resolved.
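The corrected repair decision can be illustrated with a toy model (the history layout and answer_nack helper are hypothetical, not RTI’s implementation): when answering a NACK, the DataWriter re-checks each requested sequence number against the GUID the sample is directed to and responds with either a sample repair or a GAP.

```python
# Toy model of directed TopicQuery repair, NOT RTI's implementation.
# Each history entry maps a sequence number to the GUID of the single
# DataReader the sample is directed to (None = meant for everyone).

def answer_nack(history, nacked_sns, reader_guid):
    """Classify each NACKed sequence number as a sample repair or a GAP.
    Fixed behavior: samples directed to a different reader are GAPped
    again instead of being resent to a reader that must discard them."""
    repairs, gaps = [], []
    for sn in nacked_sns:
        directed_to = history.get(sn)
        if directed_to is None or directed_to == reader_guid:
            repairs.append(sn)   # sample is meant for this reader
        else:
            gaps.append(sn)      # meant for another reader: send a GAP
    return repairs, gaps

history = {1: None, 2: "reader-A", 3: "reader-B", 4: "reader-A"}
repairs, gaps = answer_nack(history, [1, 2, 3, 4], "reader-A")
print(repairs, gaps)  # [1, 2, 4] [3]
```

The defect amounted to the `else` branch above behaving like the repair branch during NACK handling, even though the initial transmission already classified the samples correctly.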
[RTI Issue ID CORE-12589]
6.6.4.6. [Major] Continuous creation of TopicQueries may have led to unnecessary memory fragmentation in OS memory allocator
In releases 6.0.x and 6.1.x, the continuous creation of TopicQueries may have led to unnecessary memory fragmentation in the OS memory allocator of the applications that receive the TopicQuery requests and dispatch responses. This issue may have resulted in an unexpected increase of the resident set size (RSS) memory of the application receiving and dispatching the TopicQueries compared to previous Connext releases. This problem has been fixed.
[RTI Issue ID CORE-12352]
6.6.4.7. [Major] Samples may have been unnecessarily filtered by Connext DataReader when DataWriter was from different DDS vendor
A Connext DataReader using a ContentFilteredTopic unnecessarily evaluated its filter on samples coming from a different vendor DataWriter that already marked the samples as passing the DataReader filter. This issue may have led to an increase in CPU utilization on the DataReader side, but it did not affect functional correctness or bandwidth utilization.
The problem occurred because Connext did not calculate the filter signature as specified in Section 9.6.4.1, Content filter info (PID_CONTENT_FILTER_INFO), of the Real-time Publish-Subscribe Protocol DDS Interoperability Wire Protocol (DDSI-RTPS) Specification, version 2.5.
This problem has been resolved.
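The interoperability mechanism can be modeled roughly as follows (the helper names and the exact bytes hashed are assumptions for illustration; the real signature encoding is defined by the DDSI-RTPS specification): the DataWriter attaches a filter signature and per-sample filter results, and the DataReader may skip its own filter evaluation only when the writer’s signature matches the one it computes for its own filter.

```python
import hashlib

# Rough sketch of PID_CONTENT_FILTER_INFO handling. The helpers and the
# serialization below are hypothetical; the normative encoding lives in
# the DDSI-RTPS 2.5 specification, Section 9.6.4.1.

def filter_signature(filter_class, expression, parameters):
    """Stand-in for the spec-defined signature: both vendors must derive
    identical bytes from the same filter, or writer-side results cannot
    be trusted."""
    blob = "\0".join([filter_class, expression, *parameters]).encode()
    return hashlib.md5(blob).digest()

def reader_must_evaluate(writer_sig, reader_sig):
    # If the signatures disagree, the reader cannot trust the writer's
    # per-sample filter result and must re-evaluate the filter locally,
    # which is the extra CPU cost this fix removes.
    return writer_sig != reader_sig

sig_writer = filter_signature("DDSSQL", "x > %0", ["10"])
sig_reader = filter_signature("DDSSQL", "x > %0", ["10"])
print(reader_must_evaluate(sig_writer, sig_reader))  # False: filter skipped
```

Because Connext computed the signature differently than the specification requires, the comparison above always failed against compliant vendors, forcing the redundant reader-side evaluation.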
[RTI Issue ID CORE-12531]
6.6.4.8. [Minor] Unnecessary sample filtering on a DataReader for samples already filtered by a DataWriter
When doing writer-side filtering, a late-joining DataReader using a ContentFilteredTopic may have spent unnecessary CPU cycles evaluating samples that had already passed the ContentFilteredTopic’s expression. With writer-side filtering, the filter evaluation is performed by the DataWriter, so the DataReader should not need to evaluate the expression again on samples that passed it. This problem, which only occurred for late-joining DataReaders, has been fixed.
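The fixed reader-side decision can be sketched in a few lines (illustrative names only, not the Connext API): when the DataWriter has already applied the reader’s filter, the DataReader accepts the sample without re-running the expression.

```python
# Toy model of writer-side filtering; names are illustrative, not the
# Connext API. The DataWriter evaluates the reader's filter expression
# before sending, so only samples that passed it arrive.

def reader_accepts(sample, writer_already_filtered, filter_fn):
    if writer_already_filtered:
        # Fixed behavior: the writer only forwarded samples that passed,
        # so re-running the (possibly expensive) filter is unnecessary.
        return True
    # Fallback when the writer could not filter for this reader.
    return filter_fn(sample)

passes = lambda s: s["x"] > 10
print(reader_accepts({"x": 42}, writer_already_filtered=True,
                     filter_fn=passes))  # True, without re-evaluating
```

Before the fix, a late-joining DataReader took the fallback path even when the writer had already filtered, paying the filter’s cost twice per sample.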
[RTI Issue ID CORE-11084]