5.21. Data Corruption
5.21.2. [Critical] Undefined behavior using XCDR2 with keyed topic types that have union key members
XCDR2 was not supported for keyed topic types with union key members. For example:
union MyUnion switch (long) {
    case 0:
        long m_long;
    case 1:
        short m_short;
};

struct StructWithUnionKey {
    @key MyUnion m_union;
    long m_long;
};
If any of your topic types had a union key member, the behavior was undefined, ranging from a potential segmentation fault to an erroneous key hash in which two different instances could be considered equal.
[RTI Issue ID CORE-14186]
5.21.2. [Critical] Stack overflow if value of “rti.monitor.config.publish_thread_options” property had 512 or more characters
If RTI Monitoring Library was enabled for a DomainParticipant and the rti.monitor.config.publish_thread_options property was specified with a string value of 512 or more characters, a stack overflow occurred in the application. Now, if the property value has 512 or more characters, an error is printed instead.
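For reference, this property is set on the DomainParticipant; a sketch in RTI's XML QoS profile syntax (the thread-options string shown is a hypothetical placeholder, not a recommended value):

```xml
<participant_qos>
  <property>
    <value>
      <element>
        <name>rti.monitor.config.publish_thread_options</name>
        <!-- Hypothetical placeholder value; keep it under 512 characters -->
        <value>kind=REALTIME_PRIORITY_ENFORCED,priority=10</value>
      </element>
    </value>
  </property>
</participant_qos>
```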
[RTI Issue ID MONITOR-643]
5.21.3. [Critical] Failure to send serialized key with dispose when using dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size property
You may have seen a serialization error when disposing instances. This error occurred under specific conditions: the writer_qos.protocol.serialize_key_with_dispose setting was enabled (set to TRUE), and the dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size property was configured to a size that required the serialization buffer to be allocated from the heap instead of from pre-allocated memory.
The error only occurred in cases where the serialized key was not aligned on a 4-byte boundary.
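The triggering QoS combination can be sketched in XML profile syntax (the pool_buffer_max_size value below is a hypothetical example, chosen so that larger serialized keys fall back to heap-allocated buffers):

```xml
<datawriter_qos>
  <protocol>
    <serialize_key_with_dispose>true</serialize_key_with_dispose>
  </protocol>
  <property>
    <value>
      <element>
        <name>dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size</name>
        <!-- Hypothetical value: buffers above this size are allocated from the heap -->
        <value>32</value>
      </element>
    </value>
  </property>
</datawriter_qos>
```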
[RTI Issue ID CORE-14370]
5.21.4. [Critical] Error uncompressing samples when using batching and setting serialize_key_with_dispose to TRUE
A DataReader failed to uncompress batch samples when the DataWriter set the QoS writer_qos.protocol.serialize_key_with_dispose to TRUE, and the batch sample contained one or more dispose messages.
When this problem occurred, the DataReader printed an error like this:
ERROR RTIOsapi_Zlib_uncompress:The input data was corrupted
ERROR RTICdrStream_uncompress:!uncompress sample
ERROR PRESReaderQueue_decodeAndUncompress:FAILED TO TRANSFORM | Stream decompression failed.
ERROR PRESCstReaderCollator_newData:FAILED TO TRANSFORM | Failed to decode and/or uncompress sample.
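The DataWriter side of the failing scenario can be sketched in XML QoS profile syntax (compression, which must also be enabled on the DataWriter for the batch to be compressed, is omitted here):

```xml
<datawriter_qos>
  <protocol>
    <serialize_key_with_dispose>true</serialize_key_with_dispose>
  </protocol>
  <batch>
    <enable>true</enable>
  </batch>
</datawriter_qos>
```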
[RTI Issue ID CORE-14344]
5.21.6. [Critical] DataReader on a Topic using an appendable type may have received samples with incorrect values
A DataReader subscribing to a Topic with an appendable type may have received incorrect samples from a matching DataWriter.
The problem only occurred when the DataWriter published a type with fewer members than the DataReader type. For example, consider a DataWriter on FooBase and a DataReader on FooDerived:
@appendable struct FooBase {
    sequence<uint8, 1024> base_value;
};

@appendable struct FooDerived {
    sequence<uint8, 1024> base_value;
    @default(12) uint8 derived_value;
};
When the DataWriter published a sample with type FooBase, in some cases the DataReader received a sample in which the field derived_value was set to 0 instead of 12.
This issue was caused by a bug in which Connext was not setting the padding bits in the encapsulation header for a serialized sample as required by the OMG ‘Extensible and Dynamic Topic Types for DDS’ specification, version 1.3. As a result, some of the padding bytes were interpreted as data.
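The alignment rule at issue can be illustrated with a short sketch (the helper functions below are illustrative, not Connext APIs): XCDR serialized payloads are padded out to a 4-byte boundary, and the specification requires the writer to record the pad count in the two least-significant bits of the encapsulation options field, so that readers discard the padding instead of interpreting it as data.

```python
def xcdr_padding_bytes(payload_len: int) -> int:
    """Number of padding bytes needed to extend a serialized payload
    to the next 4-byte boundary (0 when already aligned)."""
    return (4 - payload_len % 4) % 4

def encapsulation_options(payload_len: int) -> int:
    # Per DDS-XTypes 1.3, the two least-significant bits of the
    # encapsulation options field carry the pad count, letting a
    # reader skip the trailing padding rather than read it as data
    # (the behavior fixed by this issue).
    return xcdr_padding_bytes(payload_len) & 0b11
```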
Note: This fix may lead to a compatibility issue causing a Connext Professional DataWriter to not match with a Connext Micro or Connext Cert DataReader. For details, see Extensible Types Compliance Mask in the RTI Connext Core Libraries Extensible Types Guide.
[RTI Issue ID CORE-9042]