5.18. Crashes

5.18.1. [Critical] Crash when deserializing PID_TYPE_OBJECT_LB with class ID of RTI_OSAPI_COMPRESSION_CLASS_ID_NONE

A DomainParticipant would crash when deserializing an endpoint discovery message in which PID_TYPE_OBJECT_LB was followed by a compression class ID of RTI_OSAPI_COMPRESSION_CLASS_ID_NONE (0x00000000). This should not occur in normal operation, but was possible if a packet had been tampered with or corrupted. Now, instead of crashing, the DomainParticipant logs an error message and does not deserialize the compressed type object.

[RTI Issue ID CORE-14079]

5.18.2. [Critical] Potential crash while calling DynamicData APIs when running out of system memory

Running out of memory during certain DynamicData initialization API calls, such as DDS_DynamicDataTypeSupport_new, may have resulted in a crash. Now, running out of memory during DynamicData initialization APIs will cause those APIs to gracefully fail.
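
A minimal sketch in C of how an application can detect the graceful failure (type_code is assumed to have been created elsewhere, for example with the DDS_TypeCodeFactory APIs):

#include "ndds/ndds_c.h"

/* Sketch: create a DynamicData type support and handle an
   allocation failure gracefully. */
struct DDS_DynamicDataTypeSupport *create_type_support(
        struct DDS_TypeCode *type_code)
{
    struct DDS_DynamicDataTypeSupport *support =
            DDS_DynamicDataTypeSupport_new(
                    type_code,
                    &DDS_DYNAMIC_DATA_TYPE_PROPERTY_DEFAULT);
    if (support == NULL) {
        /* With this fix, an out-of-memory condition makes the call
           return NULL instead of crashing */
        return NULL;
    }
    return support;
}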

[RTI Issue ID CORE-14232]

5.18.3. [Critical] Potential crash when calling DDS_TypeCodeFactory_create_value_tc_ex with a NULL ex parameter

Performing a call to DDS_TypeCodeFactory_create_value_tc_ex with a NULL ex parameter (which is a valid input value) may have resulted in a crash. Specifically, the crash was triggered when a non-NULL concrete_base was passed and an exception occurred. DDS_TypeCodeFactory_create_value_tc_ex no longer crashes when passed a NULL ex parameter.

[RTI Issue ID CORE-14224]

5.18.4. [Critical] Crash when calling DDS_DataWriter_set_qos with a NULL qos parameter

Performing a call to DDS_DataWriter_set_qos with a NULL qos parameter resulted in a crash. Now, an illegal call to DDS_DataWriter_set_qos will gracefully fail and return DDS_RETCODE_BAD_PARAMETER.
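
The new behavior can be seen with a minimal sketch in C (writer is assumed to be a valid DDS_DataWriter):

#include "ndds/ndds_c.h"

/* Sketch: passing NULL for the QoS parameter is illegal input.
   With this fix, the call fails gracefully instead of crashing. */
void set_qos_with_null(DDS_DataWriter *writer)
{
    DDS_ReturnCode_t retcode = DDS_DataWriter_set_qos(writer, NULL);
    if (retcode == DDS_RETCODE_BAD_PARAMETER) {
        /* Expected: the illegal call is rejected */
    }
}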

[RTI Issue ID CORE-14223]

5.18.5. [Critical] Crash when performing an illegal call to DDS_DataWriter_get_qos

Performing an illegal call to DDS_DataWriter_get_qos (see Restricted Operations in Listener Callbacks in the RTI Core Libraries User’s Manual) resulted in a crash. Now, an illegal call to DDS_DataWriter_get_qos will gracefully fail and return DDS_RETCODE_ILLEGAL_OPERATION.
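
For example, calling DDS_DataWriter_get_qos from within a listener callback is a restricted operation. A minimal sketch in C (the callback is assumed to follow the C API DataWriter listener callback signature):

#include "ndds/ndds_c.h"

/* Sketch: get_qos is restricted inside listener callbacks. With
   this fix, it fails gracefully instead of crashing. */
void on_publication_matched(
        void *listener_data,
        DDS_DataWriter *writer,
        const struct DDS_PublicationMatchedStatus *status)
{
    struct DDS_DataWriterQos qos;
    DDS_ReturnCode_t retcode;

    DDS_DataWriterQos_initialize(&qos);
    retcode = DDS_DataWriter_get_qos(writer, &qos);
    if (retcode == DDS_RETCODE_ILLEGAL_OPERATION) {
        /* Expected: the restricted call is rejected in this context */
    }
    DDS_DataWriterQos_finalize(&qos);
}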

[RTI Issue ID CORE-14222]

5.18.6. [Critical] Crash if SPDP2 participant received unexpected field in participant discovery message *

This issue was fixed in 7.2.0, but not documented at that time.

A participant using Simple Participant Discovery 2.0 would crash if it received a bootstrap or configuration message with a PID that it was unable to deserialize. This occurred with the following subset of PIDs:

- Bootstrap messages: PID_PROPERTY_LIST, PID_USER_DATA, PID_ENTITY_NAME, or PID_ROLE_NAME
- Configuration messages: PID_DOMAIN_TAG, PID_IDENTITY_TOKEN, PID_PERMISSIONS_TOKEN, or PID_TRANSPORT_INFO

Now, if these fields are unexpectedly present in a participant discovery message, the participant does not attempt to deserialize them and simply skips them.

[RTI Issue ID CORE-13695]

5.18.7. [Critical] Crash during DomainParticipant initialization if Connext failed to get local address mapping when using UDPV4_WAN transport

Using UDPV4_WAN when creating a DomainParticipant may have resulted in a crash if Connext failed to get the local address mapping.

[RTI Issue ID CORE-14272]

5.18.8. [Critical] Crash when converting a DynamicData object to a CDR buffer

If a DynamicData object was bound to another DynamicData object or was populated as a result of a call to DynamicData::get_complex_member, and the top-level DynamicData object was an unbounded type, any attempt to convert the nested DynamicData object to a CDR buffer resulted in a crash.

For example, using the following types:

struct MyA {
    string str;
};

struct MyB {
    MyA my_a;
};

Creating a DynamicData object with type MyB, then getting a DynamicData object for member my_a, and finally converting that DynamicData object to a serialized buffer in CDR format, resulted in a crash.
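
A minimal sketch of this scenario in C (my_b_tc is assumed to be the TypeCode for MyB; serialize_to_cdr is a hypothetical stand-in for whatever CDR-serialization entry point the application uses):

#include "ndds/ndds_c.h"

/* Hypothetical helper representing the conversion to a CDR buffer */
extern DDS_ReturnCode_t serialize_to_cdr(const DDS_DynamicData *sample);

void reproduce_crash(struct DDS_TypeCode *my_b_tc)
{
    DDS_DynamicData *top = DDS_DynamicData_new(
            my_b_tc, &DDS_DYNAMIC_DATA_PROPERTY_DEFAULT);
    DDS_DynamicData nested;

    DDS_DynamicData_initialize(
            &nested, NULL, &DDS_DYNAMIC_DATA_PROPERTY_DEFAULT);

    /* Populate 'nested' from member my_a of the top-level object */
    DDS_DynamicData_get_complex_member(
            top, &nested, "my_a", DDS_DYNAMIC_DATA_MEMBER_ID_UNSPECIFIED);

    /* Before the fix, converting 'nested' to a CDR buffer crashed here */
    serialize_to_cdr(&nested);

    DDS_DynamicData_finalize(&nested);
    DDS_DynamicData_delete(top);
}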

[RTI Issue ID CORE-14167]

5.18.9. [Critical] Potential crash if allocation of RTI Monitoring Library’s publish thread failed

A crash could occur if the allocation of the RTI Monitoring Library’s publish thread failed. Now, the creation of the thread will gracefully fail.

[RTI Issue ID MONITOR-644]

5.18.10. [Critical] Segmentation fault upon destruction of DDSGuardCondition or DDSWaitSet

When using the Traditional C++ or Modern C++ API, a segmentation fault occurred if the DomainParticipantFactory was finalized in the same scope as stack-allocated DDSGuardCondition or DDSWaitSet instances: at scope exit, those instances were destroyed after the factory had already been finalized.

[RTI Issue ID CORE-8967]

5.18.11. [Critical] Crash if participant received endpoint discovery sample and was not able to allocate memory to process it

If a DomainParticipant received an endpoint discovery sample and was unable to allocate the memory to properly process it (for example, if the system was out of memory), the DomainParticipant may have crashed.

[RTI Issue ID CORE-14342]

5.18.12. [Critical] Possible exception after using a Condition object if it was not explicitly disposed

An exception may have occurred after using a ReadCondition or GuardCondition if you did not explicitly dispose it. This may also have affected the TakeReplies(SampleIdentity relatedRequestId) and ReadReplies(SampleIdentity relatedRequestId) methods in the C# Request-Reply API.

[RTI Issue ID CORE-14154]

5.18.13. [Critical] Potential crash or errors when using SHMEM transport in QNX *

If you used the shared-memory (SHMEM) transport in Connext 7.2.0, you may have seen unexpected errors or crashes in your applications during startup. The errors were more likely to occur when all the applications in the system were started at the same time.

[RTI Issue ID CORE-14038]

5.18.14. [Critical] Crash if participant failed to allocate memory for endpoint discovery type plugins

If a DomainParticipant failed to allocate memory for the Simple Endpoint Discovery plugins (for example, because the system was out of memory), the DomainParticipant crashed while starting up.

[RTI Issue ID CORE-14343]

5.18.15. [Critical] Modern C++ Distributed Logger may hang or crash upon instance finalization *

The Modern C++ Distributed Logger may have hung or crashed after calling DistLogger::finalize() when a DomainParticipant had been set by the user.

[RTI Issue ID DISTLOG-238]

5.18.16. [Critical] Invalid multicast locator could cause precondition error or segmentation violation

When particular combinations of Data(p) messages were received by a DomainParticipant, a precondition error (using the debug version of our libraries) or a segmentation violation (using the release version of our libraries) occurred. Such a combination of messages is beyond the scope of regular system operations and could only arise through the manipulation of Data(p) RTPS messages. Such manipulation might have stemmed from internal testing scenarios or unauthorized access by a malicious entity in systems where DDS security measures were not fully implemented or enforced.

There were various combinations of Data(p) messages that triggered this behavior, but the essential scenario required a DomainParticipant to discover at least two other DomainParticipants. One of the discovered DomainParticipants initially used a multicast locator, but later switched to unicast locators due to a corrupted Data(p) RTPS message. The other discovered DomainParticipant used unicast locators and was discovered before the first one switched to unicast locators.

[RTI Issue ID CORE-14349]

5.18.17. [Critical] Crash during DomainParticipant enable operation when running out of system memory

Running out of memory during DomainParticipant enable() may have resulted in a crash. Now, running out of memory during DomainParticipant enable() causes the enable operation to gracefully fail.
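
A minimal sketch in C (assuming the participant was created disabled, with autoenable_created_entities set to FALSE in the factory's ENTITY_FACTORY QoS policy):

#include "ndds/ndds_c.h"

/* Sketch: enable a participant and handle a graceful failure, for
   example when the system is out of memory. */
void enable_participant(DDS_DomainParticipant *participant)
{
    DDS_Entity *entity = DDS_DomainParticipant_as_entity(participant);
    DDS_ReturnCode_t retcode = DDS_Entity_enable(entity);
    if (retcode != DDS_RETCODE_OK) {
        /* With this fix, running out of memory during enable makes
           the operation fail instead of crashing */
    }
}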

[RTI Issue ID CORE-14220]

5.18.18. [Critical] Segmentation fault when a reader was deleted while a remote writer cleanup event was scheduled *

To prevent unbounded memory growth when the Instance State Consistency feature is enabled, DataReaders periodically purge information associated with DataWriters that are no longer communicating with the DataReader. If the purge event ran for a deleted DataReader, a segmentation violation occurred.

[RTI Issue ID CORE-14438]

5.18.19. [Critical] Race condition between the creation of a Replier and the call to its Listener

A race condition may have caused a ReplierListener to be called in a state where the Replier was not fully created, potentially causing a crash or an exception. This issue has been resolved in the Modern C++ and Java APIs. A fix for other APIs is expected soon.

[RTI Issue ID REQREPLY-132]

5.18.20. [Critical] Undefined behavior when Requesters or Repliers for same service name were concurrently created and deleted

A race condition in the creation and destruction of the Requester type may have caused a failure or crash when several Requesters for the same service name were created or deleted in different threads. The Replier type was also affected.

Protecting the creation and destruction of these objects with a mutex resolved the problem. This issue has been resolved in the Modern C++ and Java APIs. A fix for other APIs is expected soon.

[RTI Issue ID REQREPLY-127]

5.18.21. [Critical] Hang leading to crash if Monitoring Library 2.0 was disabled immediately after being enabled *

When Monitoring Library 2.0 is enabled, it creates a set of threads used for collecting and publishing telemetry data.

If the library was disabled right after being enabled, there was a chance that one of these threads was still setting up the components it needed to operate. Setting up these components required taking a semaphore that had already been taken by the disablement operation; at the same time, the disablement operation was waiting for that thread to finish. This situation led to a hang.

The hang was not indefinite: the disablement operation waited a fixed period of time for the thread to finish, then continued regardless of whether the thread had stopped. Once the disablement operation released the semaphore, the thread that was waiting for it resumed execution, accessing already freed memory and producing a crash.

Before the crash, these errors occurred:

ERROR RTIOsapiJoinableThread_shutdown:Join timeout (20000 millisec) for thread expired
ERROR RTI_MonitoringEventSnapshotThread_finalize:FAILED TO FINALIZE | Monitoring Event Snapshot Thread

The hang is now fixed.

[RTI Issue ID MONITOR-664]

5.18.22. [Critical] Possible crash when creation of TCP Transport failed

A failure in the creation of the TCP Transport could lead to a crash while trying to finalize the partially created transport. This failure is now handled without crashing.

[RTI Issue ID COREPLG-731]

5.18.23. [Critical] Possible crash upon destruction of TCP transport if it was created programmatically and it logged messages

When the TCP Transport plugin object was created programmatically, a crash could occur when the transport was destroyed if the transport had produced log messages during its execution. The crash did not happen when the transport was created through QoS configuration. Programmatic creation of the transport now works properly.

[RTI Issue ID COREPLG-718]



* This bug does not affect you if you are upgrading from 6.1.x or earlier.