9.6. Regressions in 7.0.0
The following regressions were introduced in Connext 7.0.0.
9.6.1. Core Libraries
9.6.1.1. Memory leaks and errors when using DynamicData DataReaders or FlatData DataReaders with DDS fragmentation and compression or encryption
When a DynamicDataReader or DataReader using a FlatData type receives a fragmented sample that was either compressed or encrypted, the memory used to store the sample’s serialized data is leaked and errors similar to the following are printed:
FATAL rCo79661##01Rcv [PARSE MESSAGE|0x01016350,0x5A66C0E7,0x8DA940ED:0x80000004
{Entity=DR,MessageKind=DATA_FRAG}|
RECEIVE FROM 0x01018673,0xDB0C9361,0xB2B4181F:0x80000003]
Mx02:/home/user/osapi.1.0/srcC/memory/heap.c:1104:RTI0x2022004:inconsistent
free/alloc: block id 0 being freed with "RTIOsapiHeap_allocateBufferAligned" and
was allocated with "RTIOsapiHeap_unknownFunction"
Fixed in: 7.5.0
[RTI Issue ID CORE-15231]
9.6.1.2. Some properties no longer accept LENGTH_UNLIMITED string as valid value
Some properties,
such as dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size
,
can’t be set to the special “unlimited” value using the string “LENGTH_UNLIMITED”.
Use “-1” instead.
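For example, if the property is set programmatically in the Java API, a minimal sketch of the workaround looks like this (the property name comes from this item; the surrounding code, including the already-created 'publisher', is an assumption for illustration):

import com.rti.dds.infrastructure.PropertyQosPolicyHelper;
import com.rti.dds.publication.DataWriterQos;

DataWriterQos writerQos = new DataWriterQos();
publisher.get_default_datawriter_qos(writerQos);  // 'publisher' is an existing Publisher

// "LENGTH_UNLIMITED" is rejected in 7.0.0; pass the numeric value "-1" instead.
PropertyQosPolicyHelper.add_property(
        writerQos.property,
        "dds.data_writer.history.memory_manager.fast_pool.pool_buffer_max_size",
        "-1",
        false /* do not propagate via discovery */);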
Fixed in: 7.5.0
[RTI Issue ID CORE-14328]
9.6.1.3. Possible segmentation fault while enabling a DataWriter that enables batching
Consider the following scenario:
- You are using the Java API.
- The DataWriterQos has batch.enable set to true.
Attempting to enable a DataWriter with that QoS occasionally fails with a segmentation fault in the internal function PRESTypePluginDefaultEndpointData_calculateBatchBufferSize().
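A minimal Java sketch of the QoS configuration described above ('publisher' and 'topic' are assumed to be an already-created Publisher and Topic; everything else is illustrative):

import com.rti.dds.infrastructure.StatusKind;
import com.rti.dds.publication.DataWriter;
import com.rti.dds.publication.DataWriterQos;

DataWriterQos writerQos = new DataWriterQos();
publisher.get_default_datawriter_qos(writerQos);

writerQos.batch.enable = true;  // batching enabled: the condition behind this regression

// In the affected releases, enabling the resulting DataWriter occasionally crashes.
DataWriter writer = publisher.create_datawriter(
        topic, writerQos, null /* listener */, StatusKind.STATUS_MASK_NONE);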
Fixed in: 7.4.0
[RTI Issue ID CORE-14659]
9.6.1.4. Running out of memory during DomainParticipant creation causes DomainParticipantFactory finalization to hang
During DomainParticipant creation, if the internal function REDAWorkerFactory_createWorker runs out of memory and prints this error:
REDAWorkerFactory_createWorker:no space on heap for array with 1024 elements of size 8 bytes
then DomainParticipant creation fails. When you later try to finalize the DomainParticipantFactory, the operation hangs and repeatedly prints this error:
REDAWorkerFactory_destroyWorkerEx:!take mutex
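In the Java API, the affected call sequence looks roughly like the following sketch (the domain ID and QoS are placeholders):

import com.rti.dds.domain.DomainParticipant;
import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.infrastructure.StatusKind;

DomainParticipant participant = DomainParticipantFactory.get_instance().create_participant(
        0,  // domain ID (placeholder)
        DomainParticipantFactory.PARTICIPANT_QOS_DEFAULT,
        null /* listener */,
        StatusKind.STATUS_MASK_NONE);
if (participant == null) {
    // Creation failed, for example because of the out-of-memory condition above.
}

// In the affected releases, this call hangs after such an out-of-memory failure:
DomainParticipantFactory.finalize_instance();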
Fixed in: 7.4.0
[RTI Issue ID CORE-14962]
9.6.1.5. Arbitrary read access while parsing malicious RTPS message
Arbitrary read access can occur while parsing a malicious RTPS message. With or without the Security Plugins, this is a vulnerability in the Connext application with the following user impact:
- Arbitrary read access while parsing a malicious RTPS message.
- Remotely exploitable.
- Potential impact on the confidentiality of the Connext application.
CVSS Base Score: 8.2 HIGH
CVSS v3.1 Vector: AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:H
Mitigation: protect the network on which Connext is running so that untrusted peers cannot inject malicious RTPS messages.
Fixed in: 7.1.0
[RTI Issue ID CORE-13160]
9.6.1.6. DDS fragmentation leads to more fragments than expected for a sample
You may notice that when using middleware-level fragmentation and a flow controller whose bytes_per_token is set to a value smaller than the minimum transport message_size_max across all installed transports, the number of fragments generated for a sample may be larger than expected. Although this is not a functional issue, it may lead to performance degradation.
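For reference, a hedged Java sketch of the kind of configuration this item describes, using property-based settings (the flow controller name "SlowFlow", the numeric values, and showing only the builtin UDPv4 transport are assumptions for illustration):

import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.domain.DomainParticipantQos;
import com.rti.dds.infrastructure.PropertyQosPolicyHelper;

DomainParticipantQos participantQos = new DomainParticipantQos();
DomainParticipantFactory.get_instance().get_default_participant_qos(participantQos);

// Builtin UDPv4 transport message_size_max (65530 bytes is an illustrative value).
PropertyQosPolicyHelper.add_property(participantQos.property,
        "dds.transport.UDPv4.builtin.parent.message_size_max", "65530", false);

// Custom token-bucket flow controller whose bytes_per_token (8192, illustrative) is
// smaller than message_size_max: the combination described in this item.
PropertyQosPolicyHelper.add_property(participantQos.property,
        "dds.flow_controller.token_bucket.SlowFlow.token_bucket.bytes_per_token",
        "8192", false);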
Fixed in: 7.1.0
[RTI Issue ID CORE-13190]
9.6.1.7. TCP transport won’t run with Windows debug libraries if socket_monitoring_kind is IOCP
An internal error prevents the TCP transport from running on Windows systems with debug libraries when socket_monitoring_kind is set to the recommended value of NDDS_TRANSPORT_TCPV4_SOCKET_MONITORING_KIND_WINDOWS_IOCP.
Trying to use the library causes the following error:
Mx02:c:\jenkins\workspace\connextdds\release7.0.0.0\x64win64vs2017\src\osapi.1.0\srcc\thread\thread.c:2179:RTI0x200003b:!precondition:
"strlen(name) >= 16"
RTIOsapiThread_newWithStack:!create initialize
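For context, socket_monitoring_kind is typically configured through the TCP transport's participant properties. A hedged sketch follows; the instance name "tcp1" is illustrative, the other required TCP transport properties (library, create_function, ports, and so on) are omitted, and the exact property name and value string should be checked against your version's documentation:

import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.domain.DomainParticipantQos;
import com.rti.dds.infrastructure.PropertyQosPolicyHelper;

DomainParticipantQos participantQos = new DomainParticipantQos();
DomainParticipantFactory.get_instance().get_default_participant_qos(participantQos);

// Load the TCP transport as "tcp1" (remaining TCP setup properties omitted).
PropertyQosPolicyHelper.add_property(participantQos.property,
        "dds.transport.load_plugins", "dds.transport.TCPv4.tcp1", false);

// The setting this item refers to; it fails with Windows debug libraries in 7.0.0.
PropertyQosPolicyHelper.add_property(participantQos.property,
        "dds.transport.TCPv4.tcp1.socket_monitoring_kind",
        "NDDS_TRANSPORT_TCPV4_SOCKET_MONITORING_KIND_WINDOWS_IOCP", false);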
Fixed in: 7.1.0
[RTI Issue ID COREPLG-654]
9.6.2. Security Plugins
9.6.2.1. Segmentation fault in Java while enabling a DataWriter that protects payloads and enables batching
Consider the following scenario:
- You are using the Java API.
- The Governance Document has <data_protection_kind> set to a value other than NONE for a given topic.
- The DataWriterQos has batch.enable set to true.
Attempting to enable a DataWriter of that topic and with that QoS fails with a segmentation fault in the internal function PRESTypePluginDefaultEndpointData_calculateBatchBufferSize.
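A hedged Java sketch of the combination described above; the governance file URI is a placeholder, and the other Security Plugins properties a real setup needs (plugin loading, certificates, permissions, and so on) are omitted:

import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.domain.DomainParticipantQos;
import com.rti.dds.infrastructure.PropertyQosPolicyHelper;
import com.rti.dds.publication.DataWriterQos;

// Participant QoS: point the Security Plugins at a Governance Document whose
// <data_protection_kind> is not NONE for the topic.
DomainParticipantQos participantQos = new DomainParticipantQos();
DomainParticipantFactory.get_instance().get_default_participant_qos(participantQos);
PropertyQosPolicyHelper.add_property(participantQos.property,
        "com.rti.serv.secure.access_control.governance_file",
        "file:./Governance.p7s", false);

// Writer QoS for that topic with batching enabled: the combination that crashes.
DataWriterQos writerQos = new DataWriterQos();
publisher.get_default_datawriter_qos(writerQos);  // 'publisher' belongs to the secure participant
writerQos.batch.enable = true;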
Fixed in: 7.4.0
[RTI Issue ID SEC-2457]
9.6.2.2. Unexpected “Fragment data not supported by this writer” error when using Security
You may see the following error when trying to run an application that sets the dds.participant.protocol.rtps_overhead property and uses the Security Plugins. The same configuration did not fail in previous releases.
ERROR COMMENDFacade_canSampleBeSent:NOT SUPPORTED | Fragment data not supported by this writer.
To work around the issue, remove the dds.participant.protocol.rtps_overhead property from the participant configuration. This is also the recommended configuration starting with 7.0.0, since the overhead is automatically calculated by the middleware.
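If the property is set programmatically rather than in an XML profile, a hedged Java sketch of the workaround is to drop it from the participant QoS before creating the participant:

import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.domain.DomainParticipantQos;
import com.rti.dds.infrastructure.Property_t;
import com.rti.dds.infrastructure.PropertyQosPolicyHelper;

DomainParticipantQos participantQos = new DomainParticipantQos();
DomainParticipantFactory.get_instance().get_default_participant_qos(participantQos);

// Remove dds.participant.protocol.rtps_overhead if present; since 7.0.0 the
// middleware calculates the RTPS overhead automatically.
Property_t overhead = PropertyQosPolicyHelper.lookup_property(
        participantQos.property, "dds.participant.protocol.rtps_overhead");
if (overhead != null) {
    PropertyQosPolicyHelper.remove_property(
            participantQos.property, "dds.participant.protocol.rtps_overhead");
}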
Fixed in: 7.1.0
[RTI Issue ID SEC-1813]
9.6.2.3. Session keys not renewed as often as they should when using RTPS SIGN protection
The Security Plugins update the session keys after protecting a certain number of message blocks. The cryptography.max_blocks_per_session property determines how many message blocks can be encrypted using the same session key. However, cryptography.max_blocks_per_session has an effective value larger than the property value when using RTPS SIGN (or SIGN_WITH_ORIGIN_AUTHENTICATION) protection. The problem leads to slightly overused session keys in some scenarios.
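For reference, the property is normally set on the DomainParticipant, as in this hedged sketch (the value 1024 is illustrative; the RTPS SIGN protection kind itself is configured in the signed Governance Document, which is not shown):

import com.rti.dds.domain.DomainParticipantFactory;
import com.rti.dds.domain.DomainParticipantQos;
import com.rti.dds.infrastructure.PropertyQosPolicyHelper;

DomainParticipantQos participantQos = new DomainParticipantQos();
DomainParticipantFactory.get_instance().get_default_participant_qos(participantQos);

// Request session-key renewal after at most 1024 protected message blocks.
// With RTPS SIGN protection, the affected releases apply a larger effective limit.
PropertyQosPolicyHelper.add_property(participantQos.property,
        "com.rti.serv.secure.cryptography.max_blocks_per_session",
        "1024", false);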
Fixed in: 7.1.0
[RTI Issue ID SEC-1786]
9.6.3. Persistence Service
9.6.3.1. Persistence Service XSD schema broken
The Persistence Service XSD schema is broken due to an additional closing tag.
Fixed in: 7.1.0
[RTI Issue ID PERSISTENCE-276]