2.2.1. RTI Connext Core Libraries

The following issues affect backward compatibility in the Core Libraries starting in 7.x releases. Issues in the Core Libraries may affect components that use these libraries, including Infrastructure Services and Tools.

2.2.1.1. Deprecations or Removals

Deprecated means that an item is still supported, but will be removed in a future release. Removed means that an item is discontinued or no longer supported.

Attention

For a complete list of deprecations and removals in this release, see What’s New in 7.3.0 and individual products’ Release Notes.

2.2.1.1.1. Durable writer history, durable reader state, and Persistence Service no longer support external databases

As described in What’s New in 7.0.0, support for external databases was deprecated starting in release 6.1.1, and release 6.1.2 removed the ability to share a database connection (see Section 3.2.1.1). In release 7, support for external databases (e.g., MySQL) is removed from the following features and components:

  • Durable writer history

  • Durable reader state

  • Persistence Service

In Persistence Service, use the <filesystem> tag instead of the <external_database> tag to store samples on disk.

Support for durable writer history and durable reader state has been temporarily disabled in Connext 7 releases because these features were only supported with external relational databases. RTI will provide a file-based storage option for durable writer history and durable reader state in a future release. Contact RTI Support at support@rti.com for additional information regarding durable writer history and durable reader state.

2.2.1.2. API Compatibility

2.2.1.2.1. Memory leak when using coherent sets and the Copy Take/Read APIs in C

This issue affects you if you are using coherent sets and the copy take/read APIs (such as take_next_sample) in C, C++, modern C++, and Java.

Before release 7.0.0, the field SampleInfo::coherent_set_info was not copied when using the copy take/read APIs (such as take_next_sample) in C, C++, modern C++, and Java. This behavior was changed in release 7.0.0, and the optional field SampleInfo::coherent_set_info is now copied. Consequently, you will need to call the DDS_SampleInfo_finalize API in C to finalize the SampleInfo objects to avoid memory leaks. There should not be side effects in other languages, because the memory will be released when the object is destroyed.

2.2.1.2.2. Migrating from Connector to the Connext Python API

With the addition of the Connext Python API, a full, production-ready Python API, Connector for Python has been removed. (Connector provided only a limited API for accessing the Connext databus.)

All customers that use Connector should plan to transition to the Connext Python API. Note that Connector applications and Connext applications (written in any supported language) can interoperate.

This guide will walk you through the process of migrating code from the rticonnextdds_connector package to the rti.connextdds package.

2.2.1.2.2.1. Connector Publication Example

We will work on rewriting the following Connector example code:

from time import sleep
import rticonnextdds_connector as rti

with rti.open_connector(
        config_name="MyParticipantLibrary::MyPubParticipant",
        url="ShapeExample.xml") as connector:

    output = connector.get_output("MyPublisher::MySquareWriter")

    print("Waiting for subscriptions...")
    output.wait_for_subscriptions()

    print("Writing...")
    for i in range(1, 100):
        output.instance.set_number("x", i)
        output.instance.set_number("y", i*2)
        output.instance.set_number("shapesize", 30)
        output.instance.set_string("color", "BLUE")
        output.write()

        sleep(0.5)

    print("Exiting...")
    output.wait()

2.2.1.2.2.2. Migrating the Publication Code

To transition the code from Connector to Connext, we will follow the steps below.

  1. Import the required package:

import rti.connextdds as dds

  2. Replace the rti.open_connector call with a QosProvider to load the XML configuration file and create the DomainParticipant defined in XML:

provider = dds.QosProvider("ShapeExample.xml")
participant = provider.create_participant_from_config(
    "MyParticipantLibrary::MyPubParticipant")

  3. Replace the Connector Output with a DynamicData.DataWriter:

writer = dds.DynamicData.DataWriter(
    participant.find_datawriter("MyPublisher::MySquareWriter"))

Note that with the Connext Python API you can also use IDL-based Python classes, instead of DynamicData.

  4. Replace the wait_for_subscriptions call with a StatusCondition and a WaitSet:

waitset = dds.WaitSet()
writer_cond = dds.StatusCondition(writer)
writer_cond.enabled_statuses = dds.StatusMask.PUBLICATION_MATCHED
waitset.attach_condition(writer_cond)
waitset.wait()

  5. Replace output.instance code with a DynamicData sample:

sample = writer.create_data()
for i in range(1, 100):
    sample["x"] = i
    sample["y"] = i*2
    sample["shapesize"] = 30
    sample["color"] = "BLUE"
    writer.write(sample)

Note that output.instance provided some automatic type conversions that are not performed by DynamicData.

  6. Replace the output.wait call with writer.wait_for_acknowledgments:

writer.wait_for_acknowledgments(dds.Duration(10))

Here’s the final publication application:

from time import sleep
import rti.connextdds as dds

provider = dds.QosProvider("ShapeExample.xml")
participant = provider.create_participant_from_config(
    "MyParticipantLibrary::MyPubParticipant")

writer = dds.DynamicData.DataWriter(
    participant.find_datawriter("MyPublisher::MySquareWriter"))

print("Waiting for subscriptions...")
waitset = dds.WaitSet()
writer_cond = dds.StatusCondition(writer)
writer_cond.enabled_statuses = dds.StatusMask.PUBLICATION_MATCHED
waitset.attach_condition(writer_cond)
waitset.wait()

print("Writing...")
sample = writer.create_data()
for i in range(1, 100):
    sample["x"] = i
    sample["y"] = i*2
    sample["shapesize"] = 30
    sample["color"] = "BLUE"
    writer.write(sample)

    sleep(0.5)

print("Exiting...")
writer.wait_for_acknowledgments(dds.Duration(10))
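The final application above uses DynamicData, but as noted in step 3, the Connext Python API also supports IDL-based Python types. Here is a minimal sketch of the same writer using an IDL-based type; the ShapeType definition, its field names, and the 128-character bound on color are illustrative assumptions, not part of the migrated example:

```python
import rti.connextdds as dds
import rti.types as idl

# Hypothetical IDL-based equivalent of the Square shape type; field names
# and the bound on color are illustrative.
@idl.struct(member_annotations={'color': [idl.key, idl.bound(128)]})
class ShapeType:
    color: str = ""
    x: int = 0
    y: int = 0
    shapesize: int = 30

participant = dds.DomainParticipant(domain_id=0)
topic = dds.Topic(participant, "Square", ShapeType)
writer = dds.DataWriter(participant.implicit_publisher, topic)

writer.write(ShapeType(color="BLUE", x=10, y=20, shapesize=30))
```

With IDL-based types, samples are plain Python objects, so no string-keyed accessors or manual type conversions are needed.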

2.2.1.2.2.3. Connector Subscription Example

We will also work on rewriting the following Connector example code that subscribes to the data written by the previous example:

import rticonnextdds_connector as rti

with rti.open_connector(
        config_name="MyParticipantLibrary::MySubParticipant",
        url="ShapeExample.xml") as connector:

    input = connector.get_input("MySubscriber::MySquareReader")

    print("Waiting for publications...")
    input.wait_for_publications() # wait for at least one matching publication

    print("Waiting for data...")
    for i in range(1, 500):
        input.wait() # wait for data on this input
        input.take()
        for sample in input.samples.valid_data_iter:
            x = sample.get_number("x")
            y = sample.get_number("y")
            size = sample.get_number("shapesize")
            color = sample.get_string("color")
            print("Received x: " + repr(x) + " y: " + repr(y) +
                " size: " + repr(size) + " color: " + repr(color))

2.2.1.2.2.4. Migrating the Subscription Code

Steps 1 to 4 are similar to the publication code, which only leaves the code to read data. Among the multiple ways to read data, we use reader.take_data_async(), which makes the code simpler:

async for data in reader.take_data_async():
    print(f'Received x: {data["x"]} y: {data["y"]} size: {data["shapesize"]} color: {data["color"]}')

Since we’re using coroutines, we need to run the code with asyncio.run or rti.asyncio.run.

Here’s the final subscription application:

import rti.connextdds as dds
import rti.asyncio

async def subscription_example():

    provider = dds.QosProvider("ShapeExample.xml")
    participant = provider.create_participant_from_config(
        "MyParticipantLibrary::MySubParticipant")

    reader = dds.DynamicData.DataReader(
        participant.find_datareader("MySubscriber::MySquareReader"))

    print("Waiting for publications...")
    waitset = dds.WaitSet()
    reader_cond = dds.StatusCondition(reader)
    reader_cond.enabled_statuses = dds.StatusMask.SUBSCRIPTION_MATCHED
    waitset.attach_condition(reader_cond)
    waitset.wait()

    print("Waiting for data...")
    async for data in reader.take_data_async():
        print(f'Received x: {data["x"]} y: {data["y"]} size: {data["shapesize"]} color: {data["color"]}')


rti.asyncio.run(subscription_example())

2.2.1.2.2.5. Conclusion

The examples provided in this guide show the most straightforward way to migrate from Connector to Connext, but there are more options and features to consider. For example, you can define all your entities and QoS programmatically in Python code, instead of using the XML file and the QosProvider. You can use IDL-derived Python data types instead of DynamicData. There are several other ways to wait for data and read it. And there are many more additional features that are not available in Connector.
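For instance, the XML file and the QosProvider can be replaced entirely with code. The following sketch builds a DynamicData type and its entities programmatically; the type and member names are illustrative, and the builder calls shown are an assumption to convey the idea, not a definitive reference:

```python
import rti.connextdds as dds

# Define the type in code instead of XML (names are illustrative).
shape_type = dds.StructType("ShapeType")
shape_type.add_member(dds.Member("color", dds.StringType(128), is_key=True))
shape_type.add_member(dds.Member("x", dds.Int32Type()))
shape_type.add_member(dds.Member("y", dds.Int32Type()))
shape_type.add_member(dds.Member("shapesize", dds.Int32Type()))

# Create the entities directly, with no XML configuration file.
participant = dds.DomainParticipant(domain_id=0)
topic = dds.DynamicData.Topic(participant, "Square", shape_type)
writer = dds.DynamicData.DataWriter(participant.implicit_publisher, topic)
```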

See the Connext Python API Reference documentation for an overview of the API.

2.2.1.2.3. Year 2038 support

The type that is used to represent seconds in DDS_Time_t (C and C++ API), Time_t (Java API), and Time (Modern C++, C#, and Python API) has changed from a 32-bit integer to a 64-bit integer.

This change was made in anticipation of a future update to the OMG DDS specification, but for now the change is an RTI extension. Therefore, this change could affect portability of code between DDS vendors who follow the specification as it is today.
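The 32-bit limit is easy to verify with the Python standard library (this sketch is independent of the Connext APIs):

```python
from datetime import datetime, timezone

# The largest second count a signed 32-bit integer can hold:
max_32bit_seconds = 2**31 - 1

# Converting it to a date shows where a 32-bit time representation overflows.
overflow_moment = datetime.fromtimestamp(max_32bit_seconds, tz=timezone.utc)
print(overflow_moment)  # 2038-01-19 03:14:07+00:00

# A 64-bit second count moves the limit out by roughly 292 billion years.
max_64bit_seconds = 2**63 - 1
print(max_64bit_seconds // (365 * 24 * 60 * 60))
```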

2.2.1.2.4. Constant rti::core::policy::ReceiverPool::LENGTH_AUTO has been replaced

Starting in 7.2.0, the constant rti::core::policy::ReceiverPool::LENGTH_AUTO has been removed and replaced with rti::core::length_auto. If your application code used the removed constant, compilation will fail. To resolve this issue, replace all instances of rti::core::policy::ReceiverPool::LENGTH_AUTO with rti::core::length_auto.

2.2.1.3. Configuration Changes

2.2.1.3.1. Communication with earlier releases when using DomainParticipant partitions

Connext 6.1.2 and earlier applications are part of the empty DomainParticipant partition. If you are using the new DomainParticipant partition feature in release 7.3.0 (see PARTITION QosPolicy, in the RTI Connext Core Libraries User’s Manual) and you want to communicate with earlier applications, change the configuration of the Connext 7.3.0 DomainParticipants to join the empty partition. For example:

<partition>
   <name>
      <element>P1</element>
      <element></element> <!-- empty partition -->
   </name>
</partition>

2.2.1.3.2. DDS_TransportMulticastQosPolicy will now fail if using TCP or TLS as transports

In previous releases, if the TRANSPORT_MULTICAST QoS Policy was configured with TCP or TLS as the transport, which is incompatible with multicast, the application ran without issue, simply not using multicast to send the data.

Now, if TCP or TLS is used as the multicast transport, the application will fail with an exception: “Multicast over TCP or TLS is not supported”. For example, the following Quality of Service (QoS) configuration will result in an error:

<datareader_qos>
   <multicast>
      <kind>AUTOMATIC_TRANSPORT_MULTICAST_QOS</kind>
      <value>
         <element>
            <receive_address>239.255.0.1</receive_address>
            <receive_port>8080</receive_port>
            <transports>
               <element>tcpv4_lan</element>
            </transports>
         </element>
      </value>
   </multicast>
</datareader_qos>

2.2.1.3.3. Error for max_app_ack_response_length longer than 32kB

This issue affects you if you are setting max_app_ack_response_length to a value greater than 32kB.

Connext incorrectly allowed setting max_app_ack_response_length in the DATA_READER_RESOURCE_LIMITS QoS Policy longer than the maximum serializable data, resulting in the truncation of data when the length got close to 64kB.

Connext now enforces a maximum length of 32kB for max_app_ack_response_length as part of DataReader QoS consistency checks. Connext will now log an error if you try to set max_app_ack_response_length longer than 32kB.

2.2.1.3.4. Potential unexpected delay in receiving samples due to randomization of sample sent time between min and max response delays

This issue affects you if the min_heartbeat_response_delay, max_heartbeat_response_delay, min_nack_response_delay, and max_nack_response_delay fields are not set to 0 in your application. By default they are not set to 0. (If you’re using any of the KeepLastReliable or StrictReliable builtin QoS profiles, such as “BuiltinQosLib::Generic.StrictReliable”, this issue will not affect you, because in those profiles the delays are set to 0.)

In releases prior to 7.0.0, the heartbeat and NACK response delays were not randomly generated between the minimum and maximum values. (These values are set in the min_heartbeat_response_delay, max_heartbeat_response_delay, min_nack_response_delay, and max_nack_response_delay fields.) The actual responses were closer to the minimum value (e.g., min_heartbeat_response_delay) than to the maximum value (e.g., max_heartbeat_response_delay).

Starting in release 7.0.0, these delays are truly random between the minimum and maximum values. This change may lead to an additional delay (up to the ‘max’ delay) in receiving samples in lossy environments or in starting to receive historical samples with transient local configuration. You may not have seen this delay before (assuming you are using the same configuration as before). You can reduce the delay by adjusting the ‘min’ and ‘max’ ranges for heartbeat and NACK responses.

2.2.1.3.5. DataWriters no longer match DataReaders that request inline QoS

This change only affects you if you were setting reader_qos.protocol.expects_inline_qos to TRUE in your DataReaders, which should not be the case. (This issue may affect you if you interact with other vendors’ DataReaders that set expects_inline_qos to TRUE; however, in that case, communication likely did not occur, because Connext DataWriters do not send inline QoSes.)

Previously, Connext DataWriters matched DataReaders that set expects_inline_qos in the DATA_READER_PROTOCOL QoS Policy to TRUE. This behavior was incorrect because Connext DataWriters do not support sending inline QoS; they were not honoring the DataReaders’ requests and therefore they should not have matched.

Starting in release 7.0.0, DataWriters no longer match DataReaders that request inline QoS (i.e., DataReaders that set reader_qos.protocol.expects_inline_qos to TRUE).

2.2.1.3.6. Reduced number of participant announcements at startup

In previous releases, when a DomainParticipant was first created, it sent out a participant announcement and then an additional DiscoveryConfigQos.initial_participant_announcements number of announcements. Starting in release 7, initial_participant_announcements configures the exact number of announcements that are sent out when a participant is created; there is no additional announcement.

2.2.1.3.7. Enhanced validation for FlatData language binding in QoS settings for C++ APIs

Since the debut of the RTI FlatData™ language binding in release 6.0.0, you have been able to specify XCDR as the data representation in QoS settings for entities that use FlatData, although the only valid data representation for FlatData types is XCDR2. When XCDR was selected in these scenarios, the system would usually issue a warning and convert the data representation to XCDR2 to maintain functionality. While this process was intended to safeguard against configuration errors, it could lead to confusion about the actual data representation being used and could cause a segmentation fault. Note that the FlatData language binding is only valid for the Traditional C++ and Modern C++ APIs.

To ensure clarity and system integrity, the QoS validation logic has been refined. Starting in this release, entities that use the FlatData language binding must specify XCDR2 as their data representation. Any attempt to configure an entity with XCDR as the data representation for FlatData types will be blocked, and the entity creation call will fail.

Starting in this release, if you use the Traditional C++ and Modern C++ APIs, you should examine your entity QoS configurations to ensure compliance with the new validation rules. Make adjustments to use XCDR2 where the FlatData language binding is in use, to align with the updated and more stringent requirements.

2.2.1.3.8. MonitoringLoggingForwardingSettings security_forwarding_level renamed security_event_forwarding_level

Because of the change documented in SECURITY Syslog facility meaning and name have changed, the security_forwarding_level field of MonitoringLoggingForwardingSettings has been renamed security_event_forwarding_level.

2.2.1.4. Logging Changes

2.2.1.4.1. SECURITY Syslog facility meaning and name have changed

Starting in release 7.3.0, the security facility has been renamed security_event. Its meaning has changed, too: before 7.3.0, this facility was used for security events logged with the Security Plugins Logging Plugin and for RTI TLS Support log messages related to OpenSSL; starting in 7.3.0, this facility is used only for security events logged with the Security Plugins Logging Plugin.

As a result of this change, if your application uses a Logger Device and relies on the facility field of LogMessage, keep in mind that some messages that used to have the security facility now have the middleware facility. Those security-related messages can still be identified using the LogMessage::is_security_message boolean.

2.2.1.4.2. LogMessage is_security_message meaning has changed

Starting in release 7.3.0, LogMessage::is_security_message will also be true for the following messages:

  • Security Plugins Logging Plugin security events.

  • Any message logged by the Security Plugins.

Previously, only RTI TLS Support log messages related to OpenSSL (for example, SSL handshake failures or certificate validation failures) had this field set to true (they still do).

Keep this change in mind if your application uses a Logger Device and relies on the is_security_message field of LogMessage.

2.2.1.4.3. Logging category names changed in Activity Context

The following logging category names changed in the Activity Context:

  • DISC is now Discovery.

  • SEC is now Security.

For example, the following message:

ERROR [CREATE DP|LC:DISC]DDS_DomainParticipantFactory_create_participant_disabledI:ERROR: Inconsistent QoS

is now logged as:

ERROR [CREATE DP|LC:Discovery]DDS_DomainParticipantFactory_create_participant_disabledI:ERROR: Inconsistent QoS

Keep this change in mind if your application uses a Logger Device and parses the Activity Context looking for specific logging categories.
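If your parsing code extracts the category with a pattern match, accepting both spellings eases the transition. A sketch using only the standard library (the regex is an assumption about your own parsing code, not an RTI API):

```python
import re

# Extract the logging category (the token after "LC:") from the Activity
# Context portion of a log message.
LC_PATTERN = re.compile(r"\[.*?LC:(\w+)\]")

def category(message):
    match = LC_PATTERN.search(message)
    return match.group(1) if match else None

old_msg = ("ERROR [CREATE DP|LC:DISC]"
           "DDS_DomainParticipantFactory_create_participant_disabledI:"
           "ERROR: Inconsistent QoS")
new_msg = old_msg.replace("LC:DISC]", "LC:Discovery]")

# Match both the old and new spellings during a transition:
DISCOVERY_CATEGORIES = {"DISC", "Discovery"}
print(category(old_msg))  # DISC
print(category(new_msg))  # Discovery
assert category(old_msg) in DISCOVERY_CATEGORIES
assert category(new_msg) in DISCOVERY_CATEGORIES
```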

2.2.1.5. Library Size

In release 7.3.0, the size of the libraries increased as expected compared to 6.1.1/6.1.2, due to the addition of new functionality. The following table shows the differences:

Table 2.1 Library Size Comparison for x64Linux3gcc4.8.2 in Bytes

Library                                                 6.1.1/6.1.2   7.3.0     Change (%)
------------------------------------------------------  -----------   -------   ----------
libnddscpp.so                                           1588749       1600735   +0.75
libnddscpp2.so                                          1244873       1334952   +7.24
libnddsc.so                                             6498656       6414139   -1.30
libnddscore.so (Core Library)                           7057459       8566268   +21.38
libnddssecurity.so (Builtin Security Plugins Library)   824542        862151    +4.56

2.2.1.6. Memory Consumption

2.2.1.6.1. General Memory Consumption

In general, release 7.3.0 applications consume more heap memory than 6.1.1/6.1.2 applications. Stack size is similar between the two releases, with no significant changes.

The following table shows the heap memory differences:

Table 2.2 Memory Consumption Comparison for x64Linux3gcc4.8.2 in Bytes

Entity                              6.1.1/6.1.2   7.3.0     Change (%)
----------------------------------  -----------   -------   ----------
ParticipantFactory                  64070         64294     +0.35
Participant                         1923567       1949435   +1.34
Type                                1449          1451      +0.14
Topic                               2160          2142      -0.83
Subscriber                          9586          9602      +0.17
Publisher                           3663          3841      +4.86
DataReader                          71895         71791     -0.14
DataWriter                          42030         41302     -1.73
Instance                            499           502       +0.60
Sample                              1376          1332      -3.20
Remote DataReader                   7497          7538      +0.55
Remote DataWriter                   15468         15298     -1.10
Instance registered in DataReader   890           891       +0.11
Sample stored in DataReader         917           918       +0.11
Remote Participant                  81631         84746     +3.82

2.2.1.6.2. Increased availability of instance key values may result in higher memory usage

You can now always call get_key_value to determine which instance has transitioned when a sample with valid_data=FALSE is received, as long as the instance has been seen by the DataReader before.

Before this change, if the instance had previously been detached, then a call to get_key_value would have failed to retrieve the key value. In the context of instance state consistency, this meant that when a DataWriter of an instance regained liveliness after a network disconnection, and the instance transitioned back to ALIVE with invalid_data = TRUE, it was not possible to call get_key_value to identify the instance that was transitioning back to ALIVE. Now, the key value can be retrieved in this situation as long as keep_minimum_state_for_instances = TRUE in the DataReader’s DATA_READER_RESOURCE_LIMITS QoS policy.

This change results in generally higher memory usage, especially for systems with a lot of instances that may be detached.

2.2.1.7. Network Performance

In general, release 7.3.0 applications have the same performance as in 6.1.1/6.1.2 for user data exchange. For details, see RTI Connext Performance Benchmarks.

2.2.1.8. Discovery Performance

2.2.1.8.1. Simple Participant Discovery: reduced bandwidth usage may delay discovery time

In earlier Connext releases, when a participant discovered a new participant, it sent its participant announcement an initial_participant_announcements number of times to the new participant, to all other discovered peers, and to its initial peers list. Starting in release 7.0.0, the participant announcement is sent only to the new participant, a new_remote_participant_announcements number of times. The other discovered participants already have this information, so much of the traffic generated when a new participant was discovered was wasted bandwidth. The default new_remote_participant_announcements value is also smaller than initial_participant_announcements, to reduce bandwidth usage. These improvements reduce unnecessary bandwidth usage, but they have the side effect of potentially delaying discovery between two participants that miss each other’s participant announcements.

For example, consider three participants: A, B, and C. Participant A has Participant B in its initial peers list, and B has A in its list as well. They were both started at the same time, along with 100 other participants, so all of their participant announcements to each other were dropped by the network due to buffer overflows; as a result, A and B do not discover each other. Now, Participant C joins with Participant A in its initial peers list. In earlier Connext versions, A would respond to C, as well as send an announcement to B, therefore triggering discovery between A and B (if A and B had already discovered each other, this message would be redundant, wasted bandwidth). Starting in release 7.0.0, however, A will only respond directly to C. Participants A and B will discover each other at the next participant_liveliness_assert_period (or participant_announcement_period, if you are using Simple Participant Discovery 2.0), when they send out their periodic participant announcements to their peers.

There are a number of QoS settings that can be used to speed up discovery in these cases. You can increase the number of initial_participant_announcements and/or new_remote_participant_announcements that are sent, lengthening the potential discovery phase and hedging against dropped packets. You can also try increasing the separation between the min_initial_announcement_period and max_initial_announcement_period. The initial_participant_announcements and new_remote_participant_announcements are sent at a random time between the min/max initial_announcement_period values, so a wider range reduces the likelihood of collisions and dropped packets. Finally, you can increase the receive buffer sizes of the Connext transports that you are using, as well as those of the OS/kernel layers, to protect against dropped packets.
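These settings live in the DISCOVERY_CONFIG QoS Policy of the DomainParticipant. A sketch of the corresponding XML follows; the values are illustrative (not recommendations), and the element names are an assumption based on the policy's field names, so check them against your release's XSD:

<participant_qos>
   <discovery_config>
      <!-- Send more announcements at startup and to each new participant. -->
      <initial_participant_announcements>10</initial_participant_announcements>
      <new_remote_participant_announcements>5</new_remote_participant_announcements>
      <!-- Widen the random send window to reduce collisions. -->
      <min_initial_participant_announcement_period>
         <sec>0</sec>
         <nanosec>10000000</nanosec>
      </min_initial_participant_announcement_period>
      <max_initial_participant_announcement_period>
         <sec>1</sec>
         <nanosec>0</nanosec>
      </max_initial_participant_announcement_period>
   </discovery_config>
</participant_qos>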

For details on discovery performance, see RTI Connext Performance Benchmarks.

2.2.1.9. Transport Compatibility

2.2.1.9.1. Communication with QNX applications in previous releases no longer possible when using shared-memory transport

In previous releases, if a QNX application using the shared-memory transport was ungracefully shut down, crashed, or otherwise had an abnormal termination while holding a POSIX semaphore used by the transport (for example, while sending data through the shared-memory transport), Connext applications launched after that point on the same domain may have waited forever for that semaphore to be released. Running QNX applications using the Connext shared-memory transport may have also led to thread priority inversion issues.

These problems were fixed in 7.2.0. However, the fix makes communication with applications from a previous Connext version impossible when using the shared-memory transport. If you try to use shared memory with old applications, you will see the following error message(s):

incompatible shared memory protocol detected.
Current version 5.0 not compatible with x.y.

OR

incompatible shared memory protocol detected.
Current version x.y not compatible with 5.0.

There is no way to be backwards-compatible. You will have to use other transports such as UDPv4.

2.2.1.10. Other Changes

2.2.1.10.1. Reduction in maximum number of Publishers and Subscribers per DomainParticipant

Starting with Connext 7.2.0, the maximum object ID for Publishers and Subscribers has been reduced from 0xFFFFFF to 0xFFFF. This reduction occurs because secure instance state entities (the builtin DataWriter and DataReader used by instance state consistency when security is enabled) must set the top byte of the object key to 0xFF. This change means that a DomainParticipant can now only contain 65535 Publishers/Subscribers (that is, you can have 65535 Publishers and 65535 Subscribers). Previously, this number was 16777215.