2.3.2.1. RTI Connext Core Libraries
The following issues affect backward compatibility in the Core Libraries starting in 7.x releases. Issues in the Core Libraries may affect components that use these libraries, including Infrastructure Services and Tools.
2.3.2.1.1. Durable writer history, durable reader state, and Persistence Service no longer support external databases
As described in What’s New in 7.0.0, support for external databases was deprecated starting in release 6.1.1, and release 6.1.2 removed the ability to share a database connection (see Section 3.1.2.1.1). In release 7, support for external databases (e.g., MySQL) is removed from the following features and components:
Durable writer history
Durable reader state
Persistence Service
In Persistence Service, use the <filesystem> tag instead of the <external_database> tag to store samples on disk.
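For example, here is a minimal sketch of a Persistence Service configuration that keeps samples on disk. The <persistent_storage> wrapper, the <directory> element, and the path shown here are assumptions based on the typical file-based storage configuration; check the Persistence Service configuration reference for the exact element names and options:

<persistence_service name="FileStorageExample">
    <persistent_storage>
        <!-- File-based storage replaces the removed <external_database> option -->
        <filesystem>
            <directory>/tmp/persistence_storage</directory>
        </filesystem>
    </persistent_storage>
</persistence_service>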
Support for durable writer history and durable reader state has been temporarily disabled in Connext 7 releases because these features were only supported with external relational databases. RTI will provide a file-based storage option for durable writer history and durable reader state in a future release. Contact RTI Support at support@rti.com for additional information regarding durable writer history and durable reader state.
2.3.2.1.2. Configuration Changes
2.3.2.1.2.1. Communication with earlier releases when using DomainParticipant partitions
Connext 6.1.2 and earlier applications are part of the empty DomainParticipant partition. If you are using the new DomainParticipant partition feature in release 7 (see PARTITION QosPolicy, in the RTI Connext Core Libraries User’s Manual) and you want to communicate with earlier applications, change the configuration of the Connext 7 DomainParticipants to join the empty partition. For example:
<partition>
    <name>
        <element>P1</element>
        <element></element> <!-- empty partition -->
    </name>
</partition>
2.3.2.1.2.2. DDS_TransportMulticastQosPolicy will now fail if using TCP or TLS as transports
In previous releases, if the TRANSPORT_MULTICAST QoS Policy was configured with TCP or TLS as the transport, which is incompatible with multicast, the application ran without issue, simply not using multicast to send the data.
Now, if TCP or TLS is used as the multicast transport, the application fails with an exception: “Multicast over TCP or TLS is not supported”. For example, the following QoS configuration will result in an error:
<datareader_qos>
    <multicast>
        <kind>AUTOMATIC_TRANSPORT_MULTICAST_QOS</kind>
        <value>
            <element>
                <receive_address>239.255.0.1</receive_address>
                <receive_port>8080</receive_port>
                <transports>
                    <element>tcpv4_lan</element>
                </transports>
            </element>
        </value>
    </multicast>
</datareader_qos>
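To avoid the error, do not list a TCP or TLS alias under <transports>. The sketch below assumes the builtin UDPv4 transport alias (builtin.udpv4); alternatively, you can omit the <transports> element so that Connext uses the available multicast-capable transports:

<datareader_qos>
    <multicast>
        <kind>AUTOMATIC_TRANSPORT_MULTICAST_QOS</kind>
        <value>
            <element>
                <receive_address>239.255.0.1</receive_address>
                <receive_port>8080</receive_port>
                <transports>
                    <!-- Multicast-capable transport; TCP/TLS aliases are not allowed here -->
                    <element>builtin.udpv4</element>
                </transports>
            </element>
        </value>
    </multicast>
</datareader_qos>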
2.3.2.1.2.3. Error for max_app_ack_response_length longer than 32kB
This issue affects you if you are setting max_app_ack_response_length to a value greater than 32kB.

Connext incorrectly allowed setting max_app_ack_response_length in the DATA_READER_RESOURCE_LIMITS QoS Policy to a value longer than the maximum serializable data, resulting in the truncation of data when the length got close to 64kB.

Connext now enforces a maximum length of 32kB for max_app_ack_response_length as part of the DataReader QoS consistency checks. Connext will now log an error if you try to set max_app_ack_response_length longer than 32kB.
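For example, the following DataReader QoS stays within the new limit. This is a sketch that assumes the <reader_resource_limits> XML element for the DATA_READER_RESOURCE_LIMITS QoS Policy:

<datareader_qos>
    <reader_resource_limits>
        <!-- Must not exceed 32kB (32768 bytes) starting in release 7 -->
        <max_app_ack_response_length>32768</max_app_ack_response_length>
    </reader_resource_limits>
</datareader_qos>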
2.3.2.1.2.4. Potential unexpected delay in receiving samples due to randomization of sample sent time between min and max response delays
This issue affects you if the min_heartbeat_response_delay, max_heartbeat_response_delay, min_nack_response_delay, and max_nack_response_delay fields are not set to 0 in your application. By default, they are not set to 0. (If you’re using any of the KeepLastReliable or StrictReliable builtin QoS profiles, such as “BuiltinQosLib::Generic.StrictReliable”, this issue will not affect you, because in those profiles the delays are set to 0.)
In releases before 7.0.0, the heartbeat and NACK response delays were not randomly generated between the minimum and maximum values. (These values are set in the min_heartbeat_response_delay, max_heartbeat_response_delay, min_nack_response_delay, and max_nack_response_delay fields.) The actual responses were closer to the minimum value (e.g., min_heartbeat_response_delay) than the maximum value (e.g., max_heartbeat_response_delay).
New in release 7, these delays are now truly random between the minimum and maximum values. This change may lead to an additional delay (up to the ‘max’ delay) in receiving samples in lossy environments or in starting to receive historical samples with transient local configuration. You may not have seen this delay before (assuming you are using the same configuration as before). You can reduce the delay by adjusting the ‘min’ and ‘max’ ranges for heartbeat and NACK responses.
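If the additional latency matters in your system, you can set the delays to 0 yourself, as the builtin StrictReliable profiles do. The following sketch assumes the standard protocol QoS XML elements; heartbeat response delays are configured on the DataReader and NACK response delays on the DataWriter:

<datareader_qos>
    <protocol>
        <rtps_reliable_reader>
            <!-- Respond to heartbeats immediately -->
            <min_heartbeat_response_delay>
                <sec>0</sec>
                <nanosec>0</nanosec>
            </min_heartbeat_response_delay>
            <max_heartbeat_response_delay>
                <sec>0</sec>
                <nanosec>0</nanosec>
            </max_heartbeat_response_delay>
        </rtps_reliable_reader>
    </protocol>
</datareader_qos>
<datawriter_qos>
    <protocol>
        <rtps_reliable_writer>
            <!-- Repair NACKed samples immediately -->
            <min_nack_response_delay>
                <sec>0</sec>
                <nanosec>0</nanosec>
            </min_nack_response_delay>
            <max_nack_response_delay>
                <sec>0</sec>
                <nanosec>0</nanosec>
            </max_nack_response_delay>
        </rtps_reliable_writer>
    </protocol>
</datawriter_qos>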
2.3.2.1.2.5. DataWriters no longer match DataReaders that request inline QoS
This change only affects you if you were setting reader_qos.protocol.expects_inline_qos to TRUE in your DataReaders, which should not be the case. (This issue may affect you if you interact with other vendors’ DataReaders that set expects_inline_qos to TRUE; however, in that case, communication likely did not occur, because Connext DataWriters do not send inline QoSes.)

Previously, Connext DataWriters matched DataReaders that set expects_inline_qos in the DATA_READER_PROTOCOL QoS Policy to TRUE. This behavior was incorrect because Connext DataWriters do not support sending inline QoS; they were not honoring the DataReaders’ requests and therefore should not have matched.

In release 7, DataWriters no longer match DataReaders that request inline QoS (i.e., DataReaders that set reader_qos.protocol.expects_inline_qos to TRUE).
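If you had explicitly enabled this setting, remove it or set it back to FALSE. A minimal sketch, assuming the <expects_inline_qos> element under the DataReader <protocol> QoS:

<datareader_qos>
    <protocol>
        <!-- FALSE is the default; TRUE prevents matching Connext DataWriters in release 7 -->
        <expects_inline_qos>false</expects_inline_qos>
    </protocol>
</datareader_qos>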
2.3.2.1.2.6. Reduced number of participant announcements at startup
In previous releases, when a DomainParticipant was first created, it sent out a participant announcement and then DiscoveryConfigQos.initial_participant_announcements announcements in addition to the first one. Starting in release 7, initial_participant_announcements configures the exact number of announcements that are sent out when a participant is created; there is no additional announcement.
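If you relied on the previous behavior (the first announcement plus initial_participant_announcements more), you may want to increase the configured count by one. A sketch, assuming the <discovery_config> QoS XML element:

<domain_participant_qos>
    <discovery_config>
        <!-- Starting in release 7, this is the exact number of announcements sent at creation -->
        <initial_participant_announcements>6</initial_participant_announcements>
    </discovery_config>
</domain_participant_qos>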
2.3.2.1.3. Transport Compatibility
2.3.2.1.4. API Compatibility
2.3.2.1.4.1. C# project upgrade
To upgrade your Connext C# applications to a new version, you need to update your project (.csproj) files. See Upgrading Your C# Projects for more information.
2.3.2.1.4.2. Memory leak when using coherent sets and the Copy Take/Read APIs in C
This issue affects you if you are using coherent sets and the copy take/read APIs (such as take_next_sample) in C, C++, modern C++, or Java.

Before release 7, the field SampleInfo::coherent_set_info was not copied when using the copy take/read APIs (such as take_next_sample) in C, C++, modern C++, and Java.

This behavior has changed in release 7, and the optional field SampleInfo::coherent_set_info is now copied. Consequently, you will need to call the DDS_SampleInfo_finalize API in C to finalize the SampleInfo objects to avoid memory leaks. There should not be side effects in other languages, because the memory will be released when the object is destroyed.
2.3.2.1.4.3. Migrating from Connector to the Connext Python API
Connector for Python provided a limited API to access the Connext databus. The Connext Python API provides a full DDS API.
Starting with Connext 7.2.0, the Connext Python API is no longer experimental and all customers that use Connector should plan to transition to the Connext Python API. Note that Connector applications and Connext applications (written in any supported language) can interoperate.
This guide will walk you through the process of migrating code from the rticonnextdds_connector package to the rti.connextdds package.
2.3.2.1.4.3.1. Connector Publication Example
We will work on rewriting the following Connector example code:
from time import sleep
import rticonnextdds_connector as rti

with rti.open_connector(
        config_name="MyParticipantLibrary::MyPubParticipant",
        url="ShapeExample.xml") as connector:

    output = connector.get_output("MyPublisher::MySquareWriter")

    print("Waiting for subscriptions...")
    output.wait_for_subscriptions()

    print("Writing...")
    for i in range(1, 100):
        output.instance.set_number("x", i)
        output.instance.set_number("y", i*2)
        output.instance.set_number("shapesize", 30)
        output.instance.set_string("color", "BLUE")
        output.write()
        sleep(0.5)

    print("Exiting...")
    output.wait()
2.3.2.1.4.3.2. Migrating the Publication Code
To transition the code from Connector to Connext, we will follow the steps below.
1. Import the required package:

   import rti.connextdds as dds

2. Replace the rti.open_connector call with a QosProvider to load the XML
   configuration file and create the DomainParticipant defined in XML:

   provider = dds.QosProvider("ShapeExample.xml")
   participant = provider.create_participant_from_config(
       "MyParticipantLibrary::MyPubParticipant")
3. Replace the Connector Output with a DynamicData.DataWriter:

   writer = dds.DynamicData.DataWriter(
       participant.find_datawriter("MyPublisher::MySquareWriter"))

   Note that with the Connext Python API you can also use IDL-based Python
   classes instead of DynamicData.
4. Replace the wait_for_subscriptions call with a StatusCondition and a WaitSet:

   waitset = dds.WaitSet()
   writer_cond = dds.StatusCondition(writer)
   writer_cond.enabled_statuses = dds.StatusMask.PUBLICATION_MATCHED
   waitset.attach_condition(writer_cond)
   waitset.wait()
5. Replace the output.instance code with a DynamicData sample:

   sample = writer.create_data()
   for i in range(1, 100):
       sample["x"] = i
       sample["y"] = i*2
       sample["shapesize"] = 30
       sample["color"] = "BLUE"
       writer.write(sample)

   Note that output.instance provided some automatic type conversions that are
   not performed by DynamicData.
6. Replace the output.wait call with writer.wait_for_acknowledgments:

   writer.wait_for_acknowledgments(dds.Duration(10))
Here’s the final publication application:
from time import sleep
import rti.connextdds as dds

provider = dds.QosProvider("ShapeExample.xml")
participant = provider.create_participant_from_config(
    "MyParticipantLibrary::MyPubParticipant")

writer = dds.DynamicData.DataWriter(
    participant.find_datawriter("MyPublisher::MySquareWriter"))

print("Waiting for subscriptions...")
waitset = dds.WaitSet()
writer_cond = dds.StatusCondition(writer)
writer_cond.enabled_statuses = dds.StatusMask.PUBLICATION_MATCHED
waitset.attach_condition(writer_cond)
waitset.wait()

print("Writing...")
sample = writer.create_data()
for i in range(1, 100):
    sample["x"] = i
    sample["y"] = i*2
    sample["shapesize"] = 30
    sample["color"] = "BLUE"
    writer.write(sample)
    sleep(0.5)

print("Exiting...")
writer.wait_for_acknowledgments(dds.Duration(10))
2.3.2.1.4.3.3. Connector Subscription Example
We will also work on rewriting the following Connector example code that subscribes to the data written by the previous example:
import rticonnextdds_connector as rti

with rti.open_connector(
        config_name="MyParticipantLibrary::MySubParticipant",
        url="ShapeExample.xml") as connector:

    input = connector.get_input("MySubscriber::MySquareReader")

    print("Waiting for publications...")
    input.wait_for_publications()  # wait for at least one matching publication

    print("Waiting for data...")
    for i in range(1, 500):
        input.wait()  # wait for data on this input
        input.take()
        for sample in input.samples.valid_data_iter:
            x = sample.get_number("x")
            y = sample.get_number("y")
            size = sample.get_number("shapesize")
            color = sample.get_string("color")
            print("Received x: " + repr(x) + " y: " + repr(y) +
                  " size: " + repr(size) + " color: " + repr(color))
2.3.2.1.4.3.4. Migrating the Subscription Code
Steps 1 to 4 are similar to the publication code, which only leaves the code to read data. Among the multiple ways to read data, we use reader.take_data_async(), which makes the code simpler:

async for data in reader.take_data_async():
    print(f'Received x: {data["x"]} y: {data["y"]} size: {data["shapesize"]} color: {data["color"]}')

Since we’re using coroutines, we need to run the code with asyncio.run or rti.asyncio.run.
Here’s the final subscription application:
import rti.connextdds as dds
import rti.asyncio

async def subscription_example():
    provider = dds.QosProvider("ShapeExample.xml")
    participant = provider.create_participant_from_config(
        "MyParticipantLibrary::MySubParticipant")

    reader = dds.DynamicData.DataReader(
        participant.find_datareader("MySubscriber::MySquareReader"))

    print("Waiting for publications...")
    waitset = dds.WaitSet()
    reader_cond = dds.StatusCondition(reader)
    reader_cond.enabled_statuses = dds.StatusMask.SUBSCRIPTION_MATCHED
    waitset.attach_condition(reader_cond)
    waitset.wait()

    print("Waiting for data...")
    async for data in reader.take_data_async():
        print(f'Received x: {data["x"]} y: {data["y"]} size: {data["shapesize"]} color: {data["color"]}')

rti.asyncio.run(subscription_example())
2.3.2.1.4.3.5. Conclusion
The examples provided in this guide show the most straightforward way to migrate from Connector to Connext, but there are more options and features to consider. For example, you can define all your entities and QoS programmatically in Python code, instead of using the XML file and the QosProvider. You can use IDL-derived Python data types instead of DynamicData. There are several other ways to wait for data and read it. And there are many more additional features that are not available in Connector.
See the Connext Python API Reference documentation for an overview of the API.
2.3.2.1.4.4. Year 2038 support
The type that is used to represent seconds in DDS_Time_t (C and C++ API), Time_t (Java API), and Time (Modern C++, C#, and Python API) has changed from a 32-bit integer to a 64-bit integer.
See Configuration Changes and RTI Tools for additional impacts related to year 2038 support.
2.3.2.1.4.5. Constant rti::core::policy::ReceiverPool::LENGTH_AUTO has been replaced
In 7.2.0, the constant rti::core::policy::ReceiverPool::LENGTH_AUTO has been removed and replaced with rti::core::length_auto. If your application code used the constant rti::core::policy::ReceiverPool::LENGTH_AUTO, the compilation will fail. To resolve this issue, replace all instances of the removed constant (rti::core::policy::ReceiverPool::LENGTH_AUTO) with the new constant (rti::core::length_auto).
2.3.2.1.5. Library Size
In release 7, the size of most libraries increased compared to 6.1.1/6.1.2, as expected due to the addition of new functionality. The following table shows the differences (sizes in bytes):
Library | 6.1.1/6.1.2 | 7 | Change (%)
---|---|---|---
libnddscpp.so | 1588749 | 1600735 | +0.75
libnddscpp2.so | 1244873 | 1334952 | +7.24
libnddsc.so | 6498656 | 6414139 | -1.30
libnddscore.so (Core Library) | 7057459 | 8566268 | +21.38
libnddssecurity.so (Security Plugins Library) | 824542 | 862151 | +4.56
2.3.2.1.6. Memory Consumption
2.3.2.1.6.1. General Memory Consumption
In general, release 7 applications will consume more heap memory than 6.1.1/6.1.2 applications. Stack size is similar between the two releases, with no significant changes.
The following table shows the heap memory differences (sizes in bytes):
Entity | 6.1.1/6.1.2 | 7 | Change (%)
---|---|---|---
ParticipantFactory | 64070 | 64294 | +0.35
Participant | 1923567 | 1949435 | +1.34
Type | 1449 | 1451 | +0.14
Topic | 2160 | 2142 | -0.83
Subscriber | 9586 | 9602 | +0.17
Publisher | 3663 | 3841 | +4.86
DataReader | 71895 | 71791 | -0.14
DataWriter | 42030 | 41302 | -1.73
Instance | 499 | 502 | +0.60
Sample | 1376 | 1332 | -3.20
Remote DataReader | 7497 | 7538 | +0.55
Remote DataWriter | 15468 | 15298 | -1.10
Instance registered in DataReader | 890 | 891 | +0.11
Sample stored in DataReader | 917 | 918 | +0.11
Remote Participant | 81631 | 84746 | +3.82
2.3.2.1.6.2. Increased availability of instance key values may result in higher memory usage
You can now always call get_key_value to determine which instance has transitioned when a sample with valid_data=FALSE is received, as long as the instance has been seen by the DataReader before.

Before this change, if the instance had previously been detached, then a call to get_key_value would have failed to retrieve the key value. In the context of instance state consistency, this meant that when a DataWriter of an instance regained liveliness after a network disconnection, and the instance transitioned back to ALIVE with valid_data = FALSE, it was not possible to call get_key_value to identify the instance that was transitioning back to ALIVE. Now, the key value can be retrieved in this situation as long as keep_minimum_state_for_instances = TRUE in the DataReader’s DATA_READER_RESOURCE_LIMITS QoS policy.
This change results in generally higher memory usage, especially for systems with a lot of instances that may be detached.
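If this additional memory is a concern and you do not need to retrieve the key values of detached instances, you can turn the minimum instance state off. A sketch, assuming the <reader_resource_limits> XML element for the DATA_READER_RESOURCE_LIMITS QoS Policy:

<datareader_qos>
    <reader_resource_limits>
        <!-- TRUE keeps enough state to call get_key_value for previously seen
             instances; FALSE trades that capability for lower memory usage -->
        <keep_minimum_state_for_instances>false</keep_minimum_state_for_instances>
    </reader_resource_limits>
</datareader_qos>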
2.3.2.1.7. Network Performance
In general, release 7 applications have the same performance as in 6.1.1/6.1.2 for user data exchange. For details, see RTI Connext Performance Benchmarks.
2.3.2.1.8. Discovery Performance
2.3.2.1.8.1. Simple Participant Discovery: reduced bandwidth usage may delay discovery time
In earlier Connext releases, when a participant discovered a new participant, it sent its participant announcement back to the new participant, as well as to all other discovered peers and its initial peers list, an initial_participant_announcements number of times. In release 7, the participant announcement is sent only to the new participant, a new_remote_participant_announcements number of times. The other discovered participants already have this information, so, previously, much of the traffic generated when a new participant was discovered was wasted bandwidth. The default new_remote_participant_announcements value is also smaller than initial_participant_announcements to reduce bandwidth usage. These improvements reduce unnecessary bandwidth usage, but they have the side effect of potentially delaying discovery between two participants that miss each other’s participant announcements.
For example, consider three participants: A, B, and C. Participant A has Participant B in its initial peers list, and B has A in its list as well. They were both started at the same time, along with 100 other participants, so all of their participant announcements to each other were dropped by the network due to buffer overflows; as a result, A and B do not discover each other. Now, Participant C joins with Participant A in its initial peers list. In earlier Connext versions, A would respond to C, as well as send an announcement to B, therefore triggering discovery between A and B (if A and B had already discovered each other, this message would be redundant and wasted bandwidth). In release 7, however, A will only respond directly to C. Participants A and B will discover each other at the next participant_liveliness_assert_period (or the participant_announcement_period if you are using Simple Participant Discovery 2.0), when they send out their periodic participant announcements to their peers.
A number of QoS settings can be used to speed up discovery in these cases. You can increase the number of initial_participant_announcements and/or new_remote_participant_announcements that are sent, lengthening the potential discovery phase and hedging against dropped packets. You can also try increasing the separation between min_initial_announcement_period and max_initial_announcement_period: the initial_participant_announcements and new_remote_participant_announcements are sent at a random time between the min/max initial_announcement_period values, which reduces the likelihood of collisions and dropped packets. Finally, you can increase the receive buffer sizes of the Connext transports that you are using, as well as those of the OS/kernel layers, to protect against dropped packets.
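For example, here is a sketch of such tuning in the <discovery_config> QoS. The announcement-period element names are assumed to be min_initial_participant_announcement_period and max_initial_participant_announcement_period; verify them (and the values that make sense for your system) against the DISCOVERY_CONFIG QosPolicy documentation:

<domain_participant_qos>
    <discovery_config>
        <!-- Send more announcements to hedge against dropped packets -->
        <initial_participant_announcements>10</initial_participant_announcements>
        <new_remote_participant_announcements>5</new_remote_participant_announcements>
        <!-- Widen the random spread between announcements to reduce collisions -->
        <min_initial_participant_announcement_period>
            <sec>0</sec>
            <nanosec>10000000</nanosec>
        </min_initial_participant_announcement_period>
        <max_initial_participant_announcement_period>
            <sec>1</sec>
            <nanosec>0</nanosec>
        </max_initial_participant_announcement_period>
    </discovery_config>
</domain_participant_qos>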
For details on discovery performance, see RTI Connext Performance Benchmarks.
2.3.2.1.9. Configuration Changes
2.3.2.1.9.1. Reduction in maximum number of Publishers and Subscribers per DomainParticipant
Starting with Connext 7.2.0, the maximum object ID for Publishers and Subscribers has been reduced from 0xFFFFFF to 0xFFFF. This reduction occurs because secure instance state entities (the builtin DataWriter and DataReader used by instance state consistency when security is enabled) must set the top byte of the object key to 0xFF. This change means that a DomainParticipant can now only contain 65535 Publishers/Subscribers (that is, you can have 65535 Publishers and 65535 Subscribers); previously, this number was 16777215.