.. include:: /../getting_started/vars.rst

.. _section-Product-Core-700:

RTI Connext Core Libraries
**************************

The following issues affect backward compatibility in the Core Libraries
starting in 7.x releases. Issues in the Core Libraries may affect components
that use these libraries, including Infrastructure Services and Tools.

.. _MG-104:

Durable writer history, durable reader state, and Persistence Service no longer support external databases
============================================================================================================

As described in :link_whats_new_710_700:`What's New in 7.0.0 <>`, support for
external databases was deprecated starting in release 6.1.1, and release
6.1.2 removed the ability to share a database connection (see
:numref:`section-MG-148`). In release 7, support for external databases
(e.g., MySQL) is removed from the following features and components:

- Durable writer history
- Durable reader state
- *Persistence Service*

In *Persistence Service*, use the ``<filesystem>`` tag instead of the
``<external_database>`` tag to store samples on disk.

Support for durable writer history and durable reader state has been
temporarily disabled in |CONNEXT| 7 releases, because these features were
only supported with external relational databases. RTI will provide a
file-based storage option for durable writer history and durable reader
state in a future release. Contact RTI Support at support@rti.com for
additional information.

.. MG-104

Configuration Changes
=====================

Communication with earlier releases when using DomainParticipant partitions
-----------------------------------------------------------------------------

|CONNEXT| 6.1.2 and earlier applications are part of the empty
*DomainParticipant* partition. If you are using the new *DomainParticipant*
partition feature in release 7 (see :link_partitions_usersman_710:`PARTITION
QosPolicy, in the RTI Connext Core Libraries User's Manual <>`) and you want
to communicate with earlier applications, change the configuration of the
*Connext* 7 *DomainParticipants* to join the empty partition. For example:

.. code-block:: xml

    <domain_participant_qos>
        <partition>
            <name>
                <!-- The empty name joins the empty partition used by
                     6.1.2 and earlier applications -->
                <element></element>
                <element>P1</element>
            </name>
        </partition>
    </domain_participant_qos>

DDS_TransportMulticastQosPolicy will now fail if using TCP or TLS as transports
---------------------------------------------------------------------------------

In previous releases, if the :link_transmulticast_usersman_710:`TRANSPORT_MULTICAST
QoS Policy <>` was configured with TCP or TLS as the transport, which is
incompatible with multicast, the application ran without issue, simply not
using multicast to send the data. Now, if TCP or TLS is used as the multicast
transport, the application will fail with an exception: "Multicast over TCP
or TLS is not supported". For example, the following QoS configuration
results in an error:

.. code-block:: xml
    :emphasize-lines: 9

    <datareader_qos>
        <multicast>
            <kind>AUTOMATIC_TRANSPORT_MULTICAST_QOS</kind>
            <value>
                <element>
                    <receive_address>239.255.0.1</receive_address>
                    <receive_port>8080</receive_port>
                    <transports>
                        <element>tcpv4_lan</element>
                    </transports>
                </element>
            </value>
        </multicast>
    </datareader_qos>

Error for max_app_ack_response_length longer than 32kB
--------------------------------------------------------

This issue affects you if you are setting ``max_app_ack_response_length`` to
a value greater than 32kB.

|CONNEXT| incorrectly allowed setting ``max_app_ack_response_length`` in the
:link_drresourcelimits_usersman_710:`DATA_READER_RESOURCE_LIMITS QoS Policy <>`
to a value longer than the maximum serializable data, resulting in the
truncation of data when the length got close to 64kB. |CONNEXT| now enforces
a maximum length of 32kB for ``max_app_ack_response_length`` as part of the
*DataReader* QoS consistency checks, and logs an error if you try to set
``max_app_ack_response_length`` to a value longer than 32kB.
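For reference, a QoS profile along the following lines stays within the new
limit (the value shown is illustrative, and the ``<reader_resource_limits>``
tag path is assumed from the standard QoS profile schema):

.. code-block:: xml

    <datareader_qos>
        <reader_resource_limits>
            <!-- Must not exceed 32kB (32768 bytes) starting in release 7 -->
            <max_app_ack_response_length>32768</max_app_ack_response_length>
        </reader_resource_limits>
    </datareader_qos>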
.. MG-96

Potential unexpected delay in receiving samples due to randomization of sample sent time between min and max response delays
------------------------------------------------------------------------------------------------------------------------------

This issue affects you if the ``min_heartbeat_response_delay``,
``max_heartbeat_response_delay``, ``min_nack_response_delay``, and
``max_nack_response_delay`` fields are *not* set to 0 in your application.
By default they are not set to 0. (If you're using any of the
KeepLastReliable or StrictReliable builtin QoS profiles, such as
"BuiltinQosLib::Generic.StrictReliable", this issue will not affect you,
because in those profiles the delays are set to 0.)

In releases before 7, the heartbeat and NACK response delays were not truly
randomly generated between the minimum and maximum values. (These values are
set in the ``min_heartbeat_response_delay``,
``max_heartbeat_response_delay``, ``min_nack_response_delay``, and
``max_nack_response_delay`` fields.) The actual responses were closer to the
minimum value (e.g., ``min_heartbeat_response_delay``) than to the maximum
value (e.g., ``max_heartbeat_response_delay``).

New in release 7, these delays are now truly random between the minimum and
maximum values. This change may lead to an additional delay (up to the 'max'
delay) in receiving samples in lossy environments, or in starting to receive
historical samples with a transient local configuration. You may not have
seen this delay before (assuming you are using the same configuration as
before). You can reduce the delay by adjusting the 'min' and 'max' ranges
for the heartbeat and NACK responses.

.. MG-117

DataWriters no longer match DataReaders that request inline QoS
----------------------------------------------------------------

This change only affects you if you were setting
``reader_qos.protocol.expects_inline_qos`` to TRUE in your |DRs|, which
should not be the case. (This issue *may* affect you if you interact with
other vendors' |DRs| that set ``expects_inline_qos`` to TRUE; however, in
that case, communication likely did not occur, because |CONNEXT| |DWs| do
not send inline QoSes.)

Previously, |CONNEXT| *DataWriters* matched *DataReaders* that set
``expects_inline_qos`` in the :link_drprotocol_usersman_710:`DATA_READER_PROTOCOL
QoS Policy <>` to TRUE. This behavior was incorrect: because |CONNEXT|
*DataWriters* do not support sending inline QoS, they were not honoring the
*DataReaders'* requests and therefore should not have matched.

In release 7, *DataWriters* no longer match *DataReaders* that request
inline QoS (i.e., *DataReaders* that set
``reader_qos.protocol.expects_inline_qos`` to TRUE).

.. MG-102

.. _CORE-13180:

Reduced number of participant announcements at startup
-------------------------------------------------------

In previous releases, when a |DP| was first created, it sent out a
participant announcement and then
``DiscoveryConfigQos.initial_participant_announcements`` additional
announcements. Starting in release 7,
``initial_participant_announcements`` configures the exact number of
announcements that are sent out when a participant is created; there is no
additional announcement.

.. CORE-13180
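For example, with a configuration like the following (the count shown is
illustrative, and the ``<discovery_config>`` tag path is assumed from the
standard QoS profile schema), a release 7 participant sends exactly five
announcements when it is created, whereas earlier releases would have sent
six (the initial announcement plus five more):

.. code-block:: xml

    <domain_participant_qos>
        <discovery_config>
            <!-- Starting in release 7, exactly this many announcements
                 are sent when the participant is created -->
            <initial_participant_announcements>5</initial_participant_announcements>
        </discovery_config>
    </domain_participant_qos>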
API Compatibility
=================

C# project upgrade
------------------

See :ref:`section-product-csharp-612`.

C API PolicyHelper read-only functions receive a const input
-------------------------------------------------------------

In the C API, the following functions have changed their ``policy``
parameters from non-``const`` to ``const``:

- ``DDS_PropertyQosPolicyHelper_lookup_property``
- ``DDS_PropertyQosPolicyHelper_lookup_property_with_prefix``
- ``DDS_PropertyQosPolicyHelper_get_properties``
- ``DDS_DataTagQosPolicyHelper_lookup_tag``

As a result, if you were previously casting the ``policy`` parameter to
non-``const`` in order to avoid a ``-Wdiscarded-qualifiers`` gcc warning,
you should now remove that cast.

Memory leak when using coherent sets and the Copy Take/Read APIs in C
----------------------------------------------------------------------

This issue affects you if you are using coherent sets and the copy take/read
APIs (such as ``take_next_sample``) in C, C++, modern C++, or Java.

Before release 7, the field ``SampleInfo::coherent_set_info`` was not copied
by these APIs. This behavior has changed in release 7, and the optional
field ``SampleInfo::coherent_set_info`` is now copied. Consequently, in C
you will need to call the ``DDS_SampleInfo_finalize`` API to finalize
``SampleInfo`` objects and avoid memory leaks. There are no side effects in
the other languages, because the memory is released when the object is
destroyed.

.. MG-97
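The following minimal C sketch shows where the new finalization step fits.
``Foo`` is a hypothetical IDL-generated type and ``reader`` an existing
``FooDataReader*``; error handling is omitted for brevity:

.. code-block:: c

    Foo *data = FooTypeSupport_create_data();
    struct DDS_SampleInfo info;
    DDS_ReturnCode_t retcode;

    /* Copy take API: fills 'data' and 'info' with copies */
    retcode = FooDataReader_take_next_sample(reader, data, &info);
    if (retcode == DDS_RETCODE_OK) {
        /* ... process *data ... */

        /* New in release 7: the SampleInfo may now own a copy of
           coherent_set_info, so finalize it to avoid a memory leak */
        DDS_SampleInfo_finalize(&info);
    }

    FooTypeSupport_delete_data(data);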
Changes to DataReader read/take methods in Python API
------------------------------------------------------

.. include:: ../../710/product710/read_take_python.txt

Library Size
============

In release 7, the size of the libraries increased compared to 6.1.1/6.1.2,
as expected due to the addition of new functionality. The following table
shows the differences:

.. list-table:: Library Size Comparison for x64Linux3gcc4.8.2 in Bytes
    :name: TableCoreLibrarySize700
    :widths: 70 10 10 10
    :header-rows: 1
    :class: longtable

    * - Library
      - 6.1.1/6.1.2
      - 7
      - Change (%)
    * - libnddscpp.so
      - 1588749
      - 1600735
      - +0.75
    * - libnddscpp2.so
      - 1244873
      - 1334952
      - +7.24
    * - libnddsc.so
      - 6498656
      - 6414139
      - -1.30
    * - libnddscore.so (Core Library)
      - 7057459
      - 8566268
      - +21.38
    * - libnddssecurity.so (Security Plugins Library)
      - 824542
      - 862151
      - +4.56

Memory Consumption
==================

In general, release 7 applications will consume more heap memory than
6.1.1/6.1.2 applications. Stack size is similar between the two releases;
there are no significant changes. The following table shows the heap memory
differences:

.. list-table:: Memory Consumption Comparison for x64Linux3gcc4.8.2 in Bytes
    :name: TableCoreMemoryComsumption700
    :widths: 70 10 10 10
    :header-rows: 1
    :class: longtable

    * - Object
      - 6.1.1/6.1.2
      - 7
      - Change (%)
    * - ParticipantFactory
      - 64070
      - 64294
      - +0.35
    * - Participant
      - 1923567
      - 1949435
      - +1.34
    * - Type
      - 1449
      - 1451
      - +0.14
    * - Topic
      - 2160
      - 2142
      - -0.83
    * - Subscriber
      - 9586
      - 9602
      - +0.17
    * - Publisher
      - 3663
      - 3841
      - +4.86
    * - |DR|
      - 71895
      - 71791
      - -0.14
    * - |DW|
      - 42030
      - 41302
      - -1.73
    * - Instance
      - 499
      - 502
      - +0.60
    * - Sample
      - 1376
      - 1332
      - -3.20
    * - Remote |DR|
      - 7497
      - 7538
      - +0.55
    * - Remote |DW|
      - 15468
      - 15298
      - -1.10
    * - Instance registered in |DR|
      - 890
      - 891
      - +0.11
    * - Sample stored in |DR|
      - 917
      - 918
      - +0.11
    * - Remote Participant
      - 81631
      - 84746
      - +3.82

Network Performance
===================

In general, release 7 applications have the same performance as
6.1.1/6.1.2 applications for user data exchange. For details, see
`RTI Connext Performance Benchmarks `_.

Discovery Performance
=====================

Simple Participant Discovery: reduced bandwidth usage may delay discovery time
--------------------------------------------------------------------------------

In earlier *Connext* releases, when a participant discovered a new
participant, it sent its participant announcement back to the new
participant, as well as to all other discovered peers and its initial peers
list, an ``initial_participant_announcements`` number of times. In release
7, the participant announcement is sent only to the new participant, a
``new_remote_participant_announcements`` number of times. The other
discovered participants already have this information, so, previously, much
of the traffic generated when a new participant was discovered was wasted
bandwidth. The default ``new_remote_participant_announcements`` value is
also smaller than ``initial_participant_announcements``, to further reduce
bandwidth usage.

These improvements reduce unnecessary bandwidth usage, but they have the
side effect of potentially delaying discovery between two participants that
miss each other's participant announcements. For example, consider three
participants: A, B, and C. Participant A has Participant B in its initial
peers list, and B has A in its list as well. They were both started at the
same time, along with 100 other participants, so all of their participant
announcements to each other were dropped by the network due to buffer
overflows. As a result, A and B do not discover each other. Now Participant
C joins, with Participant A in its initial peers list. In earlier *Connext*
versions, A would respond to C, as well as send an announcement to B,
therefore triggering discovery between A and B (if A and B had already
discovered each other, this message would be redundant, wasted bandwidth).
In release 7, however, A will only respond directly to C. Participants A
and B will discover each other at the next
``participant_liveliness_assert_period`` (or
``participant_announcement_period``, if you are using
:link_spdp2_usersman_710:`Simple Participant Discovery 2.0 <>`), when they
send out their periodic participant announcement to their peers.

A number of QoS settings can be used to speed up discovery in these cases.
You can increase the number of ``initial_participant_announcements`` and/or
``new_remote_participant_announcements`` that are sent, elongating the
potential discovery phase and hedging against dropped packets. You can also
try increasing the separation between ``min_initial_announcement_period``
and ``max_initial_announcement_period``; the
``initial_participant_announcements`` and
``new_remote_participant_announcements`` are sent at a random time between
the min/max ``initial_announcement_period`` values, so a wider window makes
collisions and dropped packets less likely. Finally, you can increase the
receive buffer sizes of the *Connext* transports that you are using, as
well as those of the OS/kernel layers, to protect against dropped packets.
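As a sketch, a participant QoS along these lines applies that tuning. All
values are illustrative, not recommendations, and the period tags are
assumed to use the full ``DiscoveryConfigQosPolicy`` field names
(``min_initial_participant_announcement_period`` and
``max_initial_participant_announcement_period``):

.. code-block:: xml

    <domain_participant_qos>
        <discovery_config>
            <!-- Send more announcements to hedge against dropped packets -->
            <initial_participant_announcements>10</initial_participant_announcements>
            <new_remote_participant_announcements>5</new_remote_participant_announcements>
            <!-- Widen the min/max window so announcements are more spread
                 out, making collisions less likely -->
            <min_initial_participant_announcement_period>
                <sec>0</sec>
                <nanosec>10000000</nanosec>
            </min_initial_participant_announcement_period>
            <max_initial_participant_announcement_period>
                <sec>1</sec>
                <nanosec>0</nanosec>
            </max_initial_participant_announcement_period>
        </discovery_config>
    </domain_participant_qos>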
For details on discovery performance, see
`RTI Connext Performance Benchmarks `_.

(Experimental) Simple Participant Discovery 2.0: interaction with Security Plugins
------------------------------------------------------------------------------------

There are currently known scalability challenges with large-scale systems
that use the *Security Plugins* in combination with
:link_spdp2_usersman_710:`Simple Participant Discovery 2.0 <>`, where
"large" means multiple hundreds of participants discovering each other at
once. These issues may manifest as excess bandwidth usage, incomplete
discovery, liveliness lost events, or incomplete authentication periods.
These issues will be addressed in upcoming releases.