I have a CAD application in which I publish and subscribe data from one CAD machine to another. Publishing and subscribing work, but after running the same application for about a day there was one instance where I saw the CAD application crash. I can see the following exception being raised:
7-22-2019 21:46:44.529 DDS.Retcode_OutOfResources:
[1563850004.528993] U0000000000000e20
WriterHistoryVirtualWriterList_removeVirtualSample:RTI0x2000007:!virtual sample not found
[1563850004.528993] U0000000000000e20
WriterHistoryVirtualWriterList_removeVirtualSample:RTI0x2000007:!virtual sample not found
[1563850004.528993] U0000000000000e20
PRESWriterHistoryDriver_addWrite:RTI0x2000007:!out of resources
[1563850004.528993] U0000000000000e20
PRESPsWriter_writeInternal:RTI0x2000005:collator out of resources
at DDS.DataWriter.write_untyped(Object instance_data, InstanceHandle_t& handle) in f:\home\build3\rti\waveworks\ndds531\connextdds\dds_dotnet.1.0\srccpp\managed\managed_publication.cpp:line 2161
at tcos_core.DDS_Interface.sendMessage(Object instance) in C:\Work\TNG_DEV\TCOS_DDS_LIBRARIES\TCOS_CORE_C#\TCOS_CORE\DDS_Interface.cs:line 88
at TMDS.DDSUtilities.Publish(String serializedMessage, String publishTopic) in E:\Work\Projects\Denver\CrashFixes\_Dispatcher_Denver_Sprint 32_Hotfix 1134a_DDS\_TMDS-COMMUNICATION\DDSBroadCast\DDSUtilities.vb:line 56
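For context, the publish path looks roughly like this (a simplified sketch reconstructed from the stack trace above; the class and field names and the logging are placeholders, not our production code; only write_untyped and the DDS.Retcode_OutOfResources exception type come from the trace):

// Simplified sketch of tcos_core.DDS_Interface.sendMessage; names are placeholders.
public class DdsInterfaceSketch
{
    // DataWriter created from DDS_Message_Profile (see the QoS XML below).
    private readonly DDS.DataWriter messageWriter;

    public DdsInterfaceSketch(DDS.DataWriter writer)
    {
        messageWriter = writer;
    }

    public void SendMessage(object instance)
    {
        DDS.InstanceHandle_t handle = DDS.InstanceHandle_t.HANDLE_NIL;
        try
        {
            // This is the call at the top of the stack trace; after the
            // application has run for a while it throws
            // DDS.Retcode_OutOfResources and the CAD application goes down.
            messageWriter.write_untyped(instance, ref handle);
        }
        catch (DDS.Retcode_OutOfResources ex)
        {
            System.Console.WriteLine(ex);
            throw;
        }
    }
}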
The same configuration (USER_QOS_PROFILE.xml) is used on both machines, but it is the publishing machine that crashes, every time with the error above.
Initially we thought it might be caused by a lack of memory, so we increased the RAM to 16 GB and the disk space by 20 GB, but that did not help.
We exchange the same data with plain socket programming and never see any crashes there; with DDS, however, the application crashes after running for some time.
Here is my QoS profile:
<dds xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="C:/Program Files/rti_connext_dds-5.3.1/resource/schema/rti_dds_qos_profiles.xsd"
version="5.3.1">
<!-- QoS Library containing the QoS profile used in the generated example.
-->
<qos_library name="DDS_Message_Library">
<!-- QoS profile used to configure reliable communication between the DataWriter
and DataReader created in the example code.
-->
<qos_profile name="DDS_Message_Profile" base_name="BuiltinQosLibExp::Generic.StrictReliable" is_default_participant_factory_profile ="true">
<participant_factory_qos>
<logging>
<verbosity>ALL</verbosity>
<category>ENTITIES</category>
<print_format>MAXIMAL</print_format>
<output_file>DDS_VorboseLogger.txt</output_file>
</logging>
</participant_factory_qos>
<!--publisher_qos>
<presentation>
<access_scope>TOPIC_PRESENTATION_QOS</access_scope>
<ordered_access>true</ordered_access>
</presentation>
</publisher_qos>
<subscriber_qos>
<presentation>
<access_scope>TOPIC_PRESENTATION_QOS</access_scope>
<ordered_access>true</ordered_access>
</presentation>
</subscriber_qos-->
<datawriter_qos>
<history>
<!--kind>DDS_KEEP_ALL_HISTORY_QOS</kind-->
<kind>DDS_KEEP_LAST_HISTORY_QOS</kind>
<depth>2</depth>
</history>
<reliability>
<kind>RELIABLE_RELIABILITY_QOS</kind>
<max_blocking_time>
<sec>10</sec>
<nanosec>0</nanosec>
</max_blocking_time>
<acknowledgment_kind>APPLICATION_AUTO_ACKNOWLEDGMENT_MODE</acknowledgment_kind>
</reliability>
<publish_mode>
<kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
</publish_mode>
<destination_order>
<kind>DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS</kind>
<!--source_timestamp_tolerance></source_timestamp_tolerance-->
</destination_order>
<publication_name>
<name>DDS_MessageDataWriter</name>
</publication_name>
<!-- TQ is available only for transient local DWs -->
<durability>
<kind>PERSISTENT_DURABILITY_QOS</kind>
</durability>
<durability_service>
<service_cleanup_delay>
<sec>0</sec>
<nanosec>0</nanosec>
</service_cleanup_delay>
</durability_service>
<protocol>
<rtps_reliable_writer>
<heartbeat_period>
<sec>0</sec>
<nanosec>20000</nanosec>
</heartbeat_period>
<!-- See high_watermark -->
<fast_heartbeat_period>
<sec>0</sec>
<nanosec>20000</nanosec>
</fast_heartbeat_period>
<late_joiner_heartbeat_period>
<sec>0</sec>
<nanosec>20000</nanosec>
</late_joiner_heartbeat_period>
</rtps_reliable_writer>
<serialize_key_with_dispose>true</serialize_key_with_dispose>
</protocol>
<writer_data_lifecycle>
<autodispose_unregistered_instances>true</autodispose_unregistered_instances>
<autopurge_unregistered_instances_delay>
<sec>0</sec>
</autopurge_unregistered_instances_delay>
</writer_data_lifecycle>
</datawriter_qos>
<datareader_qos>
<subscription_name>
<name>DDS_MessageDataReader</name>
</subscription_name>
<!-- TQ is available only for volatile DRs -->
<durability>
<!--kind>VOLATILE_DURABILITY_QOS</kind-->
<kind>PERSISTENT_DURABILITY_QOS</kind>
</durability>
<history>
<!--kind>DDS_KEEP_ALL_HISTORY_QOS</kind-->
<kind>DDS_KEEP_LAST_HISTORY_QOS</kind>
<depth>2</depth>
</history>
<destination_order>
<kind>DDS_BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS</kind>
<!--source_timestamp_tolerance></source_timestamp_tolerance-->
</destination_order>
<reliability>
<kind>RELIABLE_RELIABILITY_QOS</kind>
</reliability>
<protocol>
<propagate_dispose_of_unregistered_instances>true</propagate_dispose_of_unregistered_instances>
</protocol>
</datareader_qos>
<participant_qos>
<!--
The participant name, if it is set, will be displayed in the
RTI tools, making it easier for you to tell one
application from another when you're debugging.
-->
<participant_name>
<name>DDS_MessageParticipant</name>
<role_name>DDS_MessageParticipantRole</role_name>
</participant_name>
<receiver_pool>
<buffer_size>1048576</buffer_size>
</receiver_pool>
<property>
<value>
<element>
<name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
<value>1048576</value>
</element>
<element>
<name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
<value>1048576</value>
</element>
<element>
<name>dds.transport.UDPv4.builtin.send_socket_buffer_size</name>
<value>1048576</value>
</element>
</value>
</property>
<resource_limits>
<type_object_max_serialized_length>131072</type_object_max_serialized_length>
</resource_limits>
</participant_qos>
</qos_profile>
</qos_library>
</dds>
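For completeness, this is roughly how the profile is loaded on both machines (a simplified sketch; the domain id and the class and variable names are placeholders, only the library and profile names come from the XML above):

// Simplified sketch of how the profile above is applied on both machines.
// The domain id and names are placeholders; the library/profile names match
// USER_QOS_PROFILE.xml.
public static class DdsSetupSketch
{
    public static DDS.DomainParticipant CreateParticipant()
    {
        DDS.DomainParticipantFactory factory =
            DDS.DomainParticipantFactory.get_instance();

        DDS.DomainParticipant participant = factory.create_participant_with_profile(
            0,                                 // placeholder domain id
            "DDS_Message_Library",             // qos_library name from the XML
            "DDS_Message_Profile",             // qos_profile name from the XML
            null,                              // no listener
            DDS.StatusMask.STATUS_MASK_NONE);

        // The publisher, topic and DataWriter are then created against the same
        // library::profile pair (create_datawriter_with_profile); that writer is
        // the one whose write_untyped call throws DDS.Retcode_OutOfResources.
        return participant;
    }
}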
Please help me to resolve this issue.