Error when creating datawriter


I am writing some code in Java for an implementation of RTI DDS. One of the requirements is that we send logs every month over the network, as well as script files and results files that will be large in size (> 1,000,000 characters). Since these files will be very large, I tried to make the maximum size of the data being sent over RTI Integer.MAX_VALUE. However, when I then start my system with the multiple writers, with Integer.MAX_VALUE as the largest string that can be sent, I get the following output:

create_datawriter error

create_datawriter error

create_datawriter error

create_datawriter error

TypeSupportNativePeer_on_endpoint_attached:!get max serialized sample size
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
WriterHistorySessionManager_new:!create newAllocator
WriterHistoryMemoryPlugin_createHistory:!create sessionManager
PRESWriterHistoryDriver_new:!create _whHnd
PRESPsService_enableLocalEndpointWithCursor:!create WriterHistoryDriver
PRESPsService_enableLocalEndpoint:!enable local endpoint
[D0200|Pub(308)|T=logdata|DELETE Writer]TypeSupportNativePeer_on_endpoint_detached:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_on_endpoint_attached:!get max serialized sample size
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_get_serialized_key_max_size:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
WriterHistorySessionManager_new:!create newAllocator
WriterHistoryMemoryPlugin_createHistory:!create sessionManager
PRESWriterHistoryDriver_new:!create _whHnd
PRESPsService_enableLocalEndpointWithCursor:!create WriterHistoryDriver
PRESPsService_enableLocalEndpoint:!enable local endpoint
[D1000|Pub(308)|T=scriptset|DELETE Writer]TypeSupportNativePeer_on_endpoint_detached:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_on_endpoint_attached:!get max serialized sample size
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
WriterHistorySessionManager_new:!create newAllocator
WriterHistoryMemoryPlugin_createHistory:!create sessionManager
PRESWriterHistoryDriver_new:!create _whHnd
PRESPsService_enableLocalEndpointWithCursor:!create WriterHistoryDriver
PRESPsService_enableLocalEndpoint:!enable local endpoint
[D1300|Pub(308)|T=resultsdata|DELETE Writer]TypeSupportNativePeer_on_endpoint_detached:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_on_endpoint_attached:!get max serialized sample size
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_get_serialized_key_max_size:!precondition: endpoint_data == ((void *)0)
TypeSupportNativePeer_get_serialized_sample_max_size:!precondition: endpoint_data == ((void *)0)
WriterHistorySessionManager_new:!create newAllocator
WriterHistoryMemoryPlugin_createHistory:!create sessionManager
PRESWriterHistoryDriver_new:!create _whHnd
PRESPsService_enableLocalEndpointWithCursor:!create WriterHistoryDriver
PRESPsService_enableLocalEndpoint:!enable local endpoint
[D1500|Pub(308)|T=alarmstatus|DELETE Writer]TypeSupportNativePeer_on_endpoint_detached:!precondition: endpoint_data == ((void *)0) 

What does this output mean, and what is causing it?

rip

Hi,

RTI DDS statically allocates all the memory it thinks it will need when you instantiate an entity.  You're probably trying to allocate 80 petabytes of RAM or something (can't tell without seeing the IDL and QoS settings).

If your data model is four longs and your history depth is 8, then 8 slots will be allocated, each of 16 bytes (plus overhead): 128 bytes of data (not counting sample_info) per writer or reader using that model. The core manages its own heap, but if it doesn't have the necessary space it will allocate more from the system heap. There are QoS settings to make this allocation happen up front at startup, so you get deterministic runtime use of the middleware's heap.

If your data model is four strings of Integer.MAX_VALUE length (a 32-bit signed int, so roughly 2 GB each) with a history depth of 8, then 8 slots will be allocated, each of roughly 8 GB (plus overhead). So, about 64 GB of heap, per writer or reader, for that type.
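
Just to make the arithmetic concrete, here is a back-of-the-envelope sketch in plain Java, assuming roughly one byte per string character and ignoring per-sample overhead:

public class HeapEstimate {
    public static void main(String[] args) {
        long depth = 8;                             // history depth
        long smallSample = 4L * 4;                  // four 4-byte longs
        long hugeSample  = 4L * Integer.MAX_VALUE;  // four max-length strings, ~2 GB each

        System.out.println(depth * smallSample + " bytes");  // 128 bytes
        System.out.println(depth * hugeSample + " bytes");   // 68719476704 bytes, i.e. ~64 GB
    }
}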

So while 80 petabytes might sound like hyperbole... consider that it is possible, with bad IDL, incorrect assumptions, and unmanaged QoS, to get there.

From the error messages, it isn't even getting to that point. The amount that needs to be allocated is too large, and you've probably already blown through Java's available heap (which is controlled by flags passed to the JVM at startup, before your application even runs inside it).
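
As a quick sanity check, you can print the heap ceiling your JVM was actually started with (set via -Xmx and friends) and compare it to the estimate above; this is just a sketch using the standard java.lang.Runtime API:

public class ShowHeapLimit {
    public static void main(String[] args) {
        // Maximum heap the JVM will attempt to use, e.g. as set with -Xmx.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max JVM heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}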

Start by rethinking what you want to do, and how you would do it using DDS.  Probably send the data in chunks:

struct LargeFileTransferPayloadType {
    long id; //@key
    long session; //@key
    string<16384> payload;
    long checksum;
};

and reassemble on the receiver side. Use history depth = 1, strict reliability, and careful settings for max_samples_per_instance and max_instances, then tune for what is actually happening on the network. You probably want to use the asynchronous writer from the publisher, with flow control, so you don't swamp your bandwidth with log data to the detriment of everyone and everything else trying to use the network. You'll also need some internal protocol for handling setup and teardown.
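
As a rough illustration only, not a drop-in implementation, the writer-side QoS for that approach might look something like the sketch below with the RTI Connext DDS Java API. The ChunkWriterFactory class and the specific limit values are my own placeholders and will need tuning for your system:

import com.rti.dds.infrastructure.HistoryQosPolicyKind;
import com.rti.dds.infrastructure.PublishModeQosPolicyKind;
import com.rti.dds.infrastructure.ReliabilityQosPolicyKind;
import com.rti.dds.infrastructure.StatusKind;
import com.rti.dds.publication.DataWriter;
import com.rti.dds.publication.DataWriterQos;
import com.rti.dds.publication.Publisher;
import com.rti.dds.topic.Topic;

public final class ChunkWriterFactory {

    // Hypothetical helper: creates a writer tuned for chunked file transfer.
    public static DataWriter createChunkWriter(Publisher publisher, Topic topic) {
        DataWriterQos qos = new DataWriterQos();
        publisher.get_default_datawriter_qos(qos);

        // Strict reliability so no chunk is silently dropped.
        qos.reliability.kind = ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;

        // Keep only the latest chunk per instance, and bound the writer queue.
        qos.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
        qos.history.depth = 1;
        qos.resource_limits.max_instances = 8;
        qos.resource_limits.max_samples_per_instance = 1;
        qos.resource_limits.max_samples = 8;

        // Asynchronous publishing (serviced by the default flow controller) so
        // large transfers don't starve other traffic; this is an RTI extension QoS.
        qos.publish_mode.kind = PublishModeQosPolicyKind.ASYNCHRONOUS_PUBLISH_MODE_QOS;

        return publisher.create_datawriter(topic, qos, null /* listener */,
                                           StatusKind.STATUS_MASK_NONE);
    }
}

The receiver side needs matching resource limits plus the reassembly logic keyed on id/session.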

Also, DDS isn't really the optimum method for file transfer. Consider using DDS to set up an SFTP session instead, and let SFTP handle the actual transfer.