Using 64-bit and larger types


I would like to be able to send 128-bit integer values over DDS, but I cannot see how to implement this in IDL. I have tried working with 64-bit integers using the following IDL:

ULONG64 UUID_MSB; // UUID, most significant bits
ULONG64 UUID_LSB; // UUID, least significant bits

This seems to parse OK when run through the code generator. However, when I try to compile the resulting C++ code, I get a load of errors associated with ULONG64. The first couple are shown below.

Error 2 error C3861: 'ULONG64_get_typecode': identifier not found c:\demorig\04_sw_implementation\idregistry\idregistry\idregistry_registerresourceid\idregistry.cxx 2835 1 IdRegistry_RegisterResourceId_subscriber
Error 3 error C3861: 'ULONG64_get_typecode': identifier not found c:\demorig\04_sw_implementation\idregistry\idregistry\idregistry_registerresourceid\idregistry.cxx 2836 1 IdRegistry_RegisterResourceId_subscriber

Can anyone please help with this? Should it work? And is there a way to use 128-bit integers directly, so that I do not have to split the value into two 64-bit halves?

rip
Try:

struct myUUIDType {
    long long UUID_MSB;
    long long UUID_LSB;
};

Cf. section 3.3.4 of the Core user's manual (under the Documentation link to the left), page 3-39, for the IDL-specific type names.

It runs through the code generator because rtiddsgen believes you when you tell it that there is such a thing as an IDL-defined 'struct ULONG64 {...};' type available at compile time of your application.
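On the application side, the splitting and recombining can be done with plain shifts. A minimal sketch, assuming the UUID is held as 16 big-endian bytes (the helper names are my own, not generated code):

```cpp
#include <cstdint>
#include <cstddef>

// Pack 16 big-endian UUID bytes into two 64-bit halves
// (hypothetical helpers, not part of the rtiddsgen-generated code).
void uuid_bytes_to_halves(const uint8_t bytes[16], uint64_t& msb, uint64_t& lsb) {
    msb = lsb = 0;
    for (std::size_t i = 0; i < 8; ++i) {
        msb = (msb << 8) | bytes[i];       // bytes 0..7  -> most significant half
        lsb = (lsb << 8) | bytes[i + 8];   // bytes 8..15 -> least significant half
    }
}

// Unpack the two halves back into 16 big-endian bytes on the receiving side.
void uuid_halves_to_bytes(uint64_t msb, uint64_t lsb, uint8_t bytes[16]) {
    for (std::size_t i = 0; i < 8; ++i) {
        bytes[7 - i]  = static_cast<uint8_t>(msb >> (8 * i));
        bytes[15 - i] = static_cast<uint8_t>(lsb >> (8 * i));
    }
}
```

The two `uint64_t` values would then be assigned to the `UUID_MSB` / `UUID_LSB` fields of the generated sample before writing, and recombined after reading.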


OK. Thanks very much.

I take it there is no 128-bit integer type?

rip

64-bit is the largest. The OMG DDS group would need to look at adding 128-bit support in the next iteration of the IDL spec for DDS. That should be easier now that they have taken the step of separating IDL (CORBA) from IDL (DDS).
Gerardo Pardo

It seems that most programming languages lack support for 128-bit integers. For example, in Java and C# you need to use BigInteger, which is arbitrary-sized but apparently low performance. In C/C++ you need to use a compiler extension or a library like Boost. Because of this it would be hard for IDL to support 128-bit integers.

IDL focuses on defining types that are portable and can easily be mapped to the common programming languages. Since those languages do not support Int128 natively, the mapping would have to be to some other type, so you might as well use that type in the IDL as well.

What I would do is define Int128 like this:

struct Int128 {
    long long high;
    unsigned long long low;
};

This will reserve 128 bits. If your language has 128-bit integer support, you could copy back and forth between that "native Int128" and the "struct Int128", so that all your arithmetic is done on the native type and the sending and receiving use "struct Int128". In some languages like C/C++ you could even save the copy and reinterpret the pointer as needed, as long as the endianness allows it. For example, the layout above would only work on big-endian machines; to have it work on little-endian machines you would need to define the type as:

struct Int128 {
    unsigned long long low;
    long long high;
};
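As an illustration of the copy-back-and-forth idea, here is a sketch using the `__int128` compiler extension available in GCC and Clang (the struct mirrors the little-endian layout above; the helper names are mine, not part of any DDS API):

```cpp
#include <cstdint>

// Field order for little-endian machines, matching the IDL above.
struct Int128 {
    unsigned long long low;
    long long high;
};

#if defined(__SIZEOF_INT128__)  // GCC/Clang extension; not standard C++

// Copy the native 128-bit value into the wire representation.
Int128 to_wire(__int128 v) {
    Int128 r;
    r.low  = static_cast<unsigned long long>(v);   // low 64 bits
    r.high = static_cast<long long>(v >> 64);      // high 64 bits
    return r;
}

// Rebuild the native 128-bit value from the two halves.
__int128 from_wire(const Int128& r) {
    // Assemble in unsigned arithmetic to avoid shifting a negative value.
    unsigned __int128 u =
        (static_cast<unsigned __int128>(
             static_cast<unsigned long long>(r.high)) << 64) | r.low;
    return static_cast<__int128>(u);
}

#endif
```

All arithmetic is then done on the native `__int128`, and only `to_wire`/`from_wire` touch the fields of the DDS sample.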

Gerardo