rtiddsspy / DdsDynamicData support of (absurdly) large enumerations

jcwenger

I have an application in which some of my enumerations are, by any sensible measure, absurdly large.  One specific example relevant to this is an enumeration with 76 defined enumeration values inside the braces.

This leads to some very large case statements in FooPlugin_deserialize_sample, as it attempts to assert that the input enumeration value is valid, but this has not (yet?) appeared to cause any real problem in the compiled code.

However, this does interfere with DynamicData / TypeCode support.  Specifically, rtiddsspy's handling of this struct fails with "rtiddsspy:  register_type error 4" for any struct containing a member of this enum type.  I presume this is because the number of elements in the array of enumeration values exceeds some defined maximum.  rtiddsspy recovers from this error, but abandons any attempt to interpret the struct as a dynamic type.  This results in an inability to print samples of this struct.

 

What mitigations exist for this?  I could declare the member as an integral type instead, while still comparing its value against the enumeration's values, but this would require a static_cast whenever I used it, and would expose an API with an integral value instead of a strongly typed enum whose interpretation is explicit.

Is there any mechanism, such as an IDL annotation, to say "this is an enum, but it's too big; don't generate typecode for it, just treat it as an integral type for serialization and deserialization, but treat it as the strongly typed enum it is in compiled code that includes the headers"?

Is there some mechanism to alias the type, so that the on-the-wire messages carry a related type containing an integral value, but the strongly typed enum is preserved at the API?

Is there some mechanism to reconfigure the DynamicData support to change the limit, such that it would accept these (absurdly) large enumerations?

Any other alternate suggestions?

--Jason C. Wenger

gianpiero

Hello Jason,

Could it be that the whole typecode is very big? If that is the case, DDS will not send the typecode on the wire and rtiddsspy won't be able to register the type correctly. You can increase the limits for sending the typecode on the wire by changing some QoS settings. Look at this howto to find out how.
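For example, something along these lines in the traditional C++ API (a sketch of the idea only; the exact field names and a suitable value are covered in the howto, and rtiddsspy needs a matching setting on its side):

#include "ndds/ndds_cpp.h"

// Sketch: raise the typecode serialization limit before creating the participant.
// type_code_max_serialized_length defaults to roughly 2 KB; 8192 is just an example value.
DDSDomainParticipant * create_participant_with_large_typecode(int domain_id)
{
    DDS_DomainParticipantQos participant_qos;
    DDSTheParticipantFactory->get_default_participant_qos(participant_qos);
    participant_qos.resource_limits.type_code_max_serialized_length = 8192;

    return DDSTheParticipantFactory->create_participant(
        domain_id, participant_qos,
        NULL /* listener */, DDS_STATUS_MASK_NONE);
}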

Let me know,
   Gianpiero

jcwenger

Thanks for the info, @gianpiero -- The type itself is small, but the enum typecode is very large; I'm sure the enum alone pushes it over the 2-3 KB limit referenced in the link.

The bigger issue is that, from a Dynamic Data perspective, knowing the enum values is a very low priority, while knowing the structure of the remaining fields is a high priority, and not having to manually configure QoS is also a high priority.

It would be much better if there were a mechanism to keep a typecode for the entire struct but have the enum show up as an undecoded integral type.  The ideal would be a line-item veto on what gets serialized into the typecode: typecode for everything except this one enum, which would instead carry the typecode of its integral storage representation rather than its enumerated values.  I need to use ContentFilteredTopics -- so I want typecode overall.

Perhaps it is best to do something like this:

typedef unsigned short FooType;

enum FooTypeValues
{
    // ...many lines of enumerators
};

struct FooContainingMessage
{
    Bar id; //@key
    Baz member;
    FooType enumMember;
};


And then simply compare and assign the integral payload of enumMember using the values declared in FooTypeValues.
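
From the C++ side that would look roughly like this (a sketch; FOO_TYPE_BAR is a placeholder enumerator, and I'm assuming the generated struct maps FooType to unsigned short):

// Sketch of using the workaround from compiled C++ code.
// FOO_TYPE_BAR is a placeholder enumerator from FooTypeValues.
FooContainingMessage sample;

// Assigning: an unscoped enum converts implicitly to the integral typedef.
sample.enumMember = FOO_TYPE_BAR;

// Interpreting: cast back to the enum wherever the strong type is wanted.
if (static_cast<FooTypeValues>(sample.enumMember) == FOO_TYPE_BAR) {
    // handle this kind of message
}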

Gerardo Pardo

It is an interesting idea to make the TypeCode more compact. We would give up the ability to validate that the enum was defined consistently on both sides... We are actually already looking at more efficient ways to distribute the type information, which could also help this use case without losing the visibility into the enum literals during type compatibility checks...

In the meantime it seems the approach you describe is the best you may have available. Does the generated code from that IDL do what you intend also in Java and C#?

Gerardo

jcwenger

@Gerardo, yes, you lose the type safety.  You also mitigate a performance hit.  Right now, if I leave the values in the enumeration, deserialize gets generated with a switch statement with 76 case labels, making 76 conditional jumps, which ensure that a deserialized value is actually in range, falling through to a default label that complains if it tries to deserialize something unknown.  That can't compile down to anything terribly healthy in terms of object-code runtime performance.  :)
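
Roughly the pattern I mean (illustrative only, not the actual rtiddsgen output; the names are placeholders):

// Illustrative only, not actual generated code. FOO_TYPE_A, FOO_TYPE_B, ...
// stand in for the 76 enumerators being validated on deserialization.
switch (value_from_wire) {
    case FOO_TYPE_A:
    case FOO_TYPE_B:
    // ... 74 more case labels ...
        sample->enumMember = static_cast<FooTypeValues>(value_from_wire);
        break;
    default:
        return DDS_BOOLEAN_FALSE;  // reject anything out of range
}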

I could do this today with some really evil codegen: if I have EnumerationsFull.idl and EnumerationsBrief.idl and run codegen on both, but mix together the EnumerationsFull.h (containing the C enum definition with all the values) with the EnumerationsBriefSupport.cpp (which says field F is a long, prints as an integer, packs as 32 bits, etc.), I could get a hybrid, also assuming that a C++ enum gets stored in memory as an int32.  But that's too much evil to inject into my build environment. :)