We are looking into using the DynamicData / DynamicType interfaces specified by the DDS-XTypes standard. The spec mentions that dynamic language bindings "may be lower performance to use than plain data objects". Before we dive deeper into trying it and building prototypes, we would like to ask for an assessment of the performance implications. We are interested in a rough estimate (rather than exact numbers).
Is the performance impact mostly limited to the discovery phase or does it affect the handling of every single message?
What is the expected overhead for each message handled / serialized / transferred?
Thanks!
Hi Dirk,
Your question is really about the DynamicData aspect of XTYPES. XTYPES itself does not force the use of DynamicData, so if you use XTYPES to get the flexibility of being able to evolve your type system, but still define the data types in IDL/XML and generate the serialization/deserialization code, there should be no performance impact. Well, almost: if your type is "Mutable" rather than the "Extensible" default, then the wire encapsulation is a bit more verbose... But that is the minimum necessary to support modifying data types without breaking interoperability.
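For illustration (a hypothetical type; the exact annotation syntax depends on the IDL version and vendor), opting into mutable extensibility in IDL could look like this:

    // "Extensible" is the default, so nothing extra is needed for that case.
    // Marking a type mutable is what adds the extra per-member headers to the
    // wire encapsulation mentioned above.
    @extensibility(MUTABLE)
    struct SensorReading {
        long sensor_id;
        float value;
    };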
Regarding the use of DynamicData: it is hard to answer this question in a generic fashion, because it will largely depend on the particular vendor implementation as well as on the complexity of the data type itself.
The impact is per message. There is very little impact at discovery time.
Assume you have types such as these:
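(The original type definitions are not preserved in this thread; the following IDL is a stand-in example with a nested struct and primitive members, the kind of shape discussed below.)

    struct Point {
        float x;
        float y;
    };

    struct Track {
        long id;
        Point position;
    };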
If you generate code from the above types in IDL, then the serialization/deserialization code is generated to access each field in the data type via direct member access and then call functions (which we, RTI, typically optimize to macros) to copy the bytes into a stream.
So we will generate something that ends up being (in pseudo-code):
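(A stand-in sketch based on the hypothetical Track type above; serialize_long / serialize_float represent the stream functions/macros, and error checking is omitted.)

    void Track_serialize(Stream *stream, const struct Track *sample)
    {
        /* each field is reached by direct member access and handed to a
           stream function/macro that copies its bytes */
        serialize_long(stream, sample->id);
        serialize_float(stream, sample->position.x);
        serialize_float(stream, sample->position.y);
    }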
This omits some details: aside from error checking, it also does not handle the fact that a float may not be 4 bytes on certain platforms. The point is to illustrate that the end result is pretty close to optimal. You can see the actual code when you run rtiddsgen; it is all placed in the <Typename>Support.c file.

However, if you use the DynamicData API, then there is no code generated, so:
(1) The typecode has to be interpreted to determine what needs to be serialized and the type of the element
(2) The access to the actual data value is also a function call rather than a direct member access as in:
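(Again a rough sketch, not actual vendor code: the typecode is walked member by member and every value is fetched through a function call.)

    /* pseudo-code: interpret the typecode and fetch each value via a function call */
    for (i = 0; i < TypeCode_member_count(typecode); ++i) {
        switch (TypeCode_member_kind(typecode, i)) {
        case TK_LONG:
            DynamicData_get_long(sample, i, &long_value);
            serialize_long(stream, long_value);
            break;
        case TK_FLOAT:
            DynamicData_get_float(sample, i, &float_value);
            serialize_float(stream, float_value);
            break;
        /* ... other member kinds; nested structs recurse into their members ... */
        }
    }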
So there are more function calls, plus loops iterating over each member with switches to determine the member kind so that the appropriate function can be called, etc.
The above is not exactly how it is done, but it reflects the fact that things that were generated before are now "interpreted".
The actual performance impact of the extra function calls will depend on the complexity of the type. If the type is deeply nested and has a lot of primitive fields, then the impact will be bigger. If the type is shallower and the main types are strings and arrays/sequences of primitive elements, then the impact will be smaller. This is because arrays of primitive elements are optimized so that a single function is called to serialize the whole array rather than iterating over each member.
I think getting a more precise answer would require a test with some concrete data types...
Gerardo
Hi Gerardo,
thank you for the thorough answer.
After prototyping some things with DynamicData, I noticed that the API does not provide a method for loaning any of the member data or getting pointers to contiguous memory.
Did I miss that in the API or is that another case where using the DynamicData interface will cost us performance?
If it is currently not available, is it planned to add this kind of accessor in the future?
Thanks!
Hi Dirk,
Regarding loaning a member: do you think the API DDS_DynamicData_bind_complex_member() would be what you're looking for? When using this API you get access to a complex field (e.g. a struct) inside a DynamicData object. Take into account that this method has to be used in conjunction with DDS_DynamicData_unbind_complex_member() as you go down and back up in the DynamicData object tree.
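A rough usage sketch (the "position" member name and the surrounding setup are hypothetical, and exact signatures may differ between Connext versions):

    DDS_DynamicData member_data;   /* initialized elsewhere, e.g. with
                                      DDS_DynamicData_initialize() */
    DDS_ReturnCode_t retcode;

    /* bind member_data to the nested "position" field of the sample */
    retcode = DDS_DynamicData_bind_complex_member(
        sample, &member_data, "position", DDS_DYNAMIC_DATA_MEMBER_ID_UNSPECIFIED);

    /* ... get/set fields through member_data, e.g. DDS_DynamicData_set_float() ... */

    /* unbind before accessing the parent sample again */
    retcode = DDS_DynamicData_unbind_complex_member(sample, &member_data);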
Hope this helps.
Thanks,
Juanlu
Hi Juanlu,
I was thinking about the loan_contiguous / unloan functions, to efficiently set for example an OctetSeq value. The DynamicData interface doesn't seem to provide those, and therefore I have to copy the bytes one more time, which I would like to avoid.
Thanks
Hi Dirk,
The primitive types' sequence-based set/get methods in the DynamicData API take the contiguous buffer of a sequence and set it directly in the DynamicData sample. For example, to set an octet sequence you could use DDS_DynamicData_set_octet_seq().
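For instance, a rough sketch (hypothetical member name and buffer; exact signatures may differ between versions). Note that the set call still copies the bytes into the DynamicData sample, which is the extra copy being discussed:

    struct DDS_OctetSeq payload;
    DDS_Octet buffer[1024];   /* application data, filled elsewhere */

    DDS_OctetSeq_initialize(&payload);
    /* loan the application buffer to the sequence (no copy here) */
    DDS_OctetSeq_loan_contiguous(&payload, buffer, 1024, 1024);

    /* this call copies the sequence contents into the DynamicData sample */
    DDS_DynamicData_set_octet_seq(
        sample, "payload", DDS_DYNAMIC_DATA_MEMBER_ID_UNSPECIFIED, &payload);

    DDS_OctetSeq_unloan(&payload);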
Have you seen this family of methods?
Thanks,
Juanlu
Hi Juanlu,
yes, I have seen these methods and use them for now.
But they imply that the underlying data is copied. I was wondering if there is a way to avoid that extra copy similar to:
* DataReaders providing `take` / `return_loan`
* Sequences providing `loan_contiguous` / `unloan`
Thanks
Hi Dirk,
Yes, the set methods end up being a memcpy call. Although they are efficient, they are not as efficient as a loan would be.

I don't think there's an API for what you're looking for, and as far as I know there are no plans to add such an API any time soon.
Sorry for not being very helpful.
Thanks,
Juanlu