To whom it may concern,
I have a logger application that encodes dynamic data to bytes to save into a file, and another app that reads these bytes and transforms them back to dynamic data. The decoding part is not very efficient time-wise, and I was wondering if there is a more efficient way to do this. The code for the encoding and decoding parts is below:
Encoding part:
cdr_data = dynamic_data.to_cdr_buffer()
data_in_bytes = bytes(cdr_data)

Decoding part:
# Creating and populating an Int8Seq object
bytes_list = list(data_bytes)
char_list = list(map(chr, bytes_list))
data_int8seq = dds.Int8Seq(char_list)

# Creating a DynamicData object based on the Int8Seq object information
dynamic_data = dds.DynamicData(data_type)
dynamic_data = dynamic_data.from_cdr_buffer(data_int8seq)
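For reference, the cost of this decode path can be measured with something like the following (a minimal timeit sketch; it assumes the dds module and the data_bytes and data_type objects shown above):

import timeit

def decode(data_bytes, data_type):
    # Same decode path as above: bytes -> ints -> 1-char strings -> Int8Seq
    bytes_list = list(data_bytes)
    char_list = list(map(chr, bytes_list))
    data_int8seq = dds.Int8Seq(char_list)
    dynamic_data = dds.DynamicData(data_type)
    return dynamic_data.from_cdr_buffer(data_int8seq)

# Average seconds per decode over 1000 runs
print(timeit.timeit(lambda: decode(data_bytes, data_type), number=1000) / 1000)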
I added an update to the develop branch on GitHub that allows direct conversion from unsigned bytes, so you can pass data_bytes in directly as long as it is a Python buffer object (e.g. bytes).
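With that change, the decode step would presumably reduce to something like this (a sketch assuming the develop build, where from_cdr_buffer accepts a Python buffer object directly):

# data_bytes is a bytes object, which supports the Python buffer protocol,
# so no intermediate Int8Seq is needed on the develop branch
dynamic_data = dds.DynamicData(data_type)
dynamic_data = dynamic_data.from_cdr_buffer(data_bytes)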
There may be a simple speed-up that doesn't require rebuilding the API; you can try the following (the rebuilt version from develop will still be faster, however):
import array

...

# Convert to a DynamicData object; data_bytes is defined by this point
data_array = array.array('b', data_bytes)
dynamic_data = dds.DynamicData(data_type)
dynamic_data = dynamic_data.from_cdr_buffer(data_array)

Thanks Marc, that works with the develop branch.