My code is as follows:
from common.ronovo_topic import *
from common.TopicName import *
import time
from asyncio.coroutines import iscoroutinefunction
import rti.connextdds as dds  # assumed; dds may already be re-exported by the star imports above


def cost_time(func):
    """Decorator that prints how long each call to the wrapped function takes."""
    def fun(*args, **kwargs):
        t = time.perf_counter()
        result = func(*args, **kwargs)
        print(f'func {func.__name__} cost time:{time.perf_counter() - t:.8f} s')
        return result

    async def func_async(*args, **kwargs):
        t = time.perf_counter()
        result = await func(*args, **kwargs)
        print(f'func {func.__name__} cost time:{time.perf_counter() - t:.8f} s')
        return result

    # Return the async or sync wrapper depending on the decorated function.
    if iscoroutinefunction(func):
        return func_async
    return fun


class DataReaderListenerImpl(dds.NoOpDataReaderListener):
    # Take the samples as soon as they arrive and time how long it costs.
    @cost_time
    def on_data_available(self, t_reader: dds.DataReader):
        t_reader.take()


participant = dds.DomainParticipant(domain_id=0)
topic = dds.Topic(participant, TopicNames.PatientAct[0], TPC_PatientAct)
reader = dds.DataReader(dds.Subscriber(participant), topic)
# readerImpl = dds.DataReader(t_subscriber, dds.Topic(t_topic), t_qos)
# reader = RtiDataReader(readerImpl, t_cb)
listener = DataReaderListenerImpl()
reader.set_listener(listener, dds.StatusMask.ALL)


@cost_time
def take(reader):
    # reader.take()
    # print(a)
    pass


while True:
    time.sleep(0.001)
    take(reader=reader)
The printed output:
func on_data_available cost time:0.00054845 s
func take cost time:0.00000328 s
func on_data_available cost time:0.00048014 s
func take cost time:0.00000279 s
func on_data_available cost time:0.00035045 s
func take cost time:0.00000237 s
I hope the cost time for one topic is under 0.01 ms, but it costs about 0.4 ms per topic instance.
What can I do to increase the rate?
Can you share the definition of TPC_PatientAct?
What are the sizes of the arrays used in the data structure? i.e., what are the values of T_custem_JNT_NUM, T_custem_DOG_NUM, T_custem_DRV_NUM, T_INSTR_NAME_LEN?
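For reference, I mean something along these lines (a purely hypothetical sketch using the rti.idl dataclass syntax; the member names and the constant values here are placeholders I made up, not your actual type):

import rti.idl as idl
from dataclasses import field
from typing import Sequence

# Placeholder values -- these are exactly the constants I'm asking about.
T_custem_JNT_NUM = 7
T_custem_DRV_NUM = 8
T_INSTR_NAME_LEN = 32

@idl.struct(
    member_annotations={
        'joint_pos': [idl.array(T_custem_JNT_NUM)],
        'drive_state': [idl.array(T_custem_DRV_NUM)],
        'instr_name': [idl.bound(T_INSTR_NAME_LEN)],
    }
)
class TPC_PatientAct_Sketch:
    joint_pos: Sequence[float] = field(default_factory=lambda: [0.0] * T_custem_JNT_NUM)
    drive_state: Sequence[int] = field(default_factory=lambda: [0] * T_custem_DRV_NUM)
    instr_name: str = ''

The cost of taking a sample scales with how many array/sequence elements a type like this contains, which is why I'm asking for the constant values.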
The total size is 1472.
I'm not sure what you mean by "total size". Is that the size of each of the arrays defined in the data type?
If so, your performance may be limited by how our Connext DDS Python API handles data types with array or sequence members. We're working on improving the performance for those data types, but at this time your performance may be as good as it gets.
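One way to check whether the type itself is the bottleneck is to time take() on a trivial user type with the same setup. This is only a sketch under the assumption that your dds module is rti.connextdds; the Trivial type and the 'TrivialBench' topic name are made up for the test:

import time
import rti.connextdds as dds
import rti.idl as idl

@idl.struct
class Trivial:
    value: int = 0  # a single scalar member, no arrays or sequences

participant = dds.DomainParticipant(domain_id=0)
topic = dds.Topic(participant, 'TrivialBench', Trivial)
writer = dds.DataWriter(dds.Publisher(participant), topic)
reader = dds.DataReader(dds.Subscriber(participant), topic)

writer.write(Trivial(value=1))
time.sleep(0.1)  # give the sample time to reach the reader

t = time.perf_counter()
reader.take()
print(f'take() on a trivial type cost time:{time.perf_counter() - t:.8f} s')

If this number is far below the ~0.4 ms you measure for TPC_PatientAct, that would point at the per-element conversion cost of the array/sequence members rather than at the reader setup itself.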
Sorry that I cannot describe it clearly. I mean that the size of TPC_PatientAct is 1472, as follows:
cout << sizeof(TPC_PatientAct) << endl; will print 1472.
Total size isn't the issue. It's the number of elements in a structure. Again, what are the values of T_custem_JNT_NUM, T_custem_DOG_NUM, T_custem_DRV_NUM, T_INSTR_NAME_LEN?
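To illustrate the distinction with two hypothetical types (not yours): both hold 1024 bytes of array data, but the second has four times as many elements, so if per-element conversion dominates, as suggested above, it would be the slower one to take() even though sizeof reports the same total:

import rti.idl as idl
from dataclasses import field
from typing import Sequence

@idl.struct(member_annotations={'data': [idl.array(128)]})
class FewElements:   # 128 doubles -> 1024 bytes, 128 elements
    data: Sequence[float] = field(default_factory=lambda: [0.0] * 128)

@idl.struct(member_annotations={'data': [idl.array(512)]})
class ManyElements:  # 512 int16s -> 1024 bytes, 512 elements
    data: Sequence[idl.int16] = field(default_factory=lambda: [0] * 512)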
Can you give some response?
Thanks!
Sorry, there's not much that can be done via this forum. If you have a paid Connext developer license, you may be able to get support through RTI's support team who would be able to look directly at your issue.
It's likely that the time is spent handling the complexity of your data structure. I know that RTI is working on improving the performance of the Python API when serializing and deserializing data structures. I suggest that you try the latest release to see if anything improves for you. We will have another release around April 2023 as well.
By the way, I never got an answer to my question "what are the values of T_custem_JNT_NUM, T_custem_DOG_NUM, T_custem_DRV_NUM, T_INSTR_NAME_LEN?"