I have to exchange images between two PCs (1536*1536 bytes each) at 30 or 60 frames per second, depending on the configuration. The PCs are directly connected (point-to-point over a 1 Gbit/s or 10 Gbit/s Ethernet board, depending on the frame rate).
Requirements:
- Frame order must be respected.
- No data loss.
- Up to 500 consecutive frames kept as history.
- A subscriber can join at the beginning of, during, or at the end of the frame flow.
Given those constraints I tried to implement a QoS profile, without success: frame order is not always respected, and when a subscriber joins while frames are being sent, write() fails with a TIMEOUT (max_blocking_time is 10 s)...
I cannot manage to tune my QoS profile to match these constraints (my QoS profile is attached to this message). What are the best settings for my requirements?
Regards
Patrice
| Attachment | Size |
|---|---|
| user_qos_profiles.xml | 8.69 KB |
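For reference, a minimal sketch of a profile along these lines, in RTI Connext's XML QoS format. The library and profile names are illustrative (not taken from the attached file); the base profile and the 500-sample history depth come from the requirements above:

```xml
<!-- Sketch only: FrameStreamingLib/FrameStreaming are made-up names. -->
<dds>
  <qos_library name="FrameStreamingLib">
    <qos_profile name="FrameStreaming"
                 base_name="BuiltinQosLibExp::Generic.StrictReliable.LargeData.FastFlow">
      <datawriter_qos>
        <durability>
          <!-- keep data for late-joining readers -->
          <kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind>
        </durability>
        <history>
          <kind>KEEP_LAST_HISTORY_QOS</kind>
          <depth>500</depth> <!-- the 500-frame history requirement -->
        </history>
      </datawriter_qos>
      <datareader_qos>
        <durability>
          <kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind>
        </durability>
        <history>
          <kind>KEEP_LAST_HISTORY_QOS</kind>
          <depth>500</depth>
        </history>
      </datareader_qos>
    </qos_profile>
  </qos_library>
</dds>
```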
Hi,
In fact my problem is this: I fill and write a data sample (1536*1536 bytes) from a timer callback every 30 ms, as follows:
When a subscriber connects during the frame flow, at connection time the write calls block while the history is being sent to the reader. This blocking causes:
I am basing my configuration on the "StrictReliable.LargeData.FastFlow" built-in QoS profile. Is this normal behaviour? What can I do so that the write call order is respected?
Regards
Patrice
Hey Patrice,
As far as I know, DDS always respects the order of messages written by a writer (on a per-instance basis, unless you use query conditions, which is not the normal usage).
Are you using different keys for different frames?
Also, do you want the new reader to receive the messages at the "rate" they were originally written? That is not the use case durability is designed for.
To do that I would recommend:
1. Giving up on old data (new readers join the current feed of data, missing out on old data)
2. Using another technology more built for streaming video
3. Using a consumer-specific topic or key and adding provider logic to "replicate" the data, without using the durability QoS:
   1. Use a separate topic to coordinate new consumer topics, and have the provider send the data once a topic has been coordinated.
   2. When a match is made, use a key the consumer knows (like the reader GUID) and send all the data with that key (you may use content-filtered topics to avoid sending every reader the data designated to a specific reader).
Hope this helps,
Roy.
Thanks Roy,
1- I don't have any key.
2- The DataReader requirement is to receive all data (including historical frames, up to 500). Reception of the historical frames does not have to respect the original write rate.
The scenario in which frame order is not respected is:
In fact, I think my problem is that the FooDataWriter::write() call blocks when the DataReader connects and the first data exchange takes place.
I am configured with "StrictReliable.LargeData.FastFlow" (with durability kind TRANSIENT_LOCAL_DURABILITY_QOS on the DataReader and DataWriter). Is it normal that FooDataWriter::write() blocks when a DataReader connects? Is there a QoS configuration that avoids this behaviour?
If I set durability kind VOLATILE_DURABILITY_QOS in the datareader_qos, I don't have the problem, but I don't get the history ;(. In that configuration, how can I retrieve the historical data?
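One knob worth checking is the writer's max_blocking_time, which is what produces the TIMEOUT once the send window fills up while history is being delivered. A hedged sketch in the XML QoS format (the 30 s value is purely illustrative) that keeps TRANSIENT_LOCAL on the writer while giving reliable write() more room before timing out:

```xml
<!-- Sketch only: raising max_blocking_time mitigates the TIMEOUT,
     it does not remove the blocking itself. -->
<datawriter_qos>
  <reliability>
    <kind>RELIABLE_RELIABILITY_QOS</kind>
    <max_blocking_time>
      <sec>30</sec>        <!-- illustrative: longer than the 10 s that timed out -->
      <nanosec>0</nanosec>
    </max_blocking_time>
  </reliability>
  <durability>
    <kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind> <!-- keep history for late joiners -->
  </durability>
</datawriter_qos>
```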
Hope I'm clear enough.
Regards
Hi,
Based on:
Is it normal that if I connect a new DataReader during the data flow:
How can I avoid/fix this issue (the blocking of the writes)?
Thanks in advance
Regards