QoS Profile for 2.25 MB at 30 fps


I have to exchange images between 2 PCs (each image is 1536*1536 bytes) at 30 or 60 frames per second (depending on the configuration). The PCs are directly connected (point-to-point connection over a 1 Gbit/s or 10 Gbit/s Ethernet board, depending on the frame rate).
Requirements:
- Respect frame order.
- No data loss.
- Keep a history of 500 consecutive frames.
- A subscriber can join at the beginning of, during, or at the end of the frame flow.

Based on those constraints, I have tried to implement a QoS profile without success: frame order is not always respected, and when a subscriber joins while frames are being sent, write() fails with a TIMEOUT (max_blocking_time = 10 sec)...

I cannot manage to tune my QoS profile correctly for these constraints (my QoS profile is attached to this message). What are the best settings for my requirements?

Regards

Patrice

Attachment: user_qos_profiles.xml (8.69 KB)

Hi,

In fact, my problem is this: I fill and write data (1536*1536 bytes per sample) from a timer callback every 30 msec, as follows:

  • FooTypeSupport::create_data
  • fill data
  • FooDataWriter::write()
  • FooTypeSupport::delete_data
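
In code, the callback (here called on_timer, a name used only for this sketch) does roughly the following, using the traditional C++ API with minimal error handling; Foo stands in for my generated image type:

    // Periodic timer callback (sketch): create, fill, write, delete
    void on_timer(FooDataWriter* writer)
    {
        Foo* sample = FooTypeSupport::create_data();            // allocate a sample
        // ... fill the sample with the 1536*1536 image bytes ...
        DDS_ReturnCode_t retcode = writer->write(*sample, DDS_HANDLE_NIL);
        if (retcode != DDS_RETCODE_OK) {
            // e.g. DDS_RETCODE_TIMEOUT if write() blocked longer than max_blocking_time
        }
        FooTypeSupport::delete_data(sample);                     // free the sample
    }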

When a subscriber connects during the frame flow, write calls are blocked at connection time while the history is sent to the reader. This blocking causes:

  • Several write() calls being in progress simultaneously.
  • Frame order (the order of write calls in the timer callback) not being respected...

My configuration is based on the "StrictReliable.LargeData.FastFlow" built-in QoS profile. Is this normal behaviour? What can I do so that the write call order is respected?

Regards

Patrice

 

Hey Patrice,

As far as I know, DDS always respects the order of messages written by a writer (on a per-instance basis, unless you use the query options, which are not the normal usage).

Are you using different keys for different frames?

Also, do you want the new reader to receive the messages at the "rate" they were originally written? That is not the use case of durability.

To do that I would recommend:

1. Giving up on old data (new readers join the current feed of data, missing out on old data)

2. Using another technology better suited to streaming video.

3. Using a consumer-specific topic or key and adding provider logic to "replicate" the data (without using the durability QoS): first, use a separate topic to coordinate new consumer topics and have the provider send the data once a topic has been coordinated; second, when a match is made, use a key that the consumer knows (such as the reader GUID) and send all the data with the new key (you may use content filtered topics to avoid sending every reader the data that is designated to a specific reader).
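
For option 3, the consumer-side filtering could look roughly like this (a sketch in the traditional C++ API; the Frame topic and its consumer_id field are hypothetical names invented for the example, not something taken from your attached profile):

    // Hypothetical: the Frame type carries a consumer_id field that the provider
    // fills with the target reader's identity; each consumer filters on its own id.
    DDS_StringSeq params;
    params.ensure_length(1, 1);
    params[0] = DDS_String_dup("'my_reader_id'");        // hypothetical identifier
    DDSContentFilteredTopic* cft = participant->create_contentfilteredtopic(
        "FrameForMe",         // name of the filtered topic
        frame_topic,          // the existing DDSTopic* for the Frame type
        "consumer_id = %0",   // filter expression
        params);
    // Create the consumer's DataReader on 'cft' so it only receives
    // the samples addressed to it.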

 

Hope this helps,

Roy.


Thanks Roy,

1- I don't have any key.

2- The DataReader requirement is to receive all data (including historical frames, up to 500 frames). Reception of the historical frames does not have to respect the original write rate.

The scenario in which frame order is not respected is:

  • I write frames of 1536*1536 bytes every 33 ms (in a timer callback).
  • The write period is respected.
  • Then I connect a DataReader during the frame flow (DataReader QoS => get all frames, with up to the last 500 historical frames).
  • At connection time, while the DataReader is receiving the historical data, write requests are queued by the middleware. The timer callback still fires every 33 ms (writes are requested), but the FooDataWriter::write() call blocks.
  • Then, after some time, the queued write requests are executed by the middleware, but not in the expected order (not FIFO). The frame order in the middleware reflects the order in which FooDataWriter::write() executed, not the order in which it was called...

In fact, for me, the problem comes from the fact that the FooDataWriter::write() call blocks at DataReader connection time, during the first data exchange.

I'm configured with "StrictReliable.LargeData.FastFlow" (with durability kind TRANSIENT_LOCAL_DURABILITY_QOS on the DataReader and DataWriter). Is it normal that FooDataWriter::write() blocks when a DataReader connects? Is there a QoS configuration that avoids this behavior?

If I set the durability kind to VOLATILE_DURABILITY_QOS in the datareader_qos, I don't have the problem, but I don't get the history ;(. In that configuration, how can I retrieve the historical data?

I hope I'm clear enough.

Regards


Hi,

Based on:

  • "StrictReliable.LargeData.FastFlow" (ie. publish_mode.kind = ASYNCHRONOUS) Built-in QoS profile (for data size 1536*1536 bytes written in 33ms periodic timer callback).
  • history.kind = KEEP_ALL_HISTORY (for DataReader and DataWriter QoS).
  • resource_limits.max_samples = 500 (for DataWriterQoS)
  • durability.kind = TRANSIENT_LOCAL_DURABILITY (for DataReader and DataWriter QoS).
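
In code, those settings correspond roughly to the following (a sketch using the traditional C++ API, starting from the default writer QoS just to show the fields; the DataReader QoS uses the matching history and durability kinds):

    DDS_DataWriterQos writer_qos;
    publisher->get_default_datawriter_qos(writer_qos);
    writer_qos.publish_mode.kind           = DDS_ASYNCHRONOUS_PUBLISH_MODE_QOS;  // LargeData.FastFlow
    writer_qos.reliability.kind            = DDS_RELIABLE_RELIABILITY_QOS;       // StrictReliable
    writer_qos.reliability.max_blocking_time.sec     = 10;   // matches the 10 s max_blocking_time I use
    writer_qos.reliability.max_blocking_time.nanosec = 0;
    writer_qos.history.kind                = DDS_KEEP_ALL_HISTORY_QOS;
    writer_qos.resource_limits.max_samples = 500;
    writer_qos.durability.kind             = DDS_TRANSIENT_LOCAL_DURABILITY_QOS;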

With this configuration, is it normal that, when I connect a new DataReader during the data flow:

  • The DataWriter is impacted at connection time (write requests are queued while the historical data is being recovered), whereas there is no problem when the DataReader is already connected before the data flow starts?
  • Some write requests can fail with a TIMEOUT error, even with reliability.max_blocking_time = 10 sec?!
  • When the Publisher recovers "resources" to treat the pending write requests, they are dequeued out of their initial order (not as a FIFO)?

How can I avoid/fix this issue (the blocking of the writes)?

Thanks in advance

Regards