Why should depth be smaller than max_samples_per_instance?

Offline
Last seen: 2 days 12 hours ago
Joined: 01/13/2016
Posts: 58
Why should depth be smaller than max_samples_per_instance?

I looked at the QoS policies and found the requirement that "depth <= max_samples_per_instance". Why?


If both depth and max_samples_per_instance are set to 1, and an application writes samples of three different instances at the same time, will the reader lose samples?

Offline
Last seen: 1 year 3 months ago
Joined: 05/23/2013
Posts: 64

Hi,

History QoS depth is applied per instance. If you set the History QoS depth and max_samples_per_instance to 1, the writer can store one sample per instance in its history queue, so samples of different instances won't be lost because of the resource limit.
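
For instance, a minimal sketch with the RTI Traditional C++ API (assuming existing DDSPublisher* publisher and DDSTopic* topic variables) might configure the writer like this:

    DDS_DataWriterQos writer_qos;
    publisher->get_default_datawriter_qos(writer_qos);
    writer_qos.history.kind  = DDS_KEEP_LAST_HISTORY_QOS;    // keep the last...
    writer_qos.history.depth = 1;                            // ...1 sample per instance
    writer_qos.resource_limits.max_samples_per_instance = 1; // matching resource limit
    DDSDataWriter* writer = publisher->create_datawriter(
        topic, writer_qos, NULL, DDS_STATUS_MASK_NONE);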

Thanks,
Kyoungho

Offline
Last seen: 11 months 1 week ago
Joined: 02/11/2016
Posts: 144

Hey,

 

Basically, these two QoS settings refer to different "layers".

Imagine you have 4 houses (instances) that have a maximum capacity of 10 people each (max samples per instance).

Now imagine that you decide to make a rule that you will only allow 4 people into each house (that is like setting depth). This rule is not based on the basic resource, space (analogous to the basic resource you limit samples with, memory), but on a higher-level functional limitation (for example, you don't want to deal with more than 4 people per house).

It obviously makes no sense to say that I am OK with having 12 people in a house when I can't fit more than 10, which is basically why depth must be smaller than or equal to max_samples_per_instance.

It's worth noting that history depth, as a QoS setting, is a per-instance setting.

Regarding your other question (depth == max_samples_per_instance == 1 and 3 different instances being written at the same time): you should receive all 3 (although technically, if you set max_samples [the total limit, not the per-instance one] to less than 3, it may not work). Also, in the unlikely case that your samples are huge (or the memory limits for your application are pretty harsh), you may experience loss of data.
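
To make that concrete, here is a hedged sketch that writes one sample to each of three instances. The keyed type CarPosition and its rtiddsgen-generated support classes are hypothetical; with depth == max_samples_per_instance == 1 each instance keeps its own slot, so none of the three writes overwrites another:

    // Hypothetical keyed type: struct CarPosition { long car_id; /*@key*/ double x; };
    // car_writer is assumed to be an already-created CarPositionDataWriter*.
    CarPosition* sample = CarPositionTypeSupport::create_data();
    for (int id = 0; id < 3; ++id) {
        sample->car_id = id;     // a different key value => a different instance
        sample->x = 10.0 * id;
        car_writer->write(*sample, DDS_HANDLE_NIL);
    }
    CarPositionTypeSupport::delete_data(sample);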

 

Hope this helps,

Roy.

Offline
Last seen: 2 days 12 hours ago
Joined: 01/13/2016
Posts: 58

Thanks for all the answers, I see.

Offline
Last seen: 3 days 20 hours ago
Joined: 06/02/2010
Posts: 602

Just some additional clarification.

The max_samples_per_instance is part of the RESOURCE_LIMITS QoS policy, which applies independently of the HISTORY QoS policy.

HISTORY controls what you want to keep in the cache: saying HISTORY kind=KEEP_LAST with depth=N1 says that you only want the last N1 samples per instance. So, say the instances represent cars and the samples are updates to the position of each car; then HISTORY kind=KEEP_LAST with depth=N1 means you only want the last N1 positions per car.

Removing an older sample from a cache due to exceeding the history depth is considered "normal" and desired behavior, because processing older (less relevant) data comes at the expense of being able to look at the latest.

RESOURCE_LIMITS is not about specifying what is "interesting" but rather about controlling the resources on the writer side. So RESOURCE_LIMITS max_samples_per_instance=N2 says that I do not want to hold more than N2 samples for any single instance, so that a single instance does not "starve" other instances of resources, e.g., a car that sends too many position updates and uses up the memory I need to keep other cars' positions.

Dropping a sample because max_samples_per_instance is exceeded is not considered "normal"; it indicates some resource constraint or communication failure, like a consumer that is not able to keep up with the data being written. Normally, if the DataWriter is configured with RELIABILITY and HISTORY kind=KEEP_ALL, then the DataWriter will "block" when max_samples_per_instance is reached, until the reliable readers acknowledge the samples, because it does not want to "remove" a sample and it cannot exceed the resource limit...
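
A minimal sketch of that blocking behavior (same assumed publisher/topic variables as the earlier sketches; the limit and timeout values are examples):

    DDS_DataWriterQos writer_qos;
    publisher->get_default_datawriter_qos(writer_qos);
    writer_qos.reliability.kind = DDS_RELIABLE_RELIABILITY_QOS;
    writer_qos.reliability.max_blocking_time.sec = 5;      // wait up to 5 s for
    writer_qos.reliability.max_blocking_time.nanosec = 0;  // reader acknowledgments
    writer_qos.history.kind = DDS_KEEP_ALL_HISTORY_QOS;    // never drop from history
    writer_qos.resource_limits.max_samples_per_instance = 32;  // N2
    // When an instance already holds 32 unacknowledged samples, write() blocks;
    // if the limit is still reached after max_blocking_time, it returns
    // DDS_RETCODE_TIMEOUT instead of silently removing a sample.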

Now, when used in combination, that is, HISTORY kind=KEEP_LAST, depth=N1 and RESOURCE_LIMITS max_samples_per_instance=N2, we have a situation where the two can "interact" with each other.

As documented, the constraint is that N2 >= N1; otherwise it is impossible for the DataWriter (or DataReader) to hold its specified history depth. The question is: does it ever make sense to have N2 > N1 (that is, not equal but strictly greater)?

As far as I can tell, N2 > N1 does not fulfill any valid use case on a DataWriter. So if you are using HISTORY with kind=KEEP_LAST, you should set RESOURCE_LIMITS.max_samples_per_instance = HISTORY.depth; setting it to a larger value will serve no purpose. I am not aware of adverse effects either: it seems that since the HISTORY depth applies first, the other limit will just be ignored.
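
In code, the recommended writer-side pairing might look like this (a sketch, with N1 = N2 = 10 as example values):

    DDS_DataWriterQos writer_qos;
    publisher->get_default_datawriter_qos(writer_qos);
    writer_qos.history.kind = DDS_KEEP_LAST_HISTORY_QOS;
    writer_qos.history.depth = 10;                              // N1
    writer_qos.resource_limits.max_samples_per_instance = 10;   // N2 == N1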

On the other hand, the DataReader has situations where samples can be "on loan" to the application and thus not reclaimable by the DataReader even if the history depth is reached. In this case, having RESOURCE_LIMITS.max_samples_per_instance > HISTORY.depth does serve a valid use case: it allows the receiver cache to hold HISTORY.depth new samples per instance while the application has some samples for the same instance "on loan".
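
A corresponding reader-side sketch (assuming an existing DDSSubscriber* named subscriber; the values are examples), where the extra per-instance headroom leaves room for loaned samples:

    DDS_DataReaderQos reader_qos;
    subscriber->get_default_datareader_qos(reader_qos);
    reader_qos.history.kind = DDS_KEEP_LAST_HISTORY_QOS;
    reader_qos.history.depth = 10;                              // N1
    reader_qos.resource_limits.max_samples_per_instance = 16;   // N2 > N1: headroom for
                                                                // samples still "on loan"
                                                                // (taken, not yet given
                                                                // back via return_loan())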

Gerardo

Offline
Last seen: 2 days 12 hours ago
Joined: 01/13/2016
Posts: 58

Thanks for your reply, it did help me. I also want to clarify: if I set a Topic QoS policy, do I still need to set the DataWriter or DataReader QoS policy? For example, if I set the Topic history depth to 2 and the DataWriter history depth to 3, what is the actual behavior?

Offline
Last seen: 3 days 20 hours ago
Joined: 06/02/2010
Posts: 602

Hi,

The final QoS is what is set on the DataWriter or the DataReader. You are "always" setting the DataWriter (or DataReader) QoS when you create the entity. The Topic QoS is just a mechanism to specify a hint of what the intended QoS for the writers and readers of the Topic should be. There are also APIs that facilitate "reusing" that TopicQos when creating a DataWriter (or DataReader). For example, in the Traditional C++ API you can create a DataWriter with the call:

DDSDataWriter* DDSPublisher::create_datawriter(
    DDSTopic*                 topic,
    const DDS_DataWriterQos&  qos,
    DDSDataWriterListener*    listener,
    DDS_StatusMask            mask);

If you pass the special value DDS_DATAWRITER_QOS_USE_TOPIC_QOS, it indicates that the DDSDataWriter should be created with the combination of the default DDS_DataWriterQos set on the DDSPublisher and the DDS_TopicQos of the DDSTopic. However, this mechanism is limited because the TopicQos represents only a subset of what can be configured in the DataWriter and DataReader. It is also inflexible because different readers and writers of the same Topic may have specialized needs that are not always the same.
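
For example (a sketch, assuming existing publisher and topic variables):

    DDSDataWriter* writer = publisher->create_datawriter(
        topic,
        DDS_DATAWRITER_QOS_USE_TOPIC_QOS,  // combine Publisher default QoS with Topic QoS
        NULL,                              // no listener
        DDS_STATUS_MASK_NONE);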

Instead, we recommend you use QoS profiles to define the QoS rather than hardcoding them into the application. See https://community.rti.com/best-practices/configure-your-qos-through-profiles

When using QoS profiles you can use the topic_filter attribute to define the value of the QoS policies conditionally, depending on the Topic name. This allows complete configuration of all the QoS for the DataWriters and DataReaders while still keeping the link to the Topic. In general we recommend this as the best approach.
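
With profiles, writer creation reduces to something like the following sketch using RTI's profile-aware extension API (the library and profile names here are made-up examples):

    DDSDataWriter* writer = publisher->create_datawriter_with_profile(
        topic,
        "MyQosLibrary",    // qos_library name from your XML profiles file
        "MyQosProfile",    // qos_profile; its policies may be scoped with topic_filter
        NULL,
        DDS_STATUS_MASK_NONE);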

Gerardo

Offline
Last seen: 2 days 12 hours ago
Joined: 01/13/2016
Posts: 58

Thanks, I got it.