Clear DataWriter History


I'm not sure if this is mentioned already, I've tried searching but couldn't find anything. 

Is there a way to clear out the DataWriters history queue?

I have a scenario where multiple DataWriters are created for a type/topic, but only one may publish at a given time, so the readers only receive messages from one writer. If I set TRANSIENT_LOCAL durability on the writers, they each queue up their messages for late-joining readers. So if a new writer takes over and starts writing, the late-joining readers will get messages from both writers. I need a way to clear out the original writer's queue.

I've looked at using flush() on the DataWriters, but I'm not sure that is the right approach. There is also the WRITER_DATA_LIFECYCLE you can set up in the QoS, but I don't know whether that will work, because my understanding is that you need to deregister the instance for it to take effect. Is that right?

Any ideas on this one? Does my explanation make sense?

Thanks.

rip

<writer>.flush() isn't for what you are doing. It is related to batching: if you have batching enabled and want to manually force the current batch to be emitted, you call <writer>.flush().

There is no way to empty a writer's queue. You will need to build logic into your application to make it aware of the possibility that it will receive data from multiple writers, and, if so, there has to be a way for the application to decide which data to accept.

Look at the OWNERSHIP QoS. If the new writer has a higher ownership_strength than the old writer, a reader will [*] ignore the data from the old writer.

[*] Caveats galore in your use case. For example, there is no way to tell the old writer that there is a new writer (your original problem). Likewise, there is no way to tell an arbitrary reader whether there will be a higher-strength writer, and the order in which a reader "finds" the writers is arbitrary/not definable. So you could set it up with OWNERSHIP, but a new reader might come up and see only the old writer -- because it doesn't know there is a new writer, it will accept the old writer's data (until the new writer is seen). If the topic is keyed, this issue is compounded by each keyed instance being owned (or not) by arbitrary writers.
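For reference, an EXCLUSIVE ownership setup along these lines could be expressed in an RTI Connext XML QoS profile. This is only a sketch -- the profile name and strength values below are illustrative, not taken from the thread:

```xml
<!-- Sketch of EXCLUSIVE ownership in an RTI Connext XML QoS profile.
     Profile name and strength values are hypothetical. -->
<qos_profile name="FailoverProfile">
  <datawriter_qos>
    <ownership>
      <kind>EXCLUSIVE_OWNERSHIP_QOS</kind>
    </ownership>
    <!-- A standby writer would use a lower value, e.g. 10 -->
    <ownership_strength>
      <value>20</value>
    </ownership_strength>
  </datawriter_qos>
  <datareader_qos>
    <!-- OWNERSHIP kind must match on the reader side for arbitration -->
    <ownership>
      <kind>EXCLUSIVE_OWNERSHIP_QOS</kind>
    </ownership>
  </datareader_qos>
</qos_profile>
```

Note that this only governs which writer's samples a reader *accepts*; as described above, it does not stop the old writer from keeping its history queue.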

You can either unregister or dispose each instance with the old writer. Depending on your writer's queue (history.depth), a late joiner will get all of, or the last of, the values published by the old writer, plus the unregister/dispose message (as appropriate) from the old writer, as well as the data from the new writer. You'll note that your reader is still getting the duplicated data, however, and will need to behave accordingly at the application layer.

If you can provide additional information about what your goals are, maybe we can provide additional ideas about how to attack the problem.


Thanks for the response rip.

Here is a little more background on what we are trying to achieve. We have implemented something similar to OWNERSHIP, where only one DataWriter publishes at a given time; if something happens, such as a network failure or a service stopping, the next DataWriter takes over and starts publishing. We haven't used the OWNERSHIP QoS because we wanted a bit more control, along with some other reasons, but essentially we are trying to achieve the same thing.

The issue we are running into is that when this "flip" occurs and we are using HISTORY on the DataWriter, some of the late joiners get old samples from the older DataWriter. We wanted a way to clear this history out when the flip occurs so that doesn't happen.

I've played around with registering and unregistering instances, and it seems to work, but I believe there will be issues, since we may have types with multiple keys and might end up with quite a few instances floating around.

We can't use OWNERSHIP because we are already doing something similar ourselves. Is there any filter or similar mechanism on the DataWriter that can delete all of these instances? Or does this have to be done on the reader side?

Thanks again, I know it's an obscure scenario.


rip

Does a publisher* have enough awareness to know when it is no longer the primary DataWriter? If so, the application could simply delete the DataWriter entity (<publisher>.delete_datawriter(theProblematicWriter);), which will solve the caching problem. If the writer is still needed (as a backup), just call create_datawriter(...) again immediately.

* Not "Publisher" (the Entity), but 'publisher' being the application that is using the DataWriter to publish something.


Gerardo Pardo

Hello Spenner,

There is no operation to "wipe out" the whole history from the DataWriter cache per se. However, if you define your data types such that they have "key" attributes and you set your HISTORY QoS to KEEP_LAST with depth=1, then calling "unregister_instance" on a specific instance/key effectively removes all the history for that particular instance.
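As a sketch, the HISTORY setting described above could look like this in an RTI Connext XML QoS profile (the profile name is hypothetical):

```xml
<!-- Sketch: KEEP_LAST with depth=1, so each keyed instance holds
     only its most recent sample. Profile name is made up. -->
<qos_profile name="SingleSamplePerInstance">
  <datawriter_qos>
    <history>
      <kind>KEEP_LAST_HISTORY_QOS</kind>
      <depth>1</depth>
    </history>
  </datawriter_qos>
</qos_profile>
```

With depth=1, an unregister_instance call leaves essentially nothing of that instance's data in the writer's cache for late joiners to receive.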

You mentioned that your late-joining DataReaders are receiving samples from the older DataWriter. This would mean that you are setting your DURABILITY to TRANSIENT_LOCAL or higher, correct? I assume there is some reason why you need this; otherwise, setting your DURABILITY to VOLATILE would prevent a DataReader from getting the "old history". In fact, a VOLATILE DataWriter automatically removes samples from its cache as soon as the written samples have been acknowledged by the DataReaders.
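If VOLATILE is an option, the setting is straightforward in an RTI Connext XML QoS profile (profile name illustrative):

```xml
<!-- Sketch: VOLATILE durability on both ends, so late joiners
     receive no historical samples. Profile name is made up. -->
<qos_profile name="NoLateJoinerHistory">
  <datawriter_qos>
    <durability>
      <kind>VOLATILE_DURABILITY_QOS</kind>
    </durability>
  </datawriter_qos>
  <datareader_qos>
    <durability>
      <kind>VOLATILE_DURABILITY_QOS</kind>
    </durability>
  </datareader_qos>
</qos_profile>
```

The trade-off, as noted, is that late joiners then get no history at all, which may defeat the reason TRANSIENT_LOCAL was chosen in the first place.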

Another way you could accomplish the "switch" between DataWriters would be to use the PARTITION QoS. If you were to set up two partitions, say "Active" and "Backup", you could have the DataReader (actually its Subscriber) join the "Active" partition, while each DataWriter (actually its Publisher) joins either "Active" or "Backup" depending on its role. To switch DataWriters you would just switch partitions on the DataWriter (actually on its parent Publisher). The DataReader would only receive data from the DataWriter that is currently in the "Active" partition.
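This partition scheme could be sketched as follows in an RTI Connext XML QoS profile; the partition names "Active" and "Backup" come from the thread, while the profile name is illustrative. Note that PARTITION is a Publisher/Subscriber-level QoS and is mutable, so the standby side can change it at runtime:

```xml
<!-- Sketch: writers in the "Active" partition are matched by the
     reader; a standby Publisher would start in "Backup" and switch
     its partition (via set_qos) when taking over. -->
<qos_profile name="ActiveBackupSwitch">
  <publisher_qos>
    <partition>
      <name>
        <element>Active</element>
      </name>
    </partition>
  </publisher_qos>
  <subscriber_qos>
    <!-- Readers only match writers whose Publisher shares a partition -->
    <partition>
      <name>
        <element>Active</element>
      </name>
    </partition>
  </subscriber_qos>
</qos_profile>
```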

I am curious why you cannot use the OWNERSHIP QoS, given that it was designed for this purpose... It seems you could still retain control over which writer writes the data, as you do today, and enforce that the DataReader gets the correct data by modifying the OWNERSHIP_STRENGTH.

Gerardo


Thanks rip and Gerardo. I think deleting the DataWriter and then immediately creating it again might be the way to go. To answer your question, Gerardo: we don't want to use the OWNERSHIP QoS because we would have less control over who the "owner" is, and we have some other requirements that don't allow us to use it. We do have a few services, however, that do use the OWNERSHIP QoS.

One question, though: should I expect any resource hits from this delete/re-create approach? We may have a situation where 20 or 30 DataWriters "flip" at the same time, causing us to delete and re-create all of them at once. Each new writer would go through discovery again, would it not?

Thanks again, guys.

-Stu