DataWriter pause problem

Last seen: 2 years 9 months ago
Joined: 10/14/2018
Posts: 5
DataWriter pause problem

Hello ~
We are using DDS version 5.2.2.
We have delivered a program built with DDS to a customer.
The customer reports that when our program's process is killed,
their DataWriter pauses and then resumes sending after a few seconds.
We think this is caused by the QoS configuration, but we are not sure.
Could you give us some advice on this situation?

Gerardo Pardo
Last seen: 2 months 1 week ago
Joined: 06/02/2010
Posts: 598


I am not sure I understand. What do you mean by "our program is kill process"? Do you mean that the program has been killed using a "kill" signal from the Operating System? Or do you mean something else?

Is the program that is killed the Subscriber receiving data from the customer's DataWriter? So, assuming I understood, what you are saying is that the customer's DataWriter gets temporarily blocked when you kill your Subscriber application?


Last seen: 2 years 9 months ago
Joined: 10/14/2018
Posts: 5



Thanks for the last answer.
This is how my system is structured.
When I force process1 to terminate,
the other PC stops receiving data from the server PC.
The customer says it is paused.


Last seen: 2 weeks 4 days ago
Joined: 02/11/2016
Posts: 143


I will guess that you are using strict reliability.

In such a case, a reliable writer may block when you try to write new data.

The reason for that is this:

A writer will allocate resources so that it can hold samples in memory until they are acknowledged by all relevant readers (readers that were matched when "write" is called).

If a reader (like the one in your process) is suddenly killed (not gracefully shut down), the writer may view this reader as an unresponsive reader.

Until the writer detects that the reader is "dead", it must store all the samples it sent the "dead" reader.

At some point the writer's cache fills up and then one of two things must happen:

1. the writer will overwrite samples that were not acknowledged by some reader

2. the writer will block until "something" happens (it can clear some samples because they were acknowledged, it detects the death of the reader, or some timeout is reached)
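The blocking in case 2 can be pictured with a simple analogy (this is not the DDS API, just a sketch): the writer's reliable cache behaves like a bounded queue from which nothing is removed while an unresponsive reader has outstanding unacknowledged samples, so once it fills, a write blocks until a timeout expires.

```python
import queue
import time

# Stand-in for the writer's history cache; the small size is illustrative.
cache = queue.Queue(maxsize=3)

# The cache fills up because the "dead" reader never acknowledges anything.
for i in range(3):
    cache.put(i)

start = time.monotonic()
try:
    # Like a reliable write() with a max blocking time: block, then give up.
    cache.put(3, timeout=0.5)
except queue.Full:
    elapsed = time.monotonic() - start
    print(f"write blocked for ~{elapsed:.1f}s, then timed out")
```

In real DDS terms, this is why the customer sees the DataWriter "pause": the write call is blocked waiting for cache space, and it unblocks once the dead reader is detected or samples are freed.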


This is (I'm guessing) what you are experiencing.

What can you do about it:

1. set the writer to keep last (instead of keep all) - this will mean that reliability will only ever be enforced for the last x samples per instance (this may be useful for scenarios where the last x values are important but not ALL values are important).

2. tweak cache size / qos settings related to detection of dead readers to reduce the likelihood of such deaths causing your writer to block

3. accept blocking writer operations as part of your system
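As a sketch of options 1 and 2, a DataWriter QoS profile along these lines could be used (this is the XML QoS format used by RTI Connext; the library/profile names and all numeric values here are placeholders for illustration, not recommendations):

```xml
<qos_library name="my_lib">
  <qos_profile name="pause_fix">
    <datawriter_qos>
      <reliability>
        <kind>RELIABLE_RELIABILITY_QOS</kind>
        <!-- Bound how long write() may block when the cache is full -->
        <max_blocking_time>
          <sec>1</sec>
          <nanosec>0</nanosec>
        </max_blocking_time>
      </reliability>
      <!-- Option 1: keep only the last N samples per instance -->
      <history>
        <kind>KEEP_LAST_HISTORY_QOS</kind>
        <depth>10</depth>
      </history>
      <!-- Option 2: enlarge the cache so an unresponsive reader
           takes longer to block the writer -->
      <resource_limits>
        <max_samples>512</max_samples>
      </resource_limits>
    </datawriter_qos>
  </qos_profile>
</qos_library>
```

With KEEP_LAST, old unacknowledged samples may be replaced by new ones instead of blocking the writer, which matches option 1 above.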


I hope that helps,


Last seen: 2 years 9 months ago
Joined: 10/14/2018
Posts: 5

Thank you for all the answers.
I have communicated this to the customer,
and we will solve the problem together.