Built-in Publication data for local DataWriters and DataReaders

wjwwood:

I am trying to use the built-in DDS topics to monitor when DataWriters and DataReaders are being created. However, I believe I'm running into an issue where locally created DataWriters (and DataReaders) are not causing a message to be received by the builtin `DDS_PUBLICATION_TOPIC_NAME` DataReader. This is my simple example:

https://gist.github.com/wjwwood/d3862336ab3184e0db4a1a5984f96c87 (example program + an example idl file)

To build this, I just use `rtiddsgen` on the `TestMessage.idl` to generate code, compile the example file along with all the generated `.cxx` files from the idl file, and add the Connext include and link arguments. I'm using Connext 5.2 on OS X 10.11.3.

The example does this:

- Create a participant factory
- Create a participant in the disabled state (my example is using domain 42)
- Create a custom listener and attach it to the builtin `DDS_PUBLICATION_TOPIC_NAME` DataReader
- Enable the participant
- Wait for a few seconds (the problem occurs with or without this step)
- Create a DataWriter (register the type, create the publisher, topic, and then the DataWriter and narrow it)
- Wait forever

The custom listener will print out messages any time data is available from the builtin publication DataReader, i.e. whenever a new DataWriter is created. The example does not ever write data or create message instance data, but I don't believe that should be necessary.
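
For reference, a condensed sketch of those steps, assuming the classic C++ API (the gist has the actual program; type registration, the user topic/DataWriter creation, and error handling are omitted here, and the names are just illustrative):

```cpp
#include <stdio.h>
#include <ndds/ndds_cpp.h>

// Prints a line whenever the builtin publication reader reports a DataWriter.
class BuiltinPublicationListener : public DDSDataReaderListener {
public:
    virtual void on_data_available(DDSDataReader *reader) {
        DDSPublicationBuiltinTopicDataDataReader *builtin_reader =
            DDSPublicationBuiltinTopicDataDataReader::narrow(reader);
        DDS_PublicationBuiltinTopicDataSeq data_seq;
        DDS_SampleInfoSeq info_seq;
        if (builtin_reader->take(data_seq, info_seq, DDS_LENGTH_UNLIMITED,
                DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
                DDS_ANY_INSTANCE_STATE) != DDS_RETCODE_OK) {
            return;
        }
        for (int i = 0; i < data_seq.length(); ++i) {
            if (info_seq[i].valid_data) {
                printf("discovered DataWriter on topic: %s\n",
                       data_seq[i].topic_name);
            }
        }
        builtin_reader->return_loan(data_seq, info_seq);
    }
};

int main() {
    DDSDomainParticipantFactory *factory =
        DDSDomainParticipantFactory::get_instance();

    // Create the participant disabled so the listener is in place before
    // discovery starts.
    DDS_DomainParticipantFactoryQos factory_qos;
    factory->get_qos(factory_qos);
    factory_qos.entity_factory.autoenable_created_entities = DDS_BOOLEAN_FALSE;
    factory->set_qos(factory_qos);

    DDSDomainParticipant *participant = factory->create_participant(
        42, DDS_PARTICIPANT_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);

    DDSDataReader *reader = participant->get_builtin_subscriber()
            ->lookup_datareader(DDS_PUBLICATION_TOPIC_NAME);
    BuiltinPublicationListener listener;
    reader->set_listener(&listener, DDS_DATA_AVAILABLE_STATUS);

    participant->enable();

    // ... register the type, create the topic, publisher, and DataWriter,
    // then wait forever ...
    return 0;
}
```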

I would expect that the DataWriter created in the example would cause the `on_data_available` to be called, but it does not. The `on_data_available` in the example does get called if I start an instance of the shapes demo. Also, if I run ddsspy, it can see the DataWriter from both my example and the shapes demo.

According to the DDS 1.4 specification, section "2.2.5 Built-in Topics", in reference to the "DCPSPublication" topic's entry structure, it says:

> entry created when a DataWriter is created in association with its Publisher

My guess is that Connext's builtin DataReaders for both publication and subscription (the same can be observed with subscription; I did not try topic, and participant doesn't really make sense here) are set up not to receive samples from the local participant. My hope is that there is a way to change this configuration using a QoS or something, but I have not been able to find this behavior described in the documentation, nor have I found a way to change it.

This can also be observed if you modify the example called "builtin-topics":

https://community.rti.com/examples/builtin-topics

If you look at the `msg_publisher.cxx` program that is part of the builtin-topics example, it creates a custom listener for subscriptions and then creates a DataWriter. If you change it so that it listens for publication instead, you'd expect it to see the creation of its own DataWriter, but in my experience it does not. This is my diff of the example:

https://gist.github.com/wjwwood/e77445a5b02af36c873abc14e9b163f8

I looked through these forums and found some related topics, but never this exact issue. I gather it is far more common to have a "monitoring" program without any of its own DataWriters and DataReaders that are of interest. I apologize in advance if this has already been discussed and I just couldn't find it.

Thanks!

gianpiero:

Hello,

Thanks for the detailed post. I think I may have a simple explanation for your issue: the DataWriter is created from the same participant. Since it is a local entity, on_data_available is not called.

The core only propagates discovery messages coming from remote entities.

Does it make sense?

Best,
  Gianpiero

wjwwood:

> The core only propagates discovery messages coming from remote entities.

I agree that's probably what's happening, but that is at odds with what the DDS specification says should happen (it doesn't make an allowance for local entities to not be notified when this event occurs). To me it seems to be an arbitrary choice whether to notify on the local entity or not. What I'm asking is whether I can change it and, if not, why is it like that and not the other way?

wjwwood:

What I cannot figure out is whether the issue is the QoS between the Publication DataWriter (internal to the participant) and the Publication DataReader (which you get from the participant with `lookup_datareader(DDS_PUBLICATION_TOPIC_NAME)`), or whether those are facades provided by Connext and the behavior is controlled by some internal logic.

If the problem is the QoS for the DataWriter/DataReader, can it be changed so that they match each other? Will that break existing machinery that assumes no local discovery happens on that topic?

If it is controlled internally to Connext's implementation, is there a setting I can change?

Any further help or insight would be greatly appreciated.

wjwwood:

A related question (sorry for the spam): Can you force ddsspy to show the built-in topics? Is there another way to debug the matching process between the builtin topic DataWriters and the DataReaders? Perhaps a higher console logging level?

Thanks!

gianpiero:

Hello,

I am not an expert on the spec, so forgive me if I am referring to the wrong thing, but according to the latest version of the spec (at this address), on page 129, I see:

 

> A built-in DataReader obtained from a given Participant will not provide data pertaining to Entities created from that same Participant under the assumption that such entities are already known to the application that created them.

Does that make sense? If I am interpreting the spec correctly, the implementation is actually doing the right thing.

Please let me know,
  Gianpiero

wjwwood:

No, you're absolutely right. That paragraph is applicable and clears it up. Thanks! Sorry that I missed that in the specification.

So that's expected behavior, but is there anything (outside the behavior of the specification) which would allow me to easily keep track of all data without coordinating it myself? Presumably the system (Connext in this case) already has this information, right? It would be possible in theory for Connext to notify me through a listener when a new DataWriter or DataReader is created locally, correct? Is there such a thing?

I'll understand if the answer is no, but it seems to me that the justification given in the spec, "the assumption that such entities are already known to the application that created them", is somewhat weak, since the system already needs to keep track of this information, so suppressing local entries doesn't save any extra state. It also produces an asymmetry in how entities are handled that doesn't really exist anywhere else (that I'm aware of, which may just be my understanding and not the reality). By the same logic, messages published on a DataWriter would not be received by matching DataReaders in the same participant, on the assumption that the user who wrote the publishing code knows that some other part of their code needs the message too and should therefore deliver it manually. It's not a perfect analogy, but I think it illuminates a real contradiction in the design. It would be better, in my opinion, if the specification either:

- delivered these entries locally and provided a function to filter out the local ones (which could be used to reproduce the existing behavior)
- or provided a configuration option which let the user select the behavior

I appreciate your time and answers so far, and I hope you'll spare me a few more minutes for these follow-up questions.

Roy:

Hey,

I believe the difference (between the example you gave and the original situation) is the following:

For the builtin topics, it is expected that they will be used and managed by the infrastructure (for example, RTI Connext DDS).

That means it is possible to make stronger assumptions about how those topics are used (in the "natural" usage, anyhow) and therefore to put more restrictions on them.

In the case of user topics, it would be unnecessarily limiting to prevent different parts of code (using the same participant) from communicating via topics (for example, in multithreaded applications).

I can also imagine that not having to handle local information on builtin topics improves the performance a lot.

 

Hope that helps,

Roy.

gianpiero:

Hello again,

> So that's expected behavior, but is there anything (outside the behavior of the specification) which would allow me to easily keep track of all data without coordinating it myself? Presumably the system (Connext in this case) already has this information, right? It would be possible in theory for Connext to notify me through a listener when a new DataWriter or DataReader is created locally, correct? Is there such a thing?

 
Unfortunately I am not aware of any mechanism to enable this behaviour. A workaround could be that you create a participant P1 that you only use to listen to the built-in topics. Then you use a participant P2 in the same app to do your normal work (create DRs and DWs). In this way P1 will handle all the 'notifications' coming from the builtin topics...
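
Roughly something like this sketch (names are hypothetical and error handling is omitted; the listener is any DDSDataReaderListener that handles on_data_available, for example the one from the first post):

```cpp
#include <ndds/ndds_cpp.h>

// Sketch: dedicated "monitoring" participant P1 plus a separate "working"
// participant P2 in the same process. Entities created from P2 are remote
// from P1's point of view, so they do show up on P1's builtin readers.
void create_participants(DDSDataReaderListener *listener,
                         DDSDomainParticipant **p1_out,
                         DDSDomainParticipant **p2_out) {
    DDSDomainParticipantFactory *factory =
        DDSDomainParticipantFactory::get_instance();

    // P1: used only to observe the builtin discovery topics.
    DDSDomainParticipant *p1 = factory->create_participant(
        42, DDS_PARTICIPANT_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    DDSDataReader *pub_reader = p1->get_builtin_subscriber()
            ->lookup_datareader(DDS_PUBLICATION_TOPIC_NAME);
    // Attach the listener before creating P2 so nothing is missed.
    pub_reader->set_listener(listener, DDS_DATA_AVAILABLE_STATUS);

    // P2: used for the application's own DataWriters and DataReaders.
    DDSDomainParticipant *p2 = factory->create_participant(
        42, DDS_PARTICIPANT_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);

    *p1_out = p1;
    *p2_out = p2;
}
```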
 
What do you think? Is this something reasonable for your use case?
 
Best,
  Gianpiero

wjwwood:

@KickR

> That means it is possible to make stronger assumptions about how those topics are used (in the "natural" usage, anyhow) and therefore to put more restrictions on them.

No doubt that it is possible, but the question is whether it should be treated differently. I'd argue that it is not the choice of least surprise. Put another way, if you described to someone how DataWriters and DataReaders work (including the bit about communicating between threads in the same process and same participant), and then told them that the builtin topics are accessed using a normal DataReader, I think they'd be surprised to learn that they behave differently. I thought using normal DataWriters and DataReaders to convey discovery information was very elegant, but I was surprised and a bit disappointed to find that they behave differently.

> In the case of user topics, it would be unnecessarily limiting to prevent different parts of code (using the same participant) from communicating via topics (for example, in multithreaded applications).

I agree, but I think that same logic should extend to the discovery DataWriters and DataReaders. I'm trying to synchronize across threads in a multi-threaded application based on the availability of topics, without having to bring in my own notification system.

> I can also imagine that not having to handle local information on builtin topics improves the performance a lot.

That may be the case, and I'd be interested to learn more about that optimization. However, it's hard for me to see (based on my limited understanding of the system) how that's a significant improvement in performance, especially since delivery of the discovery information still needs to be offered on the builtin topics to participants in the same process, in different processes, and even on different machines. Perhaps I'm underestimating the improvement for the case of a single participant in a single process, but the performance of that case shouldn't overrule the usability of the use case the middleware is designed to serve (that is, multiple participants in the network). Again, I think the right compromise would be an option to control this behavior.

Sorry, I don't mean to be bullish about this, but my hope is to bring some attention to this topic so that it might be changed in the future. I appreciate you taking the time to discuss it with me!

@gianpiero

> Unfortunately I am not aware of any mechanism to enable this behaviour. A workaround could be that you create a participant P1 that you only use to listen to the built-in topics. Then you use a participant P2 in the same app to do your normal work (create DRs and DWs). In this way P1 will handle all the 'notifications' coming from the builtin topics...
>
> What do you think? Is this something reasonable for your use case?

That's unfortunate. Having a dedicated participant per process for monitoring discovery is a potential solution, but it doesn't seem very efficient. My understanding is that each new participant adds substantially to the memory footprint and increases the discovery traffic. In fact, we were looking at going to a single participant per process in an effort to further minimize that overhead. Thanks for the suggestion though! We'll either end up managing some state and events ourselves or use the dedicated discovery participant as you suggested.

Thanks for everyone's time and input.

-- Cheers

Roy:

Hey,

 

If I understand correctly, you want to use the builtin topics to recognize when a reader/writer is created for a topic?

It may not be exactly what you wanted, but you can use DDSDataWriterListener/DDSDataReaderListener to detect when a publication/subscription is matched, and if I recall correctly this also works for DataWriters/DataReaders on the same participant.
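
For example, something roughly like this (a sketch only; the writer-side variant is shown, the class and variable names are just placeholders, and error handling is omitted):

```cpp
#include <stdio.h>
#include <ndds/ndds_cpp.h>

// Reports when this DataWriter matches (or unmatches) a DataReader; as noted
// above, this should also fire for DataReaders from the same participant.
class MatchedListener : public DDSDataWriterListener {
public:
    virtual void on_publication_matched(
            DDSDataWriter * /*writer*/,
            const DDS_PublicationMatchedStatus &status) {
        printf("matched readers: current=%d total=%d\n",
               status.current_count, status.total_count);
    }
};

// Attach it when creating the writer, e.g.:
//   MatchedListener listener;
//   DDSDataWriter *writer = publisher->create_datawriter(
//       topic, DDS_DATAWRITER_QOS_DEFAULT,
//       &listener, DDS_PUBLICATION_MATCHED_STATUS);
```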

 

Roy.

wjwwood:

Yeah, I thought about that too. I think I might use that in some cases, for example when I intend to subscribe to a topic anyway before checking whether there is a publisher.

However, adding a listener to every publisher and subscriber is a bit cumbersome. And there are some cases where I'm interested in whether or not a DataWriter is created, but I do not intend to create a matching DataReader myself. In those cases, it would still be nice to know when they are created without having to maintain a matching DataReader or DataWriter.

gianpiero:

Hello,

> That's unfortunate. Having a dedicated participant per process for monitoring discovery is a potential solution, but it doesn't seem very efficient. My understanding is that each new participant adds substantially to the memory footprint and increases the discovery traffic. In fact, we were looking at going to a single participant per process in an effort to further minimize that overhead. Thanks for the suggestion though! We'll either end up managing some state and events ourselves or use the dedicated discovery participant as you suggested.

Sorry, I am out of better ideas; maybe you can write to support, or maybe someone else will contribute to this post.
 
Best,
   Gianpiero

Roy:

I should probably note that if you want to monitor what's happening on the builtin topics, you don't need a participant per process, only a participant per domain. That is, you can create a monitoring application, simply a new process, that creates readers on the builtin topics and uses them to log events.

You still won't be able to monitor the participant used for monitoring, but that much is, I believe, acceptable.
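
A minimal standalone monitor along those lines might look roughly like this (a sketch using a WaitSet rather than a listener; the domain id and printed fields are placeholders, and error handling is minimal):

```cpp
#include <stdio.h>
#include <ndds/ndds_cpp.h>

int main() {
    DDSDomainParticipant *participant =
        DDSDomainParticipantFactory::get_instance()->create_participant(
            42, DDS_PARTICIPANT_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);

    DDSDataReader *reader = participant->get_builtin_subscriber()
            ->lookup_datareader(DDS_PUBLICATION_TOPIC_NAME);
    DDSPublicationBuiltinTopicDataDataReader *builtin_reader =
        DDSPublicationBuiltinTopicDataDataReader::narrow(reader);

    // Wake up whenever new publication data arrives.
    DDSStatusCondition *condition = reader->get_statuscondition();
    condition->set_enabled_statuses(DDS_DATA_AVAILABLE_STATUS);
    DDSWaitSet waitset;
    waitset.attach_condition(condition);

    for (;;) {
        DDSConditionSeq active;
        DDS_Duration_t timeout = {60, 0};
        if (waitset.wait(active, timeout) != DDS_RETCODE_OK) {
            continue;  // timed out; keep waiting
        }
        DDS_PublicationBuiltinTopicDataSeq data_seq;
        DDS_SampleInfoSeq info_seq;
        if (builtin_reader->take(data_seq, info_seq, DDS_LENGTH_UNLIMITED,
                DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE,
                DDS_ANY_INSTANCE_STATE) != DDS_RETCODE_OK) {
            continue;
        }
        for (int i = 0; i < data_seq.length(); ++i) {
            if (info_seq[i].valid_data) {
                printf("remote DataWriter on topic: %s\n",
                       data_seq[i].topic_name);
            }
        }
        builtin_reader->return_loan(data_seq, info_seq);
    }
}
```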

wjwwood:

@KickR Thanks for the additional suggestion, and sorry it took me so long to get back to you.

> I should probably note that if you want to monitor what's happening on the builtin topics, you don't need a participant per process, only a participant per domain. That is, you can create a monitoring application, simply a new process, that creates readers on the builtin topics and uses them to log events.

That's a good idea. We're already looking at having a daemon-like program which provides this service. It's especially important, for example, if you want a command-line tool to respond to queries about the graph quickly; otherwise the programs would need to wait for discovery to finish (assuming a distributed discovery service, which is the default). However, we're wary of relying on this service being available, since it defeats the decentralized nature of DDS and introduces a single point of failure. We'd like our programs to be self-sufficient in the case that the service is not available.

Thanks again for the input!

Roy:

Hello again,

I would definitely advise against making a program that is a single point of failure.

However, I should note that a monitoring application can (and should) be able to crash without harming the monitored applications (beyond some extent).

The idea of using it all the time is something we are also looking into (one way or another).

If you want to somewhat increase the reliability of the monitoring, you could use two instances of that daemon, although I would recommend looking into the performance toll each of those daemons takes on your monitored applications (that is something my team is most concerned with).

 

Roy.