Configuring Reliable Subscription

Offline
Last seen: 1 year 10 months ago
Joined: 04/18/2022
Posts: 8
Can someone please confirm what I believe I just read in the Connext DDS documentation. In order for a consuming party (Subscriber) to ensure it can take() all messages published to a Topic by a 3rd party, both the 3rd-party Publisher and the local Subscriber must have RELIABLE_RELIABILITY_QOS and KEEP_ALL_HISTORY_QOS configured. Something like the following:
// Start from a modifiable copy of the default Topic QoS
// (DomainParticipant.TOPIC_QOS_DEFAULT is a sentinel and should not be modified directly)
TopicQos topicQos = new TopicQos();
participant.get_default_topic_qos(topicQos);

topicQos.reliability.kind = ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
topicQos.reliability.acknowledgment_kind =
        ReliabilityQosPolicyAcknowledgmentModeKind.APPLICATION_AUTO_ACKNOWLEDGMENT_MODE;

topicQos.history.kind = HistoryQosPolicyKind.KEEP_ALL_HISTORY_QOS;

topicQos.resource_limits.max_samples = 500;
topicQos.resource_limits.max_instances = 500;
topicQos.resource_limits.max_samples_per_instance = 500;
topicQos.resource_limits.initial_samples = 100;
topicQos.resource_limits.initial_instances = 100;

Topic topic = participant.create_topic(
        topicName, typeName, topicQos, null /* listener */, StatusKind.STATUS_MASK_NONE);
This seems very odd to me.
  • What if you have no control over the Publisher?
  • This implies that the actual messages are maintained by the Publishing application, not the DDS Server?
  • If they are maintained by the DDS Server then shouldn't it be maintaining the queue, akin to MQTT QOS Levels 1 & 2?  
I think I am probably missing something here that I'm not gleaning from the documentation.
Howard
Offline
Last seen: 6 days 2 hours ago
Joined: 11/29/2012
Posts: 567

Hmm, not sure which exact documentation led you to this understanding. Can you point it out so that we can fix the docs if needed?

However, there is *no* third party. There are only 2 parties involved when data is sent: the DataWriter of the publishing application and the DataReader of the subscribing application.

Unlike MQTT, there is no DDS server.

Both the DataWriter and the DataReader need to be configured to support a strict, reliable connection, via the QoS policies that you referenced in your post (RELIABILITY and HISTORY).

And yes, when there is a reliable connection, it's up to the publishing app to store data sent by the DataWriter until it is acknowledged as received by the reliable DataReaders.

HOWEVER, using Topic QoS to configure the QoS policies of DataWriters and DataReaders is not recommended practice. A Topic is not a global entity; it is only used locally to create DataWriters and DataReaders. And DataWriters and DataReaders do not automatically use the QoS settings of the Topic.

And there are far more QoS settings on DataWriters and DataReaders than on a Topic. So, the recommended best practice is to set and use the DataWriterQos and DataReaderQos objects to configure the DataWriter and DataReader, respectively.
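For example, here is a minimal sketch of that approach in the Java API. It assumes the participant, publisher, subscriber, and topic objects already exist; the variable names are illustrative:

// Writer side: configure strict reliability on the DataWriter itself
DataWriterQos writerQos = new DataWriterQos();
publisher.get_default_datawriter_qos(writerQos);
writerQos.reliability.kind = ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
writerQos.history.kind = HistoryQosPolicyKind.KEEP_ALL_HISTORY_QOS;
DataWriter writer = publisher.create_datawriter(
        topic, writerQos, null, StatusKind.STATUS_MASK_NONE);

// Reader side: request the same strict reliability on the DataReader
DataReaderQos readerQos = new DataReaderQos();
subscriber.get_default_datareader_qos(readerQos);
readerQos.reliability.kind = ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;
readerQos.history.kind = HistoryQosPolicyKind.KEEP_ALL_HISTORY_QOS;
DataReader reader = subscriber.create_datareader(
        topic, readerQos, null, StatusKind.STATUS_MASK_NONE);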

Additionally, you should read and learn about the built-in Connext QoS profiles. These are pre-configured sets of QoS values that, when used together, produce a specific behavior. As you now understand, you have to configure both the RELIABILITY and HISTORY QoS policies in order to get a TCP-like, no-data-loss, reliable connection...aka strict reliability. There are lots of other QoS values that you may need to set as well to configure the reliability protocol.

Instead of configuring them all yourself from scratch, you can use, or start with, a built-in QoS profile, for example "BuiltinQosLib::Generic.StrictReliable".
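As a sketch of what that looks like in the Java API (assuming the same publisher, subscriber, and topic objects as in the example above):

// Create the endpoints directly from the built-in strict-reliable profile
DataWriter writer = publisher.create_datawriter_with_profile(
        topic, "BuiltinQosLib", "Generic.StrictReliable",
        null, StatusKind.STATUS_MASK_NONE);

DataReader reader = subscriber.create_datareader_with_profile(
        topic, "BuiltinQosLib", "Generic.StrictReliable",
        null, StatusKind.STATUS_MASK_NONE);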

You can read more about QoS Profiles in these articles:

https://community.rti.com/examples/built-qos-profiles

https://community.rti.com/kb/configuring-qos-built-profiles

Finally, I'm not sure what you mean by "What if you have no control over the Publisher?".  What is the scenario here?  And how would a server help?

Even if there were a server application involved, the DataWriter would still need to be configured to send its data reliably to the server, even if the server is responsible for forwarding the data reliably to the DataReader (with the additional expense of having to deploy a server and having the data pass in and out of the server before it reaches the DataReader).

 

Offline
Last seen: 1 year 10 months ago
Joined: 04/18/2022
Posts: 8

Hi Howard,

Thank you so much for your response. I have to confess I was approaching this with a fundamental misconception regarding the serverless nature of the DDS model. I was under the impression that DDS was a loosely coupled publisher/subscriber platform where Publisher applications can issue messages to which Subscriber applications can subscribe, assuming they understand how to parse those messages. All of my questions were built around the misconception that there was a middleware server managing the routing and QoS for these, akin to MQTT.

What I am gradually getting my head around is that this is more tightly coupled than that, in that the platform is geared more towards propagating state changes in registered objects between integrated applications.

The use of "3rd Party" was in reference to us having no control over the DataWriter. That publishing component is software built by a third-party organisation, and we have no control over it or how it is deployed within the environment. All we have are some IDLs and an understanding of the topics to listen on.

Again, thank you. This helps tremendously.

Cheers

Al

Howard
Offline
Last seen: 6 days 2 hours ago
Joined: 11/29/2012
Posts: 567

Your impression that DDS offers a loosely coupled publisher/subscriber platform is generally correct. Of course the details matter, and in the case of DDS, no central server or broker is used or needed.

DDS offers a data-centric communications framework. Topics fundamentally define streams in which updates to the state of data objects are propagated. However, I'm not sure how that is more tightly coupled than when a server is involved...or versus a message-centric middleware. If anything, message-centric data streams couple receiving applications more tightly to the processing logic of a publishing application.

In a message-centric communications framework, the content and format of a message is not constrained or enforced by the middleware. Consecutive messages may have entirely different content...and to understand a "conversation", you may need to have received every message from the first one on.

It's up to the publishing application to decide what messages can and will be sent...and in what format. If you're writing a subscribing application, you have to write your processing logic to follow whatever conventions the publishing application uses to send its messages. This pretty much ties the subscribing application's code to the publishing application's code. Another application that wants to send the same data stream to the subscribing application had better follow the message "rules" that the first publishing application originally set.

The issue isn't that there are rules for message processing, but rather that those "rules" aren't embodied by the middleware. The middleware does not provide any help in conveying the rules for processing or composing messages to the subscribing or publishing applications, respectively.

In contrast, a data-centric middleware like DDS lets the middleware understand (and enforce) the content of the data messages (aka "samples") sent on a data stream (aka "Topic"). As with message-centric middleware, applications that publish or subscribe to the same data stream must mutually agree on how the data of a message is composed (the datatype of the Topic). However, a data-centric middleware can make sure that the publishing and subscribing applications both have the same datatype definition for a Topic, and it can inform the end user when applications have conflicting definitions of a data stream.

Applications can also dynamically determine how to process data for a data stream since they can get the datatype definition of a Topic from DDS and use that definition to send or receive data for datatypes that they weren't compiled to understand.

When using DDS, the "public" interface of a component is composed of the Topics that it publishes and subscribes to, as well as the QoS of the DataWriters that send the data and the DataReaders that receive the data. Those Topics (and associated datatypes) and QoS configurations are what you need to use in order to integrate that component into your system.

If the 3rd-party component decided that a published Topic stream does not need to be received reliably for systems to use that component correctly, then your DataReader's QoS shouldn't expect to receive the data reliably. Similarly, if the 3rd-party component subscribes to a Topic with Reliability turned "on", then you must configure your DataWriter for that Topic with Reliability turned on.

If you don't have information about what QoS settings the 3rd-party component uses for the Topics that it publishes/subscribes to, then you should ask its developers for that information...in my opinion, not having the QoS information is the result of incomplete documentation of the component's integration interface.

However, if you cannot get that information directly from the developers, you can use RTI Admin Console, a graphical user interface that allows you to see, explore, and examine any application that uses DDS. With Admin Console, you can see exactly what Topics an application publishes/subscribes to and what QoS settings it's using, as well as check the compatibility of QoS settings and datatype definitions.

Please see https://www.rti.com/gettingstarted/adminconsole.

 

Offline
Last seen: 1 year 10 months ago
Joined: 04/18/2022
Posts: 8

Thanks Howard,

This greatly expands on the ethos of DDS and how its implementation differs from message-centric models. DDS and its real-time state propagation model would be invaluable to many solutions.

I'm not entirely sure I agree with the notion that loosely coupled, message-oriented publish/subscribe requires an inherent understanding of the messages in advance. Sparkplug provides a convention-based layer over MQTT for self-describing messaging, and even ESBs supported SOAP/WSDL (rest its soul) and WS-* to define the discoverability/definition of message types. That said, I do understand your point.

"If the 3rd-party component decided that a published Topic stream does not need to be received reliably for systems to use that component correctly, then your DataReader's QOS shouldn't expect to receive the data reliably"

Surely this implies that the Publisher would need to know the intent of all its Subscribers. Some Subscribers may be happy merely to obtain periodic samples of the stream to update a UI, while others, like ours, are looking to record all state changes in a time series and to archive those state changes (the binary of the sample) as a historical record. If our intent differs from how the original developer of the publishing component wrote their software, we're out of luck.

Thanks again for the feedback, it's great information and will help us determine how we approach this. 

Cheers

Al

Offline
Last seen: 1 year 10 months ago
Joined: 04/18/2022
Posts: 8

I'd also like to note that having someone with your extensive experience with the platform answering questions in the forum is outstanding. Thanks again.

Howard
Offline
Last seen: 6 days 2 hours ago
Joined: 11/29/2012
Posts: 567

Hi Al,

With DDS, the concept of Quality of Service is such that the publisher of a Topic needs to offer the highest level of service that any of its subscribers will want to use. Subscribers (DataReaders) can request a lower level of QoS, but not a higher one than what's offered by the publisher. Not all QoS settings affect end-to-end behavior, but for those that do, this Requested vs. Offered QoS concept is a formal part of the DDS design.

So for Reliability, as long as the DataWriter offers RELIABLE Reliability, DataReaders can subscribe to the topic with RELIABLE or BEST_EFFORT (non-reliable) behavior.  Under the hood, DDS will do the right thing and establish the type of connection that's requested by the DataReader.

However, if the DataWriter is configured to offer BEST_EFFORT Reliability, then DataReaders can only make non-reliable connections.
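To make the rule concrete, here is a hypothetical sketch reusing the writerQos/readerQos objects from the earlier example:

// The writer offers RELIABLE...
writerQos.reliability.kind = ReliabilityQosPolicyKind.RELIABLE_RELIABILITY_QOS;

// ...so a reader may request either RELIABLE or BEST_EFFORT; both match.
readerQos.reliability.kind = ReliabilityQosPolicyKind.BEST_EFFORT_RELIABILITY_QOS;

// The reverse (writer BEST_EFFORT, reader RELIABLE) is an incompatible
// requested-vs-offered combination: discovery flags it and no data flows.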

That offered level is actually something that's important to let the developers of a component control... Writing code to support a reliable connection is different from writing code that doesn't have to. With a reliable connection, under the hood DDS will allocate buffers to store reliably sent data in case it needs to be repaired, extra packets will need to be sent/received on the wire to support the reliable protocol, and finally the DataWriter::write() call may block if the storage buffers are full because DataReaders have not acknowledged receipt of the data.

All of this greatly affects the resource usage and real-time behavior of the publishing application. So, if the designers of the publishing component decide that, due to constraints on memory or real-time determinism, they don't want a particular data stream to support a RELIABLE connection, they will not want DDS to establish a RELIABLE connection just because a DataReader application asks for one. That could cause the publishing component to behave out of spec.
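For a publishing component that does offer reliability but must not block indefinitely, one relevant knob is RELIABILITY's max_blocking_time. A hedged sketch, where the 100 ms budget is an illustrative value rather than a recommendation:

// Bound how long write() may block when the reliable send window is full
writerQos.reliability.max_blocking_time.sec = 0;
writerQos.reliability.max_blocking_time.nanosec = 100000000; // 100 ms
// If readers still haven't acknowledged enough samples when the budget
// expires, write() fails with a timeout instead of blocking indefinitely.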

Unlike DDS, the technologies to which you've referred, ESBs (SOAP/WSDL) and even MQTT, were not designed from conception to support the requirements of a deterministic real-time system. Almost all of those technologies, including MQTT, use TCP as the transport under the hood...so every network connection is essentially reliable (and non-deterministic), and you can't even configure that aspect of those middleware protocols. (There are implementations that have ported MQTT to UDP to enable MQTT to be used in more real-time systems...but even then, with a broker, MQTT usually isn't deterministic enough to support closed-loop control over a network.)