Hello All
When we create a vector of built-in DataReaders such as the one below:

std::vector<DataReader<PublicationBuiltinTopicData>> m_rdr_list;

and populate it using find() as follows:

find<DataReader<PublicationBuiltinTopicData>>(m_builtin_subscriber,
    dds::topic::publication_topic_name(),
    std::back_inserter(m_rdr_list));
My questions:
- Should the list update automatically, erasing the corresponding DataReader, once all the DataWriters of a Topic are closed and no DataWriter remains to send data on the publication_topic_name Topic?
- If yes, at what point in time does this trigger?
- If no, please suggest a strategy for managing this list continuously.
Thanks
Hi,
The built-in readers cannot be deleted explicitly; they are only deleted when the DomainParticipant itself is deleted.
I would also recommend using rti::sub::find_datareader_by_topic_name to simplify your code, since there is only one built-in reader per built-in topic.
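As a rough illustration, here is a minimal sketch based on the Modern C++ API (the domain ID is a placeholder, and the exact headers may vary between Connext versions):

#include <dds/dds.hpp>
#include <rti/rti.hpp>  // rti::sub::find_datareader_by_topic_name

using dds::sub::DataReader;
using dds::topic::PublicationBuiltinTopicData;

int main()
{
    dds::domain::DomainParticipant participant(0);

    // Every participant owns exactly one built-in reader per built-in
    // topic, attached to the built-in subscriber.
    dds::sub::Subscriber builtin_subscriber =
        dds::sub::builtin_subscriber(participant);

    // Look up the single publication built-in reader by its topic name.
    DataReader<PublicationBuiltinTopicData> publication_reader =
        rti::sub::find_datareader_by_topic_name<
            DataReader<PublicationBuiltinTopicData>>(
                builtin_subscriber,
                dds::topic::publication_topic_name());

    return 0;  // use publication_reader here
}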
Regards,
Alex
Thanks for the response, Alex.
I am quite confused about the built-in DataReaders. For example, if we have many user-defined topics such as WheelSpeedData, TemperatureData, etc., will a built-in DataReader be created for each of these Topics, or will there be a single built-in DataReader that can read the topic data (not the data values, but the topic description data) of all user-defined topics?
This doubt came up after going through the following blog:
https://www.rti.com/blog/implementing-simple-introspection-with-connext-dds-in-c14?success=true
In the introspection code, the author stores all the built-in DataReaders of type PublicationBuiltinTopicData in a list called m_rdr_list, using the overload of find shown in my first post. He uses back_inserter to populate m_rdr_list. Can you please explain what is actually being done there? Since you have said there is only one built-in reader per built-in topic, why do we need a list that can store many DataReaders?
Thanks
Built-in readers provide meta-information about your system. There is only one built-in reader for publications (per participant), and it provides the meta-information about all the publications in that domain. There is another built-in reader for subscriptions, and so on.
The find() function is generic: it works for built-in readers and user readers alike, so it needs to be able to return more than one reader, since you can create more than one user reader for the same topic. For convenience, find_datareader_by_topic_name returns just one reader. If you use find() with a vector for a built-in topic, it will always insert exactly one reader.
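To make that concrete, here is a continuation of the earlier sketch (adding <cassert>, <iterator>, and <vector> to the includes; the assert just illustrates the invariant):

// Generic find(): for a user topic this could append several readers,
// but for a built-in topic it appends exactly one.
std::vector<DataReader<PublicationBuiltinTopicData>> readers;
dds::sub::find<DataReader<PublicationBuiltinTopicData>>(
    builtin_subscriber,
    dds::topic::publication_topic_name(),
    std::back_inserter(readers));
assert(readers.size() == 1);  // always exactly one built-in reader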
You can read more about the built-in topics in Chapter 17 of the RTI Connext DDS Core Libraries User's Manual.
Thanks a lot, Alex.
After reading it over and over, I am now quite clear about what I should be doing in my application. One more follow-up query: what will be the pattern of updates coming from the network? That is, if we consider the PublicationBuiltinTopicData DataReader, how will it read the samples?
For example, let's say we have user-defined Topics named A, B, C, and D. Does each dissemination of samples contain all of the topic descriptions above, or are only the incremental changes within the network propagated? Common sense says that every topic A, B, C, D, and a new topic E would have to be included in each new sample dissemination, but I want to be sure about this concept.
Also, if you have a link to a more elaborate explanation of this, please share it.
Thanks
The PublicationBuiltinTopicData DataReader will receive a new sample when:
* A new DataWriter on that domain is created.
* An existing DataWriter changes a Qos policy. (This only applies if the Qos policy is propagated, i.e. one of those included in the PublicationBuiltinTopicData type.)
* A DataWriter is deleted (you receive a DISPOSE message for that key).
I can point you to a code example (maybe you've seen it already).
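In the meantime, here is a rough polling sketch (not the official example) of how you might process those samples, assuming publication_reader was obtained as in my earlier snippet. Note that each sample describes a single DataWriter, so updates arrive incrementally, not as a full snapshot of all topics:

#include <iostream>
#include <dds/dds.hpp>

using dds::topic::PublicationBuiltinTopicData;

void process_publication_samples(
    dds::sub::DataReader<PublicationBuiltinTopicData>& publication_reader)
{
    // Take whatever discovery samples have arrived so far.
    dds::sub::LoanedSamples<PublicationBuiltinTopicData> samples =
        publication_reader.take();

    for (const auto& sample : samples) {
        if (sample.info().valid()) {
            // A new DataWriter, or a Qos change on an existing one.
            std::cout << "DataWriter on topic: "
                      << sample.data().topic_name() << std::endl;
        } else if (sample.info().state().instance_state()
                == dds::sub::status::InstanceState::not_alive_disposed()) {
            // The DataWriter for this instance was deleted (DISPOSE).
            std::cout << "A DataWriter was deleted" << std::endl;
        }
    }
}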
You should also be aware of the subscription_matched status and the matched_publications operation at the DataReader level. Those give you information specifically about the DataWriters that match a given DataReader.
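For instance, here is a small sketch using the standard discovery helpers (again, only a sketch; check the API reference of your Connext version):

#include <iostream>
#include <dds/dds.hpp>

template <typename T>
void print_matched_writers(const dds::sub::DataReader<T>& reader)
{
    // Handles of all DataWriters currently matched with this reader.
    dds::core::InstanceHandleSeq handles =
        dds::sub::matched_publications(reader);

    for (const dds::core::InstanceHandle& handle : handles) {
        // Full PublicationBuiltinTopicData for one matched writer.
        dds::topic::PublicationBuiltinTopicData pub_data =
            dds::sub::matched_publication_data(reader, handle);
        std::cout << "Matched DataWriter on topic: "
                  << pub_data.topic_name() << std::endl;
    }
}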
Hello Alex
Thanks for your valuable inputs and time. I am glad that I received responses so quickly.
Regards
Anup