RTI's implementation does not rely on a single point of failure. You don't need a separate "DDS" server. Each application will link against the libraries and manage its own database of remote peers and endpoints as necessary.
Note that if you use certain services (Routing Service, Persistence Service, Database Integration Service, etc.), those add additional capabilities in application space. As they are applications, if you need redundancy, you should run multiple copies to ensure that even if one fails, the others will protect against loss of data.
With "lots of applications," you may want to really understand domains, domain participants, and system-of-system design using DDS before diving in. It is very easy to shoot yourself in the foot if you make unwise decisions based on incomplete knowledge of DDS's capabilities.
rip
Hi,
Thanks for your response.
Do you mean that if, for example, I have something like 10 applications on a CPU, each one will link against the API?
Is it a good way to have a shared memory for each CPU? Because data will be shared between applications. For example, an application could need data which comes from the result of another application.
Thanks a lot
Yes, each application will link against the libraries and have its own entities. If one crashes, it will not disturb the others (excepting of course that its data writers/readers go offline, but that's why DDS has OWNERSHIP QoS, so you can have multiple backup writers).
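To make the failover point concrete, here is a plain-Python model of the EXCLUSIVE-ownership arbitration rule — this is not the Connext API, just the logic: a reader accepts samples for an instance only from the live writer with the highest ownership strength, and when that writer goes away, the next-strongest backup takes over automatically.

```python
# Conceptual model of DDS EXCLUSIVE ownership arbitration (plain Python,
# not the Connext API). Only the strongest live writer's samples win.

class OwnershipReader:
    def __init__(self):
        self.alive = {}     # writer_id -> ownership strength
        self.value = None   # last accepted sample

    def writer_alive(self, writer_id, strength):
        self.alive[writer_id] = strength

    def writer_lost(self, writer_id):
        self.alive.pop(writer_id, None)

    def on_sample(self, writer_id, sample):
        if not self.alive:
            return
        owner = max(self.alive, key=self.alive.get)
        if writer_id == owner:          # discard samples from weaker writers
            self.value = sample

reader = OwnershipReader()
reader.writer_alive("primary", strength=10)
reader.writer_alive("backup", strength=5)
reader.on_sample("backup", "from backup")    # ignored: primary owns the instance
reader.on_sample("primary", "from primary")  # accepted
reader.writer_lost("primary")                # primary crashes
reader.on_sample("backup", "from backup")    # backup now owns the instance
print(reader.value)  # from backup
```

In real Connext the strength comes from the OWNERSHIP_STRENGTH QoS on each writer, and liveliness loss is what triggers the switch; the reader-side application code doesn't change at all.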
Enabling shared memory between applications is certainly available when those applications are on the same CPU card. This is the default out of the box. It assumes that your operating system supports, and has enabled, its shared-memory system.
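For reference, which builtin transports a participant uses is controlled by its TRANSPORT_BUILTIN QoS. A fragment of a USER_QOS_PROFILES.xml enabling shared memory alongside UDPv4 looks roughly like this (profile name is made up):

```xml
<qos_profile name="ShmemAndUdp">
  <domain_participant_qos>
    <transport_builtin>
      <!-- shared memory for same-host peers, UDPv4 for everything else -->
      <mask>SHMEM|UDPv4</mask>
    </transport_builtin>
  </domain_participant_qos>
</qos_profile>
```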
Ok
OK, but what if I have a lot of applications distributed across a lot of CPU cards which have to exchange some data?
Is it a good way to put the shared memory on a separate, dedicated CPU?
Sorry?
Can you explain what .you. mean by shared memory? Your question doesn't make sense to me as expressed, using my background knowledge.
Shared memory implies a single card, or an external memory card on the bus. External memory cards on the bus don't have a CPU; they are shared space used by any CPU that has been configured to see them. Or is this some weird hardware that I've never been exposed to?
rip
By shared memory, I mean the global data space. And a CPU will be a single card.
From the standpoint of DDS, the Global Data Space is any device in the cloud which is accessible to DDS traffic, whether over shared memory (shmem), UDPv4, UDPv6, TCPv4, etc. -- over any supported transport.
The Global Data Space is every device that is DDS enabled. RTI's DDS is specifically /not/ "on a dedicated CPU" because that is a single point of failure. There is no "daemon" process.
Based on your questions, it sounds like your background is from the "Client/Server architecture" sphere. This is a different paradigm... Any application node can publish. Any application node can subscribe. Matching Discovery of Topic publishers and subscribers within a given device or within a given subnet is automatic. Data traffic is then peer-to-peer.
If a given use suggests that request/reply is a better pattern than pub/sub, that's available in our Connext Messaging components, but they are built on top of pub/sub, so even if you don't have Messaging, you can implement request/reply yourself.
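The "request/reply on top of pub/sub" pattern is simple enough to sketch: a request topic, a reply topic, and a correlation id carried in each sample so the requester can match replies to its own requests. The snippet below uses a toy in-memory bus, not the Connext Messaging API, just to show the shape of the pattern.

```python
# Request/reply layered on plain publish/subscribe (toy in-memory bus,
# not the Connext API). Topic names and payload fields are illustrative.

class Bus:
    def __init__(self):
        self.subs = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, sample):
        for cb in self.subs.get(topic, []):
            cb(sample)

bus = Bus()
replies = {}

# Replier side: subscribe to requests, publish a reply carrying the same id.
def on_request(req):
    bus.publish("Reply", {"id": req["id"], "result": req["x"] * 2})

bus.subscribe("Request", on_request)

# Requester side: subscribe to replies, match them by correlation id.
def on_reply(rep):
    replies[rep["id"]] = rep["result"]

bus.subscribe("Reply", on_reply)

bus.publish("Request", {"id": 1, "x": 21})
print(replies[1])  # 42
```

In DDS you would typically also use a content filter or a per-requester reply topic so each requester only receives its own replies.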
Each of your applications would simply link against the Connext DDS libraries, and it is then part of the global data space. Note however that when one publishes and one subscribes, they .each. have a copy of the data. Even when it is over shared memory. So, if there is 1 publisher and 4 subscribers, there are 5 (1+4) copies of the data in the system. This is the nature of anonymous, distributed peer-to-peer data communications using DDS.
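The "each endpoint has its own copy" point matters in practice: delivering one sample to four readers yields five independent copies (the writer's plus one per reader), so mutating one reader's copy never affects the others. A trivial plain-Python illustration of that copy semantics, not DDS code:

```python
# One writer sample delivered to four readers = five independent copies.
import copy

sample = {"temp": 21.5}                        # the writer's copy
readers = [copy.deepcopy(sample) for _ in range(4)]  # one copy per reader

readers[0]["temp"] = 99.0                      # corrupt one reader's cache
print(sample["temp"], readers[1]["temp"])      # 21.5 21.5 -- others unaffected
```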
rip
Ok,
I understand now. Thanks a lot for your reply.
Thank you
Hello,
How could I integrate the DDS API into an application which runs on a CPU card?
By linking it to each application?
Really, it's just a .so or .dll shared library (or a .a static library). You link it against any application that is using DDS. Have you looked at the makefiles that are generated by rtiddsgen?
Use -example <arch> to generate example code. The list of <arch> values is found by looking at the subdirectories of $NDDSHOME/lib; the <arch> values available on your installation are the directory names there (like x64Linux2.6gcc or i86Win32jdk).
rip
OK
Thank you. Is RTI DDS compatible with Mac S2 OS?
And if I have, for example, some applications which are already running, do I need to make some changes in their implementations in order for them to use DDS?
Ok, now I'm confused. More than before.
A) What is Mac S2 OS? Neither I, nor Google, appear to know what it is, so maybe that's a typo? If you mean Mac OS (i.e., Darwin), yes, we have libraries for Darwin. They aren't available through this venue, however. In any case, for anything other than standard i86/x64-based architectures, you'll need to contact your local sales channel (either RTI or a local distributor) to buy an additional platform license.
B) What level of understanding do you have of what DDS is? The level of your questions implies you haven't read much about it. In short: Yes, both from the standpoint of the code API, but also from the standpoint of "DDS is NOT a message-centric (client-server) tool" and it sounds like you've not dealt with data-centricity before. Implementing a system using DDS is different from "bolting DDS on to a system".
Happy to answer your questions, but at a fundamental level, I'm going to point you at the manuals and say "here, read this, so we have the same basic vocabulary". In short, you need to:
1) Define your data model
2) Describe your data model using IDL
3) Compile your IDL into C, C++, Java, or C# using rtiddsgen
4) Use your compiler toolset (VS2010, gcc, whatever) to compile the generated Type and TypeSupport source
5) Link your DDS API-using application against the compiled Type and TypeSupport object files and Connext DDS shared objects.
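As a concrete illustration of steps 1-2, a minimal (hypothetical) IDL data model might look like this; the type name and fields are made up, and the //@key directive is what makes each sensor_id a separate instance of the topic:

```idl
// Hypothetical data model; rtiddsgen compiles this into the
// Type and TypeSupport code referenced in steps 3-5.
struct SensorReading {
    long sensor_id; //@key
    double value;
    long long timestamp;
};
```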
rip
Hello,
Mac S2 is a special OS used in CPU cards.
I do understand a little bit more about DDS, and I know it's not message-centric. And I know that implementing a system using DDS is different from "bolting DDS on to a system"; that's the aim of all my questions. What I was asking is: if I have applications which are already running and communicating some other way, and I would like to integrate DDS into them, do I have to heavily modify the source code of the applications into which the DDS API is integrated, or not?
Thanks for your suggestions.
Hi,
We don't have libraries for Mac S2. Is it an application environment that runs on top of a standard OS, or a variant of Linux? If it is an OS, and it has POSIX conformance, it is theoretically possible to port Connext DDS to it. Contact your local RTI representative for pricing.
As for how easy/hard, assuming that libraries could be had: no idea. I have insufficient knowledge of the environment, the OS, the applications, etc. See my comments above about the steps in the process, and start from there.
Good luck,
rip
Another option would be Routing Service, which gives you a way to interact with non-DDS applications/systems. If you have new development, do that with DDS and data-centricity in mind, and use Routing Service as a translation layer/bridge between the two systems.
That said, we'd still have to port the libraries first, and then Routing Service.
rip
Ok
Thanks again