
Why Would Anyone Use DDS Over ZeroMQ?

Thu, 04/20/2017 - 07:14

Choices, choices, choices. We know you have many when it comes to your communications, messaging and integration platforms. A little over a year ago, someone asked a great question on StackOverflow: why would anyone use DDS over ZeroMQ? In their words, “it seems that there is no benefit [to] using DDS instead of ZeroMQ.”

It’s an important question – one that we believe you should have the answer to, and so our CTO, Gerardo Pardo-Castellote, provided an answer on StackOverflow (a shorter version of his answer is provided below).

++++

In my (admittedly biased) experience as a DDS implementer/vendor, I have found that many application developers find significant benefits in using DDS over other middleware technologies, including ZeroMQ. In fact, I see many more “critical applications” using DDS than ZeroMQ.

First a couple of clarifications:

DDS, and the RTPS protocol it uses underneath, are standards, not specific products.

There are many implementations of these standards. To my knowledge, there are at least nine different companies with independent products and codebases that implement these standards. It does not make sense to talk about the “performance” of DDS versus ZeroMQ; you can only talk about the performance of a specific implementation. I will address the issue of performance later, but from that point of view alone the statement “the latency of ZeroMQ is better” is plainly wrong. The opposite statement would be just as wrong, of course.

When is it better to use DDS?

It is difficult to provide a short/objective answer to a question as broad as this. I am sure ZeroMQ is a good product and many people are happy using it. The same can be said about DDS. I think the best thing to do is point at some of the differences and let people decide what is important to them.

DDS and ZeroMQ are different in terms of the governance, ecosystem, capabilities, and even layer of abstraction. Some important differences:

Governance, Standards and Ecosystem

Both DDS and RTPS are open international standards from the Object Management Group (OMG). ZeroMQ is a “loose structure controlled by its contributors.” This means that with DDS there is open governance and clear OMG processes that control the specification and its evolution, as well as the IPR rules.

ZeroMQ IPR is less clear. On the web page (http://zeromq.org/docs:features), it is stated that “ZeroMQ’s libzmq core is owned by its contributors” and “The ZeroMQ organization is a loose confederation without a clear power structure, that mostly lives on GitHub. The organization’s Wiki page explains how anyone can join the Owners’ team simply by bringing in some interesting work.”

This “loose structure” may be more problematic for users who care about things like IPR pedigree, warranty and indemnification.

Related to that, if I understood correctly, there is only one core ZeroMQ implementation (the one on GitHub), and only one company that stands behind it (iMatix). Besides that, it seems just four committers are doing most of the development work in the core (libzmq). If iMatix were to be acquired or decided to change its business model, or if the main committers lost interest, the user’s only recourse would be supporting the codebase themselves.

Of course, there are many successful projects/technologies based on common ownership of the code. On the other hand, having an ecosystem of companies competing with independent products, codebases, and business models provides users with assurance regarding the future of the technology. It all depends on how big the communities and ecosystems are and how risk-averse the user is.

Features and Layer of Abstraction

Both DDS and ZeroMQ support patterns like publish-subscribe and request-reply (a new addition to DDS, the so-called DDS-RPC). But generally speaking, the layer of abstraction of DDS is higher, meaning the middleware does more “automatically” for the application. Specifically:

DDS provides for automatic discovery

In DDS you just publish/subscribe to topic names. You never have to provide IP addresses, computer names or ports. It is all handled by the built-in discovery. And it does it automatically without additional services. This means that applications can be re-deployed and integrated without recompilation or reconfiguration. 

In comparison, ZeroMQ is lower level. You must specify ports, IP addresses, etc.
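To make this concrete, here is a minimal sketch using the standard DDS C++ API (DDS-PSM-Cxx). AirplanePosition is a hypothetical type generated from an IDL file, and details vary slightly between vendors; the point is that the application only ever names a topic:

#include <dds/dds.hpp>
#include "AirplanePosition.hpp"  // hypothetical type generated from IDL

int main() {
    // Join DDS domain 0; discovery of other participants happens automatically.
    dds::domain::DomainParticipant participant(0);

    // Publish to a topic by name only -- no IP addresses, host names or ports.
    dds::topic::Topic<AirplanePosition> topic(participant, "AirplanePositions");
    dds::pub::DataWriter<AirplanePosition> writer(
        dds::pub::Publisher(participant), topic);

    AirplanePosition sample;
    sample.airplane_id("N12345");
    sample.latitude(37.4);
    sample.longitude(-122.0);

    // In a real application you would wait until subscribers are discovered;
    // the sample is then delivered to every discovered subscriber of the topic.
    writer.write(sample);
    return 0;
}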

DDS pub-sub is data-centric

An application can publish to a Topic, but the associated data can represent updates to multiple data-objects, each identified by key attributes. For example, when publishing airplane positions, each update can identify the “airplane ID,” and the middleware can maintain history, enforce QoS and update rates, etc. for each airplane separately. The middleware understands and communicates when new airplanes appear or disappear from the system.

Also, DDS can keep a cache of relevant data for the application, which it can query (by key or content) as it sees fit, e.g., read the last 5 positions of an airplane. The application is notified of changes but it is not forced to consume them immediately. This also can help reduce the amount of code the application developer needs to write.
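As an illustration of the airplane example (again a hedged sketch with the standard C++ API and the same hypothetical AirplanePosition type), the key is declared in the data type, and the reader can then consume the per-airplane cache at its own pace:

// Hypothetical IDL for the airplane example -- airplane_id is the key, so the
// middleware tracks each airplane as a separate data-object (instance):
//
//   struct AirplanePosition {
//       @key string airplane_id;
//       double latitude;
//       double longitude;
//   };

#include <iostream>
#include <dds/dds.hpp>
#include "AirplanePosition.hpp"  // hypothetical generated type

void print_cached_positions(dds::sub::DataReader<AirplanePosition>& reader) {
    // The reader keeps a per-instance cache (depth set by the History QoS),
    // so the application can consume updates whenever it chooses.
    auto samples = reader.read();
    for (const auto& s : samples) {
        if (s.info().valid()) {
            // Each sample belongs to one airplane, identified by its key.
            std::cout << s.data().airplane_id() << ": "
                      << s.data().latitude() << ", "
                      << s.data().longitude() << std::endl;
        }
    }
}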

DDS provides more support for “application” QoS

DDS supports over 22 message and data-delivery QoS policies, such as reliability, endpoint liveliness, message persistence and delivery to late-joiners, message expiration, failover, monitoring of periodic updates, time-based filtering and ordering. This is all configured via simple QoS-policy settings. The application uses the same read/write API and all the extra work is done underneath.
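As a rough illustration of what “configured via simple QoS-policy settings” looks like, here is a hedged sketch using the standard C++ API (the policy names come from the DDS specification; AirplanePosition is the same hypothetical type as above):

#include <dds/dds.hpp>
#include "AirplanePosition.hpp"  // hypothetical generated type

dds::pub::DataWriter<AirplanePosition> make_writer(
        dds::domain::DomainParticipant& participant,
        dds::topic::Topic<AirplanePosition>& topic) {
    // Policies are declared once; the middleware enforces them underneath
    // while the application keeps calling the same write() API.
    dds::pub::qos::DataWriterQos qos;
    qos << dds::core::policy::Reliability::Reliable()            // guaranteed delivery
        << dds::core::policy::Durability::TransientLocal()       // replay to late-joiners
        << dds::core::policy::History::KeepLast(10)              // per-instance history depth
        << dds::core::policy::Deadline(dds::core::Duration(1))   // expect an update every second
        << dds::core::policy::Lifespan(dds::core::Duration(5));  // samples expire after 5 s

    return dds::pub::DataWriter<AirplanePosition>(
        dds::pub::Publisher(participant), topic, qos);
}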

ZeroMQ approaches this problem by providing building blocks and patterns. It is quite flexible but the application has to program, assemble and orchestrate the different patterns to get the higher-level behavior. For example, to get reliable pub-sub requires combining multiple patterns as described in http://zguide.zeromq.org/page:all#toc119.

DDS supports additional capabilities like content-filtering, time-filtering, partitions, domains and more

These are not available in ZeroMQ. They would have to be built at the application layer.
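For example, a content filter in DDS is a short declaration rather than application code. Here is a hedged sketch with the standard C++ API (same hypothetical type; the filter expression uses the SQL-like subset defined by the DDS specification):

#include <dds/dds.hpp>
#include "AirplanePosition.hpp"  // hypothetical generated type

dds::sub::DataReader<AirplanePosition> make_filtered_reader(
        dds::domain::DomainParticipant& participant,
        dds::topic::Topic<AirplanePosition>& topic) {
    // The middleware evaluates the filter (often on the writer side) and only
    // delivers samples that match the expression.
    dds::topic::ContentFilteredTopic<AirplanePosition> northern(
        topic, "NorthernAirplanes", dds::topic::Filter("latitude > 45"));

    return dds::sub::DataReader<AirplanePosition>(
        dds::sub::Subscriber(participant), northern);
}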

DDS provides a type system and supports type extensibility and mutability

You have to combine ZeroMQ with other packages like Google protocol buffers to get similar functionality.

Security

There is a DDS-Security specification that provides fine-grained (Topic-level) security including authentication, encryption, signing, key distribution and secure multicast.

How does the performance of DDS compare with ZeroMQ?

Note that you cannot use the benchmarks of Object Computing Inc.’s “OpenDDS” implementation for comparison. As far as I know, this is not one of the fastest DDS implementations. I would recommend you take a look at some of the other implementations, such as RTI Connext DDS (our implementation), PrismTech’s OpenSplice DDS or TwinOaks’ CoreDX DDS. Of course, results are highly dependent on the actual test, network and computers used, but typical latency performance for the faster DDS implementations using C++ is on the order of 50 microseconds, not the 180 microseconds reported for OpenDDS. See https://www.rti.com/products/dds/benchmarks.html#CPPLATENCY

Middleware layers like DDS or ZeroMQ run on top of transports like UDP or TCP, so I would expect them to be bound by what the underlying network can do. For simple cases they are likely not too different, and they will, of course, be worse than the raw transport.

Differences also come from the services they provide, so you should compare what you get for the same level of service: for example, publishing reliably while scaling to many consumers, prioritizing information, and sending multiple flows and large data over UDP (to avoid TCP’s head-of-line blocking).

Based on the relative performance of different DDS implementations  (http://www.dre.vanderbilt.edu/DDS/), I would expect that in an apples-to-apples test, the better-performing implementations of DDS would match or exceed ZeroMQ’s performance.

That said, people rarely select the middleware that gives them the “best performance.” Otherwise, no one would use Web Services or HTTP. The selection is based on many factors; performance just needs to be as good as required to meet the needs of the application. Robustness, scalability, support, risk, maintainability, the fitness of the programming model to the domain and total cost of ownership are typically more important to making the correct decision.

Which one is best?

If you’re still undecided, or simply want more information to help you in making a decision, you’re in luck! We have the perfect blog post for you: 6 Industrial IoT Communication Solutions – Which One’s for You? [Comparison].

If you think RTI Connext DDS may be what you need, head on over to our Getting Started homepage where you’ll find a virtual treasure trove of resources to get you up and running with Connext DDS.

Mission: score an interview with a Silicon Valley company

Wed, 04/12/2017 - 08:17

RTI’s engineering team is based in Sunnyvale, CA. We also have a smaller, yet rapidly growing team in Granada, Spain.

Sunnyvale and Granada are 6,000 miles apart. It takes an entire day to travel between them, and we need to keep a 9-hour time difference in mind when organizing team meetings.

There are also quite a few differences in how people write a resume (curriculum vitae), and approach getting a job.

This blog post is a summary of my recent presentation to the engineering students at the University of Granada: “How to get hired by a Silicon Valley Company”. Many of the tips below are not just beneficial to new engineering graduates in Spain, but also to new grads in the US.

Your first mission is to be invited for an initial interview.

Your preparation started yesterday

Before you approach the graduation stage, and walk to the tune of Elgar’s Pomp and Circumstance march, there are quite a few things you can do. These are things which go beyond your regular classes and assignments.

Hiring managers pay attention to the type of internships and projects you worked on. You can show your love for programming through your open source contributions or by the cool demo you built at a hackathon. Your work speaks for itself if I can download and test drive your mobile application from the Apple App Store or Google Play Store.

Beyond the technical projects, it is important to learn and practice English. Our Granada team is designed as an extension of the team in Sunnyvale. As a result, engineers in Spain will work on a project together with engineers in California. Being able to express and defend your ideas well, in English, is important. Some of us learned English while watching Battlestar Galactica (the original) or Star Trek. We may even admit to picking up phrases watching The A-Team or Baywatch. Yes, those shows are a few decades old. Learn, read, write and, above all, find a way to speak English often. Go on an adventure through the Erasmus program, and practice English.

Lastly, start building your professional online profile.

  • Create a LinkedIn profile. Most often, employers will consult your LinkedIn profile even before your resume. Please use a picture suitable for work.
  • Create a personal website with your resume, your projects, and how to contact you. Resumes and LinkedIn profiles are dull. Your personal website allows you to describe your projects in more depth, and to include diagrams and even videos of your demos. Consider it the illustrated addendum to your resume.
  • Share your thoughts on a blog, or on websites such as Medium.
  • Contributions to GitHub or Stack Overflow speak for themselves. You can start by adding your school assignments to your GitHub profile. However, hiring managers will look for contributions beyond the things you had to do to get a good grade.
  • Publish your applications to the Apple App Store or Google Play Store. I love to download a candidate’s applications and try them out. It takes time, effort and even guts to create a working application and share it publicly.
  • Manage your social profile carefully. Future employers may look at your Twitter rants or Facebook antics.
Drop the Europass style of resume

There are plenty of websites which give you the basics to write a good resume: keep it to 1–2 pages and follow a simple structure: objective, education, experience and projects, skills and qualifications and finally list your accomplishments, etc.

Here are a few Do’s and Don’ts, specifically for international candidates:

  • Write your resume in English. Make sure there are no typos. Use online services, such as Hemingway App or Google Translate, to improve your work.
  • Focus on what you learned or did on a project. Do not just list project names, leaving the reader to guess what you did.
  • Add hyperlinks where the resume screener can get more details. And make sure the hyperlinks work.
  • Add your grades, in easy-to-understand terms. E.g., specify that you graduated first in class with 92%, rather than 6.1/7. I do get confused when I see two non-correlated grades: e.g., 3.2/4 and 8.7/10.
  • Read other US resumes to learn the lingo. A hiring manager may look specifically for whether you took a class in Data Structures and Algorithms. In your university, that may have been covered in Programación II.
  • Customize your resume for the job.
  • Do not create any cute resume designs. No background or design touches (unless you are applying for a design job).
  • Drop the Europass resume format. I.e., do not include a picture, date of birth or multiple contact addresses. For an engineering position, I do not care about your driver’s license information. Do not use the standardized table to indicate your proficiency in various languages. Rather than indicating that you rate your proficiency in German as B2, state “Conversational German.”
  • Do not use long lists of keywords, technologies or acronyms.
  • A pet peeve of mine: do not list Word or Excel unless you actually developed add-ons for those applications. Similarly, only list Windows if you developed against the Windows APIs.
A cover letter allows you to make a great first impression

Before you submit your resume, craft a cover letter. It is not required by most companies, but I recommend it. It allows you to introduce yourself in your own words. It is your first impression.

A short and well-crafted introduction letter allows you to make a more personal connection. Your intro paragraph should list the job you are applying for and why you are excited about the job and the company. Next, describe in three points why you are a great fit. Describe your successes. Do not repeat your resume. Close by asking for the interview.

You probably read this blog post because you are ready to contact RTI for a job. Let me make it easy: go to the RTI Career Page.

Good luck.

Fog Computing: IT Compute Stacks meet Open Architecture Control

Wed, 04/12/2017 - 05:05

Fog computing is getting more popular and is breaking ground as a concept for deploying the Industrial IoT. Fog computing is defined by the OpenFog Consortium as “a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things.” Looking further into the definition, the purpose is to provide low-latency, near-edge data management and compute resources to support autonomy and contextually-aware, intelligent systems.

In particular, fog computing facilitates an open, internet architecture for peer-to-peer, scalable compute systems that supports edge analytics, local monitoring and control systems. It’s this latter application to control that I think is particularly interesting.  Control systems have been in place for decades across multiple industries, and recently some of these industries have moved to create interoperable, open control architectures. In this blog, we’ll take a look at some existing open architecture frameworks that bear a resemblance to Fog and how Fog computing and these framework initiatives could benefit from cross-pollination.

Open Architecture Control

Back in 2004, the Navy started developing its Navy Open Architecture. In an effort to reduce costs and increase speed and flexibility in system procurement, the DoD pushed industry to establish and use open architectures. The purpose was to make it simpler and cheaper to integrate systems by clearly defining the infrastructure software and electronics that “glue” the subsystems or systems together. The Navy selected DDS as their publish-subscribe standard for moving data in real time across their software backplane (Figure 1 below).

Figure 1. The Navy Open Architecture functional overview. Distribution and adaptation middleware, at the center, is for integrating distributed software applications.

Fast forward to now and we find the OpenFog Consortium’s reference architecture looks very much like a modern, IT-based version of what the Navy put together back in 2004 for open architecture control. Given that this Navy Open Architecture is deployed and running successfully across multiple ships, we can feel confident that fog computing as an architectural pattern makes sense for real-world systems. Also, we can likely benefit from looking at lessons learned in the development and deployment of the Navy’s architecture.

OpenFMB

The Open Field Message Bus (OpenFMB) is a more recent edge intelligence, distributed control framework standard for smart power-grid applications. It is being developed by the SGIP (Smart Grid Interoperability Panel). Energy utilities are looking at ways to create more efficient and resilient electricity delivery systems that take advantage of clean energy and hi-tech solutions.

Instead of large, centralized power plants burning fossil fuels or using nuclear power to drive spinning turbines and generators, Distributed Energy Resources (DERs) have emerged as greener, local (at the edge of the power grid) alternatives that do not have to transmit electricity over long distances. DERs are typically clean energy solutions (solar, wind, hydro, geothermal) that provide for local generation, storage and consumption of electricity. But DERs are intermittent and need to be managed and controlled locally, as opposed to centrally, which is all that is available in the current power grid.

Distributed intelligence and edge control is the solution. The OpenFMB framework is being deployed and proven in smart grid testbeds and field systems. Looking at the OpenFMB architecture (Figure 2 below), you can see the concept of a software integration bus clearly illustrated.

Figure 2. OpenFMB architecture integrates subsystems and applications through a central, real-time publish-subscribe bus.

Like the Navy Open Architecture, the OpenFMB distributed intelligence architecture looks very much like a fog computing environment. Since OpenFMB is still under development, I would bet that the OpenFog Consortium and OpenFMB project team would benefit by collaborating.

OpenICE

Patient monitoring, particularly in intensive care units and emergency rooms, is a challenging process. There can be well over a dozen devices attached to a patient – and none of them interoperate. To integrate the data needed to make intelligent decisions about the welfare and safety of the patient, someone has to read the front-end of each device and do “sensor fusion” in their head or verbally with another person.

OpenICE, the Open Source Integrated Clinical Environment, was created by the healthcare IT community to provide an open architecture framework that supports medical device interoperability and intelligent medical application development. OpenICE (Figure 3 below) provides a central databus to integrate software applications and medical devices.

Figure 3. The OpenICE distributed compute architecture, with DDS-based databus, facilitates medical device and software application integration.

Again, the OpenICE architecture supports distributed, local monitoring, integration and control and looks very much like a fog architecture.

And now Open Process Automation

More recently, Exxon-Mobil and other process automation customers have gathered via the Open Process Automation Forum to begin defining an open architecture process automation framework. If you look at the various refineries run by Exxon-Mobil, you’ll find distributed control systems from multiple vendors. Each major provider of process automation systems or distributed control systems has its own protocols, management interfaces and application development ecosystems.

In this walled-garden environment, integrating the latest and greatest subsystem, sensor or device is much more challenging. Integration costs are higher, device manufacturers have to support multiple protocols, and software application development has to be targeted at each ecosystem. The opportunity for the Open Process Automation Forum is to develop a single, IIoT-based architecture that will foster innovation and streamline integration.

Looking at the Exxon-Mobil diagram below, we find, again, an architecture centered around an integration bus, which they call a real-time service bus. The purpose is to provide an open-architecture software application and device integration bus.

Figure 4. Exxon-Mobil’s vision of an open process automation architecture centered around a real-time service bus.

Again, we see a very similar architecture to what is being developed in the IIoT as fog computing.

The Opportunity

Each of these open architecture initiatives is looking to apply modern, IIoT techniques, technologies and standards to their particular monitoring, analysis and control challenges. The benefits are fostering innovation with an open ecosystem and streamlining integration with an open architecture.

In each case, a central element of the architecture is a software integration bus (in many cases a DDS databus) that acts as the software backplane facilitating distributed control, monitoring and analysis. Each group is also addressing (or needs to address) the other aspects of fog computing, like end-to-end security, system management and provisioning, and distributed data management, that make up a functional fog computing architecture. They have the opportunity to take advantage of the other capabilities of the Industrial IoT beyond control.

We have the opportunity to learn from each effort, cross-pollinate best practices and develop a common architecture that spans multiple industries and application domains. These architectures all seem to have very similar fog computing requirements to me.

Getting Started with Connext DDS, Part Four: From Installation to Hello World, These Videos Have You Covered

Thu, 04/06/2017 - 08:52

I started my career at a defense company in the San Francisco Bay Area on a project that involved a distributed system with several hundred nodes (sensors, controllers and servers). All these nodes were networked over different physical media, including Ethernet, fiber optics and serial. One of the challenges we faced was ensuring our control systems could operate within their allotted loop times. This meant data had to arrive on time, regardless of whether a node required 10 messages per second or several thousand messages per second. We needed a more effective method of communication than point-to-point links or a centralized server.

To address our most extreme cases of receiving data every handful of microseconds, a colleague of mine developed a protocol that allowed any node on the network to publish blocks of data to a fiber optics network analogous to distributed shared memory. The corresponding nodes would only read messages that enabled them to compute their control algorithms and ignore all other data. This was around 2009, and little did I know at the time, this was my introduction to the concept of data-centric messaging. It so happens that the Object Management Group (OMG) had already been standardizing data-centric messaging as the Data Distribution Service (DDS), the latest version of which was approved in April 2015.

Fast forward nearly a decade: I was recently hired as a product manager at Real-Time Innovations (RTI), the leading DDS vendor. Like most avid technologists ramping up on a new product, I have been eager to get past the “setup” phase so I can start seeing Connext DDS in action. To help me get ramped up, my new colleagues shared these Getting Started video tutorials. With these videos, I was able to quickly build sample applications that communicated with each other over DDS. You can check out the Getting Started tutorials for yourself to see how to configure, compile and run HelloWorld examples in Java and C++.

Granted, there’s more work for me to do to get defense-grade computers talking over fiber, but here’s why I found these tutorials so helpful: they enabled me to quickly get past the beginner’s phase and hit the ground running, shortening my learning curve. Check out the tutorials and see for yourself!

Getting Started with Connext DDS, Part Three: The Essential Tool ALL DDS Developers Need to Know About

Thu, 03/30/2017 - 07:17

Before joining RTI engineering, I was a customer of RTI’s for quite some time. I started working with RTI products before Data Distribution Service (DDS) was a standard. I also happened to be one of the first users of DDS 4.0, when it was finally codified into the standard as we know it today.

I have a passion for developing tools that make using the Connext DDS products easier — because the core product is so good. I love doing this, and it is what ultimately brought me to RTI. I’m now leading the engineering team responsible for Connext DDS Tools. I enjoy helping RTI customers adopt Connext DDS and use it in their projects.

One of my favorite tools is Admin Console. It is essential for troubleshooting, configuring and monitoring all Connext DDS infrastructure services as well as visualizing data directly from your system. Admin Console minimizes troubleshooting time and effort in all stages of application development by proactively analyzing system settings and log messages. Problems get highlighted, making them easy to find and fix.

One of my RTI colleagues, Dave Seltz, recently recorded a short tutorial video to help you learn the essentials of working with the Admin Console. It allows you to quickly master troubleshooting, configuring and monitoring all Connext DDS infrastructure services. Just like other Connext Tools, the Admin Console is included in the Pro version of Connext DDS. The examples used in the video don’t require any configuration or setup and will work out of the box, once you install the product. You can try them right away by following Dave’s tutorial; simply download the free 30-day trial for Connext DDS Pro and give it a shot!

We’re heading to Munich!

Fri, 03/24/2017 - 11:46

The London Connext Conference 2014 and 2015 events brought DDS power users together from a wide range of industries to share experiences, applications and expertise. For those of you who were unable to attend but are curious about what you missed, head over to Community and view a list of the presenters and some of the presentations (2014 and 2015). For our third year, we wanted to switch things up a bit, and the first big change to the event is the location: we’ll be hosting our two-day event in Munich!

The second change (and the one I’m most excited to announce) relates to our agenda. In the past, we’ve created an agenda that showcases our users and their work through a curated selection of keynote presentations, demonstrations, and smaller group presentations. This year, in addition to these, we’re going to be offering 2 workshops! The first workshop focuses on using Connext Pro Tools and the other will dive into Connext DDS Secure. During these workshops, you’ll have time to get up and running with the products, ask questions and receive answers from RTI staff, and more.

This is just a sampling of what we’ll be offering. To register now, head on over to https://www.rti.com/munich-connext-con-2017. Also, if you’ll be attending and would like to be considered for a keynote spot at this year’s conference, please visit the conference page for submission details. We can’t wait to see you there!

Getting Started with Connext DDS, Part Two: Use Shapes Demo to Learn the Basics of DDS Without Coding

Thu, 03/23/2017 - 17:03

If you’re building Industrial IoT (IIoT) systems, then you’re probably investigating the Data Distribution Service (DDS) standard. DDS is the only connectivity framework designed specifically for mission-critical IIoT systems.

IIoT applications have extraordinarily demanding requirements in terms of performance, scalability, resilience, autonomy and lifecycle management. To satisfy these requirements, DDS includes unique capabilities—differentiating it from other connectivity frameworks and low-level messaging protocols that were originally designed for consumer IoT and conventional IT applications.

There is an easy way to quickly learn more about DDS and how it is different: RTI Shapes Demo. Featured as part two in the Getting Started series (see part one here), Shapes Demo is a game-like, interactive application that lets you explore DDS capabilities without having to do any programming. It is a tool you can use to learn about basic (and some advanced) DDS concepts, such as publish-subscribe messaging, real-time Quality of Service (QoS), data-centric communication, automatic discovery, brokerless peer-to-peer messaging, and reliable multicast.

There are two ways to get RTI Shapes Demo:

After installing Shapes Demo, watch this simple video tutorial to help you get started quickly.

You can also check out the User’s Manual under the Help menu. Chapter 4 walks you through examples that illustrate many of the DDS standard’s powerful features.

Download RTI Shapes Demo and start learning more about DDS today!

Getting Started with Connext DDS – ELI5, please.

Thu, 03/16/2017 - 16:53

One of my favorite subreddits is r/ELI5. For those of you who might not know, ELI5 is a forum dedicated to offering up explanations of user-submitted topics and concepts in a very specific way – explaining them in a way that even a 5-year-old would understand, hence ELI5 (Explain it Like I’m 5).

ELI5 is a pretty popular subreddit. Why? Well, I believe it’s because there are tons of things we don’t know much about (we’re all experts in one area or another, but we don’t know everything!), and these posts give us a chance to gain some basic knowledge outside our area of expertise. Making information simple benefits everyone. Simplicity doesn’t mean a lack of complexity. Being able to take a complex subject that you’ve spent years immersed in, and distill it down to some facts and anecdotes that provide a level of working understanding, is amazing – it makes information accessible.

ELI5 doesn’t mean the thing you’re describing isn’t interesting, valuable or worthy of more time and attention. Being able to ELI5 allows people with little to no domain knowledge or context on these more complex and nuanced subjects to understand the basics and to incorporate those basics into other things. It’s general, but it’s useful. If you can give a 5-year-old a working understanding of things such as what is a product?, why do we have a president?, or what is middleware?, you really have to understand what you’re talking about.

From my perspective, DDS is powerful – and can be complex – and we hope we’ve made it accessible enough that you can do amazing things with it.

At RTI, we’ve been working behind the scenes to bring you something new. In the spirit of my favorite subreddit, I want to introduce you to Getting Started – all the tools and information you need to get started with DDS.

We explain how to use our products, how to go from install to helloworld, what DDS is (whitepapers), how people are using it, how you can set up the basics using our full sets of configuration files and code to address your most common and challenging use cases (case+code) and more. We’ve even curated special collections of content to meet your needs so you don’t have to wade through everything. And this is only phase 1 – we have so much more information that’s just waiting to go live, and we’re excited to share it.

And as part of making sure you’re getting what you need, let us know. What would you find valuable to get up and running using DDS? What questions did you need answers to, but had trouble finding? What content did you wish was available that wasn’t when you first started using our product? Tell us or leave a comment!

Standards vs. Standardization: How to Drive Innovation in Self-Driving Cars

Tue, 03/07/2017 - 14:50

Authors: Bob Leigh & Brett Murphy

There was a great article in the NY Times recently that suggested self-driving cars may need some standards to foster innovation. This is certainly true, but the article confuses standards and standardization, suggesting that standardizing on a common automotive platform may instead stifle innovation. It is important to understand the difference between the decision to ‘standardize’ on a platform, and the very powerful impact an interoperability standard can have on an industry.

Common platforms spur innovation by creating an ecosystem and simplifying development efforts. One can choose to standardize on a proprietary platform, like the Apple iPhone, where the main goal is to develop an ecosystem and create applications for the platform itself. Standardizing on a walled-garden platform like this can certainly spur innovation like it did in the early days of the iPhone, but it also creates silos and rarely allows for broad interoperability outside of the, often proprietary, platform. App developers for smartphones had to develop and maintain at least three different versions early on in the market. Alternatively, standards, which are managed by an independent governing body, can be truly transformative for the entire industry and allow everyone to participate in developing the ecosystem. For example, TCP/IP, HTTPS and RESTful services have been transformative standards for networking and web applications. In this case, open standards provide a foundation for developing applications and systems that run almost anywhere. These two approaches are not always mutually exclusive, but they have different objectives and results.

For the IIoT (Industrial Internet of Things) to truly transform industrial systems, businesses and industries, a standards-based approach is necessary. For autonomous systems, and especially self-driving cars, this is particularly true because these systems need to bring together the best technologies from many different independent companies and research organizations, while also fostering rapid innovation. I agree with the author; the industry does not need one-size-fits-all solutions or a closed, proprietary platform. This can stifle innovation, creating closed ecosystems, siloed architectures and de facto monopolies. However, the right standards-based approach will support interoperability between different vendor solutions. The key is to identify the critical interfaces and standardize them. For example, how data is shared between applications running on different devices and systems is a key interface. The right standard will foster an open-architecture-driven ecosystem and act as a deterrent, or brake, to proprietary and closed ecosystems by being a neutral interface between competing interests.

Very few standards can accomplish this task. Given that IIoT is a relatively new technology and that there are many interoperability standards out there, how is one to choose? Fortunately, the Industrial Internet Consortium has done much of this work and has developed a very detailed analysis of IIoT connectivity standards and best practices (See the Industrial Internet Connectivity Framework (IICF)).  This document presents a connectivity stack to ensure syntactic interoperability between applications running across an IIoT system.  It assesses the leading IIoT connectivity standards and establishes criteria for core connectivity standards. Using a core connectivity standard is best practice and helps ensure secure interoperability. It details four potential core connectivity standards and the types of IIoT systems best addressed by each.

For Autonomous Vehicles, the choice couldn’t be clearer. Autonomous vehicles have created unprecedented demand for both the rapid innovation required of commercial technology and the performance, security and safety required of complex industrial systems. Comparing these requirements with the assessments in the IICF, it is clear the only connectivity standard that suitably addresses these challenges is the OMG’s DDS (Data Distribution Service) standard. DDS is playing a critical role in the IIoT revolution and is already disrupting in-car automotive technology as well. DDS acts as a common language between all the devices, applications and systems, which is especially important in Autonomous Vehicles, as this can hasten innovation and drastically lower the risk of integrating all these disparate systems. DDS offers next-generation, standards-based security, control at the data level, and a proven track record in multi-billion-dollar mission- and safety-critical systems worldwide.

It is an exciting time to be involved in this industry. The complexity of the problem, and the speed of innovation is going to create clear winners while others will struggle to stay relevant. As we have seen in the storage, computing and networking industries in the past, winning often depends on choosing the right standard. So, how will you ‘standardize’ to foster innovation?

You can learn more about DDS’s role in the IIoT, or if you want to learn about using DDS in Autonomous Vehicles, see RTI’s white paper titled Secret Sauce of Autonomous Cars and learn more about adding data-flow security with our DDS Secure product.

Industrial Internet Connectivity Document Evaluates Core Standards: DDS, OPC-UA, WebServices

Tue, 02/28/2017 - 13:31

The Industrial Internet Consortium has released an important part of its Reference Architecture guidance: its Connectivity Framework document. This is actually pretty important; this document dives into the detail on connectivity for IIoT systems, establishes criteria for evaluating connectivity technologies/standards and puts forward some likely technologies for core connectivity standards, including DDS, OPC-UA and WebServices. In other words, there is some really valuable guidance here.

What is Connectivity for IIoT Systems?

According to the Industrial Internet of Things Connectivity document, “connectivity provides the ability to exchange information amongst participants within a functional domain, across functional domains within a system and across systems. The information exchanged may include sensor updates, events, alarms, status changes, commands, and configuration updates.” More concretely, connectivity is the critical, cross-cutting function that supports interoperability within and across IIoT systems. Moving beyond the current mish-mash of proprietary and vertical-industry-specific standards to an open IIoT standards-based framework is the goal of this work.

Looking at Connectivity from a technical viewpoint, figure 1 shows where the Connectivity function lies on a Network, Connectivity, and Information stack, and it divides Connectivity into 2 different layers: Transport and Framework. The Transport layer provides technical interoperability, with “Bits and Bytes shared between endpoints, using an unambiguously defined communication protocol.” The Framework layer provides syntactic interoperability, with “Structured data types shared between endpoints. Introduces a common structure to share data; i.e., a common data structure is shared. On this level, a common protocol is used to exchange data; the structure of the data exchanged is unambiguously defined.” Addressing connectivity needs up through the syntactic interoperability provided by the connectivity framework layer and assessing connectivity framework standards is one of the important contributions of this document.

Figure 1. Connectivity, using the Networking functions below – Internet Protocol, provides the layers for communicating messages and data between system participants.

The Concept of a Core Connectivity Standard.

To ensure interoperability within and across IIoT systems, the Connectivity Framework document recommends the use of a core connectivity standard. Figure 2 shows how this core standard becomes the connectivity bus for the system, integrating native devices and applications directly and legacy, or non-core-standard devices and applications through protocol gateways or bridges. In this way non-standard entities can be “normalized” into the core connectivity standard. This core connectivity reference architecture is central to the IIC’s guidance on ensuring secure, device to device to application interoperability for IIoT systems.

Figure 2. Using a Core Connectivity Standard provides for interoperability and streamlined integration within and across IIoT systems.

Evaluating Connectivity Standards.

To reduce the integration and interoperability challenge across different IIoT systems, a key goal of the IIC, the document provides a method and template for evaluating connectivity technologies and standards for the IIoT. It includes assessments of most IIoT standards like DDS, OPC-UA, HTTP/WebServices, OneM2M, MQTT and CoAP. Many of these standards turn out to address different levels of the connectivity stack as you can see in figure 3. Further details on each standard are provided in the document.

Figure 3. IIoT connectivity standards and their location on the connectivity stack.

DDS as a Core Connectivity Standard.

From figure 3, you can see that the document assesses 4 connectivity framework standards including DDS. In addition, the Connectivity Framework document provides guidance on requirements for choosing core connectivity framework standards. A core connectivity framework must:

  • Provide syntactic interoperability
    • Provide a way to model data (a type system), e.g., DDS, OPC-UA
    • Cannot be just a “simple” messaging protocol (MQTT, CoAP, etc.)
  • Be an open standard with strong governance:
    • from SDOs like IEEE, OASIS, OMG, W3C, IETF
  • Be horizontal & neutral
  • Be stable and deployed in many industries
  • Have standards-defined core gateways to all other connectivity core standards
  • Provide core functions like publish-subscribe, request-reply, discovery, etc.
  • Meet non-functional requirements: performance, scalability, security, …
  • Meet business criteria: not require single components from single vendors, have supported SDKs, have open source implementations, etc.

In figure 4, you can see the 4 potential core connectivity framework standards assessed against these requirements. DDS supports all the requirements and is a promising standard for IIoT systems across all industries.

Figure 4. IIoT Connectivity Core Standards Criteria applied to key connectivity framework standards.

In particular, if you compare DDS with another promising connectivity framework standard, OPC-UA, from figure 5 below, you can see that they address very different system use cases. If your primary challenge is integrating software applications across an IIoT system, then DDS is a good choice. If your challenge is to provide an interface for your edge device so that system integrators can later integrate it into something like a manufacturing workcell, then OPC-UA is a good choice.

Figure 5. Non-overlapping system aspects addressed by the core connectivity framework standards.

As you can see, this IIC document provides a lot of important guidance and clarifying concepts for IIoT connectivity. You can use its IIoT connectivity standards assessment profile to assess other standards you may be interested in for your system, or use its guidance to choose among the leading standards. For more detail, download the document for yourself.

Use MATLAB to Leverage Your Live IoT Data

Thu, 02/23/2017 - 09:53

If you have ever done any data analysis from a sensor or other type of data source, you have most likely followed a process where you collect the data, convert the data and then use MATLAB to process and analyze it. MATLAB is a very well-known tool for that analysis task; collecting and converting the data so that it is usable in MATLAB, however, can take an enormous amount of time. Thanks to an integration completed by MathWorks, it is now possible to easily connect MATLAB to live data that is being published and subscribed to on DDS. With MATLAB being one of the top tools used to analyze data and DDS quickly becoming the data communications middleware of IIoT applications, this integration will enable some very rapid prototyping and test analysis for developers. This blog post will walk through a few examples of how to publish DDS data and also how to subscribe to DDS data using MATLAB.

Getting Started

To get started, you will need to make sure that both MATLAB and RTI Connext DDS are installed on your computer. For this set of examples, the following versions were used:

Once you have those installed, just follow the video at this link to complete and verify the installation:  Installation Video

Initialization

Once you have everything installed and verified, there are just a few steps to get DDS set up appropriately within MATLAB.

  • Import the datatype(s) that will be used in your project
  • Create a DDS Domain Participant
  • Create a DDS DataWriter
  • Create a DDS DataReader

Importing a datatype in MATLAB is simple. In DDS, datatypes are specified using IDL files. The MATLAB import statement can read an IDL file directly and will create the “.m” files required to work with that datatype within the MATLAB interpreter. The following MATLAB call will import a datatype called “ShapeType” from the ShapeType.idl file located in the current working directory:

>> DDS.import('ShapeType.idl','matlab','f')

That datatype is now available to use when creating your DataReaders and DataWriters of topics in DDS. Also note that once the import has been done, this step no longer has to be run in the future; the type will be available in MATLAB going forward. The next thing to do to get DDS discovery going is to create a DDS Domain Participant. That can be accomplished with this call:

>> dp = DDS.DomainParticipant;

Using this DomainParticipant (dp) object, you can then create both DataWriter and DataReader objects. The following two commands will add a DataWriter object and a DataReader object to the dp, specifying their type to be the newly created “ShapeType” and their topics to be “Triangle” and “Square” respectively.

>> dp.addWriter('ShapeType','Triangle')
>> dp.addReader('ShapeType','Square')

Subscribing to Data in Shapes Demo

The ShapeType is used so that it will communicate with the standard RTI Shapes Demonstration application (Shapes) that is provided by RTI.  Shapes enables the creation of both DataWriters and DataReaders of “Square”, “Circle” and “Triangle” topics that are in turn based on the “ShapeType” datatype.  For more information on how to use the Shapes application, click here to view our video tutorial.

In Shapes, the next step is to create a subscriber of Triangle. In the next screen just leave all the other QoS options as default.

Publishing Data in MATLAB

Now that we have the DataWriter set up in MATLAB to send out ShapeType on the Triangle topic, and the Shapes Demo set up to receive the publication, let’s exercise the writer. The following commands will populate the fields of the ShapeType and then publish the data on the Triangle Topic:

%% create an instance of ShapeType
myData = ShapeType;
myData.x = int32(75);
myData.y = int32(100);
myData.shapesize = int32(50);
myData.color = 'GREEN';
%% write data to DDS
dp.write(myData);

The result on the Triangle Topic within the Shapes Demo will be a single Green Triangle shown here:

Some more interesting use cases of publishing Triangle within MATLAB are:

%% Publish out Green Triangles in a line at 1 Hz
for i=1:10
    myData.x = int32(20 + 10*i);
    myData.y = int32(40 + 10*i);
    dp.write(myData);
    pause(1);
end

%% Publish out Green Triangles in a circle pattern at 20 Hz
for i=1:1000
    angle = 10*pi * (i/200);
    myData.x = int32(100 + (50 * cos(angle)));
    myData.y = int32(100 + (50 * sin(angle)));
    myData.shapesize = int32(40);
    myData.color = 'GREEN';
    dp.write(myData);
    pause(0.05);
end

The resulting outputs in the Shapes Demo are, respectively:


Publishing Data in Shapes Demo

In the Shapes demonstration, create a publisher of Square.  In the next screen just pick a color and leave all the other QoS options as default.  The following screenshot shows the Square Publish screen.  For my demonstration, I have chosen an Orange Square.  This will publish the X,Y Position on the screen every 30 msec.


Subscribing to Data in MATLAB

If you remember, earlier we added a Square Topic DataReader to the Domain Participant in MATLAB. We will use this DataReader to subscribe to the data that we are now publishing from the Shapes Demonstration. The following commands in MATLAB will read 10 samples at 1 Hz.

%% read data
for i=1:10
    dp.read()
    pause(1);
end

The resulting output in MATLAB will be 10 reports of the following:

Something More Interesting

Now that we have both directions going, let’s do something more creative with the data. First we will read in the Square data, modify it to switch the X and Y coordinates, and then republish it onto a RED Triangle. Second, we will take the resulting position data and plot it directly within MATLAB. These are the commands to use in MATLAB to accomplish that.

%% allocate arrays of 100 elements
xArray = zeros(1,100);
yArray = zeros(1,100);
%% run a loop to collect data and store it into the arrays
%% also switch up the X and Y coordinates and then republish onto
%% the Triangle topic
for i=1:100
    [myData, status] = dp.read();
    if ~isempty(myData)
        x = myData(1).x;
        y = myData(1).y;
        xArray(i) = x;
        yArray(i) = y;
        myData(1).y = x;
        myData(1).x = y;
        myData(1).color = 'RED';
        dp.write(myData(1));
    end
    pause(0.05)
end
%% Plot the X Position Data
t = 1:100;
plot(t,xArray);
legend('xPos');
xlabel('Time'), ylabel('Position');
title('X Positions');

The resulting output in the Shapes Demo will be a Red Triangle moving opposite to the Orange Square, and a plot of the X Position data will be generated within MATLAB:


As you can see, the integration of DDS with MATLAB is very simple to use and makes it very easy to collect data, inject data and analyze data. For this demonstration, we used the simple Shapes application, but the data can just as easily be your own application data. If you would like to find out more about the MATLAB integration with RTI Connext DDS, please visit this page on the MathWorks site: MATLAB DDS Integration. If you’d like to learn more about using Connext DDS, click here to gain access to our developer resources.

Well Being over Ethernet

Thu, 02/02/2017 - 14:54

Guest Author: Andrew Patterson, Business Development Director for Mentor Graphics’ embedded software division (Thank you, Andrew!)

Mentor Embedded on the NXP Smarter World Truck 2017

One of the larger commercial vehicles present at CES 2017 was the NXP® Smarter World Truck – an 18-wheeler parked right outside the Convention Center.  It contained over 100 demonstrations making use of NXP products showing some of the latest innovations in home-automation, medical, industrial and other fields.  Mentor Embedded, together with RTI, worked with NXP to set up a medical demonstration that showed data aggregation in real-time from medical sensors. By collecting medical data, and analyzing it in real time, either locally or in a back-office cloud, a much quicker and more accurate diagnosis of any medical condition can be possible.  Mentor Embedded’s aggregation gateway made use of the multicore NXP i.MX6, a well-established platform, running our own secure Mentor Embedded Linux®.  The technology we specifically wanted to highlight in this example was DDS (Data Distribution Service), implemented by RTI’s Connext® DDS Professional.  The DDS communication protocol, based on a physical Ethernet network, allows multiple sensor nodes to link to a hub or gateway, so it is appropriate for many medical and industrial applications where multi-node data needs to be collected securely and reliably.

Traditional patient monitoring systems have made use of client/server architectures, but these can be inflexible if reconfiguration changes are needed, and they don’t necessarily scale to a large number of clients in a large-scale medical or industrial installation. DDS uses a “publisher” and “subscriber” concept – it is easy to add new publishers and subscribers to the network without any other architecture changes, so the system is scalable.

In the publish-subscribe model there is no central data server – data flows directly from the patient monitor source to the gateway destination.  In our demo medical system, the data sources are individual sensors that put data onto the Ethernet network when the new readings are available.  Data is tagged for reading and accessed by any registered subscriber.  Once received by the subscriber gateway, the data can be uploaded to a cloud resource for further analysis and comparisons made with historical readings. Further trend analysis can be made over time.

The process for adding a new node to a publish-subscribe network is straightforward. A new data element announces itself to the network when it attaches, optionally describing the types and formats of the data it provides. Subscribers then identify themselves to the data source to complete the system reconfiguration.

Mentor Embedded and RTI medical applications demo where multi-node data needs to be collected securely and reliably

DDS provides a range of communication data services to support a variety of application needs, ranging from guaranteed command and control, to real-time data transmission. For example, if it is required to send a “halt” command to a specific node, there is a data service type that guarantees error-free delivery, so sensor data transmission stops immediately. There are also time-sensitive modes, useful when there is time-sensitive data, which require minimum network latency.  Less time-critical data can make use of a “best effort” service, where transmission is scheduled as a lower priority than the time-sensitive communication.

Our demonstration setup is shown in the picture on the left in the NXP Smarter World Truck 2017. The NXP i.MX6 quad core system was linked to a 10” touch-screen display, showing patient graphs.  The Mentor Embedded Linux operating system included the RTI Connext DDS protocol stack, the necessary drivers for high-performance graphics, and the Ethernet network connections. Other options include a fastboot capability and wireless communication links for cloud-connectivity.  For more information please visit Mentor Embedded Linux.

To see when the NXP Smarter World Truck is coming near you, visit the schedule at iot.nxp.com/americas/schedule – it is being updated frequently, so keep a watch on it!

Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

2nd Version of the Industrial Internet Reference Architecture is Out with Layered Databus

Tue, 01/31/2017 - 00:00

A year and a half ago the IIC released the first version of the Industrial Internet Reference Architecture (IIRA) – now the second version (v1.8) is out. It includes tweaks, updates and improvements, the most important or interesting of which is a new Layered Databus Architecture Pattern. RTI contributed this new architecture pattern in the Implementation Viewpoint of the IIRA because we’ve seen it deployed by hundreds of organizations that use DDS. Now it’s one of the 3 common implementation patterns called out by the new version of the IIRA.

So, what is a databus? According to the IIC’s Vocabulary document, “a databus is a data-centric information-sharing technology that implements a virtual, global data space, where applications read and update data via a publish-subscribe communications mechanism. Note to entry: key characteristics of a databus are (a) the applications directly interface with the operational data, (b) the databus implementation interprets and selectively filters the data, and (c) the databus implementation imposes rules and manages Quality of Service (QoS) parameters, such as rate, reliability and security of data flow.”

For those who know the DDS standard, this should sound familiar. You can implement a databus with a lower level protocol like MQTT, but DDS provides all the higher-level QoS, data handling, and security mechanisms you will need for a full featured databus.

As we look across the hundreds of IIoT systems DDS users have developed, what emerges is a common architecture pattern with multiple databuses layered by communication QoS and data-model needs. As the figure below shows, databuses are usually implemented at the edge in the smart machines or lowest-level subsystems – a turbine, a car, an oil rig or a hospital room. Above those, one or more databuses integrate these smart machines or subsystems, carrying data between them and up to the higher-level control center or backend systems. The backend or control-center layer is often the highest-layer databus in the system, but there can be more than these three layers. It’s in the control-center layer (which could be the cloud) that we find the data historians, user interfaces, high-level analytics and other top-level applications. From this layer, it’s straightforward to zero in on a particular data publication at any layer of the system as needed, and it’s from this highest layer that we usually see integration with business and IT systems.

The Layered Databus Architecture Pattern: one of three implementation patterns in the newly released Industrial Internet Reference Architecture v1.8.

Why use a layered databus architecture? As the new IIRA says, you get these benefits:

  • Fast device-to-device integration – with delivery times in milliseconds or microseconds
  • Automatic data and application discovery – within and between databuses
  • Scalable integration – comprising hundreds of thousands of machines, sensors and actuators
  • Natural redundancy – allowing extreme availability and resilience
  • Hierarchical subsystem isolation – enabling development of complex system designs

If you want to dig into the databus concept, especially how it compares with a database (both are data-centric patterns for integrating distributed systems, but they differ in how they work with the data), take a look at this earlier blog post on databus versus database.

In addition to the new IIRA release, the IIC is getting ready to release an important document on the Connectivity Framework for its reference architecture. Look for much more detail soon on this document, which sets out core connectivity standards for the Industrial Internet.

A Foggy Forecast for the Industrial Internet of Things

Tue, 01/24/2017 - 16:18

Signs on I-280 up the San Francisco peninsula proclaim it the “World’s Most Beautiful Freeway.” It’s best when the fog rolls over the hills into the valley, as in this picture I took last summer.

That fog is not just pretty, it’s also the natural refrigerator responsible for California’s famously perfect weather. Clouds in the right place work wonders.

What is Fog? 

This is a perfect analogy for the impending future of Industrial Internet of Things (IIoT) computing. In weather, fog is the same thing as a cloud, only close to the ground. In the IoT, fog is cloud technology brought close to the things. Neither is a precise term, but it’s true in both cases: clouds in the right place work wonders.

The major industry consortia, including the Industrial Internet Consortium (IIC) and the OpenFog Consortium, are working hard to better define this future. All agree that many of the qualities that drive the spectacular success of the cloud must extend beyond the data center. They also agree that the real world contains challenges that cloud systems do not handle. And they bandy about names and brand positioning; see the sidebar for a quick weather map. By any name, the fog – layered edge computing – is critical to the operation of the industrial infrastructure.

Perhaps the best way to understand fog is to examine real use cases.

Example: Connected Medical Devices

Consider first the coming future of intelligent medical systems. The driving issue is an alarming fact: the third leading cause of death in the US is hospital error. Despite extensive protocols that check and recheck assumptions, device alarms, training on alarm fatigue, and years of experience, the sad truth is that hundreds of thousands of people die every year because of miscommunications and errors. It is increasingly clear that compensating for human error in such a complex environment is not the solution. The best path is to use technology to take better care of patients.

The Integrated Clinical Environment standard is a leading effort to create an intelligent, distributed system to monitor and care for patients.  The key idea is to connect medical devices to each other and to an intelligent “supervisory” computing function.  The supervisor acts like a tireless member of the care team, checking patient status and intelligently alerting human caretakers or even taking autonomous actions when there are problems.

The supervisor combines and analyzes oximeter, capnometer, and respirator readings to reduce false alarms and stop drug infusion to prevent overdose. The DDS “databus” connects all the components with real-time reliable delivery.
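To make the supervisory function concrete, here is a rough sketch of how it might subscribe to each device over the databus using the standard DDS C++ API. The topic names and the SpO2, EtCO2 and RespRate types are illustrative assumptions, and the fusion logic is left as comments.

```cpp
#include <dds/dds.hpp>
#include "Physiology.hpp"   // assumed IDL-generated types: SpO2, EtCO2, RespRate

// Sketch of the supervisory function: one reader per device topic, with the
// fusion and alarm logic left as comments.
class Supervisor {
public:
    explicit Supervisor(dds::domain::DomainParticipant& p)
        : subscriber_(p),
          spo2_topic_(p, "Oximeter"),
          etco2_topic_(p, "Capnometer"),
          resp_topic_(p, "Respirator"),
          spo2_reader_(subscriber_, spo2_topic_),
          etco2_reader_(subscriber_, etco2_topic_),
          resp_reader_(subscriber_, resp_topic_) {}

    void poll() {
        auto spo2 = spo2_reader_.take();
        auto etco2 = etco2_reader_.take();
        auto resp = resp_reader_.take();
        // Cross-check the three sources; only alarm (or publish a reliable
        // "stop infusion" command) when they agree, which suppresses
        // single-sensor false alarms.
    }

private:
    dds::sub::Subscriber subscriber_;
    dds::topic::Topic<SpO2> spo2_topic_;
    dds::topic::Topic<EtCO2> etco2_topic_;
    dds::topic::Topic<RespRate> resp_topic_;
    dds::sub::DataReader<SpO2> spo2_reader_;
    dds::sub::DataReader<EtCO2> etco2_reader_;
    dds::sub::DataReader<RespRate> resp_reader_;
};
```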

This sounds simple.  However, consider the real-world challenges.  The problem is not just the intelligence.  Current medical devices do not communicate at all.  They have no idea that they are connected to the same patient.  There’s no obvious way to ensure data consistency, staff monitoring, or reliable operation.

Worse, the above diagram shows only one patient. That’s not the reality of a hospital; hospitals have hundreds or thousands of beds. Patients move between rooms every day. The environment includes a mix of wired and wireless networks. Finding and delivering information within this treatment-critical environment is a formidable challenge.

A realistic hospital environment includes thousands of patients and hundreds of thousands of devices. Reliable monitoring technology must find the right patient and guarantee delivery of that patient’s data to the right analysis or staff. In the connectivity map above, every red dot is a “fog routing node”, responsible for passing the right data up to the next layer.

This scenario exposes the key need for a layered fog system. Complex systems like this must be built from hierarchical subsystems. Each subsystem shares internal data, with possibly complex dataflow, to execute its functions. For instance, a ventilator is a complex device that controls gas flows, monitors patient state, and delivers assisted breathing. Internally, it includes many sensors, motors and processors that share this data. Externally, it presents a much simpler interface that conveys the patient’s physiological state. Each of the hundreds of types of devices in a hospital faces a similar challenge. The fog computing system must exchange the right information up the chain at each level.

Note that this use case is not a good candidate for cloud-based technology.  These machines must exchange fast, real-time data flows, such as signal waveforms, to properly make decisions.  Also, patient health is at stake.  Thus, each critical component will need a very reliable connection and even redundant implementation for failover.  Those failovers must occur in a matter of seconds.  It’s not safe or practical to rely on remote connections.

Example: Autonomous Cars

The “driverless car” is the most disruptive innovation in transportation since the “horseless carriage”. Autonomous Drive (AD) cars and trucks will change daily life and the economy in ways that are hard to imagine. They will move people and things faster, safer, cheaper, farther, and more easily than the primitive “bio-drive” cars of the last century. The economic impact is stunning: 30% of all US jobs will end or change; trucking, delivery, traffic control, urban transport, child & elder care, roadside hotels, restaurants, insurance, auto body, law, real estate, and leisure will never again be the same.

Autonomous car software exchanges many data types and sources. Video and Lidar sensors are very high volume; feedback control signals are fast. Infrastructure that reliably sends exactly the right information to exactly the right places at the right time makes system development much easier. The vehicle thus combines the performance of embedded systems with the intelligence of the cloud…aka fog.

Intelligent vehicles are complex distributed systems. An autonomous car combines vision, radar, lidar, proximity sensors, GPS, mapping, navigation, planning, and control. These components must work together as a reliable, safe, secure system that can analyze complex environments in real time and react fast enough to negotiate chaotic environments. Autonomy is thus a supreme technical challenge. An autonomous car is more a robot on wheels than it is a car. Automotive vendors suddenly face a very new challenge. They need fog.

Fog integrates all the components in an autonomous car design. Each of these components is a complex module on its own. As in the hospital patient monitoring case, this is only one car; fog routing nodes (red) are required to integrate subsystems and connect the car into a larger cloud-based system. This system also requires fast performance, extreme reliability, integration of many types of dataflow, and controlled module interactions. Note that cloud-based applications are also critical components. Fog systems must seamlessly merge with cloud-based applications as well.

How Can Fog Work?

So, how can this all work?  I’ve hinted at a few of the requirements above.  Connectivity is perhaps the greatest challenge.  Enterprise-class technologies cannot deliver the performance, reliability, redundancy, and distributed scale that IIoT systems need.

The key insight is that systems are all about the data.  The enabling technology is data-centricity.

A data-centric system has no hard-coded interactions between applications. When applied to fog connectivity, this concept overcomes the problems of point-to-point system integration: limited scalability, poor interoperability, and an architecture that cannot evolve. It enables plug-and-play simplicity, scalability, and exceptionally high performance.

The leading standard for data-centric connectivity is the Data Distribution Service (DDS). DDS is not like other middleware. It directly addresses real-time systems, with extensive, fine-grained control of real-time Quality of Service (QoS) parameters, including reliability, bandwidth, delivery deadlines, liveliness, resource limits, and security. It explicitly manages the communications “data model” – the types and QoS used to communicate between endpoints. It is thus a “data-centric” technology.
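As a small illustration of that fine-grained control, here is roughly what such a QoS contract looks like in the standard DDS C++ API. The VitalSigns type and topic name are illustrative assumptions carried over from the medical example.

```cpp
#include <dds/dds.hpp>
#include "VitalSigns.hpp"   // assumed IDL-generated type

// Sketch of the per-topic contract DDS lets a reader declare; the middleware
// monitors and enforces it, and violations surface as status events.
void create_monitored_reader(dds::domain::DomainParticipant& participant) {
    dds::topic::Topic<VitalSigns> topic(participant, "PatientVitals");

    dds::sub::qos::DataReaderQos qos;
    qos << dds::core::policy::Reliability::Reliable()
        // Expect a fresh sample at least every 200 ms, per patient.
        << dds::core::policy::Deadline(dds::core::Duration::from_millisecs(200))
        // Declare the writer "not alive" if it goes silent for 1 second.
        << dds::core::policy::Liveliness::Automatic().lease_duration(
               dds::core::Duration(1, 0))
        // Bound memory: keep only the 10 most recent samples per patient.
        << dds::core::policy::History::KeepLast(10);

    dds::sub::DataReader<VitalSigns> reader(
        dds::sub::Subscriber(participant), topic, qos);
    // A missed deadline or lost liveliness can then trigger an alert that a
    // monitor has gone silent, rather than being noticed hours later.
}
```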

DDS is all about the data: finding data, communicating data, ensuring fresh data, matching data needs, and controlling data.  Like a database, which provides data-centric storage, DDS understands the contents of the information it manages.  This data-centric nature, analogous to a database, justifies the term “databus”.

Databus vs. Database: The 6 Questions Every IIoT Developer Needs to Ask

Traditional communications architectures connect applications to each other directly. This connection takes many forms, including messaging, remote object-oriented invocation, and service-oriented architectures. Data-centric systems fundamentally differ: applications interact only with the data and the properties of that data. Data-centricity decouples applications, which greatly improves scalability, interoperability and integration. And because many applications can interact with the data independently, data-centricity also makes redundancy natural.

Note that the databus replaces application-to-application interaction with application-data-application interaction. This abstraction is the crux of data-centricity, and it’s absolutely critical.

Continuing the analogy above, a database implements this same trick for data-centric storage.  It saves old information that you can later search by relating properties of the stored data.  A databus implements data-centric interaction.  It manages future information by letting you filter by properties of the incoming data.  Data-centricity makes a database essential for large storage systems.  Data-centricity makes a databus a fundamental technology for large software-system integration.
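A content-filtered topic is the clearest expression of this “filter the future” idea: where a database filters stored rows with a WHERE clause, a databus filters samples that have not arrived yet. A hedged sketch, again with the illustrative VitalSigns type; the filter expression uses the SQL-like syntax DDS defines:

```cpp
#include <dds/dds.hpp>
#include "VitalSigns.hpp"   // assumed IDL-generated type

// This reader is only ever delivered tachycardic readings; everything else
// is filtered out by the middleware, not by application code.
void create_alarm_reader(dds::domain::DomainParticipant& participant) {
    dds::topic::Topic<VitalSigns> topic(participant, "PatientVitals");

    dds::topic::ContentFilteredTopic<VitalSigns> tachycardia(
        topic, "TachycardiaOnly",
        dds::topic::Filter("heart_rate > 120"));   // SQL-like filter expression

    dds::sub::DataReader<VitalSigns> reader(
        dds::sub::Subscriber(participant), tachycardia);
    // Implementations can evaluate the filter on the writer side, so samples
    // that no one wants never consume network bandwidth.
}
```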

The databus automatically discovers and connects publishing and subscribing applications.  No configuration changes are required to add a new smart machine to the network.  The databus matches and enforces QoS.  The databus insulates applications from the execution, or even existence, of other applications.  As long as its data specifications are met, an application can run successfully.

A databus also requires no servers.  It uses a protocol to discover possible connections.  All dataflow is directly peer-to-peer for the lowest possible latency.  And, with no servers to clog or fail, the fundamental infrastructure is both scalable and reliable.

To scale to systems like the examples above, we must combine hierarchical subsystems; that is central to fog. This requires a component that isolates subsystem interfaces, a “fog routing node”. Note that this is a conceptual term. It does not have to be, and often is not, implemented as a hardware device. It is usually implemented as a service, or running application. That service can run anywhere needed: on the device itself, in a separate box, or in the higher-level system. Its function is to “wrap a box around” a subsystem, hiding its complexity. The subsystem then exports only the needed data, allows only controlled access, and even presents a single security domain (certificate). And because the databus so naturally supports redundancy, highly reliable systems can simply run many parallel routing nodes.

Hierarchical systems require containment of subsystem internal data. The fog routing node maps data models between levels, controls information export, enables fast internal discovery, and maps security domains. The external interface is thus a much simpler view that hides the internal system.
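In practice the routing function is usually deployed as a configurable service (RTI Routing Service, for example) rather than hand-written code, but a small hand-rolled bridge shows the idea. The DeviceInternal and PatientStatus types, the topic names and the domain numbering are illustrative assumptions.

```cpp
#include <chrono>
#include <thread>
#include <dds/dds.hpp>
#include "DeviceInternal.hpp"   // assumed detailed type used inside the subsystem
#include "PatientStatus.hpp"    // assumed summary type exported upward

// Conceptual "fog routing node": subscribe on the machine-level databus
// (domain 0), reduce and translate, republish on the supervisory databus
// (domain 1). Internal topics never appear in domain 1 at all.
int main() {
    dds::domain::DomainParticipant inner(0);   // inside the ventilator subsystem
    dds::domain::DomainParticipant outer(1);   // hospital-level databus

    dds::topic::Topic<DeviceInternal> in_topic(inner, "VentilatorInternals");
    dds::topic::Topic<PatientStatus>  out_topic(outer, "PatientStatus");

    dds::sub::DataReader<DeviceInternal> reader(dds::sub::Subscriber(inner), in_topic);
    dds::pub::DataWriter<PatientStatus>  writer(dds::pub::Publisher(outer), out_topic);

    while (true) {
        for (const auto& s : reader.take()) {
            if (!s.info().valid()) continue;
            PatientStatus status;
            // Map only the externally relevant fields from s.data(); raw sensor
            // and motor data stays inside the subsystem.
            writer.write(status);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}
```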

RTI has immense experience with this design, with over 1000 projects.  These include fast 3kHz feedback loops for robotics, NASA KSC’s huge 300k-point launch control SCADA system, Siemens Wind Power’s largest offshore turbine farms, the Grand Coulee dam, GE Healthcare’s CT imaging and  patient monitoring product lines, almost all Navy ships of the US and its allies, Joy Global’s continuous mining machines, many pilotless drones and ground stations, Audi’s hardware-in-the-loop testing environment, and a growing list of autonomous car and truck designs.

The key benefits of a databus include:

  • Reliability: Easy redundancy and no servers to fail allow extremely reliable operation. The DDS databus supports systems that cannot tolerate being offline even for a short period, whether 5 minutes or 5 milliseconds.
  • Real-time: Databus peer-to-peer delivery easily supports latencies measured in milliseconds and even tens of microseconds.
  • Interface scale: Large software projects with more than 10 interacting modules must carefully define, coordinate, and evolve interfaces. Data-centric technology moves this responsibility from manual processes to automatic, enforced infrastructure.  RTI has experience with systems with over 1500 teams of programmers building thousands of interacting applications.
  • Data scale: When systems grow large, they must control dataflow. It’s simply not practical to send everything to every application.  The databus allows filtering by content, rate, and more.  Thus, applications receive only what they truly need.  This greatly reduces both network and processor load.  This is critical for any system with more than 1000 independently-addressable data items.
  • Architecture: Data-centricity is not easily “added” to a system. It is instead adopted as the core design.  Thus, the transformation makes sense only for next-generation IIoT designs.  Most system designs have lifecycles of many years.

Any system that meets most of these requirements should seriously consider a data-centric design.

FREE eBook: Leading Applications & Architecture for the Industrial Internet of Things

The Foggy Future

Like the California fog blanket, a cloud in the right place works wonders. Databus technology enables elastic computing by reliably bringing the data where it’s needed. It supports real-time, reliable, scalable system building. Of course, communication is only one of the required functions of the evolving fog architecture. But it is key and relatively mature, and it is thus driving many designs.

The Industrial IoT will change nearly every industry, including transportation, medical, power, oil and gas, agriculture, and more.  It will be the primary driving trend in technology for the next several decades, the technology story of our lifetimes.  Fog computing will move powerful processing currently only available in the cloud out to the field.  The forecast is foggy indeed.