RTI Blog RSS


Three Simple Steps to Achieving Peak DDS Performance

Wed, 05/10/2017 - 09:11

RTI Connext® DDS provides an order-of-magnitude performance improvement over most other messaging middleware. But occasionally we run into customers who are trying to improve the performance of their DDS communications, whether measured as throughput or as latency. In this blog, I will go through the three simple steps required to assess the performance of your system and review some of the most common ways customers have improved the performance of their DDS communications.

Step 1: What performance should you be getting?

Compare the numbers you are getting with the comprehensive DDS benchmarks that RTI provides here: https://www.rti.com/products/dds/benchmarks.

If you are not getting close to the numbers you see in the DDS benchmarks, there are a couple of things to try:

Use RTI Perftest to make sure you’re comparing apples to apples.

The configuration of the NIC and the network switch, as well as the maximum network throughput and the CPU, all have an impact on the final DDS performance results. So, to make a fair comparison, run the DDS benchmarks on your own hardware. RTI makes the DDS benchmark program, “RTI Perftest,” available in source code format with complete documentation. You can find a copy of Perftest here: https://community.rti.com/downloads/rti-connext-dds-performance-test

Make sure you are running your tests using the network interface you think you are using.

DDS enables the shared memory and UDPv4 transports by default. If shared memory is available between two nodes, DDS will use it by default. But if many network interfaces are available, DDS will only use the first four. I’ve seen developers want to test a particular network interface, say InfiniBand, but because it was not one of the first four listed, DDS was not adding it to the mix. On Windows systems, the order in which network interfaces are listed by the OS, and thus selected by DDS, is random, so the network interface you are actually using can change from run to run. DDS will even send the same data over two paths to the same endpoint if both exist, which can take up CPU time and slow throughput. You can explicitly select the interfaces you want (or do not want) using the transport QoS “allow_interfaces” and “deny_interfaces” properties. Here is a good RTI Community article on the subject: https://community.rti.com/howto/control-or-restrict-network-interfaces-nics-used-discovery-and-data-distribution.

Following is the XML for the “allow_interfaces” and “deny_interfaces” QoS properties, which let you explicitly pick the network interfaces you want (or do not want) to use:

<participant_qos>
 <property>
  <value>
   <element>
    <name>dds.transport.UDPv4.builtin.parent.deny_interfaces</name>
    <value>10.15.*</value>
   </element>
   <element>
    <name>dds.transport.UDPv4.builtin.parent.allow_interfaces</name>
    <value>10.10.*,192.168.*</value>
   </element>
  </value>
 </property>
</participant_qos>

Step 2. Use the RTI DDS tools to diagnose your performance issues. 

Use RTI Monitor to look for the number of ACKs, NACKs, dropped packets, and duplicate packets.  If these numbers are high, it can be due to several things:

  • Transport buffer sizes are too small
  • The MTU is not optimized for the switch
  • There may be too many heartbeats causing multiple resends for single NACKs, indicating the reader is not keeping up
  • Processes are CPU- or memory-bound

Use RTI Monitor or Admin Console to compare the QoS settings of the DataReaders and DataWriters. Sometimes you are not using the QoS values you think you are using.

A great way to learn about using the Admin Console and the Monitor tools is to watch a video on the RTI YouTube channel:  https://www.youtube.com/user/RealTimeInnovations.  The RTI YouTube channel has many great videos covering all aspects of using DDS.

Step 3. Now let’s start to look at your application to see how we can speed things up by changing the “shape” of the data in motion.

RTI DDS gives you many ways to fine-tune your system using QoS settings. This flexibility is great because you have a lot of control over how DDS works. But all the options can be daunting! I won’t go over every setting (this blog would quickly grow into a textbook) but I will hit on what I feel are the most important settings to check with regard to performance.

First, don’t use strict reliability if it is not needed. Strict reliability makes sure that every sample reaches every reliable destination and will re-send samples if necessary. Re-sending samples, and the structures that support re-sending, take time and memory. Many applications would be fine missing a sample very occasionally or waiting longer for it to be re-transmitted.
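
For example, a fast periodic sensor stream can often run best-effort, since the next update supersedes a lost one. Here is a minimal sketch of the QoS XML (illustrative only; remember that the reader’s reliability setting must be compatible with the writer’s):

<datawriter_qos>
 <!-- Best effort: no ACKs and no resends; a lost sample is simply dropped -->
 <reliability>
  <kind>BEST_EFFORT_RELIABILITY_QOS</kind>
 </reliability>
</datawriter_qos>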

If you do need to use strict reliability, then start with the DDS built-in profile “StrictReliable.HighThroughput”. It is a good idea in general to use the built-in profiles that RTI provides. These built-in profiles are set up by RTI to have all of the default settings needed for the most common DDS use cases. The built-in profiles can be used as-is or as the basis for your QoS configuration, tweaked for your specific needs. You can read about using DDS built-in profiles and get a working example here: https://community.rti.com/examples/built-qos-profiles
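
For instance, you can base your own profile on the built-in one and override only what you need. A sketch (note that the built-in library name varies by Connext version; in 5.x-era releases this profile lives in BuiltinQosLibExp, and the override shown is just an example):

<qos_library name="MyLibrary">
 <qos_profile name="MyProfile" base_name="BuiltinQosLibExp::Generic.StrictReliable.HighThroughput">
  <!-- Everything is inherited from the built-in profile except what you override -->
  <datawriter_qos>
   <resource_limits>
    <max_samples>512</max_samples>
   </resource_limits>
  </datawriter_qos>
 </qos_profile>
</qos_library>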

Using Extensible types (XTypes) and sequences of structures can hurt performance. DDS serializes and de-serializes data it sends and receives, and this process takes a lot longer with complicated data types.

Adjust the heartbeat_period/ACKNACK combination. In reliable communications, the DataWriter sends DDS data samples and heartbeats to reliable DataReaders. A DataReader responds to a heartbeat by sending an ACKNACK, which tells the DataWriter what the DataReader has received so far. In addition, the DataReader can request missing DDS samples (by sending an ACKNACK) and the DataWriter will respond by re-sending them. The heartbeat_period therefore controls how quickly a DataReader can acknowledge receipt of a sample or ask for a sample to be re-sent, which impacts performance. Here is an article that talks about how the heartbeat_period can impact latency and throughput.
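
The heartbeat periods live under the DataWriter’s protocol QoS. A sketch with illustrative values, not recommendations:

<datawriter_qos>
 <protocol>
  <rtps_reliable_writer>
   <!-- Nominal heartbeat rate: 100 ms -->
   <heartbeat_period>
    <sec>0</sec>
    <nanosec>100000000</nanosec>
   </heartbeat_period>
   <!-- Faster rate used once a reader is known to be missing samples: 10 ms -->
   <fast_heartbeat_period>
    <sec>0</sec>
    <nanosec>10000000</nanosec>
   </fast_heartbeat_period>
  </rtps_reliable_writer>
 </protocol>
</datawriter_qos>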

Modify the Asynchronous Publisher configuration to use flow control to lower the data rate. Sometimes if the data rate from the writer is too fast, the reader gets swamped and the resulting dropped samples and resends slow down the system. Lowering the writer’s data rate a little leaves room for repairs, etc. This gives DDS time to handle incoming data and avoids costly resends. You can use a flow controller to shape the output traffic your publisher will generate. By using an asynchronous publisher and custom flow controller you can lower the data rate. You can see a working example of how to use the asynchronous publisher here: https://community.rti.com/examples/asynchronous-publisher
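
As a sketch, a custom token-bucket flow controller is defined through DomainParticipant properties; the controller name and token values below are placeholders you would adapt to your own data rates:

<participant_qos>
 <property>
  <value>
   <element>
    <name>dds.flow_controller.token_bucket.MyFlowController.token_bucket.max_tokens</name>
    <value>300</value>
   </element>
   <element>
    <name>dds.flow_controller.token_bucket.MyFlowController.token_bucket.tokens_added_per_period</name>
    <value>300</value>
   </element>
   <element>
    <name>dds.flow_controller.token_bucket.MyFlowController.token_bucket.bytes_per_token</name>
    <value>1024</value>
   </element>
   <element>
    <name>dds.flow_controller.token_bucket.MyFlowController.token_bucket.period.nanosec</name>
    <value>10000000</value>
   </element>
  </value>
 </property>
</participant_qos>

The DataWriter then publishes asynchronously and references that controller by name:

<datawriter_qos>
 <publish_mode>
  <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
  <flow_controller_name>dds.flow_controller.token_bucket.MyFlowController</flow_controller_name>
 </publish_mode>
</datawriter_qos>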

For smaller sample sizes, use batching and/or Turbo Mode. Batching groups many small samples into a single large packet, which is more efficient to send and can result in a large throughput increase. Note that while batching increases throughput, it can hurt latency when little data is being sent (because of the added time needed to fill a batch with small samples). In high-throughput cases, though, average latency actually improves because of all the CPU time saved on the subscriber side.
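
A minimal batching sketch for the DataWriter QoS (the size and delay are illustrative):

<datawriter_qos>
 <batch>
  <enable>true</enable>
  <!-- Flush the batch when it reaches 8 KB of data... -->
  <max_data_bytes>8192</max_data_bytes>
  <!-- ...or after 10 ms, whichever comes first, to bound the added latency -->
  <max_flush_delay>
   <sec>0</sec>
   <nanosec>10000000</nanosec>
  </max_flush_delay>
 </batch>
</datawriter_qos>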

Turbo Mode is an experimental feature that uses an intelligent algorithm that adjusts the number of bytes in each batch at runtime according to current system conditions, such as write speed (or write frequency) and sample size. This intelligence gives Turbo Mode the ability to increase throughput at high message rates and avoid negatively impacting message latency at low message rates.
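
Turbo Mode is enabled through a DataWriter property; since the feature is experimental, check the documentation for your Connext version. A sketch:

<datawriter_qos>
 <property>
  <value>
   <element>
    <!-- Turbo Mode sizes batches dynamically at runtime -->
    <name>dds.data_writer.enable_turbo_mode</name>
    <value>true</value>
   </element>
  </value>
 </property>
</datawriter_qos>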

Here is an article that goes into detail on how to use batching and includes a working example: https://community.rti.com/examples/batching-and-turbo-mode

Use multicast for topics with more than a couple of subscribers. Multicast allows a publisher to send to multiple readers with a single write, greatly reducing network and publisher-side processor utilization. Note that sometimes this feature is not available at the network level. Here is a good article on how to implement multicast: https://community.rti.com/best-practices/use-multicast-one-many-data
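
On the receiving side, a DataReader can ask for delivery over a multicast group. A sketch (the address is a placeholder and must be valid and routable on your network):

<datareader_qos>
 <multicast>
  <value>
   <element>
    <!-- All readers listening on this group share a single network send -->
    <receive_address>239.255.0.1</receive_address>
   </element>
  </value>
 </multicast>
</datareader_qos>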

For reliable communications modify the Send Window size. When a reliable DataWriter writes a DDS sample, it keeps that sample in its queue until it has received acknowledgments from all of its subscribing DataReaders that the sample was received. The number of outstanding DDS samples allowed is referred to as the DataWriter’s “send window.” Once the number of outstanding DDS samples has reached the send window size, subsequent writes will block until an outstanding DDS sample is acknowledged. Anytime the writer blocks, it hurts performance. You can read about adjusting the Send Window in section 6.5.3.4 of the DDS User’s Manual.
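
The send window limits live under the writer’s reliable protocol settings; the values in this sketch are illustrative:

<datawriter_qos>
 <protocol>
  <rtps_reliable_writer>
   <!-- The window size adjusts dynamically between these bounds -->
   <min_send_window_size>64</min_send_window_size>
   <max_send_window_size>256</max_send_window_size>
  </rtps_reliable_writer>
 </protocol>
</datawriter_qos>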

Modify the transport settings. Whether you are using UDPv4, shared memory or a custom transport, having the right buffer sizes and message sizes configured is extremely important when trying to optimize performance. Following is XML for modifying the message size and buffer sizes for the UDPv4 transport:

<participant_qos>
 <property>
  <value>
   <element>
    <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
    <value>65536</value>
   </element>
   <element>
    <name>dds.transport.UDPv4.builtin.send_socket_buffer_size</name>
    <value>524288</value>
   </element>
   <element>
    <name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
    <value>2097152</value>
   </element>
  </value>
 </property>
</participant_qos>

Note that the sizes used here are suggestions for optimizing performance when using large samples. You can make these values smaller for smaller samples.

I hope this advice is helpful in getting the best performance out of your DDS Application. I’ve listed the tips I’ve found most helpful for improving DDS performance but there are other methods that can also be helpful depending on the circumstances. In order to get more information on improving throughput or latency (or really help with any other Connext DDS issue), I encourage you to check out the RTI Community portal. The RTI Community portal is an excellent source of information and support! And of course, always feel free to contact our great support department or your local Field Application Engineer for further help.

Hey, Charlie Miller! Let’s Talk About Securing Autonomous Vehicles

Wed, 05/03/2017 - 15:04

A recent Wired article on Charlie Miller (infamously known for remotely hacking and controlling a Jeep) claims that “open conversation and cooperation among companies” are necessary prerequisites to building secure autonomous vehicles. This seems rather far-fetched when so many companies are racing to dominate the future of the once-nearly-dead-but-newly-revived (remember the Big Three bailouts?) automotive industry. As naive as that part of the article sounds, what really blew my mind was the implication that the answer to re-designing security lies solely within the autonomous-vehicle industry.

The concept of security is not isolated to autonomous vehicles, so there is no benefit in pretending that’s the case. Every IIoT industry is trying to solve similar problems, and they are surprisingly open to sharing their findings. I’m not saying that Miller needs to go on a journey of enlightenment through all other industries to create the ideal solution for security. I’m saying this has already been done for us, compliments of the Industrial Internet Consortium (IIC).

The IIC consists of 250+ companies across several industries – including automotive suppliers like Bosch, Denso, and TTTech – with the same fundamental problem of balancing security, safety, performance, and of course costs for their connected systems. If Wired and Miller are looking for an open conversation, it’s happening at the IIC. The IIC published the Industrial Internet Reference Architecture, which is available to everyone for free – as in “free beer,” especially if the car is doing the driving for you! The extensions to this document are the Industrial Internet Connectivity Framework (IICF) and Industrial Internet Security Framework (IISF). These documents provide guidance from a business perspective down to implementation, and the IISF is particularly applicable as it addresses Wired’s brief mentions of securing the connectivity endpoints and the data that passes between them.

Take a ride with me and see how we might modify the connected car’s architecture to protect against potential adversaries. Since we do not have any known malicious attacks on cars, we can start with Miller’s Jeep hack. Thanks to a backdoor “feature” in the Harman Kardon head unit, Miller was able to execute unprotected remote commands quite easily. Through this initial exploit, he was able to reprogram a chip connected to the CAN Bus. From there, he had nearly full control of the car. You’re thinking, “just remove that unprotected interface,” right?

Miller would not have stopped there, so neither shall we. Assuming we could still find an exploit that granted us access to reprogram the ARM chip, Wired’s article rightly suggests establishing an authenticated application – perhaps starting with secure boot for the underlying kernel, leveraging ARM TrustZone for the next stage of critical-only software, and implementing some sort of authentication for higher-level OS and application processes. Your device endpoint might start to look like a trusted application stack (Figure 1 below). I can only guess how much this head unit costs now, but to be fair, these are valid considerations to run a trusted application. The problem now is that we haven’t actually connected to anything, let alone securely. Don’t worry, I won’t leave you by the roadside.

Figure 1. Trusted Application Stack

Many of these trusted applications connect directly to the CAN Bus, which arguably expands the attack surface to the vehicle control. The data passed between these applications is not protected from unauthorized data writers and readers. In the case of autonomous taxis, as Wired points out, potential hackers now have physical access to their target, increasing their chance of taking over an application or introducing an imposter. Now the question becomes: can applications trust each other and the data on the CAN Bus? How does the instrument cluster trust the external temperature data? Does it really need to? Maybe not, and that’s OK. However, I am pretty sure that the vehicle control needs to trust LIDAR, radar, cameras, and so on. The last thing anyone wants to worry about is a hacker remotely taking the car for a joyride.

We are really talking about data authenticity and access control: two provisions that would have further mitigated risk against Miller’s hack. Securing the legacy applications is a good step, but let’s consider the scenario where an unauthorized producer of data is introduced to the system. This trespasser can inject commands on the CAN Bus – messages that control steering and braking. The CAN Bus does not prevent unauthorized publishers of data, nor does it ensure that the data comes from the authenticated producer. I’m not suggesting that replacing the CAN Bus is the way forward – although I’m not opposed to the idea of replacing it with a more data-centric solution. Realistically, with a framework like the Data Distribution Service (DDS), we can create a layered architecture as guided by the IISF (Figure 2 below). The CAN Bus and critical drive components are effectively legacy systems whose security risk can be mitigated by creating a DDS databus barrier. New components can then be securely integrated using DDS without further compromising your vehicle control. So what is DDS? And how does it help secure my vehicle? Glad you asked.

Figure 2. Industrial Internet Security Framework Protecting Legacy Endpoints

Imagine a network of automotive sensors, controllers, and other “participants” that communicate peer-to-peer. Every participant receives only the data it needs from another participant and vice versa. With peer-to-peer, participants in that network can mutually authenticate and if our trusted applications hold up, so does our trusted connectivity. How do we secure those peer-to-peer connections? TLS, right? Possibly, but with the complexity of securing our vehicle we want the flexibility to trade off between performance and security and apply access control mechanisms.

Let’s back up a little and re-visit our conversation about the IICF, which provides guidance on connectivity for industrial control systems. The IICF identifies existing open standards and succinctly attributes them to precise functions of an Industrial IoT system. At its core, an autonomous vehicle, as cool as it sounds, is just an Industrial IoT system in a sleek aerodynamic body with optional leather seats. So what does the IICF suggest for integrating software for an Industrial IoT system, or more specifically, autonomous systems? You guessed it! DDS: an open set of standards designed and documented through open conversations by the Object Management Group (OMG). An ideal automotive solution leveraging DDS allows system applications to publish and subscribe to only messages that they need (see Figure 3 below for our view of an autonomous architecture). With this data-centric approach, we can architecturally break down messages based on criticality for safety or need for data integrity.

Figure 3. Autonomous Vehicle Data-Centric Architecture

And now that we’ve established a connectivity solution for our autonomous vehicle, we can get back to talking about security and our TLS alternative: a data-centric security solution for a data-centric messaging framework. With DDS Security, Industrial IoT system architects can use security plugins to fine-tune security and performance trade-offs, a necessary capability not offered by TLS (Figure 4 below). Authenticate only select data topics but no more? Check. Encrypt only sensitive information but no more? Check. Actually, there is more. Casting aside centralized brokers, DDS Security offers distributed access control mechanisms dictating which participants can publish or subscribe to which topics, without single points of vulnerability. This means that Miller’s unauthorized application would be denied permission to publish commands to control braking or steering. Or if Miller compromised the data in motion, the data subscriber could cryptographically authenticate the message and discard anything that doesn’t match established policies. Can we say our autonomous vehicle is now completely secure? No, because as Miller made perfectly clear, we still need more conversations. However, we can certainly say that DDS and DDS Security provide the forward-looking flexibility needed to help connect and secure autonomous systems.

Figure 4. Connext DDS Secure Pluggable Architecture
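
To make that access control concrete, here is a trimmed sketch of a grant from a DDS Security permissions document; the names and topics are invented for illustration, and a real document also carries a validity period and is signed by a permissions CA:

<permissions>
 <grant name="InstrumentClusterApp">
  <subject_name>CN=InstrumentCluster,O=ExampleOEM</subject_name>
  <allow_rule>
   <domains>
    <id>0</id>
   </domains>
   <!-- This application may read vehicle telemetry... -->
   <subscribe>
    <topics>
     <topic>VehicleTelemetry</topic>
    </topics>
   </subscribe>
  </allow_rule>
  <!-- ...but anything not explicitly allowed, such as publishing to a
       Braking or Steering topic, is denied -->
  <default>DENY</default>
 </grant>
</permissions>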

So, to Mr. Charlie Miller (and of course Mr. Chris Valasek), your work is amazing and your vision inspiring, but I think you need to look across industries if you want to talk openly about redesigning automotive architecture. When you and all the other Charlie Millers in the world want to have that open conversation, come knock on our door. At RTI, we are ready to talk to you about autonomy, Industrial IoT, safety and security, and everything else you believe should define the cars of tomorrow.

Mission: Ace the initial screening call and get asked back for in-depth interviews

Tue, 04/25/2017 - 08:25

 

Congratulations! Hopefully the tips from Mission: score an interview with a Silicon Valley company were helpful, and you have been contacted to talk to the hiring manager. Here are a few tips on how to ace the initial call.

Before the interview
  • Test your system. We often use a video call to interview candidates: Skype or Google Hangouts. Make sure your camera and microphone work. Do not use a phone for video conferencing. Sometimes we do hands-on exercises. Have a working IDE, editor and compiler installed. Lastly, take a few moments to clean up your computer background and desktop files.
  • Be professional. Dress appropriately in a shirt or blouse. A suit or tie is overkill. Also, pay attention to your environment and background. Clean up the room. Coordinate with your family or roommates that you will be in an important interview call.
  • Prepare by exploring the company website. Read about the products, download the evaluation software, check out the videos and forums. Learn as much as you can about the company. Look up the interviewer on LinkedIn; you may discover a common passion to talk about.
During the interview
  • Be friendly and personable. Smile. Show a spark.
  • Be confident, but don’t oversell. And no BS.
  • Ask clarifying questions, especially if your English is not that great.
  • Be sincere if you do not know something but try to answer anyway. For example, “I am not a Java expert, though if it works similar to C#, then …”
  • Don’t give up. Brainstorm. An interview is not a pass/fail test. One candidate felt he should have been asked back for an in-depth interview because he answered six of the ten questions correctly; in his eyes, he had passed the interview. Unfortunately, for the four questions he missed, he didn’t even try to answer them. Being able to figure out things you don’t know is one of the most important skills of an engineer.
  • Don’t play hard to get or show you are disinterested. Don’t act selectively.
  • Think of some questions to ask about the company, job, customers, etc.

Good luck getting to the in-depth interview mission. Apply now to start the process!

Why Would Anyone Use DDS Over ZeroMQ?

Thu, 04/20/2017 - 07:14

Choices, choices, choices. We know you have many when it comes to your communications, messaging and integration platforms. A little over a year ago, someone asked a great question on StackOverflow: why would anyone use DDS over ZeroMQ? In their words, “it seems that there is no benefit [to] using DDS instead of ZeroMQ.”

It’s an important question – one that we believe you should have the answer to, and so our CTO, Gerardo Pardo-Castellote, provided an answer on StackOverflow (a shorter version of his answer is provided below).

++++

In my (admittedly, biased) experience as a DDS implementer/vendor, I have found that many developers of applications find significant benefits using DDS over other middleware technologies including ZeroMQ. In fact, I see many more “critical applications” using DDS than ZeroMQ.

First a couple of clarifications:

DDS, and the RTPS protocol it uses underneath, are standards, not specific products.

There are many implementations of these standards. To my knowledge, there are at least nine different companies that have independent products and codebases that implement these standards. It does not make sense to talk about the “performance” of DDS versus ZeroMQ; you can only talk about the performance of a specific implementation. I will address the issue of performance later, but from that point of view alone the statement “the latency of ZeroMQ is better” is plainly wrong. The opposite statement would be just as wrong, of course.

When is it better to use DDS?

It is difficult to provide a short/objective answer to a question as broad as this. I am sure ZeroMQ is a good product and many people are happy using it. The same can be said about DDS. I think the best thing to do is point at some of the differences and let people decide what is important to them.

DDS and ZeroMQ are different in terms of the governance, ecosystem, capabilities, and even layer of abstraction. Some important differences:

Governance, Standards and Ecosystem

Both DDS and RTPS are open international standards from the Object Management Group (OMG). ZeroMQ is a “loose structure controlled by its contributors.” This means that with DDS there is open governance and clear OMG processes that control the specification and its evolution, as well as the IPR rules.

ZeroMQ IPR is less clear. On the web page (http://zeromq.org/docs:features), it is stated that “ZeroMQ’s libzmq core is owned by its contributors” and “The ZeroMQ organization is a loose confederation without a clear power structure, that mostly lives on GitHub. The organization’s Wiki page explains how anyone can join the Owners’ team simply by bringing in some interesting work.”

This “loose structure” may be more problematic to users that care about things like IPR pedigree, Warranty and indemnification.

Related to that, if I understood correctly, there is only one core ZeroMQ implementation (the one on GitHub), and only one company that stands behind it (iMatix). Besides that, it seems just four committers are doing most of the development work in the core (libzmq). If iMatix were to be acquired or decided to change its business model, or the main committers lost interest, the user’s only recourse would be supporting the codebase themselves.

Of course, there are many successful projects/technologies based on common ownership of the code. On the other hand, having an ecosystem of companies competing with independent products, codebases, and business models provides users with assurance regarding the future of the technology. It all depends on how big the communities and ecosystems are and how risk-averse the user is.

Features and Layer of Abstraction

Both DDS and ZeroMQ support patterns like publish-subscribe and request-reply (a new addition to DDS, the so-called DDS-RPC). But generally speaking, the layer of abstraction of DDS is higher, meaning the middleware does more “automatically” for the application. Specifically:

DDS provides for automatic discovery

In DDS you just publish/subscribe to topic names. You never have to provide IP addresses, computer names or ports. It is all handled by the built-in discovery. And it does it automatically without additional services. This means that applications can be re-deployed and integrated without recompilation or reconfiguration. 

In comparison, ZeroMQ is lower level. You must specify ports, IP addresses, etc.

DDS pub-sub is data-centric

An application can publish to a Topic, but the associated data can represent updates to multiple data-objects, each identified by key-attributes. For example, when publishing airplane positions each update can identify the “airplane ID” and the middleware can provide history, enforce QoS, update rates, etc. for each airplane separately. The middleware understands and communicates when new airplanes appear or disappear from the system.
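
As an illustration, in the XML type representation the key is just an attribute on a struct member; this sketch uses an invented type:

<types>
 <struct name="AirplanePosition">
  <!-- Each distinct airplane_id identifies a separate data-object (instance) -->
  <member name="airplane_id" type="long" key="true"/>
  <member name="latitude" type="double"/>
  <member name="longitude" type="double"/>
  <member name="altitude" type="double"/>
 </struct>
</types>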

Also, DDS can keep a cache of relevant data for the application, which it can query (by key or content) as it sees fit, e.g., read the last 5 positions of an airplane. The application is notified of changes but it is not forced to consume them immediately. This also can help reduce the amount of code the application developer needs to write.

DDS provides more support for “application” QoS

DDS supports over 22 message and data-delivery QoS policies, such as Reliability, Endpoint Liveliness, Message Persistence and delivery to late-joiners, Message expiration, Failover, monitoring of periodic updates, time-based filtering and ordering. This is all configured via simple QoS-policy settings. The application uses the same read/write API and all the extra work is done underneath.
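
As an illustration, here is a hedged sketch combining a few of those policies in DataReader QoS XML; the values are arbitrary:

<datareader_qos>
 <reliability>
  <kind>RELIABLE_RELIABILITY_QOS</kind>
 </reliability>
 <!-- Deliver previously published samples to late-joining readers -->
 <durability>
  <kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind>
 </durability>
 <!-- Keep the last 5 samples per data-object -->
 <history>
  <kind>KEEP_LAST_HISTORY_QOS</kind>
  <depth>5</depth>
 </history>
 <!-- Expect an update for each data-object at least every 100 ms -->
 <deadline>
  <period>
   <sec>0</sec>
   <nanosec>100000000</nanosec>
  </period>
 </deadline>
</datareader_qos>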

ZeroMQ approaches this problem by providing building blocks and patterns. It is quite flexible but the application has to program, assemble and orchestrate the different patterns to get the higher-level behavior. For example, to get reliable pub-sub requires combining multiple patterns as described in http://zguide.zeromq.org/page:all#toc119.

DDS supports additional capabilities like content-filtering, time-filtering, partitions, domains and more

These are not available in ZeroMQ. They would have to be built at the application layer.

DDS provides a type system and supports type extensibility and mutability

You have to combine ZeroMQ with other packages like Google protocol buffers to get similar functionality.

Security

There is a DDS-Security specification that provides fine-grained (Topic-level) security including authentication, encryption, signing, key distribution and secure multicast.

How does the performance of DDS compare with ZeroMQ?

Note that you cannot use the benchmarks of Object Computing Inc.’s “OpenDDS” implementation for comparison; as far as I know, it is not one of the fastest DDS implementations. I would recommend you take a look at some of the other implementations, such as RTI Connext DDS (our implementation), PrismTech’s OpenSplice DDS or TwinOaks’ CoreDX DDS. Of course, results are highly dependent on the actual test, network and computers used, but typical latency performance for the faster DDS implementations using C++ is on the order of 50 microseconds, not the 180 microseconds of OpenDDS. See https://www.rti.com/products/dds/benchmarks.html#CPPLATENCY

Middleware layers like DDS or ZeroMQ run on top of things like UDP or TCP, so I would expect them to be bound by what the underlying network can do. For simple cases they are likely not too different, and they will, of course, be worse than the raw transport.

Differences also come from what services they provide. So you should compare what you can get for the same level of service: for example, publishing reliably, scaling to many consumers, prioritizing information, and sending multiple flows and large data over UDP (to avoid TCP’s head-of-line blocking).

Based on the relative performance of different DDS implementations  (http://www.dre.vanderbilt.edu/DDS/), I would expect that in an apples-to-apples test, the better-performing implementations of DDS would match or exceed ZeroMQ’s performance.

That said, people rarely select the middleware that gives them the “best performance.” Otherwise, no one would use Web Services or HTTP. The selection is based on many factors; performance just needs to be as good as required to meet the needs of the application. Robustness, scalability, support, risk, maintainability, the fitness of the programming model to the domain and total cost of ownership are typically more important to making the correct decision.

Which one is best?

If you’re still undecided, or simply want more information to help you in making a decision, you’re in luck! We have the perfect blog post for you: 6 Industrial IoT Communication Solutions – Which One’s for You? [Comparison].

If you think RTI Connext DDS may be what you need, head on over to our Getting Started homepage where you’ll find a virtual treasure trove of resources to get you up and running with Connext DDS.

Mission: score an interview with a Silicon Valley company

Wed, 04/12/2017 - 08:17

RTI’s engineering team is based in Sunnyvale, CA. We also have a smaller, yet rapidly growing team in Granada, Spain.

Between Sunnyvale and Granada lie 6,000 miles; it takes an entire day to travel from one to the other, and we need to keep a 9-hour time difference in mind when organizing team meetings.

There are also quite a few differences in how people write a resume (curriculum vitae), and approach getting a job.

This blog post is a summary of my recent presentation to the engineering students at the University of Granada: “How to get hired by a Silicon Valley Company”. Many of the tips below are not just beneficial to new engineering graduates in Spain, but also to new grads in the US.

Your first mission is to be invited for an initial interview.

Your preparation started yesterday

Before you approach the graduation stage and walk to the tune of Elgar’s Pomp and Circumstance march, there are quite a few things you can do. These are things that go beyond your regular classes and assignments.

Hiring managers pay attention to the type of internships and projects you worked on. You can show your love for programming through your open source contributions or by the cool demo you built at a hackathon. Your work speaks for itself if I can download and test drive your mobile application from the Apple App Store or Google Play Store.

Beyond the technical projects, it is important to learn and practice English. Our Granada team is designed as an extension of the team in Sunnyvale. As a result, engineers in Spain will work on a project together with engineers in California. Being able to express and defend your ideas well, in English, is important. Some of us learned English while watching Battlestar Galactica (the original) or Star Trek. We may even admit to picking up phrases watching The A-Team or Baywatch. Yes, those shows are a few decades old. Learn, read, write and mostly find a way to speak English often. Go on an adventure through the Erasmus program, and practice English.

Lastly, start building your professional online profile.

  • Create a LinkedIn profile. Most often, employers will consult your LinkedIn profile, even before your resume. Please use a picture suitable for work.
  • Create a personal website with your resume, your projects, and how to contact you. Resumes and LinkedIn profiles are dull. Your personal website allows you to describe your projects in more depth and include diagrams and even videos of your demos. Consider it the illustrated addendum to your resume.
  • Share your thoughts on a blog, or on websites such as Medium.
  • Contributions to GitHub or Stack Overflow speak for themselves. You can start by adding your school assignments to your GitHub profile. However, hiring managers will look for contributions beyond the things you had to do to get a good grade.
  • Publish your applications to the Apple App Store or Google Play Store. I love to download a candidate’s applications and try them out. It takes time, effort and even guts to create a working application and share it publicly.
  • Manage your social profile carefully. Future employers may look at your Twitter rants or Facebook antics.
Drop the Europass style of resume

There are plenty of websites that give you the basics of writing a good resume: keep it to 1–2 pages and follow a simple structure (objective, education, experience and projects, skills and qualifications, and finally your accomplishments).

Here are a few Do’s and Don’ts, specifically for international candidates:

  • Write your resume in English. Make sure there are no typos. Use online services, such as the Hemingway App or Google Translate, to improve your work.
  • Focus on what you learned or did on a project. Do not just list project names, leaving the reader to guess what you did.
  • Add hyperlinks where the resume screener can get more details. And make sure the hyperlinks work.
  • Add your grades, in easy-to-understand terms. E.g., specify that you graduated first in class with 92%, rather than 6.1/7. I do get confused when I see two non-correlated grades, e.g., 3.2/4 and 8.7/10.
  • Read other US resumes to learn the lingo. A hiring manager may look specifically for whether you took a class in data structures and algorithms. In your university, that may have been covered in Programación II.
  • Customize your resume for the job.
  • Do not create any cute resume designs. No background or design touches (unless you are applying for a design job).
  • Drop the Europass resume format. I.e., do not include a picture, date of birth or multiple contact addresses. For an engineering position, I do not care about your driver’s license information. Do not use the standardized table to indicate your proficiency in various languages. Rather than rating your proficiency in German as B2, state “conversational German”.
  • Do not use long lists of keywords, technologies or acronyms.
  • A pet peeve of mine: do not list Word or Excel unless you actually developed add-ons for those applications. Similarly, only list Windows if you developed to the Windows APIs.
A cover letter allows you to make a great first impression

Before you submit your resume, craft a cover letter. Most companies do not require one; however, I recommend it. It allows you to introduce yourself in your own words. It is your first impression.

A short and well-crafted introduction letter allows you to make a more personal connection. Your intro paragraph should list the job you are applying for and why you are excited about the job and the company. Next, describe in three points why you are a great fit. Describe your successes. Do not repeat your resume. Close by asking for the interview.

You probably read this blog post because you are ready to contact RTI for a job. Let me make it easy: go to the RTI Career Page.

Good luck.

Fog Computing: IT Compute Stacks meet Open Architecture Control

Wed, 04/12/2017 - 05:05

Fog computing is getting more popular and is breaking ground as a concept for deploying the Industrial IoT. Fog computing is defined by the OpenFog Consortium as “a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things.” Looking further into the definition, the purpose is to provide low-latency, near-edge data management and compute resources to support autonomy and contextually-aware, intelligent systems.

In particular, fog computing facilitates an open, internet architecture for peer-to-peer, scalable compute systems that supports edge analytics, local monitoring and control systems. It’s this latter application to control that I think is particularly interesting.  Control systems have been in place for decades across multiple industries, and recently some of these industries have moved to create interoperable, open control architectures. In this blog, we’ll take a look at some existing open architecture frameworks that bear a resemblance to Fog and how Fog computing and these framework initiatives could benefit from cross-pollination.

Open Architecture Control

Back in 2004, the Navy started developing its Navy Open Architecture. In an effort to reduce costs and increase speed and flexibility in system procurement, the DoD pushed industry to establish and use open architectures. The purpose was to make it simpler and cheaper to integrate systems by clearly defining the infrastructure software and electronics that “glue” the subsystems or systems together. The Navy selected DDS as their publish-subscribe standard for moving data in real time across their software backplane (Figure 1 below).

Figure 1. The Navy Open Architecture functional overview. Distribution and adaptation middleware, at the center, is for integrating distributed software applications.

Fast forward to now and we find the OpenFog Consortium’s reference architecture looks very much like a modern, IT-based version of what the Navy put together back in 2004 for open architecture control. Given that this Navy Open Architecture is deployed and running successfully across multiple ships, we can feel confident that fog computing as an architectural pattern makes sense for real-world systems. Also, we can likely benefit from looking at lessons learned in the development and deployment of the Navy’s architecture.

OpenFMB

The Open Field Message Bus (OpenFMB) is a more recent edge intelligence, distributed control framework standard for smart power-grid applications. It is being developed by the SGIP (Smart Grid Interoperability Panel). Energy utilities are looking at ways to create more efficient and resilient electricity delivery systems that take advantage of clean energy and hi-tech solutions.

Instead of large, centralized power plants burning fossil fuels or using nuclear power to drive spinning turbines and generators, Distributed Energy Resources (DERs) have emerged as greener, local (at the edge of the power grid) alternatives that do not have to transmit electricity over long distances. DERs are typically clean energy solutions (solar, wind, hydro, geothermal) that provide for local generation, storage and consumption of electricity. But DERs are intermittent and need to be managed and controlled locally, as opposed to centrally, which is all that is available in the current power grid.

Distributed intelligence and edge control is the solution. The OpenFMB framework is being deployed and proven in smart grid testbeds and field systems. Looking at the OpenFMB architecture (Figure 2 below), you can see the concept of a software integration bus clearly illustrated.

Figure 2. OpenFMB architecture integrates subsystems and applications through a central, real-time publish-subscribe bus.

Like the Navy Open Architecture, the OpenFMB distributed intelligence architecture looks very much like a fog computing environment. Since OpenFMB is still under development, I would bet that the OpenFog Consortium and OpenFMB project team would benefit by collaborating.

OpenICE

Patient monitoring, particularly in intensive care units and emergency rooms, is a challenging process. There can be well over a dozen devices attached to a patient – and none of them interoperate. To integrate the data needed to make intelligent decisions about the welfare and safety of the patient, someone has to read the front-end of each device and do “sensor fusion” in their head or verbally with another person.

OpenICE, the Open Source Integrated Clinical Environment, was created by the healthcare IT community to provide an open architecture framework that supports medical device interoperability and intelligent medical application development. OpenICE (Figure 3 below) provides a central databus to integrate software applications and medical devices.

Figure 3. The OpenICE distributed compute architecture, with DDS-based databus, facilitates medical device and software application integration.

Again, the OpenICE architecture supports distributed, local monitoring, integration and control and looks very much like a fog architecture.

And now Open Process Automation

More recently, Exxon-Mobil and other process automation customers have gathered via the Open Process Automation Forum to begin defining an open architecture process automation framework. If you look at the various refineries run by Exxon-Mobil, you’ll find distributed control systems from multiple vendors. Each major provider of process automation systems or distributed control systems has its own protocols, management interfaces and application development ecosystems.

In this walled garden environment, integrating a latest and greatest subsystem, sensor or device is much more challenging. Integration costs are higher, device manufacturers have to support multiple protocols, and software application development has to be targeted at each ecosystem. The opportunity for the Open Process Automation Forum is to develop a single, IIoT based architecture that will foster innovation and streamline integration.

Looking at the Exxon-Mobil diagram below, we find, again, an architecture centered around an integration bus, which they call a real-time service bus. The purpose is to provide an open-architecture software application and device integration bus.

Figure 4. Exxon-Mobil’s vision of an open process automation architecture centered around a real-time service bus.

Again, we see a very similar architecture to what is being developed in the IIoT as fog computing.

The Opportunity

Each of these open architecture initiatives is looking to apply modern, IIoT techniques, technologies and standards to their particular monitoring, analysis and control challenges. The benefits are fostering innovation with an open ecosystem and streamlining integration with an open architecture.

In each case, a central element of the architecture is a software integration bus (in many cases a DDS databus) that acts as the software backplane facilitating distributed control, monitoring and analysis. Each group is also addressing (or needs to address) the other aspects of a functional fog computing architecture: end-to-end security, system management and provisioning, distributed data management, and more. They have the opportunity to take advantage of the other capabilities of the Industrial IoT beyond control.

We have the opportunity to learn from each effort, cross-pollinate best practices and develop a common architecture that spans multiple industries and application domains. These architectures all seem to have very similar fog computing requirements to me.

Getting Started with Connext DDS, Part Four: From Installation to Hello World, These Videos Have You Covered

Thu, 04/06/2017 - 08:52

I started my career at a defense company in the San Francisco Bay Area on a project that involved a distributed system with several hundred nodes (sensors, controllers and servers). All these nodes were networked over different physical media, including Ethernet, fiber optics and serial. One of the challenges we faced was ensuring our control systems could operate within their allotted loop times. This meant data had to arrive on time whether a node required 10 messages per second or several thousand. We needed a more effective method of communication than point-to-point links or a centralized server.

To address our most extreme cases of receiving data every handful of microseconds, a colleague of mine developed a protocol that allowed any node on the network to publish blocks of data to a fiber-optic network, analogous to distributed shared memory. The corresponding nodes would read only the messages that enabled them to compute their control algorithms and ignore all other data. This was around 2009, and little did I know at the time, this was my introduction to the concept of data-centric messaging. It so happens that the Object Management Group (OMG) had already been standardizing data-centric messaging as the Data Distribution Service (DDS), the latest version of which was approved in April 2015.

Fast forward a decade, and I have recently been hired as a product manager at Real-Time Innovations (RTI), the leading DDS vendor. Like most avid technologists ramping up on a new product, I have been eager to get past the “setup” phase so I can start seeing Connext DDS in action. To help me ramp up, my new colleagues shared these Getting Started video tutorials. With these videos, I was able to quickly build sample applications that communicated with each other over DDS. You can check out the Getting Started tutorials for yourself to see how to configure, compile and run HelloWorld examples in Java and C++.

Granted, there’s more work for me to do to get defense-grade computers talking over fiber, but here’s why I found these tutorials so helpful: they enabled me to quickly get past the beginner’s phase and hit the ground running, shortening my learning curve. Check out the tutorials and see for yourself!

Getting Started with Connext DDS, Part Three: The Essential Tool ALL DDS Developers Need to Know About

Thu, 03/30/2017 - 07:17

Before joining RTI engineering, I was a customer of RTI’s for quite some time. I started working with RTI products before Data Distribution Service (DDS) was a standard. I also happened to be one of the first users of DDS 4.0, when it was finally codified into the standard as we know it today.

I have a passion for developing tools that make using the Connext DDS products easier — because the core product is so good. I love doing this, and it is what ultimately brought me to RTI. I’m now leading the engineering team responsible for Connext DDS Tools. I enjoy helping RTI customers adopt Connext DDS and use it in their projects.

One of my favorite tools is Admin Console. It is essential for troubleshooting, configuring and monitoring all Connext DDS infrastructure services as well as visualizing data directly from your system. Admin Console minimizes troubleshooting time and effort in all stages of application development by proactively analyzing system settings and log messages. Problems get highlighted, making them easy to find and fix.

One of my RTI colleagues, Dave Seltz, recently recorded a short tutorial video to help you learn the essentials of working with the Admin Console. It allows you to quickly master troubleshooting, configuring and monitoring all Connext DDS infrastructure services. Just like other Connext Tools, the Admin Console is included in the Pro version of Connext DDS. The examples used in the video don’t require any configuration or setup and will work out of the box, once you install the product. You can try them right away by following Dave’s tutorial; simply download the free 30-day trial for Connext DDS Pro and give it a shot!

We’re heading to Munich!

Fri, 03/24/2017 - 11:46

The London Connext Conference events in 2014 and 2015 brought power DDS users together from a wide range of industries to share experiences, applications and expertise. For those of you who were unable to attend but are curious about what you missed, head over to Community and view a list of the presenters and some of the presentations (2014 and 2015). For our third year, we wanted to switch things up a bit, and the first big change to the event is the location: we’ll be hosting our two-day event in Munich!

The second change (and the one I’m most excited to announce) relates to our agenda. In the past, we’ve created an agenda that showcases our users and their work through a curated selection of keynote presentations, demonstrations, and smaller group presentations. This year, in addition to these, we’re going to be offering two workshops! The first workshop focuses on using Connext Pro Tools and the other will dive into Connext DDS Secure. During these workshops, you’ll have time to get up and running with the products, ask questions and receive answers from RTI staff, and more.

This is just a sampling of what we’ll be offering. To register now, head on over to https://www.rti.com/munich-connext-con-2017. Also, if you’ll be attending and would like to be considered for a keynote spot at this year’s conference, please visit the conference page for submission details. We can’t wait to see you there!

Getting Started with Connext DDS, Part Two: Use Shapes Demo to Learn the Basics of DDS Without Coding

Thu, 03/23/2017 - 17:03

If you’re building Industrial IoT (IIoT) systems, then you’re probably investigating the Data Distribution Service (DDS) standard. DDS is the only connectivity framework designed specifically for mission-critical IIoT systems.

IIoT applications have extraordinarily demanding requirements in terms of performance, scalability, resilience, autonomy and lifecycle management. To satisfy these requirements, DDS includes unique capabilities—differentiating it from other connectivity frameworks and low-level messaging protocols that were originally designed for consumer IoT and conventional IT applications.

To quickly learn more about DDS and how it is different, there is an easy way: RTI Shapes Demo. Shapes Demo, part two in the Getting Started series (see part one here), is a game-like, interactive application that lets you explore DDS capabilities without having to do any programming. It is a tool you can use to learn about basic (and some advanced) DDS concepts, such as publish-subscribe messaging, real-time Quality of Service (QoS), data-centric communication, automatic discovery, brokerless peer-to-peer messaging, and reliable multicast.

There are two ways to get RTI Shapes Demo: download it as a standalone application from the RTI website, or use the copy included with your Connext DDS installation.

After installing Shapes Demo, watch this simple video tutorial to help you get started quickly.

You can also check out the User’s Manual under the Help menu. Chapter 4 walks you through examples that illustrate many of the DDS standard’s powerful features.

Download RTI Shapes Demo and start learning more about DDS today!

Getting Started with Connext DDS – ELI5, please.

Thu, 03/16/2017 - 16:53

One of my favorite subreddits is r/ELI5. For those of you who might not know, ELI5 is a forum dedicated to offering up explanations of user-submitted topics and concepts in a very specific way: explaining them so that even a 5-year-old would understand, hence ELI5 (Explain It Like I’m 5).

ELI5 is a pretty popular subreddit. Why? Well, I believe it’s because there are tons of things we don’t know much about (we’re all experts in one area or another, but we don’t know everything!), and these posts give us a chance to gain some basic knowledge outside our areas of expertise. Making information simple benefits everyone. Simplicity doesn’t mean a lack of complexity. Being able to take a complex subject that you’ve spent years immersed in and distill it down to some facts and anecdotes that provide a level of working understanding is amazing – it makes information accessible.

ELI5 doesn’t mean the thing you’re describing isn’t interesting, valuable or worthy of more time and attention. Being able to ELI5 allows people with little to no domain knowledge or context on these more complex and nuanced subjects to understand the basics and to incorporate those basics into other things. It’s general, but it’s useful. If you can give a 5-year old a working understanding of things such as what is a product?, why do we have a president?, or what is middleware?, you really have to understand what you’re talking about.

From my perspective, DDS is powerful – and can be complex – and we hope we’ve made it accessible enough that you can do amazing things with it.

At RTI, we’ve been working behind the scenes to bring you something new. In the spirit of my favorite subreddit, I want to introduce you to Getting Started – all the tools and information you need to get started with DDS.

We explain how to use our products, how to go from install to helloworld, what DDS is (whitepapers), how people are using it, how you can set up the basics using our full sets of configuration files and code to address your most common and challenging use cases (case+code) and more. We’ve even curated special collections of content to meet your needs so you don’t have to wade through everything. And this is only phase 1 – we have so much more information that’s just waiting to go live, and we’re excited to share it.

And as part of making sure you’re getting what you need, let us know. What would you find valuable to get up and running using DDS? What questions did you need answers to, but had trouble finding? What content did you wish was available that wasn’t when you first started using our product? Tell us or leave a comment!

Standards vs. Standardization: How to Drive Innovation in Self-Driving Cars

Tue, 03/07/2017 - 14:50

Authors: Bob Leigh & Brett Murphy

There was a great article in the NY Times recently that suggested self-driving cars may need some standards to foster innovation. This is certainly true, but the article confuses standards and standardization, suggesting that standardizing on a common automotive platform may instead stifle innovation. It is important to understand the difference between the decision to ‘standardize’ on a platform, and the very powerful impact an interoperability standard can have on an industry.

Common platforms spur innovation by creating an ecosystem and simplifying development efforts. One can choose to standardize on a proprietary platform, like the Apple iPhone, where the main goal is to develop an ecosystem and create applications for the platform itself. Standardizing on a walled-garden platform like this can certainly spur innovation like it did in the early days of the iPhone, but it also creates silos and rarely allows for broad interoperability outside of the, often proprietary, platform. App developers for smartphones had to develop and maintain at least three different versions early on in the market. Alternatively, standards, which are managed by an independent governing body, can be truly transformative for the entire industry and allow everyone to participate in developing the ecosystem. For example, TCP/IP, HTTPS and RESTful services have been transformative standards for networking and web applications. In this case, open standards provide a foundation for developing applications and systems that run almost anywhere. These two approaches are not always mutually exclusive, but they have different objectives and results.

For the IIoT (Industrial Internet of Things) to truly transform industrial systems, businesses and industries, a standards-based approach is necessary. For autonomous systems and especially self-driving cars, this is particularly true because these systems need to bring together the best technologies from many different independent companies and research organizations, while also fostering rapid innovation. I agree with the author; the industry does not need one-size-fits-all solutions or a closed, proprietary platform. This can stifle innovation, creating closed ecosystems, siloed architectures and de-facto monopolies. However, the right standards-based approach will support interoperability between different vendor solutions. The key is to identify the critical interfaces and standardize them. For example, how data is shared between applications running on different devices and systems is a key interface. The right standard will foster an open architecture driven ecosystem and act as a deterrent, or brake, to proprietary and closed ecosystems by being a neutral interface between competing interests.

Very few standards can accomplish this task. Given that the IIoT is a relatively new technology and that there are many interoperability standards out there, how is one to choose? Fortunately, the Industrial Internet Consortium has done much of this work and has developed a very detailed analysis of IIoT connectivity standards and best practices (see the Industrial Internet Connectivity Framework, or IICF). This document presents a connectivity stack to ensure syntactic interoperability between applications running across an IIoT system. It assesses the leading IIoT connectivity standards and establishes criteria for core connectivity standards. Using a core connectivity standard is best practice and helps ensure secure interoperability. The document details four potential core connectivity standards and the types of IIoT systems best addressed by each.

For Autonomous Vehicles, the choice couldn't be clearer. Autonomous vehicles have created unprecedented demand for both the rapid innovation typical of commercial technology and the performance, security and safety required of complex industrial systems. Comparing these requirements with the assessments in the IICF, it is clear that the only connectivity standard that suitably addresses these challenges is the OMG's DDS (Data Distribution Service) standard. DDS is playing a critical role in the IIoT revolution and is already disrupting in-car automotive technology as well. DDS acts as a common language between all the devices, applications and systems, which is especially important in Autonomous Vehicles because it can hasten innovation and drastically lower the risk of integrating so many disparate systems. DDS offers next-generation, standards-based security, control at the data level, and a proven track record in multi-billion dollar mission- and safety-critical systems worldwide.

It is an exciting time to be involved in this industry. The complexity of the problem and the speed of innovation are going to create clear winners, while others struggle to stay relevant. As we have seen in the storage, computing and networking industries in the past, winning often depends on choosing the right standard. So, how will you 'standardize' to foster innovation?

You can learn more about DDS's role in the IIoT, and if you want to learn about using DDS in Autonomous Vehicles, see RTI's white paper titled Secret Sauce of Autonomous Cars and learn more about adding data-flow security with our DDS Secure product.

Industrial Internet Connectivity Document Evaluates Core Standards: DDS, OPC-UA, WebServices

Tue, 02/28/2017 - 13:31

The Industrial Internet Consortium has released an important part of its Reference Architecture guidance: its Connectivity Framework document. This is a significant milestone; the document dives into the details of connectivity for IIoT systems, establishes criteria for evaluating connectivity technologies and standards, and puts forward some likely technologies for core connectivity standards, including DDS, OPC-UA and WebServices. In other words, there is some really valuable guidance here.

What is Connectivity for IIoT Systems?

According to the Industrial Internet of Things Connectivity document, “connectivity provides the ability to exchange information amongst participants within a functional domain, across functional domains within a system and across systems. The information exchanged may include sensor updates, events, alarms, status changes, commands, and configuration updates.” More concretely, connectivity is the critical, cross-cutting function that supports interoperability within and across IIoT systems. Moving beyond the current mish-mash of proprietary and vertical-industry-specific standards to an open IIoT standards-based framework is the goal of this work.

Looking at connectivity from a technical viewpoint, figure 1 shows where the Connectivity function lies on a stack of Network, Connectivity and Information functions, and it divides Connectivity into two layers: Transport and Framework. The Transport layer provides technical interoperability, with "Bits and Bytes shared between endpoints, using an unambiguously defined communication protocol." The Framework layer provides syntactic interoperability, with "Structured data types shared between endpoints. Introduces a common structure to share data; i.e., a common data structure is shared. On this level, a common protocol is used to exchange data; the structure of the data exchanged is unambiguously defined." Addressing connectivity needs up through the syntactic interoperability provided by the connectivity framework layer, and assessing connectivity framework standards, is one of the important contributions of this document.

Figure 1. Connectivity, using the networking functions (Internet Protocol) below it, provides the layers for communicating messages and data between system participants.

The Concept of a Core Connectivity Standard.

To ensure interoperability within and across IIoT systems, the Connectivity Framework document recommends the use of a core connectivity standard. Figure 2 shows how this core standard becomes the connectivity bus for the system, integrating native devices and applications directly, and integrating legacy (non-core-standard) devices and applications through protocol gateways or bridges. In this way, non-standard entities can be "normalized" into the core connectivity standard. This core connectivity reference architecture is central to the IIC's guidance on ensuring secure device-to-device and device-to-application interoperability for IIoT systems.

Figure 2. Using a Core Connectivity Standard provides for interoperability and streamlined integration within and across IIoT systems.

Evaluating Connectivity Standards.

To reduce the integration and interoperability challenge across different IIoT systems, a key goal of the IIC, the document provides a method and template for evaluating connectivity technologies and standards for the IIoT. It includes assessments of the leading IIoT standards, including DDS, OPC-UA, HTTP/WebServices, oneM2M, MQTT and CoAP. Many of these standards turn out to address different levels of the connectivity stack, as you can see in figure 3. Further details on each standard are provided in the document.

Figure 3. IIoT connectivity standards and their location on the connectivity stack.

DDS as a Core Connectivity Standard.

From figure 3, you can see that the document assesses four connectivity framework standards, including DDS. In addition, the Connectivity Framework document provides guidance on requirements for choosing core connectivity framework standards. A core connectivity framework must:

  • Provide syntactic interoperability
    • Provide a way to model data, i.e., a type system (e.g., DDS, OPC-UA)
    • Not be just a "simple" messaging protocol (MQTT, CoAP, etc.)
  • Be an open standard with strong governance:
    • from SDOs like IEEE, OASIS, OMG, W3C, IETF
  • Be horizontal and neutral
  • Be stable and deployed in many industries
  • Have standards-defined core gateways to all other core connectivity standards
  • Provide core functions like publish-subscribe, request-reply, discovery, etc.
  • Meet non-functional requirements: performance, scalability, security, …
  • Meet business criteria: not require components from a single vendor, have supported SDKs, have open source implementations, etc.

In figure 4, you can see the four potential core connectivity framework standards assessed against these requirements. DDS supports all the requirements and is a promising standard for IIoT systems across all industries.

Figure 4. IIoT Connectivity Core Standards Criteria applied to key connectivity framework standards.

In particular, if you compare DDS with another promising connectivity framework standard, OPC-UA, in figure 5 below, you can see that they address very different system use cases. If your primary challenge is integrating software applications across an IIoT system, then DDS is a good choice. If your challenge is to provide an interface for your edge device so that system integrators can later integrate it into something like a manufacturing workcell, then OPC-UA is a good choice.

Figure 5. Non-overlapping system aspects addressed by the core connectivity framework standards.

As you can see, this IIC document provides a lot of important guidance and clarifying concepts for IIoT connectivity. You can use its IIoT connectivity standards assessment template to assess other standards you may be interested in for your system, or use its guidance to choose among the leading standards. For more detail, download the document for yourself.

Use MATLAB to Leverage Your Live IoT Data

Thu, 02/23/2017 - 09:53

If you have ever done any data analysis from a sensor or other type of data source, you have most likely followed a process where you collect the data, convert the data, and then use MATLAB to process and analyze the data. MATLAB is a very well-known tool for that analysis task, but collecting and converting the data so that it is usable in MATLAB can take an enormous amount of time. Thanks to an integration completed by MathWorks, it is now possible to easily connect MATLAB to live data being published and subscribed over DDS. With MATLAB being one of the top tools used to analyze data, and DDS quickly becoming the data communications middleware of IIoT applications, this integration enables very rapid prototyping and test analysis for developers. This blog post will walk through a few examples of how to publish DDS data and how to subscribe to DDS data using MATLAB.

Getting Started

To get started, you will need to make sure that both MATLAB and RTI Connext DDS are installed on your computer. For this set of examples, the following versions were used:

Once you have those installed, just follow the video at this link to complete and verify the installation:  Installation Video

Initialization

Once you have everything installed and verified, there are just a few steps to get DDS set up appropriately within MATLAB.

  • Import the datatype(s) that will be used in your project
  • Create a DDS Domain Participant
  • Create a DDS DataWriter
  • Create a DDS DataReader

Importing a datatype in MATLAB is simple. In DDS, datatypes are specified using IDL files. The MATLAB import statement can read an IDL file directly and will create the ".m" files required to work with that datatype within the MATLAB interpreter. The following MATLAB call will import a datatype called "ShapeType" from the ShapeType.idl file located in the current working directory:

>> DDS.import('ShapeType.idl','matlab','f')

Now that datatype is available to use when creating your DataReaders and DataWriters of topics in DDS. Also note that once the import has been done, this step does not have to be run again; the type will be available in MATLAB going forward. The next step in getting DDS discovery going is to create a DDS Domain Participant, which can be accomplished with this call:

>> dp = DDS.DomainParticipant;

Using this DomainParticipant (dp) object, you can then create both DataWriter and DataReader objects. The following two commands add a DataWriter object and a DataReader object to the dp, specifying their type to be the newly created "ShapeType" and their topics to be "Triangle" and "Square," respectively.

>> dp.addWriter('ShapeType','Triangle')
>> dp.addReader('ShapeType','Square')

Subscribing to Data in Shapes Demo

The ShapeType is used so that the example will communicate with the standard RTI Shapes Demo application (Shapes) that is provided by RTI. Shapes enables the creation of both DataWriters and DataReaders of the "Square," "Circle" and "Triangle" topics, which are in turn based on the "ShapeType" datatype. For more information on how to use the Shapes application, click here to view our video tutorial.

In Shapes, the next step is to create a subscriber of the Triangle topic. On the next screen, just leave all the QoS options at their defaults.

Publishing Data in MATLAB

Now that we have the DataWriter set up in MATLAB to send out ShapeType on the Triangle topic, and the Shapes Demo set up to receive the publication, let's exercise the writer. The following commands will populate the fields of the ShapeType and then publish the data on the Triangle topic:

%% create an instance of ShapeType
myData = ShapeType;
myData.x = int32(75);
myData.y = int32(100);
myData.shapesize = int32(50);
myData.color = 'GREEN';

%% write data to DDS
dp.write(myData);

The result on the Triangle Topic within the Shapes Demo will be a single Green Triangle shown here:

Some more interesting use cases of publishing Triangle within MATLAB are:

%% Publish out Green Triangles in a line at 1 Hz
for i=1:10
    myData.x = int32(20 + 10*i);
    myData.y = int32(40 + 10*i);
    dp.write(myData);
    pause(1);
end

%% Publish out Green Triangles in a circle pattern at 20 Hz
for i=1:1000
    angle = 10*pi * (i/200);
    myData.x = int32(100 + (50 * cos(angle)));
    myData.y = int32(100 + (50 * sin(angle)));
    myData.shapesize = int32(40);
    myData.color = 'GREEN';
    dp.write(myData);
    pause(0.05);
end

The resulting outputs in the Shapes Demo are, respectively:

Publishing Data in Shapes Demo

In the Shapes demo, create a publisher of the Square topic. On the next screen, just pick a color and leave all the other QoS options at their defaults. The following screenshot shows the Square publish screen. For my demonstration, I have chosen an orange square, which will publish its X,Y position on the screen every 30 msec.

Subscribing to Data in MATLAB

If you remember, earlier we added a Square topic DataReader to the Domain Participant in MATLAB. We will use this DataReader to subscribe to the data that we are now publishing from the Shapes Demo. The following commands in MATLAB will read 10 samples at 1 Hz.

%% read data
for i=1:10
    dp.read()
    pause(1);
end

The resulting output in MATLAB will be 10 reports of the following:

Something More Interesting

Now that we have both directions going, let's do something more creative with the data. First, we will read in the Square data, modify it to swap the X and Y coordinates, and then republish it onto a red Triangle. Second, we will take the resulting position data and plot it directly within MATLAB. These are the commands to use in MATLAB to accomplish that:

%% allocate arrays of 100 elements for the X and Y position histories
xArray = zeros(1,100);
yArray = zeros(1,100);

%% run a loop to collect data and store it into the arrays;
%% also swap the X and Y coordinates and republish onto
%% the Triangle topic
for i=1:100
    [myData, status] = dp.read();
    if ~isempty(myData)
        x = myData(1).x;
        y = myData(1).y;
        xArray(i) = x;
        yArray(i) = y;
        myData(1).y = x;
        myData(1).x = y;
        myData(1).color = 'RED';
        dp.write(myData(1));
    end
    pause(0.05)
end

%% Plot the X Position Data
t = 1:100;
plot(t,xArray);
legend('xPos');
xlabel('Time'), ylabel('Position');
title('X Positions');

The resulting output in the Shapes Demo will be a red Triangle moving opposite to the orange Square, and a plot will be generated within MATLAB showing the X position data:

As you can see, the integration of DDS with MATLAB is simple to use and makes it easy to collect, inject and analyze data. For this demonstration we used the simple Shapes application, but the data can just as easily be your own application data. If you would like to find out more about the MATLAB integration with RTI Connext DDS, please visit the MathWorks site: MATLAB DDS Integration. If you'd like to learn more about using Connext DDS, click here to gain access to our developer resources.

Well Being over Ethernet

Thu, 02/02/2017 - 14:54

Guest Author: Andrew Patterson, Business Development Director for Mentor Graphics’ embedded software division (Thank you, Andrew!)

Mentor Embedded on the NXP Smarter World Truck 2017

One of the larger commercial vehicles present at CES 2017 was the NXP® Smarter World Truck, an 18-wheeler parked right outside the Convention Center. It contained over 100 demonstrations making use of NXP products, showing some of the latest innovations in home automation, medical, industrial and other fields. Mentor Embedded, together with RTI, worked with NXP to set up a medical demonstration that showed real-time data aggregation from medical sensors. By collecting medical data and analyzing it in real time, either locally or in a back-office cloud, a much quicker and more accurate diagnosis of any medical condition is possible. Mentor Embedded's aggregation gateway made use of the multicore NXP i.MX6, a well-established platform, running our own secure Mentor Embedded Linux®. The technology we specifically wanted to highlight in this example was DDS (Data Distribution Service), implemented by RTI's Connext® DDS Professional. The DDS communication protocol, running here over a physical Ethernet network, allows multiple sensor nodes to link to a hub or gateway, so it is appropriate for many medical and industrial applications where multi-node data needs to be collected securely and reliably.

Traditional patient monitoring systems have made use of client/server architectures, but these can be inflexible if reconfiguration changes are needed, and they don’t necessarily scale to a large number of clients in a large-scale medical or industrial installation. DDS uses a “publisher” and “subscriber” concept – it is easy to add new publishers and subscribers to the network without any other architecture changes, so the system is scalable.

In the publish-subscribe model there is no central data server; data flows directly from the patient-monitor source to the gateway destination. In our demo medical system, the data sources are individual sensors that put data onto the Ethernet network as new readings become available. Data is tagged for reading and accessed by any registered subscriber. Once received by the subscriber gateway, the data can be uploaded to a cloud resource for further analysis and comparison with historical readings, and trend analysis can be performed over time.

The process for adding a new node to a publish-subscribe network is straightforward. A new data element announces itself to the network when it attaches, optionally describing the types and formats of the data it provides. Subscribers then identify themselves to the data source to complete the system reconfiguration.
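To make that concrete, here is a minimal sketch of such a reconfiguration using the MATLAB DDS API shown in the MATLAB post above. The PatientVitals type, its bpm field, and the HeartRate topic are hypothetical names for illustration only; a real system would define its own IDL and QoS.

%% Hypothetical example: attach a new sensor node and a gateway subscriber.
%% Assumes PatientVitals.idl defines a (made-up) PatientVitals type with a bpm field.
DDS.import('PatientVitals.idl','matlab','f');   % one-time type import

%% A new sensor node announces itself by creating a writer on the topic
sensor = DDS.DomainParticipant;
sensor.addWriter('PatientVitals','HeartRate');

%% The gateway subscribes to the same topic; no other architecture changes needed
gateway = DDS.DomainParticipant;
gateway.addReader('PatientVitals','HeartRate');

%% The sensor publishes a reading; the gateway polls for it
sample = PatientVitals;
sample.bpm = int32(72);
sensor.write(sample);
received = gateway.read();   % returns the latest sample(s), if any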

Mentor Embedded and RTI medical applications demo where multi-node data needs to be collected securely and reliably

DDS provides a range of communication data services to support a variety of application needs, ranging from guaranteed command and control to real-time data transmission. For example, if you need to send a "halt" command to a specific node, there is a data service type that guarantees error-free delivery, so sensor data transmission stops immediately. There are also time-sensitive modes, useful for time-sensitive data that requires minimal network latency. Less time-critical data can make use of a "best effort" service, where transmission is scheduled at a lower priority than the time-sensitive communication.
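In DDS terms, these service types map onto standard QoS policies such as reliability and history. As a rough sketch (not the actual configuration from this demo), a standard DDS XML QoS file might pair reliable, keep-all delivery for commands with best-effort delivery for high-rate sensor streams; the library and profile names below are made up:

<qos_library name="DemoQosLibrary">
 <!-- Hypothetical profile: guaranteed, error-free delivery for commands -->
 <qos_profile name="GuaranteedCommand">
  <datawriter_qos>
   <reliability>
    <kind>RELIABLE_RELIABILITY_QOS</kind>
   </reliability>
   <history>
    <kind>KEEP_ALL_HISTORY_QOS</kind>
   </history>
  </datawriter_qos>
  <datareader_qos>
   <reliability>
    <kind>RELIABLE_RELIABILITY_QOS</kind>
   </reliability>
  </datareader_qos>
 </qos_profile>
 <!-- Hypothetical profile: lower-priority, best-effort sensor telemetry -->
 <qos_profile name="BestEffortTelemetry">
  <datawriter_qos>
   <reliability>
    <kind>BEST_EFFORT_RELIABILITY_QOS</kind>
   </reliability>
  </datawriter_qos>
 </qos_profile>
</qos_library>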

Our demonstration setup is shown in the picture on the left, in the NXP Smarter World Truck 2017. The NXP i.MX6 quad-core system was linked to a 10” touch-screen display showing patient graphs. The Mentor Embedded Linux operating system included the RTI Connext DDS protocol stack, the necessary drivers for high-performance graphics, and the Ethernet network connections. Other options include a fastboot capability and wireless communication links for cloud connectivity. For more information, please visit Mentor Embedded Linux.

To see when the NXP Smarter World Truck is coming near you, visit the schedule at iot.nxp.com/americas/schedule – it is being updated frequently, so keep a watch on it!

Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

2nd Version of the Industrial Internet Reference Architecture is Out with Layered Databus

Tue, 01/31/2017 - 00:00

A year and a half ago the IIC released the first version of the Industrial Internet Reference Architecture (IIRA); now the second version (v1.8) is out. It includes tweaks, updates and improvements, the most important and interesting of which is a new Layered Databus Architecture Pattern. RTI contributed this new architecture pattern to the Implementation Viewpoint of the IIRA because we've seen it deployed by hundreds of organizations that use DDS. Now it's one of the three common implementation patterns called out by the new version of the IIRA.

So, what is a databus? According to the IIC’s Vocabulary document, “a databus is a data-centric information-sharing technology that implements a virtual, global data space, where applications read and update data via a publish-subscribe communications mechanism. Note to entry: key characteristics of a databus are (a) the applications directly interface with the operational data, (b) the databus implementation interprets and selectively filters the data, and (c) the databus implementation imposes rules and manages Quality of Service (QoS) parameters, such as rate, reliability and security of data flow.”

For those who know the DDS standard, this should sound familiar. You can implement a databus with a lower-level protocol like MQTT, but DDS provides all the higher-level QoS, data-handling and security mechanisms you will need for a full-featured databus.

As we look across the hundreds of IIoT systems DDS users have developed, what emerges is a common architecture pattern with multiple databuses layered by communication QoS and data-model needs. As we see in the figure below, databuses are usually implemented at the edge, in the smart machines or lowest-level subsystems such as a turbine, a car, an oil rig or a hospital room. Above those, one or more databuses integrate these smart machines or subsystems, facilitating data communications between them and with the higher-level control center or backend systems. The backend or control center layer might be the highest-layer databus in the system, but there can be more than these three layers. It's in the control center layer (which could be the cloud) that we see the data historians, user interfaces, high-level analytics and other top-level applications. From this layer, it's straightforward to zero in on a particular data publication at any layer of the system as needed, and it's from this highest layer that we usually see integration with business and IT systems.

The Layered Databus Architecture Pattern: one of three implementation patterns in the newly released Industrial Internet Reference Architecture v1.8.

Why use a layered databus architecture? As the new IIRA says, you get these benefits:

  • Fast device-to-device integration – with delivery times in milliseconds or microseconds
  • Automatic data and application discovery – within and between databuses
  • Scalable integration – comprising hundreds of thousands of machines, sensors and actuators
  • Natural redundancy – allowing extreme availability and resilience
  • Hierarchical subsystem isolation – enabling development of complex system designs

If you want to dig into the databus concept, especially as it compares with a database (similar data-centric patterns for integrating distributed systems, but different in the way they integrate via data), take a look at this earlier blog post on databus versus database.

In addition to the new IIRA release, the IIC is getting ready to release an important document on the Connectivity Framework for its reference architecture. Look for much more detail on that document, which sets out core connectivity standards for the Industrial Internet.