Papers

Publication Year: 2022

In 2016 UPC released the new version of the Ulstein Integrated Automation System (ULSTEIN IAS®). This is a major release, in which the previous system from 2006 is replaced by modern technology based on ULSTEIN X-CONNECT™. The new product, the first in its line with an Industrial Internet of Things (IIoT) backbone, offers, among a range of other features, a new Graphical User Interface (GUI) and a segmented topology, and is scalable in both size and availability.

The Configure To Order (CTO) process makes it easier to embed changes and adapt the product to fit the customer's needs. SCADA, OPC DA and PLCs are replaced with configurable software running on Linux PCs, which is vendor-independent and communicates over DDS (Data Distribution Service). DDS is one of the core technologies that enables us to more easily integrate (and be integrated) and to extend the product to fit the customer.

This paper describes the architecture and the benefits derived from using RTI Connext DDS.

 

Organization: Vanderbilt University, RTI
Publication Year: 2021

The growing number of data- and latency-sensitive Internet of Things (IoT) applications is posing significant challenges for the edge and cloud deployment of publish/subscribe services, which are required by these applications. Two independently developed technologies show promise in addressing these challenges. First, Kubernetes (K8s) provides a de-facto standard for container orchestration that can manage and scale distributed applications in the cloud. Second, OMG’s Data Distribution Service (DDS), a standardized real-time, data-centric and peer-to-peer publish/subscribe middleware, is being used in thousands of critical systems around the world. However, the feasibility of running DDS applications within K8s for latency-sensitive edge computing, and specifically the performance overhead of K8s’ network virtualization on DDS applications is not yet well-understood. To address this, in this paper we evaluate the performance overhead of several container network interface (CNI) plugins including Flannel, WeaveNet and Kube-Router installed on a hybrid (ARM+AMD) edge/cloud K8s cluster. The paper reports results from a comprehensive set of experiments conducted to measure and analyze the performance (throughput, latency, and CPU/memory usage) of containerized DDS applications from the perspectives of virtualization overhead, reliability (DDS Reliable and BestEffort QoS), transport mechanisms (UDP unicast and multicast), and security. The insights derived from this study provide concrete guidance to developers of DDS-based applications in choosing the right virtual network plugin and configurations when hosting their real-time IoT applications in real-world containerized environments.
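
For readers less familiar with the reliability dimension varied in these experiments, the sketch below shows how a DDS writer is typically switched between the Reliable and BestEffort settings using the ISO/OMG C++ DDS API. It is a minimal illustration only: the SensorData type is hypothetical and shown inline for brevity (a real application would generate it from IDL so the middleware has type support), and header locations and defaults vary by vendor and version.

```cpp
#include <cstdint>
#include <dds/dds.hpp>  // ISO C++ DDS API umbrella header; exact location varies by vendor

// Hypothetical topic type, shown inline for brevity; a real application
// would generate it from IDL so the middleware has full type support.
struct SensorData {
    int32_t id;
    double value;
};

void publish_sample(bool reliable)
{
    dds::domain::DomainParticipant participant(0);              // DDS domain 0
    dds::topic::Topic<SensorData> topic(participant, "SensorData");
    dds::pub::Publisher publisher(participant);

    // Start from the default writer QoS and override only the reliability
    // policy, which is the dimension varied in the paper's experiments.
    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << (reliable ? dds::core::policy::Reliability::Reliable()
                     : dds::core::policy::Reliability::BestEffort());

    dds::pub::DataWriter<SensorData> writer(publisher, topic, qos);
    writer.write(SensorData{1, 42.0});
}
```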

Publication Year: 2020

In this paper, a data-centric communication framework is proposed for multicast routable generic object-oriented substation event (GOOSE) messages (MRGM) over the wide area network (WAN) for effective substation-to-substation (SS2SS) and substation to control center (SS2CC) communications. In this structure, the IEC 61850 GOOSE message is transmitted over the WAN using the data distribution service (DDS) as a fast, reliable, and secure data-centric communication middleware. The main feature of this framework is its multicast capability, where several authorized subscribers can receive a published message simultaneously. This can significantly improve the system monitoring and control of the protection systems in modern smart grids, where intelligent schemes can be applied. The effectiveness of the proposed platform, in terms of total end-to-end delay between participants, is evaluated through experimental results obtained from the actual hardware-based test setup developed at the Florida International University (FIU) smart grid testbed. The results demonstrate that the latency between sending and receiving a GOOSE message among participants is within its maximum time span defined by the IEC 61850-90-5 working group for communications over the WAN.

Publication Year: 2020
The increased rate of cyber-attacks on the power system necessitates innovative solutions to ensure its resiliency. This work builds on advances in the IoT to provide a practical framework that is able to respond to multiple attacks on a network of interconnected microgrids. This paper provides an IoT-based digital twin (DT) of the cyber-physical system that interacts with the control system to ensure its proper operation. The IoT cloud provision of the energy cyber-physical system and the DT are mathematically formulated. Unlike other cybersecurity frameworks in the literature, the proposed one can mitigate individual as well as coordinated attacks. The framework is tested on a distributed control system and the security measures are implemented using cloud computing. The physical controllers are implemented using single-board computers. The practical results show that the proposed DT is able to mitigate coordinated false data injection and denial-of-service cyber-attacks.
 
Publication Year: 2019

Software-defined networks (SDNs) have caused a paradigm shift in communication networks as they enable network programmability using either centralized or distributed controllers. With the development of the industry and society, new verticals have emerged, such as Industry 4.0, cooperative sensing and augmented reality. These verticals require network robustness and availability, which forces the use of distributed domains to improve network scalability and resilience. To this aim, this paper proposes a new solution to distribute SDN domains by using Data Distribution Services (DDS). The DDS allows the exchange of network information, synchronization among controllers and auto-discovery. Moreover, it increases the control plane robustness, an important characteristic in 5G networks (e.g., if a controller fails, its resources and devices can be managed by other controllers in a short amount of time as they already know this information). To verify the effectiveness of the DDS, we design a testbed by integrating the DDS in SDN controllers and deploying these controllers in different regions of Spain. The communication among the controllers was evaluated in terms of latency and overhead.

Publication Year: 2019
Smart grid energy management with variable renewable energy resources presents many challenges to grid operation. An optimized solution to manage the available resources is necessary to achieve reliable operation. This paper presents a hierarchical distributed model predictive control (HDMPC) that solves the energy management problem with a multi-time-frame, multi-layer optimization strategy. The HDMPC combines optimization over a long time horizon for a centralized supervisory management (SM) layer with optimization over a short time horizon during high power variability for a distributed coordination management (CM) layer. The information exchange and interoperability between the layers are provided through a data-centric communication approach. The SM (upper layer) presents the grid operator with operational plans and gives guidelines to the CM (lower layer). The CM is responsible for coordinating the relationship between the centralized optimization objectives and the physical power system layer. The proposed HDMPC control was verified both numerically and experimentally. The simulation results show that the proposed control strategy is successful and combines the benefits of both centralized and distributed control for a global solution to the grid operation problem. The experimental results demonstrate the feasibility of real-time implementation of the proposed system for deployment to control future smart grid assets.
 
Publication Year: 2019

This paper presents the design and implementation of a multi-level Time Sensitive Networking (TSN) protocol based on a real-time communication platform that uses Data Distribution Service (DDS) middleware to transfer synchronous three-phase measurement data. To transfer three-phase measurement samples at very high rates, the open-source DDS protocol is exploited to shape the network's data traffic according to specific Quality of Service (QoS) profiles, leading to low packet loss and low latency by synchronizing and prioritizing the data in the network. Meanwhile, the TSN protocol enables time-synchronization of the measured data by providing a common time reference to all the measurement devices in the network, making the system less expensive, more secure, and able to maintain time-synchronization where acquiring GPS signals is a challenge. A software library was developed and used as a central Quality of Service (QoS) profile for the TSN implementation. The design and real-time simulation prototype presented in this paper take into consideration diverse scenarios at multiple levels of prioritization, including publishers, subscribers, and data packets. This allows granular control and monitoring of the data for traffic shaping, scheduling, and prioritization. The major strength of this protocol lies in the fact that it is not only real-time but also time-critical. The simulation prototype was implemented using the Real-Time Innovations (RTI) Connext connectivity framework, custom-built MATLAB classes and DDS Simulink blocks. Simulation results show that the proposed protocol achieves low latency and high throughput, which makes it a desirable option for the communication systems involved in microgrids, smart cities, military applications and potentially other time-critical applications, where GPS signals become vulnerable and data transfer needs to be prioritized.
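
As a rough sketch of the per-writer prioritization such a platform relies on, the fragment below sets the standard TRANSPORT_PRIORITY and LATENCY_BUDGET QoS policies through the ISO/OMG C++ DDS API. The PhasorSample type and the priority value are assumptions for illustration; how a given DDS implementation and network map these policies onto actual traffic shaping is vendor- and deployment-specific.

```cpp
#include <cstdint>
#include <dds/dds.hpp>

// Hypothetical measurement type; a real system would generate it from IDL.
struct PhasorSample {
    int32_t device_id;
    double magnitude;
    double angle;
};

dds::pub::DataWriter<PhasorSample> make_prioritized_writer(
    dds::pub::Publisher& publisher,
    dds::topic::Topic<PhasorSample>& topic,
    int32_t priority)
{
    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << dds::core::policy::TransportPriority(priority)  // relative priority; network mapping is deployment-specific
        << dds::core::policy::LatencyBudget(               // hint on how long samples may be delayed or batched
               dds::core::Duration::from_millisecs(1));
    return dds::pub::DataWriter<PhasorSample>(publisher, topic, qos);
}
```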

Publication Year: 2018

In this paper, we develop an adaptive and predictive power system management strategy for integrating large-scale renewable energy penetration into power systems. The variability and uncertainty of renewable energy greatly increase the difficulty of power system management, and the problem becomes even more challenging when the variable energy has large penetration. The proposed strategy relies on robust optimization under unforeseen contingencies and uncertain changes in intermittent resource output. Adaptive model predictive control is used to manipulate the set-points of the power system's energy units to maximize profit at maximum security. Simulation studies are conducted to validate the effectiveness of the introduced strategy. The results show that the proposed strategy successfully maximizes profit not only in normal operation but also under severe contingencies. In addition, the management strategy is flexible enough for customized grid-operator plans and scalable for system extensions. The developed strategy helps the management system increase the share of free renewable energy sources at minimum cost and maximum security.

Organization: Edith Cowan University
Publication Year: 2017

The convergence of Operational Technology and Information Technology is driving integration of the Internet of Things and Industrial Control Systems to form the Industrial Internet of Things. Due to the influence of Information Technology, security has become a high priority, particularly when implementations expand into critical infrastructure. At present there appears to be minimal research addressing security considerations for industrial systems that implement application-layer IoT messaging protocols such as the Data Distribution Service (DDS). Simulated IoT devices in a virtual environment using the DDSI-RTPS protocol were used to demonstrate that enumeration of devices by a non-authenticated client is possible in both active and passive modes. Further, modified sequence numbers were found to enable a potential denial-of-service attack, and malicious heartbeat messages were fashioned that were effective at denying receipt of legitimate messages.

Publication Year: 2017

Partitioning is a widespread technique that enables the execution of mixed-criticality applications in the same hardware platform. New challenges for the next generation of partitioned systems include the use of multiprocessor architectures and distribution standards in order to open up this technique to a heterogeneous set of emerging scenarios (e.g., cyber-physical systems). This work describes a system architecture that enables the use of data-centric distribution middleware in partitioned real-time embedded systems based on a hypervisor for multi-core, and it focuses on the analysis of the available architectural configurations. We also present an application-case study to evaluate and identify the possible trade-offs among the different configurations.

Publication Year: 2017

NASA Langley Research Center identified a need for a distributed simulation architecture that enables collaboration of live, virtual, and constructive nodes across its local area network with extensibility to other NASA Centers and external partners. One architecture that was prototyped and evaluated employed Data Distribution Service (DDS) middleware and the GovCloud cloud computing service. The nodes used DDS to exchange data. GovCloud was added as a potential solution to enable other Centers and external partners to join the distributed simulation through an existing trusted network, removing the need to establish case-by-case interconnection security agreements. The prototype architecture was applied to an airspace simulation of manned and unmanned vehicles exchanging Automatic Dependent Surveillance-Broadcast (ADS-B) messages. Various configurations of nodes were run and evaluated to assess the architecture with respect to upfront investment to augment a node for the architecture, integration and interoperability of nodes, performance, and connectivity and security.

Publication Year: 2017

The IETF has impressively defined Internet interoperation across 30 years of unforeseeable syntax and APIs. The IoT needs similar future-proofing, but for connected things' composable semantics, security, reliability and Quality of Service (QoS). This paper overviews these with simplifying trade-offs, taking a bottom-up approach using the Data Distribution Service (DDS). High-level semantic additions to DDS are then suggested that are backward compatible while maintaining the security, reliability and QoS of DDS. Finally, further work is suggested toward out-of-the-box composability and interoperability between common IoT data models and compliant solutions.

Organization: University of Cantabria
Publication Year: 2017

Many cyber-physical systems in the avionics domain are mission- or safety-critical systems. In this context, standard distribution middleware has recently emerged as a potential solution to interconnect heterogeneous partitioned systems, as it would bring important benefits throughout the software development process. A remaining challenge, however, is reducing the complexity associated with current distribution middleware standards which leads to prohibitive certification costs. To overcome this complexity, this work explores the use of the DDS distribution middleware standard on top of a software platform based on the ARINC-653 specification. Furthermore, it discusses how both technologies can be integrated in order to apply them in mission and safety-critical scenarios.

Publication Year: 2017

Refining and petrochemical processing facilities utilize various process control applications to raise productivity and enhance plant operation. A client-server communication model is used to integrate these highly interacting applications across the multiple network layers utilized in distributed control systems.

This paper presents an optimum process control environment by merging sequential and regulatory control, advanced regulatory control, multivariable control, unit-based process control, and plant-wide advanced process control into a single collaborative automation platform to ensure optimum operation of processing equipment for achieving maximum yield of all manufacturing facilities.

The main control module is replaced by a standard real-time server. The input/output racks are physically and logically decoupled from the controller by converting them into distributed autonomous process interface systems. Real-time data distribution service middleware is used for providing seamless cross-vendor interoperable communication among all process control applications and distributed autonomous process interface systems.

Detailed performance analysis was conducted to evaluate the average communication latency and aggregate messaging capacity among process control applications and distributed autonomous process interface systems.

The overall performance results confirm the viability of the new proposal as the basis for designing an optimal collaborative automation platform to handle all process control applications. It also does not impose any inherent limit on the aggregate data messaging capacity, making it suitable for scalable automation platforms. 

Publication Year: 2016

IoT big data real-time analytics systems need to effectively process and manage massive amounts of data from streams produced by distributed data sources. There are many challenges in deploying and managing processing logic at execution time in those systems, especially when 24x7 availability is required. Aiming to address those challenges, we have developed and tested a middleware for distributed CEP, with a data-centric and dynamic design, based on the Data Distribution Service for Real-Time Systems (OMG-DDS) specification and its extension for dynamic topics/types (DDS-XTypes). Its main advantages include the use of OMG-DDS, which is suitable for IoT applications with QoS requirements, its dynamic capabilities, and the scalable and parallel execution of the CEP rules on a dynamic set of processing nodes.

Organization: University of Cantabria
Publication Year: 2016

Modern complex embedded systems are evolving into mixed-criticality systems in order to satisfy a wide set of non-functional requirements such as security, cost, weight, timing or power consumption. Partitioning is an enabling technology for this purpose, as it provides an environment with strong temporal and spatial isolation which allows the integration of applications with different requirements into a common hardware platform. At the same time, embedded systems are increasingly networked (e.g., cyber-physical systems) and they even might require global connectivity in open environments so enhanced communication mechanisms are needed to develop distributed partitioned systems. To this end, this work proposes an architecture to enable the use of data-centric real-time distribution middleware in partitioned embedded systems based on a hypervisor. This architecture relies on distribution middleware and a set of virtual devices to provide mixed-criticality partitions with a homogeneous and interoperable communication subsystem. The results obtained show that this approach provides low overhead and a reasonable trade-off between temporal isolation and performance.

Publication Year: 2016

An increasing number of distributed real-time systems and other critical infrastructure now rely on Data Distribution Service (DDS) middleware for timely dissemination of data between system nodes. While DDS has been designed specifically for use in distributed real-time systems and exposes a number of QoS properties to programmers, it fails to lift time fully into the programming abstraction. The reason is simple: DDS cannot directly control the underlying network to ensure that messages always arrive at their destination on time.

In this paper we describe a prototype system that uses the OpenFlow SDN protocol to automatically and transparently learn the QoS requirements of DDS participants. Based on the QoS requirements, our system will manipulate the low-level packet forwarding rules in the network to ensure that the QoS requirements are always satisfied.

We use real OpenFlow hardware to evaluate how well our prototype is able to manage network contention and guarantee the requested QoS. Additionally, we evaluate how well the reliability and resilience features of a real DDS implementation are able to compensate for network contention on an unmanaged (i.e., normal) Ethernet. To the best of our knowledge, this is the first evaluation of DDS performance under extreme network contention conditions.

 

Publication Year: 2015

Condition-based maintenance (CBM) of naval assets is preferred over scheduled maintenance because CBM provides a window into the future of each asset’s performance, and recommends/schedules service only when needed. In practice, the asset’s condition indicators must be reduced, transmitted (off-ship), and mined using shore-based predictive analytics. Real-Time Innovations (RTI), Inc. in collaboration with the University of South Carolina CBM Center is developing a comprehensive, multi-disciplinary technology platform for advanced predictive analytics for the Navy’s mechanical, electrical, and IT assets on-board ships. RTI is developing an open, extensible, data-centric bus architecture to integrate shipboard asset monitoring data with shore-based predictive analysis tools. The interoperability challenge is addressed using the Model-Driven Architecture (MDA) by transforming sensor data to rigorously specified standard data models. Our MDA process includes open standards such as the OMG Data Distribution Service (DDS) and Open System Architecture for Condition-Based Maintenance (OSA-CBM), both of which have enjoyed success in the Navy. Furthermore, the Navy’s Information Assurance (IA) requirements are implemented using the OMG Secure-DDS standard. In summary, the technology will improve combat readiness using a truly interoperable data-bus for exchanging CBM data from ship-to-shore while reducing distractions to the sailors, standby inventory requirements, and decision time for analysts.

Organization: RTI Research Team
Publication Year: 2015

We discuss how RTI DDS provides a foundation for the construction of an open architecture for the latest generation of Navy ships.

Organization: University of Cantabria
Publication Year: 2015

The Data Distribution Service (DDS) standard defines a data-centric distribution middleware that supports the development of distributed real-time systems. To this end, the standard includes a wide set of configurable parameters to provide different degrees of Quality of Service (QoS). This paper presents an analysis of these QoS parameters when DDS is used to build reactive applications normally designed under an event-driven paradigm, and shows how to represent them using the real-time end-to-end flow model defined by the MARTE standard. We also present an application-case study to illustrate the use and modeling of DDS in next-generation distributed real-time systems.
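
To make the event-driven usage analyzed in this paper concrete, the sketch below shows a reader configured with reliability and deadline QoS and a listener that is called back when data arrives, using the ISO/OMG C++ DDS API. The AlarmEvent type, topic and QoS values are assumptions for illustration; a real application would generate the type from IDL and derive the QoS from its end-to-end flow requirements.

```cpp
#include <cstdint>
#include <iostream>
#include <dds/dds.hpp>

// Hypothetical event type; in practice it would be generated from IDL.
struct AlarmEvent {
    int32_t source_id;
    int32_t severity;
};

// Event-driven reception: the middleware calls back when data arrives,
// which is the reactive style whose QoS configuration the paper analyzes.
class AlarmListener : public dds::sub::NoOpDataReaderListener<AlarmEvent> {
    void on_data_available(dds::sub::DataReader<AlarmEvent>& reader) override
    {
        for (const auto& sample : reader.take()) {
            if (sample.info().valid()) {
                std::cout << "alarm from " << sample.data().source_id << '\n';
            }
        }
    }
};

void subscribe_to_alarms(dds::domain::DomainParticipant& participant,
                         dds::topic::Topic<AlarmEvent>& topic)
{
    dds::sub::Subscriber subscriber(participant);

    // Example of the kind of QoS the paper maps onto end-to-end flows:
    // reliable delivery plus a deadline on the expected update period.
    dds::sub::qos::DataReaderQos qos = subscriber.default_datareader_qos();
    qos << dds::core::policy::Reliability::Reliable()
        << dds::core::policy::Deadline(dds::core::Duration::from_millisecs(100));

    static AlarmListener listener;  // must outlive the reader
    dds::sub::DataReader<AlarmEvent> reader(
        subscriber, topic, qos,
        &listener, dds::core::status::StatusMask::data_available());
    (void)reader;  // a real application keeps the reader alive
}
```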

Organization: RTI Research Team
Publication Year: 2015

This paper discusses the Navy's Information Assurance requirements, and how the OMG DDS Security specification that we developed meets those requirements.  Secure DDS is the only commercial, DoD-tested implementation of this OMG standard.

Publication Year: 2015

The Internet of Things (IoT) paradigm has given rise to a new class of applications wherein complex data analytics must be performed in real-time on large volumes of fast-moving, heterogeneous sensor-generated data. Such data streams are often unbounded and must be processed in a distributed and parallel manner to ensure timely processing and delivery to interested subscribers. Dataflow architectures based on event-based design have served well in such applications because events support asynchrony and loose coupling, and help build resilient, responsive and scalable applications. However, a unified programming model for event processing and distribution that can naturally compose the processing stages in a dataflow while exploiting the inherent parallelism available in the environment and computation is still lacking. To that end, we investigate the benefits of blending Functional-style Reactive Programming with data distribution frameworks for building distributed, reactive, and high-performance stream-processing applications. Specifically, we present insights from our study integrating and evaluating Microsoft .NET Reactive Extensions (Rx) with OMG Data Distribution Service (DDS), which is a standards-based publish/subscribe middleware suitable for demanding industrial IoT applications. Several key insights from both qualitative and quantitative evaluation of our approach are presented.

Author(s): Sumant Tambe
Organization: Real-Time Innovations
Publication Year: 2015

We report our experience of developing and using a simple yet effective flow-based programming language and its distributed execution engine for detecting behavioral anomalies in physical assets in industrial IoT systems. Our stream processing system is built using the Reactive Extensions (Rx) library for composing asynchronous data streams and the OMG Data Distribution Service (DDS) for publish-subscribe communication over the network. Our little language is called Stream Concatenation and Coordination (StreamCoCo) due to its similarity to the UNIX shell pipes-and-filters syntax. The novelty lies in the simple declarative programming model baked into the language, which, upon detection of anomalies in a stream, takes snapshots of other streams that may be distributed. Further, dynamic parallel pipelines of stateful stream-processing operators are trivial to implement in StreamCoCo. We leverage the core capabilities of the language for infrastructure health monitoring and data analytics at the edge to assist remote human operators in problem diagnosis.

Author(s): Sumant Tambe
Organization: Real-Time Innovations
Publication Year: 2014

This whitepaper describes a powerful C++ template library to allow users to describe their types in plain C++ and use those types directly for data-centric communication over DDS. The library transforms native C++ types into equivalent run-time TypeObject representation as specified in the DDS-XTypes standard. The library obviates the need to describe application-level data-types in external representations, such as IDL, XSD, XML, and DynamicData. The users of the library can use the full expressive power of native C++ to encapsulate the application-level data-types and use the same data-types for data distribution over DDS. The types may include all the standard template library containers (e.g., vector, list, map, unordered containers, etc.), raw pointers, smart pointers, and many more. The restrictions imposed by popular serialization/deserialization tools are eliminated. The application-level data are written directly using DDS.
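
The fragment below is not the library's actual API; it is a self-contained, standard C++17 sketch of the underlying idea the whitepaper describes: describing a plain C++ struct with compile-time member descriptors so that generic code can walk the type without any external IDL, XSD or XML description. The names Member, members_of and print_members are invented for this illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <tuple>

// A plain C++ type, with no IDL/XSD/XML counterpart.
struct ShapeType {
    std::string color;
    int32_t x;
    int32_t y;
};

// A compile-time member descriptor: a name plus a pointer-to-member.
template <typename T, typename C>
struct Member {
    const char* name;
    T C::*ptr;
};

// The only per-type "adaptation" the user writes, playing the role of the
// library's type-description mechanism (names here are invented).
inline auto members_of(const ShapeType&)
{
    return std::make_tuple(
        Member<std::string, ShapeType>{"color", &ShapeType::color},
        Member<int32_t, ShapeType>{"x", &ShapeType::x},
        Member<int32_t, ShapeType>{"y", &ShapeType::y});
}

// Generic code that walks any adapted type, standing in for the library's
// serializer / run-time TypeObject builder.
template <typename T>
void print_members(const T& value)
{
    std::apply([&](const auto&... m) {
        ((std::cout << m.name << " = " << value.*(m.ptr) << '\n'), ...);
    }, members_of(value));
}

int main()
{
    print_members(ShapeType{"BLUE", 10, 20});
}
```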

Organization: Vanderbilt University
Publication Year: 2014

The OMG Data Distribution Service (DDS), which is a standard specification for data-centric publish/subscribe communications, has shown promise for use in internet of things (IoT) applications because of its loosely coupled and scalable nature, and support for multiple QoS properties, such as reliable and real-time message delivery in dynamic environments. However, the current OMG DDS specification does not define coordination and discovery services for DDS message brokers, which are used in wide area network deployments of DDS. This paper describes preliminary research on a cloud-enabled coordination service for DDS message brokers, PubSubCoord, to overcome these limitations. Our approach provides a novel solution that brings together (a) ZooKeeper, which is used for the distributed coordination logic between message brokers, (b) DDS Routing Service, which is used to bridge DDS endpoints connected to different networks, and (c) BlueDove, which is used to provide a single-hop message delivery between brokers. Our design can support publishers and subscribers that dynamically join and leave their subnetworks. 

Organization: Audi, RTI
Publication Year: 2014

As automotive electronic system design evolves, so must the HiL testbench and automotive test platforms. The fundamental functional design approach has been modular and ECU-centric, but the ECU count has steadily increased. The next big shift is to achieve functionality through the integration of multiple ECUs. Audi is responding to these challenges by radically re-thinking the architecture of the HiL test platform and defining a next-generation approach. The new approach introduces the concept of a HiL-Bus to integrate the functionality of multiple existing HiL sub-systems and meet the needs of a modular, best-in-class test ecosystem. By using a data-oriented approach, the complexity of the testbench is reduced, making it easier to integrate hardware and software products from different vendors. One of the enabling technologies (Connext DDS) is developed by Real-Time Innovations, Inc.

Organization: King Fahd University of Petroleum and Minerals (KFUPM)
Publication Year: 2014

This paper proposes a real-time Automatic Vehicle Location (AVL) and monitoring system, based on the Data Distribution Service (DDS), for the road transport of pilgrims travelling towards the city of Makkah in Saudi Arabia. DDS is a real-time publish/subscribe middleware. Using this middleware approach, we are able to locate and track a huge number of mobile vehicles and identify pilgrims for the annual Islamic gathering in the Holy City of Makkah. Performance results are demonstrated for LAN, WLAN and Bluetooth over DDS.
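
As a minimal illustration of the publish side of such a tracking system, the sketch below sends a vehicle position over DDS using the ISO/OMG C++ API. The VehicleLocation type, topic name and QoS choices are assumptions for illustration; a real deployment would define the type in IDL, typically with the vehicle identifier as the instance key.

```cpp
#include <cstdint>
#include <dds/dds.hpp>

// Hypothetical AVL update type; a real deployment would define it in IDL.
struct VehicleLocation {
    int32_t vehicle_id;
    double latitude;
    double longitude;
};

int main()
{
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<VehicleLocation> topic(participant, "VehicleLocation");
    dds::pub::Publisher publisher(participant);

    // Position updates are periodic state data: best-effort delivery with a
    // short history is common, since a newer fix supersedes a lost one.
    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << dds::core::policy::Reliability::BestEffort()
        << dds::core::policy::History::KeepLast(1);

    dds::pub::DataWriter<VehicleLocation> writer(publisher, topic, qos);
    writer.write(VehicleLocation{42, 21.4225, 39.8262});  // one illustrative position fix
    return 0;
}
```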

Publication Year: 2014

The OMG Data Distribution Service (DDS) has been deployed in many mission-critical systems and increasingly in Internet of Things (IoT) applications since it supports a loosely-coupled, data-centric publish/subscribe paradigm with a rich set of quality-of-service (QoS) policies. Effective data communication between publishers and subscribers requires dynamic and reliable discovery of publisher/subscriber endpoints in the system, which DDS currently supports via a standardized approach called the Simple Discovery Protocol (SDP). For large-scale systems, however, SDP scales poorly since the discovery completion time grows as the number of applications and endpoints increases. In order to scale to much larger IoT applications, a more efficient protocol is required.

This paper makes three contributions to overcoming the current limitations with DDS SDP. First, it describes the Content-based Filtering Discovery Protocol (CFDP), which is our new endpoint discovery mechanism that employs content-based filters to conserve computing, memory and network resources used in the DDS discovery process. Second, it describes the design of a CFDP prototype implemented using RTI Connext DDS. Third, it analyzes the results of empirical studies conducted in a testbed we developed to evaluate the performance and resource usage of our CFDP approach compared with SDP.
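
To illustrate the content-based filtering mechanism that CFDP builds on, the sketch below creates a ContentFilteredTopic with the ISO/OMG C++ DDS API so that a reader only receives samples matching an SQL-like expression. The EndpointInfo type and the filter expression are assumptions for illustration; CFDP itself applies such filtering to the built-in discovery data rather than to a user-defined topic.

```cpp
#include <cstdint>
#include <string>
#include <dds/dds.hpp>

// Hypothetical announcement type, for illustration only.
struct EndpointInfo {
    std::string topic_name;
    int32_t domain_id;
};

void subscribe_filtered(dds::domain::DomainParticipant& participant,
                        dds::topic::Topic<EndpointInfo>& topic)
{
    // A ContentFilteredTopic delivers only samples matching an SQL-like
    // expression, so the reader never handles irrelevant endpoints.
    dds::topic::ContentFilteredTopic<EndpointInfo> filtered(
        topic, "RelevantEndpoints",
        dds::topic::Filter("topic_name LIKE 'Sensor%'"));

    dds::sub::Subscriber subscriber(participant);
    dds::sub::DataReader<EndpointInfo> reader(subscriber, filtered);
    (void)reader;  // a real application keeps the reader alive and processes data
}
```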

Publication Year: 2014

Large-scale cyber-physical systems (CPS) in mission-critical areas such as transportation, health care, energy, agriculture, defense, homeland security, and manufacturing, are becoming increasingly interconnected and interdependent. These types of CPS are unique in their need to combine rigorous control over timing and physical properties, as well as functional ones, while operating dynamically, reliably and affordably over significant scales of distribution, resource consumption, and utilization. As large-scale CPS continue to evolve—and grow in scale and complexity— they will impose significant and novel requirements for a new kind of cloud computing that is not supported by conventional technologies.

Current research on networking, middleware, cloud computing, and other potentially relevant technologies does not yet adequately address the specific challenges posed by large-scale CPS. In particular, the combination of (1) geographic distribution, (2) dynamic demand for resources, and (3) rigorous behavioral requirements spanning diverse temporal and physical scales motivates a new set of research and development (R&D) challenges that must be pursued to achieve new foundations for cloud computing that can meet the needs of large-scale CPS.

To pursue these challenges, cloud computing advances are needed to establish real-time computing, communication, and control foundations rigorously at scale. Likewise, advances are needed to apply these foundations in a flexible and scalable manner to different real-world large-scale CPS challenge problems. To support both foundational and experimental R&D, a new generation of elastic infrastructure must be designed, developed, and evaluated. This paper identifies challenges, opportunities, and benefits for this work and for the large-scale CPS it targets.

Publication Year: 2014

Event-driven design is fundamental to developing resilient, responsive, and scalable reactive systems as it supports asynchrony and loose coupling. The OMG Data Distribution Service (DDS) is a proven event-driven technology for building data-centric reactive systems because it provides the primitives for decoupling system components with respect to time, space, quality-of-service, and behavior. DDS, by design, supports distribution scalability. However, with increasing core count in CPUs, building multicore-scalable reactive systems remains a challenge. Towards that end, we investigate the use of Functional Reactive Programming (FRP) for DDS applications. Specifically, this paper presents our experience in integrating and evaluating Microsoft .NET Reactive Extensions (Rx) as a programming model for DDS-based reactive stream processing applications. We used a publicly available challenge problem involving real-time complex analytics over high-speed sensor data captured during a soccer game. We compare the FRP solution with an imperative solution we implemented in C++11 along several dimensions including code size, state management, concurrency model, event synchronization, and the fitness for the purpose of "stream processing." Our experience suggests that DDS and Rx together provide a powerful infrastructure for reactive stream processing, which allows declarative specification of concurrency and therefore dramatically simplifies multicore scalability.

Publication Year: 2014

This paper introduces the Haptics-1 ISS Payload and experiment, which has been developed by ESA’s Telerobotics & Haptics Laboratory.

Haptics-1 allows conducting a first extensive set of human factor measurements and measurements of the variability of human motor-control capabilities of the upper extremity during extended exposure to microgravity. Haptics-1 consists of a high-resolution force-reflective joystick with a single degree of freedom (a force manipulandum), a touch-screen tablet PC with the experiment interface software, and all required peripherals to conduct multiple experiment protocols with crew-in-the-loop.

Haptics-1 has a flexible software framework allowing software up-load and experiment parameter changes from ground. Moreover, Haptics-1 followed an agile development process, which allowed developing the experiment in less than 16 months from scratch, up to delivery to ATV-5 for launch to ISS in summer 2014. 

*Note:* Haptics-1 uses the Data Distribution Service (RTI DDS) for communication between the different components.

Organization: Real-Time Innovations, Inc.
Publication Year: 2013

Avionics Sensor Health Assessment is a sub-discipline of Integrated Vehicle Health Management (IVHM), which involves collecting sensor data, distributing it to diagnostics/prognostics algorithms, detecting run-time anomalies, and scheduling maintenance procedures. Real-time availability of the sensor health diagnostics for aircraft (manned or unmanned) subsystems allows pilots and operators to improve operational decisions. Therefore, avionics sensor health assessments are used extensively in the mil-aero domain. As avionics platforms consist of a variety of hardware and software components, standards such as Open System Architecture for Condition-Based Maintenance (OSA-CBM) have emerged to facilitate integration and interoperability. However, OSA-CBM is a platform-independent standard that provides little guidance for avionics sensor health monitoring, which requires onboard health assessment of airborne sensors in real-time. In this paper, we present a distributed architecture for avionics sensor health assessment using the Data Distribution Service (DDS), an Object Management Group (OMG) standard for developing loosely coupled high-performance real-time distributed systems. We use the data-centric publish/subscribe model supported by DDS for data acquisition, distribution, health monitoring, and presentation of diagnostics. We developed a normalized data model for exchanging the sensor and diagnostics information in a global data space in the system. Moreover, the Extensible and Dynamic Topic Types (XTypes) specification allows incremental evolution of any subset of system components without disrupting the overall health monitoring system. We believe the DDS standard, and in particular RTI Connext DDS, is a viable technology for implementing OSA-CBM for avionics systems due to its real-time characteristics and extremely low resource requirements. RTI Connext DDS is being used in other major avionics programs, such as FACE™ and UCS. We evaluated our approach to sensor health assessment in a hardware-in-the-loop simulation of an Inertial Measurement Unit (IMU) onboard a simulated General Atomics MQ-9 Reaper UAV. Our proof-of-concept effectively demonstrates real-time health monitoring of avionics sensors using a Bayesian Network-based analysis running on an extremely low-power and lightweight processing unit.

Publication Year: 2013

One of the most important features in Cloud environments is to know the status and the availability of the physical resources and services present in the current infrastructure. A full knowledge and control of the current status of those resources enables Cloud administrators to design better Cloud provisioning strategies and to avoid SLA violations. However, it is not easy to manage such information in a reliable and scalable way, especially when we consider Cloud environments used and shared by several tenants and when we need to harmonize their different monitoring needs at different Cloud software stack layers. To cope with these issues, we propose Distributed Architecture for Resource manaGement and mOnitoring in cloudS (DARGOS), a completely distributed and highly efficient Cloud monitoring architecture to disseminate resource monitoring information. DARGOS ensures an accurate measurement of physical and virtual resources in the Cloud keeping at the same time a low overhead. In addition, DARGOS is flexible and adaptable and allows defining and monitoring new metrics easily. The proposed monitoring architecture and related tools have been integrated into a real Cloud deployment based on the OpenStack platform: they are openly available for the research community and include a Web-based customizable Cloud monitoring console. We report experimental results to assess our architecture and quantitatively compare it with a selection of other Cloud monitoring systems similar to ours showing that DARGOS introduces a very limited and scalable monitoring overhead.

Author(s): Jinsong Yang
Organization: Mälardalen University, Sweden
Publication Year: 2013

Master's Thesis in Intelligent Embedded Systems, School of Innovation, Design and Engineering, Mälardalen University, Sweden.

This work focuses on applying DDS in the context of IEC 61499. The specific objectives are to: 1) present the structure and key features of DDS, 2) map node-to-node communication in IEC 61499 to the DDS real-time publish-subscribe model, including mapping timing requirements to QoS attributes of the publish-subscribe model, and 3) evaluate the performance of DDS communication in comparison with more traditional socket-based Ethernet communication.

Publication Year: 2013

DDS is a recent specification aimed at providing high-performance publish/subscribe middleware solutions. Despite being a very powerful and flexible technology, it may prove complex to use, especially for the inexperienced. This work provides guidelines for connecting software components that represent a new generation of automation devices (such as PLCs, IPCs and robots) using the Data Distribution Service (DDS) as a virtual software bus. More specifically, it presents the design of a DDS-based component, the so-called Automation Component, and discusses how to map different traffic patterns onto DDS entities, exploiting the wealth of QoS management mechanisms provided by the DDS specification. A case study demonstrates the creation of factory automation applications out of software components that encapsulate independent stations.
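
A common way to express such traffic-pattern mappings is purely through writer QoS. The sketch below, written against the ISO/OMG C++ DDS API, contrasts a typical profile for periodic state data with one for sporadic commands; the function names and specific policy values are assumptions for illustration, not the paper's configuration.

```cpp
#include <cstdint>
#include <dds/dds.hpp>

// Periodic state data (e.g., cyclic process values): a late sample is
// superseded by the next one, so best-effort delivery with only the latest
// value kept per instance, plus a deadline, is a typical mapping.
dds::pub::qos::DataWriterQos state_traffic_qos(dds::pub::Publisher& publisher)
{
    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << dds::core::policy::Reliability::BestEffort()
        << dds::core::policy::History::KeepLast(1)
        << dds::core::policy::Deadline(dds::core::Duration::from_millisecs(50));
    return qos;
}

// Sporadic events and commands: every sample matters, so reliable delivery,
// full history, and durability for late-joining components are typical.
dds::pub::qos::DataWriterQos command_traffic_qos(dds::pub::Publisher& publisher)
{
    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << dds::core::policy::Reliability::Reliable()
        << dds::core::policy::History::KeepAll()
        << dds::core::policy::Durability::TransientLocal();
    return qos;
}
```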

Publication Year: 2013

The Object Management Group’s (OMG) Data Distribution Service (DDS) provides many configurable policies which determine end-to-end quality of service (QoS) of applications. It is challenging to predict the system’s performance in terms of latencies, throughput, and resource usage because diverse combinations of QoS configurations influence QoS of applications in different ways. To overcome this problem, design-time formal methods have been applied with mixed success, but lack of sufficient accuracy in prediction, tool support, and understanding of formalism has prevented wider adoption of the formal techniques. A promising approach to address this challenge is to emulate system behavior and gather data on the QoS parameters of interest by experimentation. To realize this approach, which is preferred over formal methods due to their limitations in accurately predicting QoS, we have developed a model-based automatic performance testing framework with generative capabilities to reduce manual efforts in generating a large number of relevant QoS configurations that can be deployed and tested on a cloud platform. This paper describes our initial efforts in developing and using this technology.

Publication Year: 2013

Applications such as fleet management, mobile task force coordination, logistics or traffic control can benefit greatly from the on-line detection of collective mobility patterns of vehicles, goods or persons. However, collective mobility pattern analysis is exponential by nature, requires high-throughput processing of large volumes of mobile sensor data, and thus generates a huge communication and processing load on a monitoring system. Considering the benefits of the event-based asynchronous processing model for on-line monitoring applications, in this paper we argue that several collective mobility patterns can be elegantly described as a composition of reusable Complex Event Processing (CEP) rules, and we specifically focus on the detection of the cluster mobility pattern. We also present a DDS-based mobile middleware that supports a distributed deployment of these CEP rules for such collective mobility pattern detection. As a means of evaluating our approach, we show that using our middleware it is possible to detect this mobility pattern for thousands of mobile nodes, with a latency that is adequate for most monitoring applications.

Organization: Vanderbilt University, RTI
Publication Year: 2013

This paper describes a real-time event-based system to distribute and analyze high-velocity sensor data collected from a soccer game, the case study used in the DEBS 2013 Grand Challenge. Our approach uses the OMG Data Distribution Service (DDS) for data dissemination, combined with algorithms that provide the necessary real-time analytics. We implemented the system using the Real-Time Innovations (RTI) Connext™ DDS implementation, which provides a novel platform for Quality-of-Service (QoS)-aware distribution of data and real-time event processing. We evaluated the latency and update rates of one of the queries in our solution to show the scalability and benefits of the configurable QoS provided by DDS.

Publication Year: 2013

As recent technology trends usher us into the many-core era, software applications must scale as thousands of cores become available in a single chip. Middleware is a key infrastructure component between applications and the operating system. Therefore, new middleware mechanisms must be developed to handle scheduling, resource sharing, and communication on platforms with hundreds and thousands of cores. The solution must help application developers create concurrent software and must be easy to use. Real-Time Innovations (RTI) and the University of North Carolina (UNC) Real-Time Systems Group have teamed up to create mechanisms for scalable, high-performance, and adaptable scheduling and communication for many-core systems. Our solution has four key innovations: a component framework to facilitate higher concurrency in real-time application integration, a smart scheduler to ensure little capacity loss of the processor, improved RTI DDS messaging middleware using concurrency patterns to achieve higher throughput, and a middleware transport optimized for sending messages across thousands of cores in a closely distributed and potentially heterogeneous environment. We validated our approach using prototype implementations and through experimentation. Our results indicate that, when commercialized, our approach will help existing as well as new applications scale to thousands of cores in a cost-effective way.

Publication Year: 2013

Building automation has grown in parallel to energy smart grids without much interaction. However, at present, these two worlds cannot remain separated any more, as long as intelligent buildings become another node of the grid. This paper shows that this integration can be performed in practice as long as a harmonized knowledge/data model for the two worlds can be defined. This paper describes how this harmonized model can be achieved, how it can be implemented as part of a smart grid node, and how it could work and be deployed in a case study.

SOA (especially web services) is the communication architecture that has been used so far to integrate building automation with other systems, but the smart grid cannot rely on non-real-time technologies, and the request-response paradigm classically offered by SOA poses an obstacle that is very difficult to avoid. In this paper we describe an alternative based on an OMG standard, DDS, a real-time message-oriented communication middleware based on the publish-subscribe paradigm.

Publication Year: 2013

Assuring end-to-end quality-of-service (QoS) in distributed real-time and embedded (DRE) systems is hard due to the heterogeneity and scale of communication networks, transient behavior, and the lack of mechanisms that holistically schedule different resources end-to-end. This paper makes two contributions to research focusing on overcoming these problems in the context of wide area network (WAN)-based DRE applications that use the OMG Data Distribution Service (DDS) QoS-enabled publish/subscribe middleware. First, it provides an analytical approach to bound the delays incurred along the critical path in a typical DDS-based publish/subscribe stream, which helps ensure predictable end-to-end delays. Second, it presents the design and evaluation of a policy-driven framework called Velox. Velox combines multi-layer, standards-based technologies—including the OMG DDS and IP DiffServ—to support end-to-end QoS in heterogeneous networks and shield applications from the details of network QoS mechanisms by specifying per-flow QoS requirements. The results of empirical tests conducted using Velox show how combining DDS with DiffServ enhances the schedulability and predictability of DRE applications, improves data delivery over heterogeneous IP networks, and provides network-level differentiated performance. 

Organization: University of Granada
Publication Year: 2012

The OMG DDS (Data Distribution Service) standard specifies a middleware for distributing real-time data using a publish-subscribe, data-centric approach. Until now, DDS systems have been restricted to a single, isolated DDS domain, normally deployed within a single multicast-enabled LAN. As systems grow larger, the need to interconnect different DDS domains arises. In this paper, we consider the problem of communicating disjoint data-spaces that may use different schemas to refer to similar information. In this regard, we propose a DDS interconnection service capable of bridging DDS domains as well as adapting between different data schemas. A key benefit of our approach is that it is compliant with the latest OMG specifications, so the proposed service does not require any modifications to DDS applications. The paper identifies the requirements for DDS data-space interconnection, presents an architecture that responds to those requirements, and concludes with experimental results gathered on our prototype implementation. We show that the impact of the service on communication performance is well within the acceptable limits for most real-world uses of DDS (the latency overhead is of the order of hundreds of microseconds). The reported results also indicate that our service interconnects remote data-spaces efficiently and reduces the network traffic almost N times, with N being the number of final data subscribers.

Publication Year: 2012

Cloud computing has become an essential technology not only for web provisioning, but also in mobile scenarios. Mobile devices are usually resource-constrained due to processing and power limitations, so typical applications are not easily portable. Battery drain and application performance (resource shortage) have a big impact on the experienced quality, so shifting applications and services to the Cloud may improve mobile users' satisfaction. However, available Cloud solutions are mostly focused on scenarios with slowly changing provisioning, which are unable to support and promptly react to short-term provisioning requests. To address this new scenario, this paper proposes a novel Cloud monitoring and management architecture based on the data-centric publish-subscribe Data Distribution Service (DDS) standard. We present not only an architecture proposal, but also a real prototype that we have deployed in our experimental testbed. The experimental results show that our architecture is able to support the scheduling of highly dynamic tasks in the Cloud while maintaining low overheads.

Publication Year: 2012

The smart grid revolution demands a huge effort in redesigning and enhancing current power networks, as well as integrating emerging scenarios such as distributed generation, renewable energies or the electric vehicle. This novel situation will cause a huge flood of data that can only be handled, processed and exploited in real-time with the help of cutting-edge ICT (Information and Communication Technologies).

We present here a new architecture that, contrary to the previous centralised and static model, distributes the intelligence all over the grid by means of individual intelligent nodes controlling a number of electric assets. The nodes own a profile of the standard smart grid ontology stored in the knowledge base with the inferred information about their environment in RDF triples. Since the system does not have a central registry or a service directory, the connectivity emerges from the view of the world semantically encoded by each individual intelligent node (i.e., profile + inferred information).

We have described a use-case both with and without real-time requirements to illustrate and validate this novel approach.

At the core of the architecture is the DDS Databus, which is used to normalize all the information, make it available to all interested nodes, and communicate data, events, and commands.

Publication Year: 2012

Summary: This paper provided a brief overview of the experimental proximity operations HUD developed at LRT. It then proceeded to describe the evaluation experiments conducted to determine which HUD configuration is most beneficial for operator performance. The results of these experiments are discussed and some conclusions are drawn for future development and research work. Furthermore, the adaptations made to the HUD when incorporating it into the Third Eye situation awareness enhancement operator interface are detailed.

This work uses DDS provided by RTI under the University Program to communicate between the Orbiter simulator, a Simulink data-logger, and other system components. 

Organization: ETRI
Publication Year: 2012

One of the primary requirements in many cyber-physical systems (CPS) is that the sensor data derived from the physical world should be disseminated in a timely and reliable manner to all interested collaborative entities. However, providing reliable and timely data dissemination services is especially challenging for CPS since they often operate in highly unpredictable environments. Existing network middleware has limitations in providing such services. In this paper, we present a novel publish/subscribe-based middleware architecture called Real-time Data Distribution Service (RDDS). In particular, we focus on two mechanisms of RDDS that enable timely and reliable sensor data dissemination under highly unpredictable CPS environments. First, we discuss the semantics-aware communication mechanism of RDDS that not only reduces the computation and communication overhead, but also enables the subscribers to access data in a timely and reliable manner when the network is slow or unstable. Further, we extend the semantics-aware communication mechanism to achieve robustness against unpredictable workloads by integrating a control-theoretic feedback controller at the publishers and a queueing-theoretic predictor at the subscribers. This integrated control loop provides Quality-of-Service (QoS) guarantees by dynamically adjusting the accuracy of the sensor models. We demonstrate the viability of the proposed approach by implementing a prototype of RDDS. The evaluation results show that, compared to baseline approaches, RDDS achieves highly efficient and reliable sensor data dissemination as well as robustness against unpredictable workloads.

Publication Year: 2012

SPARTA, the ESO Standard Platform for Adaptive optics Real Time Applications, is the high-performance, real-time computing platform serving three major 2nd generation instruments at the VLT (SPHERE, GALACSI and GRAAL) and possibly a fourth one (ERIS).

SPARTA offers a very modular and fine-grained architecture, which is generic enough to serve a variety of AO systems. It includes the definitions of all the interfaces between those modules and provides libraries and tools for their implementation and testing, as well as a mapping to technologies capable of delivering the required performance. These comprise, amongst others, VXS communication, FPGA-aided wavefront processing, command time filtering and I/O, DSP-based wavefront reconstruction, DDS data distribution and multi-CPU number crunching, most of them innovative with respect to ESO standards in use. A scaled-down version of the platform, namely SPARTA-Light, will employ a subset of the SPARTA technologies to implement the AO modules for the VLT auxiliary telescopes (NAOMI) and is the baseline for a new VLTI instrument (GRAVITY).

For the above instrument portfolio, SPARTA provides also a complete implementation of the AO application, with features customised to each instrument's needs and specific algorithms. In this paper we describe the architecture of SPARTA, its technology choices, functional units and test tools. End-to-end as well as individual module performance data is provided for the XAO system delivered to SPHERE. Initial performance results are presented for the GALACSI and GRAAL systems under development.

Publication Year: 2011

The growing trend towards running publish/subscribe (pub/sub)-based distributed real-time and embedded (DRE) systems in cloud environments motivates the need to achieve end-to-end quality-of-service (QoS) over wide-area networks (WANs). The OMG Data Distribution Service (DDS) is a data-centric middleware that provides fast, scalable and predictable distribution of real-time critical data. The DDS standard, however, provides QoS control mechanisms that are confined only to the middleware residing at end-systems, which makes it hard to support DRE pub/sub systems over WANs. A promising solution to this problem is to integrate DDS with the Session Initiation Protocol (SIP), which is an IP-based signaling protocol that supports real-time applications involving voice, video, and multimedia sessions via the QoS mechanisms in IP networks.

This paper describes our approach to bridge the SIP protocol and DDS to realize DDS-based applications over QoS-enabled IP WANs by overcoming both inherent and accidental complexities in their integration.  An exemplar of the proposed approach for IP DiffServ networks is described, which uses the Common Open Policy Server (COPS) protocol to assure QoS for cloud-hosted DRE pub/sub applications.  To distinguish the DDS traffic from other best-effort traffic in the cloud environment, our approach uses the COPS-DRA protocol as a generic protocol for automatic service-level negotiation and the integration of this protocol in an overall QoS management architecture to manage service levels over multiple domains deploying different QoS technologies.

Publication Year: 2011

Abstract: The Data Distribution Service (DDS) middleware has recently been standardized by the OMG. Prior to data communication, a discovery protocol has to locate remote DDS entities and obtain their attributes. Specifically, DDS discovery matches DataWriter (DW) and DataReader (DR) entities (Endpoints) situated in different network nodes. The DDS specification does not specify how this discovery is translated "onto the wire". To provide interoperability and transparency between different DDS implementations, the OMG has standardized the DDS Interoperability Wire Protocol (DDS-RTPS). Any compliant DDS-RTPS implementation must support at least the SDP (Simple Discovery Protocol). The SDP works in relatively small or medium networks, but it may not scale as the number of DDS Endpoints increases. This paper addresses the design and evaluation of an SDP alternative based on Bloom Filters (BF) that increases DDS scalability. BFs use hash functions for space-efficient probabilistic representation of data sets. We provide both analytical and experimental studies. Results show that our approach can improve the discovery process (in terms of network load and node resource consumption), especially in scenarios with large Endpoint-per-Participant ratios.
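
To make the Bloom-filter idea concrete, the sketch below (Python, with an assumed hash scheme, sizes and topic names; it is not the discovery protocol proposed in the paper) shows how a participant could advertise its endpoints as a compact bit array that remote participants test before requesting full endpoint data.

# Illustrative Bloom filter for space-efficient endpoint advertisement.
# False positives are possible, false negatives are not, so a "hit" only
# means a full endpoint exchange may be worthwhile.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Participant A advertises its DataWriter topics as one small filter...
advertised = BloomFilter()
for topic in ("SensorData", "VehicleState", "Alarms"):   # assumed topic names
    advertised.add("DW:" + topic)

# ...and participant B checks locally before exchanging full endpoint data.
print(advertised.might_contain("DW:SensorData"))   # True
print(advertised.might_contain("DW:Telemetry"))    # almost certainly False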

Organization: RTI
Publication Year: 2011

This whitepaper describes the basic characteristics of real-world systems programming, how the DDS middleware technology can be used to integrate them, and a set of “best practices” guidelines that should be applied when using DDS to implement these systems.

Real-world systems are systems that interact with the external physical world and must live within the constraints imposed by real-world physics. Good examples include air-traffic control systems, real-time stock trading, command and control (C2) systems, unmanned vehicles, robotics and vetronics, and Supervisory Control and Data Acquisition (SCADA) systems. More and more, these "real-world" systems are integrated using a Data-Centric Publish-Subscribe approach, specifically the programming model defined by the Object Management Group (OMG) Data Distribution Service (DDS) specification.

This whitepaper provides practical advice on how to use DDS to program these systems.

Organization: Universite, CNRS; LAAS
Publication Year: 2011

Abstract—The use of simulation in training and education makes it possible to prepare personnel in a realistic environment. But the cost and complexity of creating and reusing simulations often limit their application. In this paper we investigate a low-cost, high-fidelity PC-based simulator built on Data Distribution Service (DDS) middleware. The main parts of the system and the architecture, including the hardware and the software, are introduced. Real-time networking between distributed simulators is achieved using reliable distributed communication, which employs publish-subscribe middleware built using OMG DDS. Results show that these methods can produce low-cost, extensible, reliable and distributed simulators.

http://community.rti.com/sites/default/files/users/Hakiri/103.pdf

Publication Year: 2011

Abstract: In this paper, we describe our approach for intersection safety developed in the scope of the European project INTERSAFE-2. A complete solution for the safety problem including the tasks of perception and risk assessment using on-board lidar and stereo-vision sensors will be presented and interesting results are shown.

Publication Year: 2011

The rapid growth in the adoption of smart grid technologies is enabling improvements in the efficiency, reliability and security of energy distribution and consumption. The efficiency increase comes from overall monitoring of the electricity network, and from the capability of acting upon loads in order to better adapt to overall and local energy production from traditional sources and renewables. To guarantee that these energy sources can be used effectively, smart grid systems must be able to react quickly and predictably, adapting to changing supply by controlling loads and energy storage. Many applications have been identified and developed to optimize power grid systems, and these applications rely on a solid communications network that is secure, highly scalable, and always available. Thus, any communication infrastructure for smart grids should support the grid's potential to produce high quantities of real-time data, with the goal of reacting to state changes by actuating on devices in real time, while providing Quality of Service (QoS) guarantees for the communications. These functionalities can be supported by a Message-Oriented Middleware, which allows interconnecting houses and controlling applications in a distributed environment. Therefore, in this paper we survey and analyze existing middleware solutions for the support of distributed, scalable, large-scale applications with QoS requirements that are structured on top of a Message-Oriented Middleware. The paper concludes that DDS is the most suitable technology for smart grid applications with QoS requirements.

Organization: University of Florida
Publication Year: 2011

Abstract: Modern autonomous underwater vehicle (AUV) research is moving towards multi-agent system integration and control. Many university research projects, however, are restricted by cost from obtaining even a single AUV platform. An affordable, robust AUV design is presented with special emphasis on modularity and fault tolerance, guided by previous platform iterations and historically successful AUV designs. Modularity is obtained by the loose coupling of typical AUV tasks such as navigation, image processing, and interaction with platform-specific hardware. Fault tolerance is integrated from the lowest hardware levels to the vehicle's mission planning framework. Major system design features including electrical infrastructure, mechanical design, and software architecture are presented. Application to the 14th annual AUVSI Robosub competition is addressed.

Note: RTI Connext DDS is used as the primary middleware. The paper includes a description on how DDS is used.

 

Publication Year: 2010

Abstract: The development of embedded systems challenges software engineers with timely delivery of optimised code that is both safe and resource-aware. Within this context, we focus on distributed systems with small, specialised node hardware, specifically, wireless sensor network (WSN) systems. Model-driven software development (MDSD) promises to reduce errors and efforts needed for complex software projects by automated code generation from abstract software models. We present an approach for MDSD based on the data-centric OMG middleware standard DDS. In this paper, we argue that the combination of DDS features and MDSD can successfully be applied to WSN systems, and we present the design of an appropriate approach, describing an architecture, metamodels and the design workflow. Finally, we present a prototypical implementation of our approach using a WSN-enabled DDS implementation and a set of modelling and transformation tools from the Eclipse Modeling Framework.

Publication Year: 2010

ABSTRACT: Real-time availability of information is of utmost importance in large-scale distributed interactive simulation in network-centric communication. Information generated by multiple federates must be distributed and made available to interested parties while providing the required QoS for consistent communication. The remainder of this paper discusses design alternatives for realizing high-performance distributed interactive simulation (DIS) applications using the OMG Data Distribution Service (DDS), which is a QoS-enabled publish/subscribe platform standard for time-critical, data-centric and large-scale distributed networks. The considered application, in the civil domain, is used for remote education in driving schools. An experimental design evaluates the bandwidth and latency performance of DDS, and a comparison with High Level Architecture performance is given.

Publication Year: 2010

IEC 61499 is an open standard for distributed control and automation. The interface between control software and hardware or communications is achieved by means of so-called Service Interface Function Blocks (SIFBs).

This paper presents guidelines for building communication SIFBs based on the emerging OMG DDS (Data Distribution Service) middleware. This specification implements the publisher/subscriber paradigm in a very efficient way and provides significant QoS configuration possibilities. These characteristics make DDS suitable for implementing communications among time-critical devices. By using these DDS-SIFBs within IEC 61499 code generation tools, designers of distributed applications will be able to use this powerful technology in new distributed applications.

Publication Year: 2010

This robotic application leverages RTI Data-Distribution Service to integrate the different components in a robotic system, such as trajectory generation, control, archiving, OPC components, etc.

Abstract: Future production concepts which are currently developed in the scope of "agile production" will require autonomous working and transportation platforms which offer much higher flexibility and robustness than current autonomous guided vehicles (AGVs). On the one hand, such flexibility and robustness will only be possible if optimal functionality and interoperability of the monitoring, planning, control and diagnosis systems can be achieved. On the other hand, rapid multidimensional trajectory planning is indispensable in order to achieve the required flexibility and robustness.

In the first part of this paper a hierarchical and distributed concept and realization proposal is presented, aiming at optimized functionality and interoperability. Based on this concept and realization proposal, a system for multidimensional trajectory planning called the "trajectory kernel", in a basic and an enhanced form, is presented in the second part. The paper concludes with an outlook on the control of multiple autonomous vehicles applying max-plus algebra. It is important to note that the focus of this paper is not a description of the respective algorithms but an explanation of the framework and the application. Investigations into the industrial application of intelligent systems for monitoring, control and diagnosis (Kleinmann et al. 2009, Stetter & Kleinmann 2011) have shown that it is frequently not the lack of optimal algorithms but the missing integration into existing infrastructure and the absence of a higher-level concept that are the main causes of the low adoption of such intelligent systems. Consequently, research aiming at optimized integration and high-level concepts is as desirable as the search for new and improved algorithms.

Organization: ENIT SYS-COM
Publication Year: 2009

This paper describes the details of implementing DDS to publish data over the CAN bus.

Abstract: The Publish/Subscribe paradigm matches well with complex distributed embedded systems. Data Distribution Service (DDS) is a publish/subscribe, data-centric middleware. It specifies an API designed for enabling real-time data distribution and is well suited for such complex distributed systems and QoS-enabled applications. Unfortunately, the need to transmit a large number of sensor measurements over a network negatively affects the timing parameters of the control loops. The CAN bus enables the information from a large number of sensor measurements to be conveyed within a few messages. Its priority-based medium access control is used to select the sensor messages with high timing constraints. This approach greatly reduces the time for obtaining a snapshot of the environment state and therefore supports the real-time requirements of feedback control loops.

The use of the "Publish/Subscribe/Distribute" paradigm over the underlying real-time CAN bus is currently a research topic, and only a few works exist in this area today. These activities are led by the University of Ulm and the German National Research Center for Information Technology, by the Software Architecture Lab at Seoul National University, and by our "Control and Communication Technologies" research group at the National School of Engineering of Tunis (ENIT), Department of Computer and Communication Technologies, headed by the author. The main objective of this paper is to demonstrate how the DDS API is implemented on a CAN bus.
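
The key saving described above, many sensor measurements conveyed in a few prioritized CAN frames, can be sketched as follows (Python; the 16-bit packing, identifiers and topic names are assumptions for illustration, not the mapping defined in the paper).

# Illustrative sketch: packing four 16-bit sensor readings into one 8-byte
# classic CAN data field, with a lower CAN identifier giving time-critical
# data higher priority in bus arbitration.
import struct

def pack_can_frame(can_id, readings):
    """Return (identifier, data) for a classic CAN frame carrying up to
    four unsigned 16-bit sensor values."""
    if len(readings) > 4:
        raise ValueError("a classic CAN frame carries at most 8 data bytes")
    data = struct.pack(">%dH" % len(readings), *readings)
    return can_id, data

# Lower identifier = higher priority during CAN arbitration (assumed values).
PRIORITY_IDS = {"BrakePressure": 0x080, "CabinTemperature": 0x400}

frame_id, payload = pack_can_frame(PRIORITY_IDS["BrakePressure"],
                                   [1200, 1187, 1193, 1201])
print(hex(frame_id), payload.hex())   # 0x80 04b004a304a904b1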

Publication Year: 2009

Abstract: The real-time distributed computing environment and reusable software architecture are important factors that affect the fidelity of flight simulation. We built a flight simulator based on DDS (Data Distribution Service for Real-time Systems) middleware and a structured software architecture, and demonstrated its high fidelity as a flight training device. Based on an analysis of the flight simulator's functional blocks, we developed a real-time distributed computing environment which uses DDS middleware over Ethernet and decreases the communication latency among functional blocks. Furthermore, we propose a structured software architecture, based on a layered and component-based design pattern, to facilitate the reuse and replacement of simulation components. The real-time communication procedures among simulation components are also described in this paper. Finally, the validation method and comparative simulation results are presented to show that the DDS-based design can carry out flight simulation with low communication latency and high quality.

Organization: Vanderbilt University
Publication Year: 2009

Abstract: Recent trends in distributed real-time and embedded (DRE) systems motivate the development of information management capabilities that ensure the right information is delivered to the right place at the right time to satisfy quality of service (QoS) requirements in heterogeneous environments. A promising approach to building and evolving large-scale and long-lived DRE information management systems is standards-based, QoS-enabled publish/subscribe (pub/sub) platforms that enable participants to communicate by publishing information they have and subscribing to information they need in a timely manner. Since there is little existing evaluation of how well these platforms meet the performance needs of DRE information management, this paper provides two contributions: (1) it describes three common architectures for the OMG Data Distribution Service (DDS), which is a QoS-enabled pub/sub platform standard, and (2) it evaluates implementations of these architectures to investigate their design tradeoffs and compare their performance with each other and with other pub/sub middleware. Our results show that DDS implementations perform significantly better than non-DDS alternatives and are well-suited for certain classes of data-critical DRE information management systems.

Publication Year: 2009

Abstract: A Wireless Sensor Network (WSN) is formed by a large number of small devices with some computing power and wireless communication and sensing capabilities. These types of networks have become popular as they have been developed for applications which can carry out a vast range of tasks, including home and building monitoring, object tracking, precision agriculture, military applications, and disaster recovery, among others. For this type of application, middleware is used in software systems to bridge the gap between the application and the underlying operating system and networks. As a result, a middleware system can facilitate the development of applications and is designed to provide common services to the applications. The development of a middleware for sensor networks presents several challenges due to the limited computational resources and energy of the different nodes.

This work concerns the design, implementation and testing of a micro middleware for WSNs with real-time (i.e. temporal) restrictions; the proposal incorporates characteristics of a message-oriented middleware, thus allowing applications to communicate using the publish/subscribe model. Experimental evaluation shows that the proposed middleware provides a stable and timely service at different QoS levels.

 
Organization: University of Nottingham
Publication Year: 2008

This paper discusses the applicability of the Data Distribution Service (DDS) for the development of automated and modular manufacturing systems which require a flexible and robust communication infrastructure. DDS is an emerging standard for data-centric publish/subscribe middleware systems that provides an infrastructure for platform-independent, many-to-many communication. It particularly addresses the needs of real-time systems that require deterministic data transfer, low memory footprints and high robustness. After an overview of the standard, several aspects of DDS are related to current challenges in the development of modern manufacturing systems with distributed architectures. Finally, an example application based on a modular active fixturing system is presented to illustrate the described aspects.

Publication Year: 2008

In distributed intelligent control architectures based on agents, the communication system can perform more important tasks than simply transmitting information. To enrich the connections of the distributed system, the communication system must provide a quality of service that the agents can use to make decisions about aspects such as the distribution of information or mobility within the system. Based on the advantages that quality of service can provide, it was decided to extend an already developed architecture with quality-of-service support. In the architecture presented in this article, responsibility for managing the quality-of-service policies resides in the communication system and is based on the DDS standard proposed by the OMG.

Organization: ENIT SYS-COM
Publication Year: 2008

Distributed computing in complex embedded systems gains complexity when these systems are equipped with many microcontrollers overseeing diverse Electronic Control Units (ECUs) that connect hundreds or thousands of analogue and digital sensors and actuators.

The Publish/Subscribe paradigm matches well with these systems. Data Distribution Service (DDS) is a publish/subscribe data-centric middleware. It specifies an API designed for enabling real-time data distribution and is well suited for such complex distributed systems and QoS-enabled applications.

Unfortunately, the need to transmit a large number of sensor measurements over a network negatively affects the timing parameters of the control loops. The CAN-bus enables the information from a large number of sensor measurements to be conveyed within a few messages. Its priority-based medium access control is used to select the sensor messages with high timing constraints. This approach greatly reduces the time for obtaining a snapshot of the environment state and therefore supports the real-time requirements of feedback control loops.

The main objective of this paper is to demonstrate how the DDS API is implemented on a CAN bus, and to give a performance evaluation related to delivery and transport QoS parameters.

Publication Year: 2008

Abstract: Many complex distributed real-time applications need complicated processing and sharing of an extensive amount of data under critical timing constraints. In this paper, we present a comprehensive overview of the Data Distribution Service standard (DDS) and describe its QoS features for developing real-time applications. An overview of an active real-time database (ARTDB) named Agilor is also provided. To express QoS policies efficiently in Agilor, a Real-Time ECA (RECA) rule model is presented, based on the common ECA rule. We then propose a novel QoS-aware Real-Time Publish-Subscribe (QRTPS) service, compatible with DDS, for distributed real-time data acquisition. Furthermore, QRTPS is implemented on Agilor using objects and RECA rules. To illustrate the benefits of QRTPS for real-time data acquisition, an example application is presented.
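
The event-condition-action (ECA) idea underlying the RECA model can be illustrated with a minimal rule engine (Python; the rule, field names and the omission of RECA's real-time attributes such as deadlines are simplifications made for this sketch, not the paper's model).

# Minimal event-condition-action (ECA) rule sketch: when an event occurs,
# rules whose condition holds fire their action.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event: str                          # triggering event name
    condition: Callable[[dict], bool]   # predicate over the event payload
    action: Callable[[dict], None]      # reaction when the condition holds

rules = [
    EcaRule(
        event="sample_arrived",
        condition=lambda s: s["age_ms"] > 100,                    # stale data
        action=lambda s: print("discarding stale sample on", s["topic"]),
    ),
]

def on_event(event, payload):
    for rule in rules:
        if rule.event == event and rule.condition(payload):
            rule.action(payload)

on_event("sample_arrived", {"topic": "Pressure", "age_ms": 250})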

Publication Year: 2008

Abstract: To realize real-time information sharing in generic platforms, it is especially important to support dynamic message structure changes. In the case of IDL, it is necessary to rewrite applications to change data sample structures. In this paper, we propose a dynamic reconfiguration scheme for data sample structures in DDS. Instead of using IDL, which is the static data sample structure model of DDS, we use a self-describing model based on a data sample schema as a dynamic data sample structure model to support dynamic reconfiguration of data sample structures. We also propose a data propagation model to provide data persistency in distributed environments. We guarantee persistency by transferring data samples through relay nodes to receiving nodes that had not yet joined the data distribution network at the time the samples were distributed. The proposed schemes can be used to support data sample structure changes during operation and to provide data persistency in various environments, such as real-time enterprise environments and connectionless Internet environments.
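
A minimal sketch of the self-describing idea follows (Python; the JSON encoding, type names and fields are assumptions for illustration, not the paper's wire format): the schema travels with the sample, so a receiver can decode fields that were added after it was deployed, without regenerating code from IDL.

# Illustrative self-describing data sample: schema and values are carried
# together, so structure changes do not require rebuilding applications.
import json

schema_v2 = {
    "name": "EngineStatus",
    "fields": [
        {"name": "rpm",     "type": "int32"},
        {"name": "temp_c",  "type": "float"},
        {"name": "oil_ok",  "type": "bool"},   # field added at run time
    ],
}

def encode(schema, values):
    return json.dumps({"schema": schema, "values": values}).encode()

def decode(message):
    sample = json.loads(message.decode())
    names = [f["name"] for f in sample["schema"]["fields"]]
    return dict(zip(names, sample["values"]))

wire = encode(schema_v2, [3150, 87.5, True])
print(decode(wire))   # {'rpm': 3150, 'temp_c': 87.5, 'oil_ok': True}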

Organization: UC Berkeley
Publication Year: 2007

Abstract: The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution.

Organization: Vanderbilt University
Publication Year: 2007

ABSTRACT: Publish/subscribe (pub/sub) middleware platforms for event-based distributed systems often provide many configurable policies that affect end-to-end quality of service (QoS). Although the flexibility and functionality of pub/sub middleware platforms has matured, configuring their QoS policies in semantically compatible ways has become more complex.

This paper makes two contributions to reducing the complexity of configuring QoS policies for event-based distributed systems. First, it evaluates various approaches for managing complex QoS policy configurations in pub/sub middleware platforms. Second, it describes a domain-specific modeling language (DSML) that automates the analysis and synthesis of semantically compatible QoS policy configurations.
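
Part of what makes these configurations hard is that DDS QoS policies follow request/offer compatibility rules. Two of the simplest such checks are sketched below (Python); a real configuration tool, such as the DSML described in the paper, has to cover many more policies and their interactions.

# Simplified request/offer (RxO) compatibility checks for two DDS QoS
# policies; values and function names are illustrative.

RELIABILITY_ORDER = {"BEST_EFFORT": 0, "RELIABLE": 1}

def reliability_compatible(offered, requested):
    """The writer must offer at least the reliability the reader requests."""
    return RELIABILITY_ORDER[offered] >= RELIABILITY_ORDER[requested]

def deadline_compatible(offered_period_s, requested_period_s):
    """The writer's offered deadline period must not exceed the reader's."""
    return offered_period_s <= requested_period_s

# A writer offering BEST_EFFORT cannot satisfy a reader requesting RELIABLE:
print(reliability_compatible("BEST_EFFORT", "RELIABLE"))   # False
# A writer updating at least every 0.5 s satisfies a reader tolerating 1 s:
print(deadline_compatible(0.5, 1.0))                        # True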

Publication Year: 2007

Many present-day safety-critical or mission-critical military applications are deployed using intrinsically static architectures. Often these applications are real-time systems, where late responses may cause potentially catastrophic results. Static architectures allow system developers to certify with a high degree of confidence that their systems will provide correct functionality during operation, but a more adaptive approach could provide some clear benefits. In particular, the ability to dynamically reconfigure the system at run time would give increased flexibility and performance in response to unpredictable or unplanned operating scenarios.

Many current dynamic architectural approaches provide few or no features to facilitate the highly dependable, real-time performance required by critical systems. The challenge is to provide the features and benefits of dynamic architectural approaches while still achieving the required level of performance and dependability. This paper describes the early results of an ongoing research programme, part-funded by the Software Systems Engineering Initiative (SSEI), aimed at developing a more adaptive software architecture for future military systems. A range of architectures with adaptive features (including object-based, agent-based and publish/subscribe) is reviewed against the desirable characteristics of highly dependable systems. A publish/subscribe architecture is proposed as a potential way forward, and a discussion of its advantages and disadvantages for highly dependable, real-time systems is given.

Organization: MIT Lincoln Laboratory
Publication Year: 2007

Abstract: Modern military satellite communications terminals have typically been built as multiprocessor systems. Because of increasing pressure for reuse and modularity, current programs have been encouraged to consider the use of component middleware. While Common Object Request Broker Architecture is the most mature middleware standard available, its invocation semantics present considerable challenges for the development of such systems. Through reasoning about quality attributes, we found that a real-time publish-subscribe middleware reduces coupling, improves composability, and reduces the risk of architectural mismatch, deadlock, and integration problems compared to an invocation-based system. In building a communications-on-the-move (COTM) node, we found that this type of middleware, which exemplifies an implicit-invocation architectural style, promotes ease of system evolution and an incremental integration approach.

Organization: SYSCOM Laboratory
Publication Year: 2007

Abstract: Data-centric design is emerging as a key tenet for building advanced, data-critical distributed real-time and embedded systems. These systems must find the right data, know where to send it, and deliver it to the right place at the right time. The Data Distribution Service (DDS) specifies an API designed for enabling real-time data distribution and is well suited to such complex distributed systems and QoS-enabled applications. It is also widely known that Controller Area Networks (CAN) are used in real-time, distributed and parallel processing.

Thus, the goal of this paper is to study an implementation of publish-subscribe messaging middleware that supports the DDS specification and is customized for real-time networking. This implementation introduces an efficient approach to data temporal consistency and a real-time network scheduler that schedules network traffic based on DDS QoS policies. A simulator has been developed to demonstrate that our implementation fulfills the guarantees predicted by the theoretical results.

Publication Year: 2007

Abstract: There’s a world of opportunity for distributed embedded and real-time applications. The list of applications goes on and on: military systems, telecommunications, factory automation, traffic control, financial trading, medical imaging, building automation, consumer electronics, and more. These applications must find the right data, know where to send it, and deliver it to the right place at the right time. The publish-subscribe paradigm as defined by DDS is the best fit for such complex distributed applications, which require a powerful communications model.

Thus, the goal of this paper is to study the faults occurring within a network of publish-subscribe nodes in a clustered middleware, in order to calculate loss rates while taking the cache size into account. A simulator has been developed to evaluate the metrics chosen in the theoretical part.

Publication Year: 2007

Abstract: Seaware is a publish-subscribe middleware used in multi-vehicle networked systems composed of autonomous and semi-autonomous vehicles and systems.

Seaware provides a high-level interface to network communications and may be deployed with a combination of heterogeneous components within a dynamic network. Seaware supports the RTPS (Real-Time Publish-Subscribe) protocol, underwater acoustic modems and other forms of network transport. This paper gives an overview of Seaware's implementation and its application to multi-vehicle networked systems.

Publication Year: 2005

Abstract: As control systems become more distributed and they are implemented with smaller hardware and software components, implementing the necessary communication links becomes challenging. There are considerable technical difficulties in guaranteeing upper bounds for latencies in motion or machine control, synchronizing the execution and communication of distributed components, taking corrective action in fault situations and reconfiguring the system.

In this paper, we discuss the main communication requirements of distributed control systems and use them to evaluate a certain distribution service product.

Author(s): Seppo Sierla
Publication Year: 2005

Abstract: This thesis has been written as a part of a research project, whose goal is to define the architecture and communication requirements of next-generation process automation systems. We focus on defining appropriate communication mechanisms for components that communicate with each other using Ethernet.

The theoretical part starts by summarizing the communication requirements that have been defined in the research project (OHJAAVA-2). Two middleware standards, the CORBA Notification Service and RTPS, are then described and their usefulness for our purposes is evaluated.

The practical part contains a description of a testing environment for evaluating a RTPS implementation. The test cases are based on communication scenarios that are typically encountered in process automation systems. The results are then presented and the impact of all relevant factors is analyzed.

We conclude that the RTI Data Distribution Service implementation of RTPS is a very promising middleware solution for process automation systems. There is no perfect product that satisfies all of our requirements, but good results can be expected from using RTPS, if the system designers appreciate the strengths and limitations of the middleware standard.

Author(s): Basem Almadani
Organization: Montan Univ., Leoben, Austria
Publication Year: 2005

Abstract: Designing and constructing Real-Time Distributed Industrial Vision Systems (RT-DIVS) from scratch is a very complicated task. RT-DIVS have conflicting requirements such as reasonable development cost, ease of use, reusable code and high performance. The key to success in building such systems is to recognize the need for middleware software. Middleware plays a major role in developing distributed systems efficiently. The Real-Time Publish-Subscribe (RTPS) model is one of the latest developments in real-time middleware technologies. Network Data Distribution Service (NDDS) is an RTPS middleware developed by Real-Time Innovations (RTI). NDDS is widely used in real-time distributed and embedded systems for mission-critical applications. The research work presented in this paper discusses the employment of NDDS for RT-DIVS and the advantages of NDDS's Quality of Service (QoS) policies in covering the requirements of RT-DIVS. An experimental test set-up is used to verify NDDS's performance for RT-DIVS. Test results show that RTPS middleware (and NDDS specifically) is suitable for the soft and firm timeliness requirements of distributed industrial vision systems.

Publication Year: 2004

Abstract: This paper presents a communication network for machine vision systems used in control systems and logistics applications in industrial environments. Real-time distribution over the network is very important for communication among the vision node, image processing and control, as well as the distributed I/O node. A robust implementation, both with respect to camera packaging and data transmission, has been accounted for. The network consists of a gigabit Ethernet network, and a switch with an integrated firewall is used to distribute the data and provide connections to the imaging control station and to IEC 61131-conformant signal integration using the Modbus TCP protocol. The real-time and delay-time properties of each part of the network are considered and worked out in this paper.

Organization: RTI
Publication Year: 2003

The OMG Data Distribution Service (DDS) is an emerging specification for publish-subscribe data distribution systems. The purpose of the specification is to provide a common application-level interface that clearly defines the data distribution service. The specification describes the service using UML, thus providing a platform-independent model that can then be mapped into a variety of concrete platforms and programming languages.

The OMG DDS attempts to unify the common practice of several existing implementations [2, 5], enumerating and providing formal definitions for the QoS (Quality of Service) settings that can be used to configure the service.

This paper introduces the OMG DDS specification, describes the main aspects of the model and its QoS settings, and gives examples of the communication scenarios it supports.
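
For readers unfamiliar with the model the specification describes, the following in-process sketch (Python) conveys the basic data-centric publish/subscribe shape, typed samples delivered by topic to whichever readers have expressed interest; it is not a DDS implementation and omits QoS, discovery and transport entirely.

# Minimal in-process topic-based publish/subscribe sketch (illustrative only).
from collections import defaultdict

class Bus:
    def __init__(self):
        self.readers = defaultdict(list)

    def create_reader(self, topic, callback):
        """Register interest in a topic."""
        self.readers[topic].append(callback)

    def create_writer(self, topic):
        """Return a function that delivers a sample to all readers of the topic."""
        def write(sample):
            for callback in self.readers[topic]:
                callback(sample)
        return write

bus = Bus()
bus.create_reader("VehicleState", lambda s: print("received", s))
write_state = bus.create_writer("VehicleState")
write_state({"id": 7, "speed_mps": 12.4})   # delivered to every matching reader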

Publication Year: 1997

Nomad is a mobile robot designed for extended planetary exploration. In June and July of 1997, Nomad performed the first such mission, traversing more than 220 kilometers in the Atacama Desert of Chile and exploring a landscape analogous to that of the Moon and Mars.

Nomad's journey, the Atacama Desert Trek, was an unprecedented demonstration of long-distance, long-duration robotic operation. Guided by operators thousands of kilometers away but telepresent via immersive imagery and interfaces, Nomad operated continuously for 45 days. Science field experiments evaluated exploration strategies and analysis techniques for future terrestrial and planetary missions.

Nomad uses RTI DDS for its communication between the Atacama Desert in Chile and NASA Ames in California.