ARINC 653 hardware module

Joined: 01/18/2014
Posts: 2

Hello all,

I would like to know whether the hardware module specified in the ARINC 653 standard is a single microprocessor or a set of interconnected microprocessors.

How many hardware modules can we have in a single system?

And what advantages does having a single hardware module (or multiple modules) bring to an avionics system, compared with federated applications each running on a separate microprocessor?

Thanks in advance


rip
Joined: 04/06/2012
Posts: 324

Hi,

I've only ever seen it referred to as a single CPU, with a hypervisor implementation. 

To step ahead: we do have an implementation that runs on Wind River's VxWorks 653 (PPC) board, which you may have seen in RTI's "IOA" demo that includes a Wind River PPC board running 653.  Pause the video at 6 seconds[1]; the VxWorks 653 board is on the right.  It has two partitions, each running 32-bit VxWorks 6.9.  One partition uses the FACE API to enable DDS (data coming from an open-source flight simulator (FlightGear), bridged to DreamHammer's UCS 2.x message set for use in Ballista), where the data is cleansed, bounds-checked, and put through other paranoia steps.  The data is then handed via an APEX port (as a bytestream) to the other ("avionics") partition for graphical display using Esterel SCADE Display graphics.
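To make the APEX hand-off a little more concrete, here is a rough sketch of what the sending side can look like in C, using the standard ARINC 653 Part 1 queuing-port services.  Treat it as illustrative only: the header name, port name, and sizes are invented for the example, and in a real module the port must match what the configuration tables define.

/* Sketch: the FACE/DDS partition hands the cleansed bytestream to the
 * "avionics" display partition over an APEX queuing port.
 * "apex_queuing.h" and "UCS_TO_DISPLAY" are hypothetical. */
#include "apex_queuing.h"   /* APEX headers are implementation-specific */

#define UCS_PORT_NAME   "UCS_TO_DISPLAY"
#define MAX_MSG_SIZE    1024

static QUEUING_PORT_ID_TYPE ucs_port;

void init_port(void)
{
    RETURN_CODE_TYPE rc;

    /* SOURCE side of the inter-partition channel toward the display partition. */
    CREATE_QUEUING_PORT(UCS_PORT_NAME,
                        MAX_MSG_SIZE,   /* max message size, bytes       */
                        16,             /* max number of queued messages */
                        SOURCE,         /* this partition sends          */
                        FIFO,           /* queuing discipline            */
                        &ucs_port,
                        &rc);
    /* rc should be NO_ERROR; real code would report anything else
     * to health monitoring. */
}

void forward_to_display(MESSAGE_ADDR_TYPE buf, MESSAGE_SIZE_TYPE len)
{
    RETURN_CODE_TYPE rc;

    /* Hand the cleansed, bounds-checked data across the partition boundary;
     * a time-out of 0 means do not block if the queue is full. */
    SEND_QUEUING_MESSAGE(ucs_port, buf, len, 0, &rc);
}

The display partition would do the matching RECEIVE_QUEUING_MESSAGE on the DESTINATION side of the same channel.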

With DDS you don't need to start by worrying about how many nodes you need in the system.  Generally what I've seen (which is a very, very limited subset of the installations out there) is that on a given node you'll have one or more DO-178 Level A certified applications running in one partition, and things less needful of "Level A" running in one or more other partitions.  Inter-node communication is via Ethernet or backplane shared memory; inter-partition communication (i.e., intra-node, across a partition boundary) is via APEX ports.  Then you simply have as many of these boards as necessary to fill the need based on system size, max CPU loading, latency requirements, etc.
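That "don't count nodes up front" point looks roughly like this on the writer side with the RTI Connext DDS C API.  Again just a sketch: "FlightState" is a hypothetical topic, and the FlightStateTypeSupport_* calls stand in for the type-support code rtiddsgen would generate from your IDL.

#include "ndds/ndds_c.h"   /* RTI Connext DDS C API */

/* FlightState and FlightStateTypeSupport_* are placeholders for
 * rtiddsgen-generated code from a hypothetical IDL type. */

DDS_DataWriter *create_flight_state_writer(void)
{
    /* The participant joins a domain; the application never states how many
     * nodes or boards exist -- discovery matches readers wherever they are. */
    DDS_DomainParticipant *participant =
        DDS_DomainParticipantFactory_create_participant(
            DDS_TheParticipantFactory, 0 /* domain id */,
            &DDS_PARTICIPANT_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    if (participant == NULL) {
        return NULL;
    }

    /* Register the generated type and create the topic. */
    if (FlightStateTypeSupport_register_type(
            participant, FlightStateTypeSupport_get_type_name()) != DDS_RETCODE_OK) {
        return NULL;
    }
    DDS_Topic *topic = DDS_DomainParticipant_create_topic(
        participant, "FlightState", FlightStateTypeSupport_get_type_name(),
        &DDS_TOPIC_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    if (topic == NULL) {
        return NULL;
    }

    DDS_Publisher *publisher = DDS_DomainParticipant_create_publisher(
        participant, &DDS_PUBLISHER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
    if (publisher == NULL) {
        return NULL;
    }

    /* Whether the matching readers are in another partition on the same board
     * or on a different node entirely is invisible at this level. */
    return DDS_Publisher_create_datawriter(
        publisher, topic, &DDS_DATAWRITER_QOS_DEFAULT, NULL, DDS_STATUS_MASK_NONE);
}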

Fewer boards let you benefit from lower latencies when distributing data; more boards let you benefit from redundancy and immediate fail-over (OWNERSHIP_QOS).
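As a sketch of what that fail-over means in QoS terms (writer side, Connext C API; the strength values are arbitrary and the topic/publisher are assumed to exist already):

#include "ndds/ndds_c.h"

/* EXCLUSIVE ownership: for each instance, readers accept data only from the
 * strongest live writer.  Run a copy of this writer on a second board with a
 * lower strength and you get immediate fail-over if the primary goes away. */
DDS_DataWriter *create_owned_writer(DDS_Publisher *publisher,
                                    DDS_Topic *topic,
                                    DDS_Long strength)
{
    struct DDS_DataWriterQos writer_qos = DDS_DataWriterQos_INITIALIZER;
    DDS_DataWriter *writer = NULL;

    if (DDS_Publisher_get_default_datawriter_qos(publisher, &writer_qos)
            != DDS_RETCODE_OK) {
        return NULL;
    }

    writer_qos.ownership.kind = DDS_EXCLUSIVE_OWNERSHIP_QOS;
    writer_qos.ownership_strength.value = strength;  /* e.g. 100 primary, 50 standby */

    writer = DDS_Publisher_create_datawriter(
        publisher, topic, &writer_qos, NULL, DDS_STATUS_MASK_NONE);

    DDS_DataWriterQos_finalize(&writer_qos);
    return writer;
}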

How many can you have? The real question is: "given these functional and non-functional requirements, how many boards are suitable for this design?"  Therefore it depends on your functional and non-functional requirements.

rip

[1] Someplace I have an actual video of the demo, but apparently it is not on this laptop.  I'll upload that when I get the chance and update this page to point at it instead.

Joined: 01/18/2014
Posts: 2

Thanks rip for your answer. I would like to ask another question: if a hardware module can contain more than one microprocessor, what's the difference between an ARINC 653 system whose hardware module contains numerous microprocessors and a system in which microprocessors are distributed all over the system? I mean, what advantage does this concept of a hardware module bring?

rip
Joined: 04/06/2012
Posts: 324

Cost. 

Cost, Cost, Cost.

The more nodes you have, the more it will cost.  The more "avionics" (i.e., DO-178C certified applications, partitions, and nodes) you have, the more it will cost.

I said:  "Fewer boards let you benefit from lower latencies when distributing data; more boards let you benefit from redundancy and immediate fail-over (OWNERSHIP_QOS)".

Fewer boards will simply cost less: less hardware and fewer OS runtimes (RTI Connext DDS doesn't have runtimes as of 5.0.0, so I'm not counting those).

But again, it comes down to what the functional and non-functional requirements are.

rip