As high-performance embedded computing (HPEC) becomes a mainstay of military computing, the industry is increasingly interested in adapting Linux cluster architectures from the IT world. These architectures have obvious appeal for their low cost and high performance, but they are not generally designed for deterministic performance. Thus it is not clear that these architectures are suitable for real-time defense applications.


Members of the Intel® Internet of Things Solutions Alliance (Intel® IoT Solutions Alliance) have been hard at work answering this question. For example, Associate member GE Intelligent Platforms recently published an excellent white paper that investigates interrupt response times and Message Passing Interface (MPI) latencies using OpenMPI on three different versions of Linux*. The Linux builds were provided by Concurrent Real-Time and Red Hat (both General members) and are all based on Red Hat* Enterprise Linux* 6.3: the standard Red Hat Enterprise Linux kernel, Red Hat Enterprise MRG Realtime, and Concurrent RedHawk Linux.



All three kernels were tested on the GE-IP DSP280, a dual-socket, eight-core platform based on the Intel® Core™ i7-2715QE processor. To characterize these platforms, GE made a series of measurements with and without a background CPU load. The tests found that properly tuned systems could achieve interrupt response latencies of less than 10 µs and MPI latencies of around 1 µs, levels sufficient for many embedded applications.
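

The white paper does not reproduce its benchmark code, but the measurement concept is familiar. The sketch below is a rough illustration only (not GE's methodology): it times how late a periodic thread wakes up relative to its programmed deadline on a stock Linux kernel, the same style of scheduling-latency figure that tools such as cyclictest report. The period, iteration count, and output format are arbitrary choices.

```c
/*
 * Illustrative sketch only -- not GE's benchmark. Measures how late a
 * periodic thread wakes up relative to its programmed deadline, a common
 * software proxy for scheduling/wakeup latency on Linux.
 * Build: gcc -O2 -o wakeup wakeup.c
 */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000L   /* 1 ms period (arbitrary) */
#define LOOPS     10000      /* number of samples (arbitrary) */

static long ns_diff(struct timespec a, struct timespec b)
{
    return (a.tv_sec - b.tv_sec) * 1000000000L + (a.tv_nsec - b.tv_nsec);
}

int main(void)
{
    struct timespec next, now;
    long worst = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < LOOPS; i++) {
        /* advance the absolute deadline by one period */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* sleep until the deadline, then check how late the wakeup was */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        long late = ns_diff(now, next);
        if (late > worst)
            worst = late;
    }
    printf("worst-case wakeup latency: %ld us\n", worst / 1000);
    return 0;
}
```

Running a loop like this with and without a background CPU load gives a quick feel for how much determinism a particular kernel and tuning can deliver.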


Even standard Red Hat Enterprise Linux met soft real-time criteria with some tuning, such as disabling Intel® Hyper-Threading Technology and isolating the real-time application on a specific core. However, interrupt latencies could still exceed 120 µs, making this standard distribution unsuitable for hard real-time applications.
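

To make that tuning concrete, here is a minimal sketch of how an application can pin itself to a single core and request a real-time scheduling class using the standard Linux sched_setaffinity() and sched_setscheduler() calls. The core number and priority are illustrative values, not settings taken from the white paper.

```c
/*
 * Illustrative sketch: pin the critical process to one core and give it
 * a real-time (SCHED_FIFO) priority. Core 3 and priority 80 are
 * arbitrary example values.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    struct sched_param sp = { .sched_priority = 80 };

    /* restrict this process to core 3 */
    CPU_ZERO(&set);
    CPU_SET(3, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* request the SCHED_FIFO real-time class (needs root or CAP_SYS_NICE) */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* ... time-critical work runs here ... */
    return 0;
}
```

The same effect is often achieved from the shell with taskset and chrt (the "CHRT+CORE3" label in Figure 1 presumably refers to exactly this combination of a chrt real-time priority and a dedicated core), while the isolcpus= kernel boot parameter keeps other tasks off the chosen core. RedHawk's core shielding, discussed next, takes the idea further by keeping interrupts as well as processes away from the shielded core.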


In contrast, GE achieved highly deterministic behavior with the other flavors of Linux. For example, it got consistent sub-10 µs interrupt response times on Concurrent RedHawk Linux by applying real-time priority to the critical application, and by “shielding” the core running that application (see Figure 1). Red Hat* Enterprise MRG Realtime was not far behind, with optimized response times below 15 µs. These fast, reliable response times are an excellent fit for hard real-time applications.



Figure 1. Concurrent RedHawk Linux* can provide real-time determinism. Note that all results for the “CHRT+CORE3” optimization (light blue) are under 10 µs.


Interestingly, MPI latencies were not a major problem on any of the kernels. Simply assigning real-time priority to the critical application produced low latencies and low jitter on all three versions of Linux.
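

For context, latency figures like these are conventionally obtained with a two-rank ping-pong test. The sketch below shows what such a test looks like in plain OpenMPI; it is an illustration, not the code behind GE's measurements, and the message size and iteration count are arbitrary.

```c
/*
 * Illustrative MPI ping-pong sketch -- not GE's benchmark code.
 * Rank 0 and rank 1 bounce a small message back and forth; the average
 * one-way latency is half the measured round-trip time.
 * Build: mpicc -O2 pingpong.c -o pingpong   Run: mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000   /* arbitrary sample count */

int main(int argc, char **argv)
{
    int rank;
    char buf[8] = { 0 };   /* small message, so timing is latency-dominated */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average one-way latency: %.2f us\n",
               (t1 - t0) / ITERS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```

Note how few calls are involved; this is the conceptual simplicity of OpenMPI's core API that comes up again in the comparison below.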


The fact that OpenMPI handles hard real-time requirements so readily raises a second question: is there any reason to consider other options? As it happens, GE has another great white paper on this very topic. The paper compares OpenMPI to GE's AXISFlow in an impressively detailed and even-handed analysis, and summarizes the pros and cons of both options on five key criteria:


Programming – GE argues that OpenMPI is overly complex, with more than 300 API calls, some of which may have hidden side effects. In comparison, AXISFlow has only 42 API calls for a user to learn. On the other hand, OpenMPI's main APIs are conceptually simple and hide many details from the developer. This makes it easier to get started with than AXISFlow, which exposes more of the underlying communication machinery to the developer.


Hardware Mapping – OpenMPI was designed for homogeneous HPC systems with one MPI task per processor. AXISFlow was designed for heterogeneous embedded systems that run multiple threads of execution on each core and use multiple interconnects between CPUs. Thus, GE sees AXISFlow as a better fit for HPEC systems.


Performance – According to GE's tests, OpenMPI offers lower latency for smaller message sizes and when there is no contention from other CPU tasks. In contrast, AXISFlow performs better in multi-tasking applications that need determinism (see Figure 3).



Figure 3. Relative performance of the MPI alternatives depends greatly on the conditions at hand.


Distributions – As the name suggests, OpenMPI is an open-source implementation of the open MPI standard, and is thus available for free or with paid support from a variety of vendors. In contrast, AXISFlow is proprietary.


This last point deserves a bit of exploration. OpenMPI is popular not only in the IT world but increasingly in embedded applications. GE itself just added OpenMPI to its AXIS 6.0 design suite, which already supported AXISFlow. As another example, General member Mercury Systems supports the API in its Performance Enhanced OpenMPI/OFED.


On the other hand, there are plenty of popular proprietary solutions, like Curtiss-Wright Continuum IPC and the Gedae Software Development Platform. (Curtiss-Wright and Gedae are General members of the Alliance.) A full analysis of HPEC architectures is not complete without careful consideration of these alternatives – something that is well beyond the scope of this blog.


Having said that, I think the overall picture is clear: Linux clusters are unquestionably suitable for military HPEC applications. This is an important development, because it allows embedded developers to leverage the cost and performance advantages of these architectures. I have no doubt that we will see IT-like Linux clusters continue to make inroads into military applications as the HPEC market matures.




GE Intelligent Platforms is an Associate member of the Intel® IoT Solutions Alliance. Gedae, Inc., Concurrent Real-Time, Curtiss-Wright Controls Defense Solutions, Mercury Systems, Inc., and Red Hat are General members of the Alliance.


Kenton Williston

Roving Reporter (Intel Contractor), Intel® IoT Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston