We live in a “seeing is believing” world. And embedded systems users likely physically see only the hardware they use as they: keep an F-22 flying high (Figure 1); capture telemetry information; gather Intelligence, Surveillance, and Reconnaissance (ISR) data; pilot an Unmanned Aerial Vehicle (UAV) from a ground control station; or even send a deployed soldier classified information on a handheld device. But the fact remains that this very-visible hardware is 100 percent dormant without the “invisible” software that brings it to life.
[Figure 1 | It's easy to notice the highly visible hardware in life- and safety-critical systems such as those that comprise an F-22, but the software underneath such rugged systems must also be up to the safety-critical challenge. U.S. Air Force photo by Master Sgt. Kevin J. Gruenwald]
The first two installments of this x86 commercial-to-rugged series (“Adapting x86 to harsh environments – Not as straightforward as you think” and “Adapting commercial x86 designs to harsh environments – Passing the test”) delved into the hardware-centric aspects of converting a commercial x86 design into a rugged x86 design. However, this third and final part of the series switches gears radically, with a focus on the software aspects of what is needed for a rugged x86 embedded system.
Rather than focusing on heat sinks, fans, cooling, or soldering methods, the nuts and bolts of the following software-oriented discussion revolve around one concept: To be … or not to be … safety- and security-critical. (Isn’t that really what ruggedized designs are for, when you think about it?)
So how is safety-criticality achieved with software? “For safety-critical systems, the software platforms (i.e. operating systems) have to be fault tolerant, and also be able to mitigate and deal with any faults that occur at the application level,” says Robert Day, VP of Marketing at LynuxWorks, an Affiliate member of the Intel® Embedded Alliance.
DO-178B, the FAA’s primary software safety certification standard for aircraft and now required on nearly all military and commercial avionics projects worldwide, specifies the software practices necessary to keep aircraft safely airborne. Both the applications and the operating system must be certified, and must contain a mechanism to protect against system faults that could result from a fault condition or malfunctioning application within the system, Day explains.
How can these system faults be counteracted? “Many avionics systems that have multiple applications running on a single piece of hardware add a partitioning scheme on top of the normal task scheduling algorithms. These partitioning schemes protect both memory and scheduling for different processes or applications running,” he details.
How it works: Each application executes within a separate partition, receives a discrete time slice, and gets its own MMU (Memory Management Unit)-protected memory space. As a result, each application keeps running no matter what is transpiring in any of the other partitions, and one application cannot interrupt another. Additionally, health monitor applications can handle each partition’s faults in isolation, so that fixes do not interfere with other partitions.
Of note: The nascent DO-178C standard will accept the use of formal methods in conjunction with object-oriented development for avionics, as a way of ensuring that reused, independently developed software components remain viable in such safety-critical systems.
Next up: Security
Different from the DO-178B safety certification that many avionics projects are subject to, many embedded systems requiring security might undergo Common Criteria certification at Evaluation Assurance Levels (EALs) 1 through 7, where the highest level, EAL 7, requires formal methods verification. “[Common Criteria] examination is every bit as rigorous as the safety certification, and the operating system has a similar partitioning scheme to help mitigate both faults and attacks,” explains Day.
The primary difference between the safety partitioning scheme described above and security partitioning schemes, he says, is that security systems are partitioned to separate the many differing security levels within the system, so that applications at lower classification levels cannot access higher-security-level data and applications, or vice versa (Figure 2).
[Figure 2 | In secure systems, applications at lower classification levels cannot access higher-security-level data and applications, or vice versa.]
Defining the security policies for the OS is therefore a must: they specify the authorized access and flow of information. “This architecture is known as Multiple Independent Levels of Security (MILS) and the type of operating system is known as a separation kernel, as it enforces true separation between the different security levels,” Day says. “A key addition to the separation kernel is virtualization or hypervisor technology,” he adds.
And hypervisor/virtualization technology just about does it all: Each partition can run disparate “guest” operating systems, which enables the OSs to operate at differing security levels. “When combined with Intel® multicore processors with Intel® Virtualization Technology (Intel® VT) support, this allows traditionally unsecure OSs such as Windows to run in the same system as highly secure OSs and applications, and allows commercial OSs and applications to run in highly secure systems with no compromise on functionality or performance,” explains Day. Applications developed and deployed before such modern stringent security standards existed can also run well in a virtualization scheme.
To facilitate virtualization, processors like the Intel® Core™ i7, Intel® Core™ 2 Duo, and Intel® Xeon® include Intel® Virtualization Technology for IA-32, Intel® 64 and Intel® Architecture (Intel® VT-x).
All safe and secure: When is enough really enough?
When converting a commercial x86-based design to a rugged (and therefore safety- or security-critical) system, hardware is important, but up-to-par software – the invisible enabler – is just as vital to ensure safety from life-threatening events and from malicious attacks. DO-178B certification, Common Criteria evaluation, and virtualization schemes might seem pricey (at least, compared to not attempting them), but isn’t it worth it at any price?
Written by Sharon Schnakenburg-Hess, an assistant managing editor at OpenSystems Media®, by special arrangement with the Intel® Embedded Alliance.