
We live in a “seeing is believing” world. And embedded systems users likely physically see only the hardware they use as they: keep an F-22 flying high (Figure 1); capture telemetry information; gather Intelligence, Surveillance, and Reconnaissance (ISR) data; pilot an Unmanned Aerial Vehicle (UAV) from a ground control station; or even send a deployed soldier classified information on a handheld device. But the fact remains that this very-visible hardware is 100 percent dormant without the “invisible” software that brings it to life.

 


[Figure 1 | It's easy to notice the highly visible hardware in life- and safety-critical systems such as those that comprise an F-22, but the rugged software underneath must also be up to the safety-critical challenge. U.S. Air Force photo by Master Sgt. Kevin J. Gruenwald]

 

The first two installments of this x86 commercial-to-rugged series (“Adapting x86 to harsh environments – Not as straightforward as you think” and “Adapting commercial x86 designs to harsh environments – Passing the test”) delved into the hardware-centric aspects of converting a commercial x86 design into a rugged x86 design. However, this third and final part of the series switches gears radically, with a focus on the software aspects of what is needed for a rugged x86 embedded system.

 

Rather than focusing on heat sinks, fans, cooling, or soldering methods, the nuts and bolts of the following software-oriented discussion revolve around one concept: To be … or not to be … safety- and security-critical. (Isn’t that really what ruggedized designs are for, when you think about it?)

 

Safety first

So how is safety-criticality achieved with software? “For safety-critical systems, the software platforms (i.e. operating systems) have to be fault tolerant, and also be able to mitigate and deal with any faults that occur at the application level,” says Robert Day, VP of Marketing at LynuxWorks, an Affiliate member of the Intel® Embedded Alliance.

 

DO-178B, the FAA’s primary software safety certification standard for aircraft and now required on nearly all military and commercial avionics projects worldwide[1], specifies the processes that airborne software must follow to keep aircraft safely in the air. Both the applications and the operating system must be certified, and must contain mechanisms to protect against system-level faults caused by a fault condition or malfunctioning application within the system, Day explains.

 

How can these system faults be counteracted? “Many avionics systems that have multiple applications running on a single piece of hardware add a partitioning scheme on top of the normal task scheduling algorithms. These partitioning schemes protect both memory and scheduling for different processes or applications running,” he details.

 

How it works: Because each application executes within a separate partition, with a discrete time slice and its own MMU (Memory Management Unit)-protected memory space, each application keeps running no matter what is transpiring in any of the other partitions. So one application cannot interrupt another. Additionally, a health monitor can handle each application's faults in isolation, so that recovery does not interfere with other partitions.
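In avionics this time-and-space partitioning is commonly standardized along the lines of ARINC 653. The C sketch below is a hypothetical, heavily simplified partition table and major-frame loop; the partition names, addresses, and health-monitor behavior are invented for illustration and do not represent LynuxWorks' or any other vendor's API. It only shows how fixed time windows and fixed memory windows keep one application's fault from touching another:

```c
/*
 * Hypothetical sketch of static time-and-space partitioning (in the spirit
 * of ARINC 653), not any vendor's actual API. Each partition owns a fixed
 * memory window (enforced by the MMU in a real RTOS) and a fixed time slice
 * inside a repeating major frame, so a fault in one partition cannot starve
 * or corrupt another.
 */
#include <stdio.h>

typedef struct {
    const char  *name;        /* application hosted in this partition       */
    unsigned int mem_base;    /* MMU-protected memory window (illustrative) */
    unsigned int mem_size;
    unsigned int window_ms;   /* guaranteed time slice per major frame      */
    int          healthy;     /* cleared by the health monitor on a fault   */
} partition_t;

static partition_t schedule[] = {
    { "flight_control", 0x10000000, 0x00100000, 20, 1 },
    { "navigation",     0x10100000, 0x00100000, 10, 1 },
    { "maintenance",    0x10200000, 0x00080000,  5, 0 },  /* faulted */
};

#define NUM_PARTITIONS (sizeof(schedule) / sizeof(schedule[0]))

int main(void)
{
    /* Two major frames: every partition gets its window, faulted or not. */
    for (int frame = 0; frame < 2; frame++) {
        for (unsigned i = 0; i < NUM_PARTITIONS; i++) {
            partition_t *p = &schedule[i];
            if (!p->healthy) {
                /* Health monitor restarts only this partition; the others
                 * keep their windows and never notice. */
                printf("frame %d: restarting faulted partition %s\n",
                       frame, p->name);
                p->healthy = 1;
                continue;
            }
            printf("frame %d: %-14s runs for %2u ms in [0x%08X..0x%08X)\n",
                   frame, p->name, p->window_ms,
                   p->mem_base, p->mem_base + p->mem_size);
        }
    }
    return 0;
}
```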

 

Of note: The nascent DO-178C standard will allow formal methods to be used in conjunction with object-oriented development for avionics, as a way of ensuring that reused, independently developed software components remain viable in such safety-critical systems[2].

 

Next up: Security

In contrast to the DO-178B safety certification that many avionics projects are subject to, embedded systems requiring security might undergo Common Criteria certification at Evaluation Assurance Levels (EALs) 1 through 7, with EAL 7 requiring formal methods verification. “[Common Criteria] examination is every bit as rigorous as the safety certification, and the operating system has a similar partitioning scheme to help mitigate both faults and attacks,” explains Day.

 

The primary difference between the safety partitioning scheme just described and security partitioning schemes, he says, is that secure systems are partitioned to separate the many differing security levels within the system, so that applications at lower classification levels cannot access higher-security-level data and applications, or vice versa (Figure 2).

 


[Figure 2 | In secure systems, applications at lower classification levels cannot access higher-security-level data and applications, or vice versa.]

 

Thus, defining the security policies for the OS is a must, to indicate the authorized access and flow of information. “This architecture is known as Multiple Independent Levels of Security (MILS) and the type of operating system is known as a separation kernel, as it enforces true separation between the different security levels,” Day says. “A key addition to the separation kernel is virtualization or hypervisor technology,” he adds.
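A separation kernel's security policy amounts to an explicit list of authorized information flows between partitions at different classification levels, with everything else denied. The sketch below is a hypothetical, minimal policy check in C; the labels, table, and function names are invented for illustration and do not represent any particular MILS product:

```c
/*
 * Hypothetical MILS-style information-flow check: nothing flows between
 * partitions unless the policy explicitly authorizes it. Labels, tables,
 * and names are invented for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { UNCLASSIFIED, SECRET, TOP_SECRET } level_t;

typedef struct {
    level_t from;
    level_t to;
} flow_rule_t;

/* The only cross-level flows the system integrator has authorized, e.g. an
 * audited one-way guard that lets UNCLASSIFIED data be read at SECRET.     */
static const flow_rule_t policy[] = {
    { UNCLASSIFIED, SECRET },
};

static bool flow_allowed(level_t from, level_t to)
{
    if (from == to)                       /* same level: always allowed     */
        return true;
    for (unsigned i = 0; i < sizeof(policy) / sizeof(policy[0]); i++)
        if (policy[i].from == from && policy[i].to == to)
            return true;
    return false;                         /* default deny, either direction */
}

int main(void)
{
    printf("UNCLASSIFIED -> SECRET : %s\n",
           flow_allowed(UNCLASSIFIED, SECRET) ? "allowed" : "denied");
    printf("SECRET -> UNCLASSIFIED : %s\n",
           flow_allowed(SECRET, UNCLASSIFIED) ? "allowed" : "denied");
    printf("SECRET -> TOP_SECRET   : %s\n",
           flow_allowed(SECRET, TOP_SECRET) ? "allowed" : "denied");
    return 0;
}
```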

 

And hypervisor/virtualization technology just about does it all: Each partition can run disparate “guest” operating systems, which enables the OSs to operate at differing security levels. “When combined with Intel® multicore processors with Intel® Virtualization Technology (Intel® VT) support, this allows traditionally unsecure OSs such as Windows to run in the same system as highly secure OSs and applications, and allows commercial OSs and applications to run in highly secure systems with no compromise on functionality or performance,” explains Day. Applications developed and deployed before such modern stringent security standards existed can also run well in a virtualization scheme.

 

To facilitate virtualization, processors like the Intel® Core™ i7, Intel® Core™ 2 Duo, and Intel® Xeon® include Intel® Virtualization Technology (Intel® VT-x)[3] for the IA-32 and Intel® 64 architectures.
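Whether a given part actually reports VT-x can be checked at run time: CPUID leaf 1 returns the VMX feature flag in ECX bit 5. Below is a minimal user-space sketch for GCC or Clang on x86; note that firmware can still disable VMX even when the flag is set, which this check cannot detect:

```c
/* Minimal check for Intel VT-x (VMX) support via CPUID leaf 1, ECX bit 5.
 * Firmware may still disable VMX through the IA32_FEATURE_CONTROL MSR,
 * which this user-space check cannot see. Build with GCC or Clang on x86. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not available");
        return 1;
    }
    puts((ecx & (1u << 5)) ? "VMX (Intel VT-x) supported"
                           : "VMX (Intel VT-x) not reported");
    return 0;
}
```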

 

All safe and secure: When is enough really enough?

When converting a commercial x86-based design to a rugged (and therefore safety- or security-critical) system, hardware is important, but up-to-par software – the invisible enabler – is just as vital to ensure safety from life-threatening events and from malicious attacks. DO-178B certification and Common Criteria and virtualization schemes might seem pricey (at least, compared to not attempting them), but isn’t it worth it at any price?

 

Written by Sharon Schnakenburg-Hess, an assistant managing editor at OpenSystems Media®, by special arrangement with the Intel® Embedded Alliance.

 

References:

 

[1] “DO-178B and DO-254: A unified aerospace-field theory?” by Vance Hilderman, HighRely Incorporated, Military Embedded Systems magazine, Jan/Feb 2009, www.mil-embedded.com/articles/id/?3741

 

[2] “DO-178C meets safety-critical Java,” by Dr. Kelvin Nilsen, Atego Systems, Inc., VME and Critical Systems magazine, Summer 2010, www.vmecritical.com/articles/id/?4823

 

[3] “Intel® Virtualization Technology for Embedded Applications,” by Amit Aneja, Intel, Rev 1.0, July 2009, http://edc.intel.com/Training/Courses.aspx?ttag=ttipt&ptag=&ftag=&sort=2

The practice of virtualization has been around for more than a decade, but who's to say which of the three frontrunner methods - binary translation (runtime handling of system behavior/control-sensitive instructions); OS-assisted (also called para-virtualization); or hardware-assisted (also known as full virtualization) - is best? Or is there a best? While some continue to rely on binary translation as the de facto method, others might argue for para-virtualization or hardware-assisted virtualization, the latter often built on Intel's VT-x technology. But which would serve the industry best?

Virtualization at a 50,000-foot glance
Simply stated, gone are the days of unlimited rack space and an extra costly computer for every function. Embedded markets like defense, communications, and industrial are trying to reduce size, weight, and power consumption without sacrificing compute power. Legacy applications no longer have to run on different physical machines than their more updated counterparts. Software migrations are even possible without the hassle of taking down the entire system or application. And forget about redundant hardware - it's no longer necessary because system uptime can be increased through software failover instead.

The magic enabling panacea: virtualization, which enables several disparate OSs (and therefore their dependent applications) to execute within one physical machine through the use of a Virtual Machine Monitor (VMM). A VMM is a new software layer, sometimes referred to as a "hypervisor," which manages these disparate OSs and the applications running on them, commonly known as Virtual Machines (VMs). Virtualization is carried out through a "context switch" that makes each application on its respective OS think it has sole control over all the hardware[1]. Applicable to both single-core and multi-core scenarios, this illusion of sole control is highly beneficial to engineers for the aforementioned reasons. But which of the three methods is the most effective?

The virtualization triad - which one wins?
Like all things in technology, there's more than one way to reach the goal, but is there a perfect route for perfectionists? One can only say for certain ... well, it depends.

Binary translation

How it was developed
The binary translation method of virtualization was developed for good reason: OSs crafted for Intel® Architecture processors are designed to execute directly on native hardware and therefore assume they have sole control over computing resources. Additionally, the x86 architecture comprises various privilege levels, which presents no issue natively because OS code is designed to execute at the top privilege level. However, that privilege expectation became a challenge when the x86 architecture was virtualized and the guest OS was relegated to a privilege level lower than the VMM (because the VMM manages shared resource allocation). Additionally, some instruction semantics differ when an OS is run natively versus in a virtualized scenario[2].

Pros and cons
With the VMM decoupling the guest OS from the hardware platform, neither OS assist nor hardware assist is needed. The primary drawback, though: Performance is somewhat hindered because runtime modification of the OS code is necessary[2]. Another drawback of binary translation is its complexity, says Chris Main, CTO at TenAsys Corporation, an Affiliate member of the Intel® Embedded and Communications Alliance (Intel® ECA). "Binary translation describes the technique where 'problem code' in the guest software ... is replaced by on-the-fly 'safe code.' ... It requires detailed knowledge of the guest software and thus is typically complex to implement."
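To make the "problem code replaced by on-the-fly safe code" idea concrete, here is a deliberately toy sketch: it scans a buffer of invented bytecode and patches a hypothetical privileged opcode into a call into the VMM before the block is allowed to run. A real translator works on x86 machine code, caches translated blocks, and fixes up branches, which is exactly where the complexity Main describes comes from:

```c
/*
 * Toy binary-translation sketch over an invented bytecode, for illustration
 * only; a real translator rewrites x86 machine code and caches the result.
 */
#include <stdint.h>
#include <stdio.h>

enum {
    OP_NOP       = 0x00,   /* harmless guest instruction                 */
    OP_PRIV_READ = 0x0F,   /* "privileged" instruction: unsafe in a VM   */
    OP_VMM_CALL  = 0xFE,   /* safe replacement: trap into the VMM        */
    OP_END       = 0xFF,
};

/* Translate one basic block in place before it is allowed to execute. */
static void translate_block(uint8_t *code, size_t len)
{
    for (size_t i = 0; i < len && code[i] != OP_END; i++) {
        if (code[i] == OP_PRIV_READ) {
            printf("patching privileged op at offset %zu\n", i);
            code[i] = OP_VMM_CALL;   /* on-the-fly "safe code" */
        }
    }
}

/* Interpret the (now safe) block; the VMM emulates privileged behavior. */
static void run_block(const uint8_t *code)
{
    for (size_t i = 0; code[i] != OP_END; i++) {
        if (code[i] == OP_VMM_CALL)
            printf("VMM emulates the privileged read for the guest\n");
    }
}

int main(void)
{
    uint8_t guest_block[] = { OP_NOP, OP_PRIV_READ, OP_NOP, OP_END };

    translate_block(guest_block, sizeof(guest_block));
    run_block(guest_block);
    return 0;
}
```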

OS-assisted or para-virtualization

Where it fits in
Para-virtualization schemes rely on collaboration between the hypervisor and a modified guest OS, where the OS's privileged-access code is altered to request hypervisor action instead of executing privileged instructions, explains Mark Hermeling, senior product manager at Wind River, an Intel® ECA Associate member. This technique is most suitable when the real hardware environment and guest environment are alike or quite similar. The guest OS is optimized for performance and to ensure it does not perform actions inappropriate for a guest.
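The mechanics can be illustrated with a hypothetical hypercall interface; all names below are invented, and on real hardware the hypercall would be a trapping instruction or a vendor-defined call page rather than a plain C function. The point is simply that the ported guest OS asks the hypervisor to act instead of executing the privileged instruction itself:

```c
/*
 * Hypothetical para-virtualization sketch: the guest OS is modified so that
 * privileged operations become explicit hypercalls. Names are invented.
 */
#include <stdio.h>

typedef enum { HC_MASK_IRQS, HC_SET_PAGE_TABLE, HC_YIELD } hypercall_t;

/* --- hypervisor side ---------------------------------------------------- */
static long hypervisor_dispatch(hypercall_t nr, unsigned long arg)
{
    switch (nr) {
    case HC_MASK_IRQS:
        printf("hypervisor: mask virtual IRQs for this guest\n");
        return 0;
    case HC_SET_PAGE_TABLE:
        printf("hypervisor: validate and install page table %#lx\n", arg);
        return 0;
    case HC_YIELD:
        printf("hypervisor: schedule another virtual machine\n");
        return 0;
    }
    return -1;
}

/* --- modified (para-virtualized) guest OS code --------------------------- */
static void guest_disable_interrupts(void)
{
    /* An unmodified OS would execute CLI here; the ported guest asks the
     * hypervisor instead. */
    hypervisor_dispatch(HC_MASK_IRQS, 0);
}

static void guest_switch_address_space(unsigned long new_pgtable)
{
    /* An unmodified OS would write CR3 directly. */
    hypervisor_dispatch(HC_SET_PAGE_TABLE, new_pgtable);
}

int main(void)
{
    guest_disable_interrupts();
    guest_switch_address_space(0x100000);
    hypervisor_dispatch(HC_YIELD, 0);
    return 0;
}
```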

Plusses and minuses
Para-virtualization typically renders the highest performance amongst the virtualization methods discussed herein[2]. "Para-virtualization can result in good system performance but is generally applicable to situations where the guest is well-known or fixed for a given product," says Main.

"[Para-virtualization] can be done on top of any processor, and the real-time performance is the best of all three methods. Para-virtualization is generally regarded as the best option for real-time behavior. Para-virtualization can be mixed with full virtualization, for example, to execute Microsoft Windows using full virtualization in one virtual [machine] and VxWorks or Linux para-virtualized in another on top of the same processor (both single-core and multi-core)," details Hermerling.

Hardware-assisted or full virtualization
In contrast to binary translation and para-virtualization, the hardware-centric full virtualization method utilizes an unmodified OS that runs on a virtual machine - without the OS knowing it's running in a virtualized environment sharing the physical system with other OSs. Of course, the OS will try to execute a privileged instruction, but in this case, the processor sends the hypervisor an exception. The next step: The hypervisor then performs the requested behavior, Hermeling reports.

Consequently, processors such as the Intel® Core™ 2 Duo, Intel® Xeon®, and the latest Intel® Core™ i7 feature Intel® Virtualization Technology (Intel® VT-x)[3] for the IA-32 and Intel® 64 architectures.

With VT-x, the processor provides two new operation modes: VMs run in "VMX non-root mode" while the VMM executes in "VMX root mode." Here's how it works: Processor behavior in VMX non-root operation is modified and restricted to facilitate virtualization. In contrast to ordinary operation, specific events and instructions cause a VM exit that hands control to the VMM (root mode). This is what lets the VMM retain control of processor resources.

Meanwhile, processor behavior in VMX root operation is very similar to native operation. The primary differences are a newly available set of VMX instructions and a restriction on the values that can be loaded into certain control registers. VMX non-root operation, by contrast, places restrictions even on software executing at Current Privilege Level (CPL) 0; as a result, guest software can run at the privilege level for which it was originally designed, which simplifies VMM development.
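As one concrete illustration from outside the article: Linux's KVM interface drives VT-x underneath, and a user-space program can watch the root/non-root hand-off directly. Each KVM_RUN ioctl enters the guest in VMX non-root mode, and every trap (here an OUT and then a HLT in a five-byte, invented real-mode guest) returns control to root mode for emulation. This is a Linux-specific sketch with error handling omitted, not anything the vendors quoted above ship:

```c
/*
 * Hedged, Linux-specific illustration: a minimal user of the KVM API,
 * which uses Intel VT-x underneath. The five-byte guest program and the
 * port number are invented; error handling is omitted for brevity.
 */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    /* 16-bit real-mode guest: mov al,'V' ; out 0xF8, al ; hlt */
    const uint8_t guest_code[] = { 0xB0, 'V', 0xE6, 0xF8, 0xF4 };

    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0UL);

    /* One page of guest physical memory at 0x1000, holding the code. */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, guest_code, sizeof(guest_code));
    struct kvm_userspace_memory_region region = {
        .slot = 0, .guest_phys_addr = 0x1000,
        .memory_size = 0x1000, .userspace_addr = (uintptr_t)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0UL);
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0UL);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    struct kvm_sregs sregs;
    ioctl(vcpu, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;
    ioctl(vcpu, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
    ioctl(vcpu, KVM_SET_REGS, &regs);

    for (;;) {
        ioctl(vcpu, KVM_RUN, 0UL);                 /* enter non-root mode */
        if (run->exit_reason == KVM_EXIT_IO) {     /* OUT trapped to host */
            putchar(*((char *)run + run->io.data_offset));
            putchar('\n');
        } else if (run->exit_reason == KVM_EXIT_HLT) {
            break;                                 /* guest halted */
        }
    }
    return 0;
}
```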

Why or why not use it?
With the exploding popularity of the world's most pervasive OS - Windows - it's important to note that the source code for Windows cannot be modified for a para-virtualization scheme. However, Windows embedded flavors including Windows 7 Professional for Embedded Systems, Windows 7 Ultimate for Embedded Systems, Windows Embedded POSReady, Windows XP Embedded, and others are gaining more acceptance among the embedded community, especially in market segments such as medical, industrial, gaming, and retail, to name a few.

"The advantage of the hardware virtualization technique is that it requires no knowledge of the guest software other than the specific set of interaction with the [VMM]. This makes it more useful in a general-purpose solution to support many different guest runtime environments," states Main.

Hermeling has another point of view. "Full virtualization is really attractive as you don't have to modify the operating system. However, there is a significant impact due to the required emulation work when the processor throws an exception. The impact is very much noticeable in handling devices. This is measurable in throughput, as well as latency and jitter in interrupt handling." This method of virtualization also requires hardware assist, something not always found in embedded processors. However, most current Intel® processors support Intel® VT-x, and many competing architectures offer their own hardware extensions for virtualization in embedded systems.

 


 

Now you decide
In a fragmented embedded industry where virtualization is relatively new territory, where multiple processors - all with different requirements and IP - could be used, and where virtualization still carries a real cost, standardization is a perplexing question. And then there's the matter of whether to standardize on one of the two software-based methods (binary translation and para-virtualization) or on the hardware-assisted method. Should software or hardware be emphasized in potential standardization - or both? Time will tell, but for now, it appears that the pros and cons of standardization are evenly weighted. Typically, having no answer means a "no" answer ... so I'm told. What are your thoughts?

Written by Sharon Schnakenburg, OpenSystems Media®, by special arrangement with Intel® ECA

References:

[1] "Virtualization for Embedded and Communications Infrastructure Applications," by Edwin Verplanke, Intel® Corporation, Oct. 31, 2006.

[2] "Intel's CPU extensions transform virtualization," by Stuart Fisher, LynuxWorks, Inc., www.mil-embedded.com/articles/id/?3733.

[3] "Intel® Virtualization Technology for Embedded Applications," by Amit Aneja, Intel,
Rev 1.0, July, 2009, http://edc.intel.com/Training/Courses.aspx?ttag=ttipt&ptag=&ftag=&sort=2