
Using Virtualization in Embedded Applications: Infographic


You’re undoubtedly familiar with server virtualization—using a software layer to manage workloads so a single box can act as multiple virtual servers. It’s a no-brainer.


But what about virtualization in embedded devices? What does it get you, and what are the new challenges and solutions you need to know about when developing embedded solutions that rely on virtualization? That’s a whole different story.


Fortunately, a new Intel infographic makes the story crystal clear. This valuable reference illustrates:

  • The top reasons to consider virtualization in your embedded project
  • Industries where virtualization yields the greatest returns for embedded developers
  • Specific Intel chipset solutions for different applications
  • Design differences between various types of embedded devices with virtualization


Download this free infographic today here, and find more information on virtualization to help with your next project here:



Many embedded systems now include substantial collections of computationally complex protocols to provide superior connectivity, and connected devices are a growing segment of embedded applications. That trend shows no sign of slowing down, and in many cases is accelerating as developers find new reasons to enable connectivity within their embedded products. Protocol complexity in an embedded system has been partially mitigated through the use of preconfigured protocol stacks. But even a pre-packaged protocol stack doesn’t eliminate software testing, although engineers can employ an already-proven validation suite. Provided that the protocol stack is adequately isolated, pre-configured software stacks can save time and reduce the learning curve for software developers.


Flawless connectivity is a fundamental expectation of remote or distributed embedded systems. Quality and reliability engineers charged with validating the system before it is shipped can adopt pre-configured validation test suites to be certain that the protocol stack functions as intended. Ironically, for some companies the use of a pre-configured and verified protocol stack can hurt engineers’ ability to diagnose operational problems when they inevitably occur. When this situation happens, the robustness and functionality of the vendor’s programming tools, such as a workbench, become critical to the engineering process.


Commercial programming workbench tools often offer regression testing and test suite tools for specific communications stacks.  Using these commercial tools can save time and effort while ensuring a reliable stack. Green Hills Software (1) and Wind River Systems (2) offer both a test workbench and pre-certified protocol stacks.  These test suites incorporate experience gained from hundreds of uses by customers in real world situations.  In effect, choosing a commercially available protocol stack carries with it thousands of man-hours applied to the problem of software verification and validation.
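The value of such a suite is easy to see even at toy scale. The sketch below uses a hypothetical length-prefixed frame format (not any vendor's stack) and round-trips a small regression corpus through an encoder/decoder pair; commercial suites automate exactly this kind of check across thousands of accumulated cases.

```python
# Toy regression test for a hypothetical length-prefixed frame codec.
# Illustrates the encode/decode round-trip checks that a commercial
# protocol-stack test suite automates at much larger scale.
import struct

def encode_frame(payload: bytes) -> bytes:
    # 2-byte big-endian length prefix, then payload, then XOR checksum.
    checksum = 0
    for b in payload:
        checksum ^= b
    return struct.pack(">H", len(payload)) + payload + bytes([checksum])

def decode_frame(frame: bytes) -> bytes:
    (length,) = struct.unpack(">H", frame[:2])
    payload, checksum = frame[2:2 + length], frame[2 + length]
    if length != len(payload):
        raise ValueError("truncated frame")
    actual = 0
    for b in payload:
        actual ^= b
    if actual != checksum:
        raise ValueError("checksum mismatch")
    return payload

# Regression corpus: every case must round-trip on every build.
CASES = [b"", b"\x00", b"hello", bytes(range(256))]

def run_regression() -> int:
    failures = 0
    for case in CASES:
        if decode_frame(encode_frame(case)) != case:
            failures += 1
    return failures
```

Each new case a customer's field failure adds to the corpus becomes a permanent guard against regression, which is where the accumulated man-hours pay off.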

Green Hills Software Networking Solutions Center provides a wide range of software and services focused on communications and communication protocols.  Connectivity of a particular embedded system is often dictated by the environment in use: CAN, Ethernet, or customized protocols. The number and diversity of communication options available to embedded systems can be staggering.




Which portion of the protocol spectrum you choose depends on the whole system within which the embedded product is used. Simple communications within a private, purpose-designed embedded network may only require an Ethernet layer with a single network layer. Green Hills Software network professional services include complete networking system design, integration, debug, optimization, customization, test and validation, training, and enhanced product support. Using these services allows development engineers to deliver higher-quality products with faster time-to-market and at lower development and deployment cost. While using a third-party communications package does have developmental advantages, there are tradeoffs in relying on a third party for debug support during both development and deployment.


Most Green Hills protocols are delivered in source code and are ready to run on INTEGRITY, INTEGRITY-178B, velOSity and u-velOSity. These protocols and protocol stacks are designed and tuned to use the capabilities of the Green Hills family of operating systems and the MULTI® IDE.




The MULTI IDE combines a debugger, editor, code browser, configuration manager, and other tools into a single integrated system. The MULTI IDE supports Green Hills Software's optimizing C, C++ and EC++ compilers. It also supports the industry open-standard Eclipse system, which permits reconfiguration of the MULTI environment and integration of Eclipse-based third-party tools into the development environment.


Wind River Systems’ Test Management product is a test execution optimization system for embedded devices. It permits run-time visibility into devices while under test. The system provides the operational feedback that test engineers need to achieve the highest possible quality in the least amount of time. The system is a scalable test and development environment intended to enhance automation, control, traceability, and feedback. It leverages unique, dynamic instrumentation technology to measure test coverage, map test-to-code traceability, profile performance, enable white-box access, and speed diagnostics of complex devices. All of these capabilities are available at run-time, providing a unique mechanism for test management. Like Green Hills Software, Wind River Systems supports the open-standard Eclipse facility in its development environment.


Communications protocols are best tested and validated by purpose-built tools and pre-packaged test and validation data and control suites. Developing communications systems often requires some way to simulate dataflows and stimulus-response behavior. An open-source network simulator may have a role in your next communications development. Ns-3 is a discrete-event network simulator for Internet systems, targeted primarily at research and educational use, but usable in other situations. Ns-3 is free software, licensed under the GNU GPLv2 license, and is publicly available for research, development, and use. It provides specialized support for simulation of TCP, routing, and multicast protocols over wired and wireless networks. Ns-3 has no concept of operating systems as part of its structure, but the package can be integrated into any Eclipse-capable development environment. As with any other Eclipse third-party tool, there may be additional programming required to make ns-3 work seamlessly within your specific environment.
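The core idea behind a discrete-event simulator like ns-3 is a time-ordered event queue: the clock jumps from one scheduled event to the next rather than ticking in real time. The minimal sketch below illustrates that principle in Python (ns-3 itself is C++; the class and method names here are illustrative, not ns-3's API).

```python
# Minimal discrete-event simulation loop, the core mechanism behind
# network simulators like ns-3, sketched here in Python.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0            # simulated time (ms); advances in jumps
        self._queue = []        # (time, seq, callback), ordered by time
        self._seq = 0           # tie-breaker for events at the same time

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

# Model a packet crossing two links with 5 ms and 12 ms of latency.
sim = Simulator()
log = []
sim.schedule(5, lambda: (log.append(("hop1", sim.now)),
                         sim.schedule(12, lambda: log.append(("rx", sim.now)))))
sim.run()
# log now holds [("hop1", 5), ("rx", 17)]
```

Real simulators layer protocol models, queuing, and channel models on top of exactly this loop, which is why they can model hours of network behavior in seconds of wall-clock time.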


Embedded connectivity does not live in an isolated environment. Tools beyond the typical software and hardware development systems are often useful during engineering of connected embedded systems. The importance of Internet-based protocols has spawned a continuously improving collection of tools. Network hardware simulators like Candela Technologies’ LANforge ICE provide a reproducible set of impairments in which to test the robustness of connectivity for embedded systems. LANforge ICE is a network emulator: it can add impairments in a controlled and reproducible manner to aid in testing applications and network equipment that must operate over a network. LANforge supports physical impairments that are otherwise difficult or impossible to simulate: latency, bandwidth limits, jitter, packet loss, packet reordering, and others.
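A software sketch makes the reproducibility point concrete. The toy impairment stage below (purely illustrative, not LANforge's interface) applies fixed latency, random jitter, and probabilistic loss to a packet stream; seeding the random generator gives the controlled, repeatable behavior that a hardware emulator provides at line rate.

```python
# Toy software impairment stage in the spirit of a network emulator:
# fixed latency, random jitter, and probabilistic loss applied to a
# stream of (send_time_ms, payload) packets. Seeded for repeatability.
import random

def impair(packets, latency_ms=20, jitter_ms=5, loss_rate=0.1, seed=42):
    rng = random.Random(seed)
    delivered = []
    for send_time, payload in packets:
        if rng.random() < loss_rate:
            continue                       # packet dropped
        delay = latency_ms + rng.uniform(0, jitter_ms)
        delivered.append((send_time + delay, payload))
    # Sorting by arrival time means jitter can reorder packets naturally.
    return sorted(delivered)

stream = [(i, f"pkt{i}") for i in range(10)]
arrivals = impair(stream)
```

Because the seed pins down every random decision, a failure seen under one impairment profile can be replayed exactly, which is the key property for debugging marginal connectivity.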


PacketStorm Communications offers an IP Network Emulator and simulator designed to reproduce the unfavorable conditions of IP networks and WANs in a controllable and repeatable lab setting. The company’s Route Analyzers monitor, record, and troubleshoot routing protocols of networks to maximize network availability by identifying and diagnosing IP routing faults in real time. The Route Analyzer’s Intelligent Error Analysis uses algorithms to provide user-settable alarms for fast diagnostics. Although the intended audience for the Route Analyzer is large-scale datacenters and service providers, it can also aid engineers who are developing large-scale connected embedded systems employing typical Internet routing.


Internet-based communications protocols are becoming ubiquitous in embedded systems, both for remote control/monitoring and for data exchange. As these systems expand in capabilities and enter more sensitive applications, the robustness of communications protocols takes on a new urgency. With the increased reliance on connectivity comes the need for better and more comprehensive testing and validation. Today these needs can be met by commercially available development tools combined with network emulators and simulators.


What level of validation do your connected embedded systems require?




  1. Green Hills Software is an Affiliate member of the Intel Embedded Alliance
  2. Wind River Systems is an Associate member of the Intel Embedded Alliance



Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

At ISS World last week there was a lot of discussion about trends in Lawful Interception (LI) and implications for network security performance. See this post for details:

Software virtualization is a hot topic these days in the embedded market. New processor technologies and consolidation of embedded systems are two important drivers for implementing virtualization technologies.


Many embedded systems have a real-time component and a storage/display/user interface component. These two components can oftentimes be at odds with one another – the real-time component requires deterministic task scheduling for reliable data sampling and processing while the user interface component allows for rich graphics and user interaction. The user interface component might be a Windows™ or Linux environment where there are a wide variety of applications and graphics packages which lower time to market and development effort. The user interface component typically has no real-time requirements. The real-time component typically uses a real-time operating system (RTOS) foundation that provides a reliable, secure environment for the tasks running on the RTOS. The real-time component feeds the information gathered to the user interface component which then stores & displays the information in a user friendly form, allowing the user to interact with the information more easily and effectively. These two worlds have traditionally been two separate boxes or blades with some kind of network pipe in-between to transfer the information.
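The two-component split can be sketched in a few lines: a deterministic acquisition loop pushes samples through a pipe to a consumer that only formats them for presentation. The Python below is an illustrative model (the queue stands in for the network pipe between the two boxes; names and rates are invented for the example).

```python
# Sketch of the real-time / user-interface split: a sampler pushes
# (tag, seq, value) tuples through a queue standing in for the network
# pipe, and a "UI" consumer formats them for display.
import queue
import threading

pipe = queue.Queue()          # stands in for the link between the two boxes
SENTINEL = None

def sampler(n_samples):
    # Real-time side: deterministic acquisition loop, no presentation logic.
    for i in range(n_samples):
        pipe.put(("sample", i, i * 0.5))
    pipe.put(SENTINEL)

def ui_consumer(display):
    # UI side: no timing guarantees, only user-friendly formatting.
    while True:
        item = pipe.get()
        if item is SENTINEL:
            break
        _, seq, value = item
        display.append(f"#{seq}: {value:.1f}")

display = []
t = threading.Thread(target=sampler, args=(5,))
t.start()
ui_consumer(display)
t.join()
# display: ["#0: 0.0", "#1: 0.5", "#2: 1.0", "#3: 1.5", "#4: 2.0"]
```

Consolidation under a hypervisor keeps this same producer/consumer structure but replaces the physical network pipe with an inter-partition communication channel on one platform.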


Multicore processors and virtualization environments enable a single platform that can support consolidating multiple software environments. The key software enabler for virtualization is called a hypervisor - a hypervisor is the low-level software that partitions hardware resources into separate software environments. This allows multiple operating systems to run on a single hardware platform without compromising security or real-time requirements.


The concept of a hypervisor dates back to the mid-1960s, when IBM created a hypervisor to share the memory of an IBM 360, using half of memory to run the IBM 360 and half to create a virtual IBM 7080 environment. But it took the advent of multicore processors (almost 40 years later!), with a distinct set of cores on a single chip, before hypervisors could guarantee separation of software environments with no impact on real-time performance. Prior to multicore, multiprocessor platforms enabled separate operating systems running on a single board, but due to processor, I/O, memory, and cache interconnect complications, it was difficult to create a true hypervisor that could manage and control a multiprocessor platform.


There are two types of hypervisors – a type 1 hypervisor runs directly on top of the platform hardware to manage the guest operating systems. Type 1 hypervisors are often called “native” or “bare metal” hypervisors. Type 2 hypervisors (or “hosted” hypervisors) run on top of an operating system environment. Then within the hosted hypervisor, additional operating systems can be launched. Which type of hypervisor is the best fit depends on the application.

There are other considerations when consolidating software environments under a single virtualized platform – will each environment meet performance criteria? How will the management of these components work within the virtual environment and can the hypervisor help?


The LynxSecure hypervisor from LynuxWorks is an example of a virtualization environment available for the Intel® Core™ i7 platforms. The diagram below shows how LynxSecure can operate in an asymmetric multiprocessing (AMP) or symmetric multiprocessing (SMP) manner, or a mix of both. For AMP applications, a guest operating system is assigned a specific core or set of cores; this is known as processor affinity. In an SMP approach, the operating system is allowed to use any available core within a set of cores. LynxSecure allows configuration for AMP, SMP, or a mix.
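Processor affinity is not unique to hypervisors; ordinary operating systems expose the same idea per process, which makes it easy to experiment with. The sketch below uses Python's Linux-specific `os.sched_setaffinity` to pin the calling process to one core and read the result back (this illustrates the affinity concept only; a hypervisor like LynxSecure enforces it below any guest OS).

```python
# Demonstrate processor affinity, the mechanism behind AMP core
# assignment, using the Linux-specific os.sched_setaffinity API.
import os

def pin_to_first_core():
    if not hasattr(os, "sched_setaffinity"):
        return None                     # affinity API absent on this platform
    original = os.sched_getaffinity(0)  # 0 = the calling process
    os.sched_setaffinity(0, {min(original)})   # pin to the lowest-numbered core
    pinned = os.sched_getaffinity(0)
    os.sched_setaffinity(0, original)   # restore the full core set
    return pinned

cores = pin_to_first_core()             # e.g. {0} on a Linux host
```

Under AMP virtualization the hypervisor makes an equivalent assignment once, at partition configuration time, so the guest never migrates off its dedicated cores.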


LynxSecure also allows for fully virtualized and para-virtualized guest operating systems. Fully virtualized OSs include Windows XP, Windows 7, Solaris 10, Chromium OS, Linux, and LynxOS-178. This means that the operating system and its applications can run directly within LynxSecure without any modifications. Para-virtualized operating systems are supported if the developer has access to the source code. In this scenario, the developer can modify the operating system for optimizations within the LynxSecure para-virtualized environment. LynxSecure time and space partitioning guarantees real-time determinism and secure separation of memory regions. This is important for any applications that require security of data and instruction execution between operating systems. It’s these capabilities within LynxSecure that guarantee complete separation of operating systems, which is especially important in cases where secure applications run side-by-side with non-secure applications.



The link below references a recent webcast where Kontron and LynuxWorks talk about the 2nd generation Intel Core family and how LynxSecure works. When you go to this link, scroll down to the “Level-up with the 2nd Generation Intel Core Processor Family” presentation. Included in this webcast is a video presentation where LynuxWorks shows a real-life application where LynxSecure was used to consolidate operating system environments on an Intel Core i7 Mobile processor.


LynxSecure is designed first to be a separation kernel and second to be a type 1 hypervisor. This means that LynxSecure provides a high level of application isolation, software security, and resource sharing within and between the virtualized environments running within the hypervisor.


For more information about virtualization and separation kernels, go to the following link:


There are a number of video and audio recordings that discuss multicore environments and hypervisors.


To view other community content on Virtualization, see “Virtualization- Top Picks”



Curt Schwaderer

OpenSystems Media®, by special arrangement with the Intel® Embedded Alliance


Kontron is a Premier member of the Intel® Embedded Alliance.

LynuxWorks is an Affiliate member of the Alliance.



I’ve recently received a lot of questions about when customers can get the Intel® Boot Loader Development Kit (Intel® BLDK).   The short answer is first release will be available in mid-July.  This first release will support the Intel® Atom™ Processor E6xx Series.


We actually announced this back in April at IDF in Beijing . . . in fact, we had a presentation from the Intel BLDK team that talked about specific availability.  You can get this presentation, as well as another technical presentation Intel delivered on Intel BLDK from here:


Search for “boot loader” and you will see two presentations.  The course labeled EBM002 contains the schedules (see slide 6).


If you are not feeling motivated to download the presentation, here is what it says about the schedule . . .


The first release of Intel BLDK will be for the Intel® Atom™ Processor E6xx Series.  We have a pre-production release now that we have provided to select customers, but this will be available for public download in mid-July.  The next implementation will be for the Intel® Atom™ Processor E6x5C Series.  Pre-production releases will actually start in September . . . this has pushed out a couple of weeks from when it was presented at IDF.  However, the public gold release is still scheduled for October 2011.


Note that both these releases will be publicly available for download at


Beyond that, we have a comprehensive roadmap which can be shared by your Intel sales representatives under non-disclosure.  However, I can say that, with a couple of possible exceptions, Intel’s long-term intent is to provide the Intel BLDK across the Intel Atom processor roadmap for embedded.


Hopefully that answers questions about availability of the Intel BLDK.  If you need more details, please contact your Intel representative and we will be happy to provide those details.


Until next time!




Drew Jensen

Boot Loader Product Marketing Manager

Intel Corporation

Embedded and Communications Group



Twitter: @intel_drew

Product managers and systems developers face a dilemma when confronted by the obsolescence of a hardware platform.  Historically the choice has often been between obsoleting the system and spending time and resources updating and migrating the application to new hardware. Now there is a lower-effort alternative: virtualization. Virtualization can hide missing or changed delivery platform hardware and processor differences, and it can extend product life without committing to a full re-development.


Nowhere are the effects of hardware obsolescence seen in a more striking fashion than in large scale military development projects. It’s possible for the software developed for some military systems to span a decade or more in the development process, and outlive three generations of processor developments, including the end of life for the original processor. Today such large scale developments universally adopt a structured set of software rules aimed at minimizing the effort required to target a new or modified hardware architecture.


But what do you do when you’re faced with an overwhelming amount of legacy software hosted on discontinued hardware? Or the time required to retarget the software exceeds the remaining run of available hardware? Or when legacy code will only work on a specific operating system version under impossible time constraints?


One of the first legacy programs I ever worked on ran on an IBM 1620 simulation of an IBM 650 computer. The 650 was the target environment for the actual software. The IBM 1620 code simulated all of the operations and peripherals of the original 650. In effect, the 650 code ran in a specialized “virtual” machine implemented entirely in software.


Modern virtualization technology can serve the same purpose as that IBM 650 simulation software – extend the useful life of a legacy software application without investing in full rewrites. Hypervisors like that offered by Wind River Systems (1) provide the basic tools required to continue running software with minimal time investment.




Legacy code falls into several groups:


  1. Code that targeted a specific processor that is no longer available
  2. Code written to exploit specific features of a discontinued or no-longer-supported operating system
  3. Code that is useful but does not warrant a full porting effort
  4. Code that must be rehosted as part of a strategy to reduce hardware requirements


Wind River’s virtual board mechanism enables the porting and paravirtualization of an existing proprietary operating system, or of legacy code that is fully intertwined with the operating system, onto the hypervisor. Virtualization also permits continued use of no-longer-supported operating systems when the specific hardware that would run the code is no longer available. Porting legacy code to a virtualized environment may be as simple as selecting a system like TenAsys’s (2) eVM®, which addresses the related issue of consolidating hardware platforms. The TenAsys approach combines a version of Microsoft (3) Windows with the company’s real-time OS.


Selecting a multi-core CPU permits designers to simplify porting legacy code because the chance of timing conflicts is significantly reduced or eliminated. Virtualization is used to abstract the code from the physical environment. For unique processors, each virtual hardware board provides the opportunity to simulate “oddball hardware” from legacy systems within the virtual board space.


Embedded hypervisors are implemented using minimal overhead code: a thin layer that partitions hardware into virtual environments. Wind River calls these virtual environments “virtual boards.” An OS resides inside each virtual board. Virtual boards run in separate address spaces, which are protected in hardware by the memory management unit (MMU). A virtual board can run symmetric multiprocessing across multiple cores, or can be scheduled with other virtual boards on a single core on either a priority-preemptive or time-based schedule. This is accomplished by virtualizing key components of a system:


  • CPU – virtualizing the CPU permits sharing a single core with multiple virtual boards. Each virtual board runs on one physical processing core. Alternatively, a virtual board may use a dedicated single core or a set of cores.
  • Memory – memory virtualization requires partitioning the physical memory so multiple virtual board partitions can use parts of the real memory. This creates an abstraction layer that separates and controls memory access.
  • Devices – Input/Output Devices and individual processor hardware can either be dedicated to a single virtual board or virtualized and shared between multiple virtual boards.
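The time-based scheduling option above can be pictured as a fixed major frame divided into per-board slices. The toy below builds such a schedule (the board names, durations, and function are invented for illustration; real hypervisors configure this statically and enforce it in the dispatcher).

```python
# Toy time-partitioned schedule for "virtual boards" sharing one core:
# each board receives a fixed slice of a repeating major frame.
def build_schedule(boards, frame_ms=100):
    """boards: list of (name, slice_ms); returns (start, end, name) slots."""
    total = sum(slice_ms for _, slice_ms in boards)
    if total > frame_ms:
        raise ValueError("slices exceed the major frame")
    schedule, cursor = [], 0
    for name, slice_ms in boards:
        schedule.append((cursor, cursor + slice_ms, name))
        cursor += slice_ms
    return schedule

plan = build_schedule([("rtos_board", 40), ("linux_board", 50)])
# plan == [(0, 40, "rtos_board"), (40, 90, "linux_board")]
```

Because each board's window is fixed regardless of what the other boards do, a misbehaving guest cannot steal time from the real-time partition, which is the property that makes consolidation safe.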


In all of these virtualization choices, physical devices conceptually are below the hypervisor. It is possible to simulate obsolete hardware operation through software additions controlled by the hypervisor.


According to Wind River, software test and verification is reduced or eliminated for legacy code running on new platforms using virtualization.

Green Hills Software (4) maintains a complete business unit charged with aiding customers in adapting to the virtualized environment. A wide range of engineering services in support of virtualization efforts can be contracted to Green Hills. The company’s virtualization offering focuses on the security aspects of code reliability, especially for aerospace and defense applications.


Virtualization can add years of useful life to legacy software operating on orphaned processors.


For what legacy applications could you use virtualization to avoid costly and time-consuming rewrites?




  1. Wind River Systems is an Associate member of the Intel Embedded Alliance
  2. TenAsys is an Affiliate member of the Intel Embedded Alliance
  3. Microsoft® Corporation is an Associate member of the Intel Embedded Alliance
  4. Green Hills® Software is an Affiliate member of the Intel Embedded Alliance


Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance