
Multi-core processors bring higher performance and lower power consumption to a broad range of applications. The development of Next Generation Networks (NGN) in particular relies on multi-core processors as building blocks of network elements. NGN is fundamentally about:


  • consolidation of telecommunications standards to permit several transport layers to be merged into a single transport network,
  • a restructuring of the telecommunications infrastructure to remove VoIP switching from the local exchange, and
  • migration of constant bitrate voice data to CableLabs’ PacketCable standard.


NGN is often described as a “carrier grade” or “carrier class” capability. However, “carrier grade” has no standard definition. The essence of the term comes from the idea that carrier infrastructure systems deliver better than five-sigma reliability and availability, along with a richer set of capabilities. Although there has historically been a major difference between carrier and residential systems, the generally increased emphasis on software and hardware reliability has narrowed the gap between the two classes of products. Capability and testing remain areas of difference.
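To put availability targets like these in perspective, a short calculation shows how little downtime a carrier-grade availability figure actually permits. This is an illustrative sketch only; the "nines" levels below are common industry shorthand, not part of any formal NGN definition.

```python
# Downtime budget implied by a given availability level.
# Illustrative only -- "carrier grade" has no single formal definition.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines: {downtime_minutes_per_year(availability):.1f} min/year")
```

At "five nines" (99.999%) the entire annual downtime budget is roughly five minutes, which is why carrier infrastructure demands far more rigorous testing than residential equipment.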


In a very real sense, major portions of the telecommunications infrastructure are being distributed throughout the whole system. Part of this distribution is based on increased competition between carriers owning their own infrastructure, but part of it comes from adopting Internet technologies like Internet Protocol (IP) as a backbone of infrastructure systems. Because IP is fundamentally based on dynamic routing of data from source to destination, the telecommunications infrastructure is taking on the same philosophy of distributed systems.


Consolidation of transport layers in a single embedded system implies the need to handle many different transport layers, perhaps simultaneously. Conceptually, it’s easiest to consider each transport layer operating separately on its own processor. This requirement is most easily met by employing one or more multi-core processors to manage the transport layers. But with the benefits of multi-core come new challenges and complexity, not just from a hardware perspective but more importantly from a software development perspective.   The move to multi-core can impact time to market—if you don’t have the right development tools, technologies, training, and support.


Virtualization is a key design and implementation technique for multi-core processors that enables:


  • Consolidation - reduce multi-core hardware cost and footprint
  • Reliability - build a safe and secure platform
  • Performance - maximize throughput on multi-core systems


Consolidation means combining several logical systems on a single hardware platform. Within the context of creating an NGN system, that means permitting different transport layers to become co-resident on the platform. For existing software, that task is best handled by virtualization and multi-core processors like the Intel® Xeon® processor. By using virtualization, each transport layer can reside on its own processor core, with inter-process communications managed by a hypervisor. Thus, virtualization isolates the operating software while providing an operating environment that requires the least effort to make the software operational.


Reliability is both a hardware and a software issue. Hardware reliability is a by-product of the consolidation process: fewer integrated circuits in an embedded system mean a longer mean time between failures. Software reliability is a more intentional factor and is quickly becoming a “must have” for standardized software platforms.
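The reliability gain from consolidation can be sketched with the standard series-system model: for independent components with constant failure rates, the system failure rate is the sum of the component rates. The MTBF figures below are invented purely for illustration.

```python
# Sketch: why consolidation improves hardware reliability.
# For independent components in series with constant failure rates,
# the system failure rate is the sum of the component rates, so
# system MTBF = 1 / sum(1 / mtbf_i).  All figures are hypothetical.

def system_mtbf(component_mtbfs):
    """MTBF (hours) of a series system of independent components."""
    return 1.0 / sum(1.0 / m for m in component_mtbfs)

# Four discrete boards vs. one consolidated multi-core board.
four_boards = [200_000] * 4          # hours, hypothetical
one_board   = [250_000]              # hours, hypothetical

print(f"4 boards: {system_mtbf(four_boards):,.0f} h")   # 50,000 h
print(f"1 board:  {system_mtbf(one_board):,.0f} h")     # 250,000 h
```

Halving or quartering the component count raises system MTBF roughly in proportion, which is the hardware-reliability dividend of consolidation.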


Higher performance from software operating on a multi-core processor seems like a given, but poorly partitioned software can actually run slower than on a single-core processor.


Wind River Systems (1) has defined an open platform for NGN systems called “Carrier Grade Open Platform” (CGOP). CGOP is a software platform that includes operating systems, middleware, application services, and a unique class of carrier grade features.




One of the aspects of CGOP that sets it apart from residential class equipment lies in the cross-application services offered:


  • Fault management
  • Performance management
  • Security hardening
  • Green support
  • Operations support


Each of these serves a vital purpose in infrastructure embedded systems. But the fundamentals of developing NGN rest on the development tools used to create and manage the applications that operate on the CGOP. CGOP gives engineers a head start on development by providing a large collection of software modules that can be put together in new and unique ways to form custom applications. This collection of purpose-designed support modules reduces the time required to develop a new application, or simply to create a replacement for an already-fielded telecommunications product.


Software developers gain flexibility with the Wind River approach. CGOP supports symmetric multiprocessing (SMP), asymmetric multiprocessing (AMP), and hybrid options. Multiple OSes all run under a hypervisor, keeping memory and peripherals protected from erroneous reads and writes. Virtualization is a powerful concept that defines structured access to needed resources, and it allows developers to simplify how multiple concurrently operating applications interact. In addition, virtualization permits the cross-application functions of CGOP to actually work.


It’s crucial for virtualized NGN systems to be supported by debugging tools, including simulation and system-wide techniques for debugging complex multi-core and multi-OS systems. Standard development tools from Green Hills Software (2) and Wind River offer debugging for embedded Intel processors. JTAG-based probes rely on on-chip access to internal registers to control and report on software operations. Such tools must include multi-core functions to permit debugging and diagnostics across all cores of the embedded processor.


Wind River has integrated VxWorks and Wind River Linux in the CGOP platform. More importantly to developers, the products are integrated into the Workbench, On-Chip debugging, and Wind River Test Management. By integrating these capabilities together, developers are assured of a platform that is already structured to maximize developer productivity. All of these facilities are optimized for multi-core operation.


CGOP offers engineers creating the Next Generation Network a way to accelerate developments, create more reliable applications, and host them on COTS hardware.


NGN is coming. How will you respond to the challenge?


To view other community content focused on virtualization, see “Virtualization – Top Picks.”





  1. Wind River Systems is an Associate member of the Intel Embedded Alliance
  2. Green Hills Software is an Affiliate member of the Intel Embedded Alliance



Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

Who:  Embedded systems designers looking for a customizable option for creating optimized initialization firmware for Intel® Atom™ processor based platforms.



What:  Intel has officially released the first publicly available version of the Intel BLDK, a combination of reference source code, binary libraries, and development tools that allows creation of customized and optimized Intel® Atom™ processor firmware for fixed-function embedded applications.



When:  Now!   Download is available beginning July 28, 2011.



Where:  The Embedded Software Discussion Forum and the e-Help desk



How:  Find out how to use the Intel BLDK with the key resources below.



Key Resources:


Embedded Intel Atom Platforms: The ABC's of the Intel Boot Loader Development Kit (Intel BLDK)

This is an introductory 60-minute webinar on the Intel BLDK.


Rapid Development of Boot Loaders using Intel® Boot Loader Development Kit (Intel® BLDK) for Embedded Designs

Training on the architectural overview, features, capabilities, and roadmap of Intel BLDK, as well as how to use Intel BLDK for rapid development and deployment of boot loaders.


Getting Started Guide: Intel® Boot Loader Development Kit (Intel® BLDK) Version 2.0

This manual provides information on how to install, build, and customize the code base using the Intel BLDK.

Intel® Virtualization Technology (VT) has significantly impacted the way embedded design teams combine many tasks on one processor – whether that processor is a single or multicore design. Running multiple virtualized operating systems on one processor allows one system to replace two or more and adds the ability to partition critical applications on a secure operating system separate from user-interface code and other features that might run on a general-purpose operating system. But virtualization does require a hypervisor layer that can adversely impact performance. Paravirtualization techniques integrated in operating systems can minimize that performance impact.


Before we jump into the paravirtualization topic, I’ll provide a quick reading list for anyone wanting to read more about Intel VT and virtualization in general. We’ve covered the topic a lot here on the community.


Only the last of the above links addresses paravirtualization, so let’s move along toward that topic. The most common virtualization implementations present an operating system with a virtual machine whose software interfaces mimic the underlying hardware. That virtualization layer is typically implemented in a hypervisor and allows a guest operating system to run unmodified. The software virtual machine impacts performance because it must handle software accesses that would be made directly to hardware in a non-virtual environment.


Paravirtualization requires that an operating system be modified to run in the virtual environment. Critical elements of the operating system execute directly in the host domain with direct access to the hardware rather than in the virtual domain where access is always limited to the virtual machine.
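The performance difference can be sketched with a toy cost model. Under full virtualization, every privileged device access traps into the hypervisor for emulation, while a paravirtualized driver batches work into explicit hypercalls. All costs and batch sizes below are invented for illustration, not measurements of any real hypervisor.

```python
# Toy model contrasting full virtualization (trap-and-emulate) with
# paravirtualization (explicit hypercalls).  Costs are invented units,
# purely to illustrate why fewer VM exits means better performance.

TRAP_COST = 10       # hypothetical cost of one VM exit plus emulation
HYPERCALL_COST = 12  # hypothetical cost of one (batched) hypercall

def full_virtualization_cost(n_device_writes: int) -> int:
    # Every privileged device write traps into the hypervisor.
    return n_device_writes * TRAP_COST

def paravirtualized_cost(n_device_writes: int, batch: int = 32) -> int:
    # A paravirtualized driver batches writes into one hypercall each.
    n_hypercalls = -(-n_device_writes // batch)  # ceiling division
    return n_hypercalls * HYPERCALL_COST

writes = 1000
print("full virt:", full_virtualization_cost(writes))  # 10000
print("paravirt :", paravirtualized_cost(writes))      # 384
```

Even though a single hypercall is modeled as slightly more expensive than a single trap, batching cuts the total overhead dramatically, which is the essential argument for paravirtualized drivers.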


The problem with paravirtualization is obviously the requirement to modify the operating system. That process is certainly time consuming and time-to-market is always a key concern. Moreover, you simply can’t get the access needed to paravirtualize commercial products such as Microsoft* Windows.

Fortunately, there is a middle ground: emerging technologies can provide the performance benefits of paravirtualization without extensive operating system modifications.


Let’s continue the discussion relative to virtualization products and technology from embedded-software specialist Wind River**. The company stresses its target market by labeling the technology Embedded Virtualization, and it is quick to note that the virtualization requirements in embedded systems are quite different from those in the IT and server space. The company’s Wind River Hypervisor is designed to support real-time deterministic response, a small footprint, and maximum application and operating-system performance.


Wind River CTO Tomas Evensen authored a white paper called “Multicore challenges and choices: Deciding which solution is right for you.” While the paper uses multicore as a medium for discussion, the focus is virtualization as the technology for getting the most out of a processor that must take on multiple applications. The paper describes how you can virtualize an entire processor, memory, or specific hardware features of a processor.

The paper notes specific requirements for optimal embedded virtualization. Among those, two stand out. When possible, the hypervisor maps devices directly into the guest operating system for maximum performance. And, again when possible, the guest operating systems are paravirtualized to run efficiently on the hypervisor.


So how does this play out in the real world, and which technologies are there for us when we can’t paravirtualize an operating system? I asked Evensen essentially those questions. He noted that the Wind River Hypervisor can run Windows XP and 7 unmodified. Moreover, he said that Intel® VT has provided a significant performance boost running such general-purpose operating systems. And design teams like to use them for features such as the embedded user interface and for connection to the IT world. Intel VT is available on all of the first- and second-generation Intel® Core™ processors and on others as well, including some members of the Intel® Atom™ family.


Still, Evensen said there are things you can do to boost Windows performance further. For example, you can replace standard Windows drivers with custom ones. He said that Wind River can dynamically install a network driver designed to work with the Wind River Hypervisor that is faster than what would essentially be an emulation of the network interface with the standard virtualized driver. What he described is basically paravirtualization of the network driver.




Wind River’s own VxWorks and Wind River Linux have paravirtualization features driven deeper into the operating systems to maximize performance when used in a virtual environment. As the nearby figure shows, the Wind River Hypervisor can host VxWorks, Linux, Windows, and other operating systems. Note the technology targets a broad range of applications including military & aerospace, medical, industrial, automotive, and communications.


What types of performance issues have you faced using virtualization technology? Have you turned to paravirtualization for the mission-critical applications that were part of a larger system design? Fans of the Intel® Embedded Community would greatly appreciate it if you shared your insights. Please comment and let us know if virtualization is a standard part of your embedded toolbox.



Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance


*Microsoft is an Associate member of the Intel® Embedded Alliance

** Wind River is an Associate member of the Alliance.


More information

To learn more about virtualization for embedded devices, visit


As embedded designers face the challenge of developing systems with fewer components and lower power consumption while combining multiple applications, many have turned to virtualization software to simplify the process. Even with a single-core processor, virtualization allows designers to merge a real-time operating system with a high-level, graphics-based user interface such as Windows or Linux. However, the introduction of the 2nd generation Intel® Core™ architecture with multiple cores and extended seven-year lifecycle support has tempted design teams to question the need for virtualization software. Specifically, they ask: If I can distribute application software across multiple cores, why do I need to create virtual machines? In this blog post, I will cover the advantages of integrating virtual environments along with multicore technology and look at a few products available from the most prominent embedded software vendors.


One of the prime reasons to incorporate multicore is to boost performance through parallel processing. Developers have access to multiple techniques to enable this performance gain including symmetric or asymmetric multiprocessing and virtualization. In the symmetric multiprocessing (SMP) configuration, a single operating system allocates threads or tasks across the available cores while managing common memory and hardware resources. Asymmetric multiprocessing (AMP) allows each core to run independent software so that a single system can easily combine real-time, deterministic tasks with a graphical user interface. With virtualization, a hypervisor isolates and allocates system resources between the operating environments so that real-time, general-purpose, and legacy software can be readily integrated in a multicore system.
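The difference between SMP load-balancing and static AMP placement can be sketched with a toy scheduler. The task costs, core count, and round-robin AMP mapping below are all invented for illustration; real systems weigh determinism and isolation, not just throughput.

```python
# Toy scheduler contrasting SMP and AMP task placement on a 4-core part.
# Task costs and the static AMP mapping are invented for illustration.
import heapq

TASKS = [5, 3, 8, 2, 7, 4, 6, 1]   # hypothetical task costs

def smp_makespan(tasks, cores=4):
    """SMP: one OS load-balances each task onto whichever core is free."""
    finish = [0.0] * cores               # per-core finish times
    heapq.heapify(finish)
    for t in tasks:
        heapq.heappush(finish, heapq.heappop(finish) + t)
    return max(finish)

def amp_makespan(tasks, cores=4):
    """AMP: each task is statically bound to a core (round-robin here)."""
    load = [0.0] * cores
    for i, t in enumerate(tasks):
        load[i % cores] += t
    return max(load)

print("SMP makespan:", smp_makespan(TASKS))
print("AMP makespan:", amp_makespan(TASKS))
```

The SMP run finishes sooner because the scheduler balances load dynamically, while the static AMP split leaves one core as the bottleneck; AMP's compensating advantage is that each core's software stack is independent and deterministic.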


In addition to memory and hardware device allocation, virtualization allows the developer to assign multiple cores to compute intensive applications as needed to maximize overall system performance. The virtualization software layer also provides security and separation to guarantee that a software failure in one partition does not impact other partitions. Virtualization also simplifies system upgrades by isolating the hardware and software layers so that designers can easily add or modify peripherals, memory, and cores without restructuring the software architecture. A virtual machine hypervisor provides designers with a way to optimize performance by tweaking the resource mapping even after deployment.


The 2nd generation Intel® Core™ architecture includes multiple features that improve hardware and software performance in a virtual environment. For example, specialized Intel® functions such as Extended Page Tables (EPT) and Page Attribute Table (PAT) provide a hardware assist to the partitioning and allocation of physical memory among virtual machines. You can find plenty of information and technical articles about Intel architecture and virtualization at the Intel® Embedded Community page on Virtualization.
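The hardware assist that EPT provides can be pictured as a two-stage address translation: the guest's own page table maps guest-virtual to guest-physical addresses, and the EPT maps guest-physical to host-physical addresses. The sketch below models both stages as simple lookup tables with invented page numbers; real EPT walks multi-level tables entirely in hardware.

```python
# Sketch of the two-stage translation that Extended Page Tables (EPT)
# perform in hardware: guest-virtual -> guest-physical via the guest's
# page table, then guest-physical -> host-physical via the EPT.
# Page numbers are invented; real EPT walks multi-level tables.

PAGE = 4096

guest_page_table = {0x10: 0x2A}   # guest VPN -> guest PFN (hypothetical)
ept              = {0x2A: 0x7F}   # guest PFN -> host PFN (hypothetical)

def translate(guest_vaddr: int) -> int:
    """Return the host-physical address for a guest-virtual address."""
    vpn, offset = divmod(guest_vaddr, PAGE)
    guest_pfn = guest_page_table[vpn]   # stage 1: guest page table
    host_pfn = ept[guest_pfn]           # stage 2: extended page table
    return host_pfn * PAGE + offset

addr = 0x10 * PAGE + 0x123
print(hex(translate(addr)))   # host page 0x7F, same page offset
```

Without EPT, the hypervisor must maintain shadow page tables in software and intercept guest page-table updates; with EPT, both stages resolve in hardware, which is where the performance gain comes from.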


To take advantage of these hardware features that assist virtualization, several embedded software suppliers have updated their products to be compatible with the 2nd generation Intel® Core™ architecture. For example, the Wind River Hypervisor gives designers the ability to configure and partition hardware devices, memory, and cores into “virtual boards” that an operating system uses as its execution environment (see figure 1). These virtual boards can run on a single processor core or be distributed across multiple cores based on the needs of the system. In all cases, the hypervisor can be used to provide virtualization and protection of memory and device access. The Hypervisor can be configured to use all of its features, or it can be scaled down to a minimal “supervisor” to provide protection, reliability, and scalability in a supervised AMP configuration.
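A virtual-board layout can be pictured as a mapping of cores and memory regions to guest environments. The sketch below uses an invented Python representation (not Wind River's actual configuration syntax) to show the kind of non-overlap invariants a hypervisor must enforce when partitioning a platform.

```python
# Hypothetical "virtual board" layout in the spirit of the partitioning
# described above; all names and address ranges are invented.

boards = {
    "rtos_board":  {"cores": {0},    "mem": (0x0000_0000, 0x2000_0000)},
    "linux_board": {"cores": {1, 2}, "mem": (0x2000_0000, 0x8000_0000)},
    "gui_board":   {"cores": {3},    "mem": (0x8000_0000, 0xC000_0000)},
}

def check_partitioning(boards):
    """A hypervisor must ensure cores and memory regions never overlap."""
    seen_cores = set()
    regions = []
    for name, b in boards.items():
        assert not (b["cores"] & seen_cores), f"core overlap in {name}"
        seen_cores |= b["cores"]
        regions.append(b["mem"])
    regions.sort()
    for (s1, e1), (s2, e2) in zip(regions, regions[1:]):
        assert e1 <= s2, "memory regions overlap"
    return True

print(check_partitioning(boards))  # True
```

The same invariants hold whether a board maps to one core or spans several; what changes is only the core sets assigned to each partition.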


[Figure 1: Wind River Hypervisor]


TenAsys Corporation and LynuxWorks also provide virtualization products that match the enhanced performance features of the 2nd generation Intel® Core™ architecture. The TenAsys eVM for Windows embedded virtualization platform allows designers to host an embedded or real-time operating system (RTOS) alongside Windows on the same multicore processor platform. A Windows-based control panel allows the user to assign interrupts, allocate I/O devices, and control the passing of resources, including real-time interfaces, from Windows to the RTOS. LynuxWorks offers the LynxSecure version 5.0 separation kernel and hypervisor that is designed to operate in secure defense environments where data and applications with different security levels need to co-reside on a single device without contamination. The LynxSecure 5.0 secure device virtualization mechanism allows limited physical devices to be virtualized and shared between guest operating systems.


Virtualization is a great way to simplify embedded designs with fewer components while integrating the framework needed to easily combine disparate operating software or future updates. If you are starting on a new project and you are unsure if virtualization is the way to go, please share your concerns with fellow followers of the Intel® Embedded Community. You can keep up with the latest technical articles and product announcements at the Embedded Computing Design archives on Virtualization. Also check back to see how embedded designers use virtual machines in real-time applications to simplify hardware, improve performance, and lower power requirements.


To view other community content on virtualization, see “Virtualization - Top Picks.”


Warren Webb
OpenSystems Media®, by special arrangement with Intel® Embedded Alliance


Wind River Systems is an Associate member of the Intel® Embedded Alliance. LynuxWorks and TenAsys are Affiliate members of the Alliance.



Virtualization is one of the hottest technologies in embedded design. By enabling multiple operating systems (OSs) to run on a single hardware platform, virtualization opens up powerful new design options. To get a better understanding of this technology, I asked a panel of industry experts to share their perspectives. Here’s what I learned.


Why does virtualization matter for embedded applications? How does it benefit embedded applications?

Chris Ault, Senior Product Manager, Virtualization, Wind River: Some of the benefits of virtualization in embedded systems include the ability to a) consolidate and reduce the number of system components, b) migrate legacy apps to new hardware, c) separate or isolate safe or secure applications from standard applications, and d) innovate rapidly by adding new functionality incremental to the existing application and operating system stack. New functionality can be offered by utilizing the OS most appropriate for that functionality.


Embedded virtualization also offers the opportunity to separate the lifecycles of the isolated partitions; one OS can be upgraded with no impact to the other.


David Kleidermacher, CTO, Green Hills Software: There are many use cases driving adoption of embedded virtualization. However, there are some common benefits across verticals, including:


  • Consolidation: Saving size, weight, power, and cost.
  • IP protection: push proprietary software and data outside of licensing-encumbered operating systems (GPL, Google)
  • Enable mixed criticality operation: rich general purpose OS alongside hard real-time, security-critical, and/or safety-critical functions
  • Open and extensible: run any OS personality at any time – Linux, Android, Windows, RTOS, etc. You can think of hypervisors as providing the ultimate open environment.


Kim Hartman, VP of Marketing and Sales, TenAsys: One has to look at the overall embedded application being made up of a mixture of different functions. Embedded virtualization allows a system to be built using a blend of application-appropriate operating systems, instead of having to rely upon any single OS to serve all needs. Furthermore, a successful embedded virtualization initiative incorporates the re-use of existing application & OS intellectual property without requiring modification, or a port of field-proven code.


Where are you seeing your customers employ virtualization? What applications are using virtualization, and why?

David Kleidermacher: Some examples we are seeing at Green Hills Software include:

  • In-vehicle infotainment: consolidate general purpose head unit (e.g., MeeGo or Android), real-time (e.g. cluster displays), and/or instant-on/safety-critical (e.g., rear-view camera)
  • Software-defined radio: real-time/safety-critical alongside Linux
  • Networking/telecom: consolidate control and data plane; use virtualization as a means of achieving higher-availability (e.g. detect guest OS failure and restart VM)
  • Health: isolate and secure sensitive patient information per HIPAA on mobile devices, e.g., electronic medical records management by physicians


Arun Subbarao, VP Engineering, LynuxWorks, Inc.: One early adopter has been the U.S. Department of Defense (DoD). This is not surprising since mission critical systems are often extremely complex, trying to integrate multiple different operating environments and applications in very small form factor hardware designs. There is also a need to keep highly sensitive environments completely isolated from less sensitive ones, while still allowing the authorized personnel appropriate access. Mission-critical applications also have a key real-time response requirement and a high security requirement. In addition to the DoD, we are also seeing government IT, financial and medical organizations rapidly adopt virtualization technology to help safeguard sensitive data.


Chris Ault: We are seeing increasing deployment of embedded virtualization across all industries, most notably Aerospace & Defense, Industrial, and Networking. There is increasing interest in the Automotive In-Vehicle Infotainment, Mobile, and Medical sectors as well.


One example is using virtualization to add general-purpose operating systems with enhanced graphics interfaces to existing real-time robotic controllers. Another is migrating existing legacy applications from older single-core processors to multi-core CPUs while avoiding the need to ‘port’ application software. By using virtualization, customers can run multiple copies of the single-threaded environment, thus realizing the benefit of the increased performance of the multi-core CPU.


Kim Hartman: One area we are seeing interest in is embedded applications that are adding M2M, enterprise, or cloud services. These usually do not have real-time needs and can easily be addressed by leveraging off-the-shelf software on the Microsoft Windows platform. It’s no longer required to re-invent everything in the RTOS with custom or redundant hardware sub-systems.


What are the key challenges to implementing virtualization? How can developers overcome these challenges?

Arun Subbarao: The key challenges to implementing virtualization are preserving native performance, allowing for higher security, and providing an appropriate device management model. However, it is an area of intense focus by processor vendors, with Intel being a pioneer in this area. The technology advancements afforded by Intel® VT-x provide significant hardware assists that have closed the gap between native operating system performance and that of its virtualized counterpart. The trend towards multi-core computing is a natural technology enabler, allowing multiple operating environments to execute without loss of functionality and performance.


David Kleidermacher: By far the biggest challenge is I/O virtualization, and in particular, there are decisions/configurations in three dimensions: how a peripheral is virtualized/interfaced to physical hardware; how a peripheral is shared; and how quality-of-service and security can be assured across all VMs and all system bus masters. There are many choices and tradeoffs with respect to performance, maintainability/time-to-market, and robustness/security, and this is the place where vendor experience and technology flexibility is absolutely critical. There is no one-size-fits-all.


Intel VT technologies, especially Intel® VT-d and Intel® VT-c, dramatically simplify many use cases. However, here again is where experience and technology flexibility is critical. Intel VT-x, Intel VT-d, Intel VT-c and other Intel technologies must be applied the right way depending on the aforementioned tradeoffs.


[Editors’ note: Intel® Virtualization Technology (Intel® VT) for IA-32 Intel® Architecture (Intel® VT-x) traps certain instructions in hardware, relieving the hypervisor of these duties. This greatly reduces the overhead of virtualization.



Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d) adds hardware accelerators that enable secure assignment of I/O devices to specific OSs. This decreases the load on the processor and accelerates data movement.



Intel® Virtualization Technology (Intel® VT) for Connectivity (Intel® VT-c) improves the network performance of virtualized devices.]


Chris Ault: Developing a virtualized embedded product presents challenges and risks with regard to configuring partitioned memory regions, the allocation of CPU cores to partitions, presenting specific devices to the appropriate virtual partition using the appropriate device sharing model, etc.


Mitigating these risks and challenges can be achieved when selecting a software vendor that offers a complete embedded virtualization platform for development, one that consists of operating systems pre-integrated with a hypervisor, along with development tools and BSPs also integrated with the guest operating systems and hypervisor. Point solutions and non-integrated open source software have their place in the software development lifecycle, but embedded virtualization development requires coordinated and integrated products in order to minimize risk and development difficulties.


What would you say to developers who are worried about the difficulty and complexity of deploying a virtualized solution? Is adding this new software layer going to make their jobs harder?

Kim Hartman: Embedded virtualization done right makes the task easier, not harder. Having a clear understanding of how the solution actually works for you makes the process go smoothly and predictably.


David Kleidermacher: Ultimately, the hypervisor should be deployed like the BIOS – it’s always there and simple to configure; end users are mostly unaware of it. The server virtualization market has proven system virtualization practical, from both a technological and business perspective, to deploy on common hardware; while many embedded SoCs are quite a bit more complicated from the standpoint of I/O choices, fundamentally the hypervisor is a thin layer that can be easily ported and managed. Green Hills has proven this in the embedded space since we first shipped the INTEGRITY Multivisor in 2003. Eight years of experience across a wide range of processor technologies brings a lot of confidence to customers that embedded and mobile virtualization is practical.


Arun Subbarao: There is some level of additional complexity in deploying a virtualized solution. However, this is in no way greater than the adoption of any other new disruptive technology. The separation kernel and hypervisor software is intended to abstract this complexity and allow developers to avail themselves of the enormous flexibility and compelling business advantages this technology has to offer.


If I do need help implementing a virtualized solution, how can I get assistance? What kinds of services do you offer?

Arun Subbarao: We would recommend that developers who are creating their first virtualized solution take advantage of the assistance that software vendors have to offer in this area. Vendors, such as LynuxWorks, have training, support and consulting services to provide additional assistance to enable developers to adopt this technology and become productive in a very short period of time.


Kim Hartman: With over 20 years of experience, TenAsys has been providing solutions that mix GPOS and RTOS application needs. Our INtime for Windows RTOS is a para-virtualized solution deployed for 14 years, applied to the absolute fastest multi-core processing needs.


Chris Ault: Wind River offers industry-specific Professional Services practices, with focused offerings that help developers meet strict market deadlines while keeping development costs down. We also offer extensive training and education services and rapid integration mentoring programs aimed at helping our customers kick-start their development activities. Finally, Wind River’s Online Support website offers a comprehensive knowledge base and various subscription features.


David Kleidermacher: We recommend you contact the Green Hills Software INTEGRITY Secure Virtualization (ISV) business unit experts for education and design advice in this area.


What are the most important trends you see developing in embedded virtualization? What does the future hold for this technology?

Arun Subbarao: The most important trends we see developing in embedded virtualization are rich user experience, miniaturization, and high awareness of security. The sensitivity to security will be the new dominant aspect of the three trends in the near future, given the emerging threats of identity theft and malware attacks. Luckily, virtualization technology can address all these requirements simultaneously. In fact, we believe it is the only technology at the intersection of all these emerging market trends.


Kim Hartman: Multi-core proliferation will continue making more discrete processing services available to embedded applications at lower cost. Admitting that a single SMP approach is not the universal solution, AMP and mixed-OS solutions emphasizing utility become attractive. Embedded virtualization is the natural solution to partitioning these resources for directed control.


Chris Ault: Increasing core counts for multi-core CPUs is certainly an important trend, as well as advances in hardware accelerators that further assist with the implementation of virtualization. As core counts increase, the effectiveness of SMP operating systems tends to decrease, thus driving more demand and relevance for embedded virtualization with hardware-assist features.


David Kleidermacher: Embedded microprocessors have only recently gained enough hardware-assist features to make system virtualization practical in resource-constrained embedded electronics. New use cases and benefits are popping up on a daily basis. That this technology will be successful is a foregone conclusion. However, we still need the CPU vendors to improve hardware support, per my earlier comment on I/O virtualization. I have no doubt that Intel will continue to lead in this area.



More information

To learn more about virtualization for embedded devices, visit


For more information on the solutions from our panelists, please visit the following sites:


Wind River Systems is an Associate member of the Intel® Embedded Alliance. Green Hills Software, LynuxWorks, and TenAsys are Affiliate members of the Alliance.



Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine


Follow me on Twitter at @kentonwilliston

The Wind River Linux Guest OS documentation provides step-by-step instructions to build a Linux image that is capable of booting on top of their hypervisor. If you follow the instructions you will end up with a Guest OS that uses a RAM disk for its file system. While this may be a desirable setup for embedded devices as they head to market, during the design/debugging phases it may be convenient to have a file system that doesn't lose all changes on every reboot.



To address this issue, let’s take a look at how we can use a USB thumb drive to hold the file system for our Guest OS. Setup is as follows:



  • Intel Xeon 5500 Series Platform with Intel VT-x and VT-d Enabled
  • Wind River Hypervisor 1.3
  • Wind River Linux 4.1
  • 16GB USB Flash Drive (A smaller drive will work, but the larger the better)



Step 1

Since we are going to use a USB thumb drive to hold the file system for the Guest OS, we need to pass through the host USB controller to the Guest OS's Virtual Board (VB). The easiest way to do this is:


  • Grab the pre-built system image provided by Wind River for your target

  • Boot the target and drop to the hypervisor debug shell via serial port

    • Prebuilt images are located in the hypervisor BSP directory, for example
      <install_dir>/wrhv-1.3/bsp/x86_hanlan_creek_nehalem/obj_x86_hanlan_creek_nehalem/ as system.elf. Follow the instructions Booting an IA-32 Target from USB Storage in the Wind River Hypervisor Users Guide to get up and running quickly.

    • When partitioning the USB drive I typically make the boot partition 512MB or less and use the remaining space on the drive for the file system. This way I only need 1 USB drive to boot the platform and serve as the file system for the Guest OS.

  • At the hypervisor debug shell, issue 'lspci -x' to get a printout of all the devices in the system. This is also a good time to build/update your PhysicalBoard.xml for your target and identify the host USB controller(s) that need to be passed through to the Virtual Board.
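The drive layout described in the note above (a boot partition of 512MB or less, with the remaining space reserved for the Guest OS file system) can be sketched with parted. This is only an illustration — /dev/sdX is a placeholder for your actual USB drive, and the script echoes its plan by default (DRY_RUN=1) because these commands erase the drive:

```shell
#!/bin/sh
# Sketch: lay out a USB drive as a small boot partition plus a root
# file system partition. /dev/sdX is a placeholder -- substitute your
# actual drive (e.g. /dev/sdb) and set DRY_RUN=0 to really run it.
DEV=${DEV:-/dev/sdX}

run() {
    # Echo the command instead of executing it unless DRY_RUN=0.
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

run parted -s "$DEV" mklabel msdos
run parted -s "$DEV" mkpart primary fat32 1MiB 512MiB   # boot partition
run parted -s "$DEV" mkpart primary ext3 512MiB 100%    # Guest OS rootfs
run parted -s "$DEV" set 1 boot on                      # mark bootable
```

With the default DRY_RUN=1 the script only prints the parted commands it would issue, so you can review the plan before committing it to the drive.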


As you can see below, my platform has 4 USB controllers and I will pass all of them through to the VB:


wrhv> lspci -x


The corresponding entries for these controllers in the Virtual Board configuration are:


    <Location Bus="0" Device="26" Function="0" /> <!-- Set Attribute WriteEnable="1" in the Guest Device -->
        <Interrupt IntNum="19"/>
        <RegSet Address="0x000020c0" Length="0x20"/>

    <Location Bus="0" Device="26" Function="1" /> <!-- Set Attribute WriteEnable="1" in the Guest Device -->
        <Interrupt IntNum="19"/>
        <RegSet Address="0x000020a0" Length="0x20"/>

    <Location Bus="0" Device="26" Function="2" /> <!-- Set Attribute WriteEnable="1" in the Guest Device -->
        <Interrupt IntNum="19"/>
        <RegSet Address="0x00002080" Length="0x20"/>

    <Location Bus="0" Device="26" Function="7" /> <!-- Set Attribute WriteEnable="1" in the Guest Device -->
        <Interrupt IntNum="19"/>
        <RegSet Address="0xb1a21000" Length="0x400"/>



For a more detailed look at how to configure a device for passthrough to a Virtual Board, reference the Wind River Hypervisor Users Guide and also my previous blog post Device Passthrough with Wind River Hypervisor.



Step 2

Assuming you have correctly configured the USB controller(s) for passthrough, it is now time to build your Guest OS image. The steps are very similar to those described in the Configuring and Building the Linux Guest OS section of the Wind River Linux Guest OS for Hypervisor Programmer's Guide, with a few small modifications.



Start by setting up the environment and configuring the project:


$ cd <install_dir>

$ ./wrenv.linux -p wrlinux-4


If you already have a workspace set up, change to that directory now. Otherwise, proceed with creating one:


$ mkdir workspace; mkdir workspace/wrlpp-gos-usbfs_prj

(abbreviation: Wind River Linux Platform Project – Guest OS – USB file system)


$ cd workspace/wrlpp-gos-usbfs_prj


$ $WIND_LINUX_CONFIGURE --enable-kernel=standard \

        --enable-board=intel_xeon_5520_ioh \

        --enable-rootfs=glibc_std \

        --with-layer=wrll-multicore,wrll-userspace/compiler,wrll-linux-2.6.34 \

        --with-template=feature/mipc,feature/hyp,target/tools-extended



Parameters explained:



            --enable-kernel=standard: Selects the standard Linux kernel.



            --enable-board=intel_xeon_5520_ioh: The target board. For a list of available options, look in

            <install_dir>/wrlinux-4/layers/wrll-multicore/boards/ .



--enable-rootfs=glibc_std: The file system type to use. The documentation often specifies glibc_small, which produces a much smaller image but often at the cost of functionality. If you want a full-featured shell with fewer headaches and greater ease of use, stay with glibc_std. This option is what may cause you to need a larger flash drive.



--with-layer=wrll-multicore,wrll-userspace/compiler,wrll-linux-2.6.34: wrll-multicore specifies that this image is intended to run on top of the hypervisor; wrll-userspace/compiler and wrll-linux-2.6.34 are included for compiler support. If you don't need to compile anything on the target, the latter two may be left off to decrease the image size. Keep wrll-multicore in either case.



          Note: Depending on your version of Wind River Linux, you may need to change the 2.6.34 parameter to match

             your kernel version.



            --with-template=feature/mipc,feature/hyp,target/tools-extended: These are the recommended options. If you

            removed the compiler support in the previous parameter, then you may also remove target/tools-extended.



Once the project is created, it's time to build:


$ make -j 16 all



Note: The -j 16 parameter should be adjusted to be equal to the number of cores in the system, or the number of cores that you would like to use to build packages in parallel.
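If you'd rather not hard-code the job count, it can be derived from the build machine at build time. This sketch assumes the coreutils nproc utility is available:

```shell
# Derive the parallel job count from the number of available cores
# instead of hard-coding -j 16.
JOBS=$(nproc)
echo "building with $JOBS parallel jobs"
# make -j "$JOBS" all    # the actual build step from above
```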



Step 3

Now it's time to prepare the USB drive with the file system. Use your favorite partitioning tool (I like gparted) to create and format an ext3 partition on the USB drive.



The 'make all' command will have produced a file system in tar.bz2 format, which we will now extract to the USB drive:


$ cd /media/wrlinux                         #This is the location where I mounted the drive

$ sudo tar xjf ~/workspace/wrlpp-gos-usbfs_prj/export/intel_xeon_5520_ioh-standard-glibc_std-dist.tar.bz2



Note: It is important to run the extraction with superuser privileges; otherwise it will fail.



At this point your file system is ready to go, and we only require two small changes to the Guest OS XML file.



The BootLine parameter of the Guest might currently look something like this:

BootLine="console=ttyS0,115200 rw root=/dev/ram ramdisk_size=32768"



This means we are using the ramdisk as the root file system, and we need to change that. This is where it gets tricky and might require some trial and error. The root= parameter needs to point to the USB partition which holds the root file system, but it may not be obvious where that is going to be. One option is to boot the Guest OS (with the USB controllers passed through and the USB drive plugged into the system) and watch the output as the kernel initializes devices; it should tell you where it placed the USB drive. In my case it happened to be /dev/sdb2, so my new BootLine looks like this:
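As a quick way to confirm which device node the kernel assigned to the USB drive, you can inspect the partition table the kernel has registered. This assumes a standard Linux userland in the Guest OS:

```shell
# List the block devices and partitions the kernel currently knows
# about; the USB drive's partitions (e.g. sdb1, sdb2) should appear
# here once the passed-through controller detects the drive.
cat /proc/partitions

# The kernel log also records the assignment as the drive is probed:
#   dmesg | grep -i 'sd[a-z]'
```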


BootLine="console=ttyS0,115200 rw root=/dev/sdb2 rootwait"



The addition of the rootwait parameter is extremely important! It tells the kernel to wait for /dev/sdb2 to become available, which happens sometime after the kernel begins looking for its root file system. Without this parameter, you will see a kernel panic early on in the Guest OS boot process.


You have now completed all the steps needed to build a Guest OS that uses a USB drive for its file system. Continue to provide the kernel image to the hypervisor project as you normally would.





If you built a 64-bit Linux Guest OS, then you need to provide ~/workspace/wrlpp-gos-usbfs_prj/build/linux-intel_xeon_5520_ioh-standard-build/arch/x86/boot/compressed/vmlinux to the hypervisor.



If you built a 32-bit Linux Guest OS, then you need to provide ~/workspace/wrlpp-gos-usbfs_prj/export/intel_xeon_5520_ioh-vmlinux-stripped-* to the hypervisor.



Failure to provide the correct kernel image will result in the Guest OS failing to boot.




Good luck!

Hardware consolidation is often thought of as a general purpose computing strategy aimed at servers and datacenters. But virtualization-based consolidation can have big impacts in embedded systems, too. Intel multi-core and multi-threaded processors offer unexpected paths to embedded system cost reduction and control using virtualization.


Virtualization fundamentally provides the ability to decouple and abstract an operating system/application from the physical hardware it resides on, encapsulating it in a virtual machine. The technology of virtualization is largely independent of the type of software that runs in the virtual machine. The concepts of ‘decouple’, ‘abstract’ and ‘virtual’ are the strength of the approach. Beyond these technical qualities, virtualization lets workloads be moved and consolidated continuously rather than being bound to fixed hardware.


Hardware consolidation for embedded systems is underway – witness the legacy retargeting program of a ready-mix concrete company. In this case, the consolidation was as much about the obsolescence of the legacy hardware used to make the integrated system function as it was about reduction in hardware complexity.


Performance, peripherals, and supported software often drive embedded systems decisions. More capable processors are making workload consolidation practical for a growing range of embedded systems.


Workload consolidation for general purpose computing focused initially on homogeneous workloads, but is now taking on a heterogeneous workload profile.  The most critical portion of workload consolidation management is the process by which applications are selected for consolidation. In an IT environment the deployment strategy is a key part of the decision criteria. For embedded systems the decision matrix may be dictated by a whole host of considerations:


  • Embedded system hardware reliability
  • Proximity of related systems
  • Standards-based requirements for separate hardware platforms
  • Regulatory requirements for medical, aeronautics, financial and other systems
  • Physical security
  • Data integrity and security
  • Risk of loss – financial, data, liability, and others


Virtualization is one tool that can assist in meeting these wide-ranging requirements while still achieving consolidation. Virtualization provides technical advantages for embedded systems. Virtualization can protect against implementation deficiencies of poor isolation that lead to memory leakage, writes to peripherals not owned by the application, interference of other applications, resource/application scheduling conflicts, and operating system conflicts. For these reasons, many IT systems run each application on its own hardware platform. Virtualization approaches for embedded systems have addressed these issues in a variety of ways.


Many embedded systems can be partitioned into a User Interface (UI) and a hardware-oriented portion. This partitioning approach allows different engineering approaches to the two components. The UI is a “soft” realtime component that can be hosted on an ordinary operating system – typically Windows or Linux – while the hardware-oriented application is controlled by a realtime operating system.


This logical partitioning has led to several commercial approaches to software and hardware consolidation. Philips medical x-ray equipment employs TenAsys (1) INtime to achieve a workload consolidation spanning Windows and realtime control systems. UI functions like the system console operate in the Windows space while remote INtime nodes can operate autonomously with CANbus-connected realtime nodes. According to a Philips software project leader, INtime is used to control machine movement and radiation emission for imaging. Phoenix Contact, the largest supplier of soft PLC systems, also employs the INtime product to control its Virtual Logic Controller (VLC) for use in steel mills, automotive assembly, packaging, and material handling. ANCA, a world-leading CNC tool vendor, uses INtime to control its 5-axis CNC machining centers.


INtime supports Intel processors and all Windows SMP platforms. It provides direct support for realtime USB, Ethernet, and industrial I/O devices. Multi-core processors permit engineers to dedicate a single CPU core to realtime processes or to share CPU cycles between INtime and Windows operation.


New systems may be designed from the start to employ virtualization to partition the UI and realtime portions into separate subsystems using the appropriate software development tools. With operating systems that enable Windows and a realtime portion to coexist on the same platform, engineers can create embedded applications that meet realtime requirements while offering a rich user interface. Systems previously partitioned across multiple boards may be consolidated with only a modest increase in processor performance. This has benefits in both inventory and SKUs for the OEM.


Modern realtime and Windows systems allow both software development approaches to be used to their best advantage.


How will you partition future embedded systems to leverage both Windows-based User Interfaces and realtime operation?




  1. TenAsys is an Affiliate member of the Intel® Embedded Alliance

Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance