Virtualization is one of the hottest technologies in embedded design. By enabling multiple operating systems (OSs) to run on a single hardware platform, virtualization opens up powerful new design options. To get a better understanding of this technology, I asked a panel of industry experts to share their perspectives. Here’s what I learned.


Why does virtualization matter for embedded applications? How does it benefit embedded applications?

Chris Ault, Senior Product Manager, Virtualization, Wind River: The benefits of virtualization in embedded systems include the ability to a) consolidate and reduce the number of system components, b) migrate legacy applications to new hardware, c) separate or isolate safety- or security-critical applications from standard applications, and d) innovate rapidly by adding new functionality alongside the existing application and operating system stack. Each new function can be delivered on the OS best suited to it.


Embedded virtualization also offers the opportunity to separate the lifecycles of the isolated partitions; one OS can be upgraded with no impact on the other.


David Kleidermacher, CTO, Green Hills Software: There are many use cases driving adoption of embedded virtualization. However, there are some common benefits across verticals, including:


  • Consolidation: saving size, weight, power, and cost
  • IP protection: pushing proprietary software and data outside of licensing-encumbered operating systems (GPL, Google)
  • Mixed-criticality operation: running a rich general-purpose OS alongside hard real-time, security-critical, and/or safety-critical functions
  • Openness and extensibility: running any OS personality at any time – Linux, Android, Windows, RTOS, etc. You can think of hypervisors as providing the ultimate open environment.


Kim Hartman, VP of Marketing and Sales, TenAsys: One has to look at the overall embedded application as a mixture of different functions. Embedded virtualization allows a system to be built from a blend of application-appropriate operating systems, instead of relying on any single OS to serve all needs. Furthermore, a successful embedded virtualization initiative reuses existing application and OS intellectual property without requiring modification or a port of field-proven code.


Where are you seeing your customers employ virtualization? What applications are using virtualization, and why?

David Kleidermacher: Some examples we are seeing at Green Hills Software include:

  • In-vehicle infotainment: consolidate general-purpose head unit (e.g., MeeGo or Android), real-time (e.g., cluster displays), and/or instant-on/safety-critical (e.g., rear-view camera) functions
  • Software-defined radio: real-time/safety-critical alongside Linux
  • Networking/telecom: consolidate control and data planes; use virtualization as a means of achieving higher availability, e.g., detect guest OS failure and restart the VM (see the watchdog sketch after this list)
  • Health: isolate and secure sensitive patient information per HIPAA on mobile devices, e.g., electronic medical records management by physicians
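To make the higher-availability case more concrete, here is a minimal sketch of a guest heartbeat watchdog that a small monitoring partition might run. The monotonic_ms(), guest_last_heartbeat(), and vm_restart() calls are hypothetical placeholders, not any particular hypervisor's API; a real design would use the vendor's health-monitoring and VM-management interfaces.

#include <stdint.h>
#include <unistd.h>

#define HEARTBEAT_TIMEOUT_MS 500u

/* Hypothetical interfaces -- stand-ins for whatever the hypervisor
 * and platform actually provide. */
extern uint64_t monotonic_ms(void);            /* monotonic clock, in ms  */
extern uint64_t guest_last_heartbeat(int vm);  /* last guest heartbeat, ms */
extern void     vm_restart(int vm);            /* force-restart a guest VM */

/* Runs in a small monitoring partition alongside the guests. */
void monitor_guest(int vm_id)
{
    for (;;) {
        /* If the guest has not posted a heartbeat within the timeout,
         * assume it has hung and restart its VM. */
        if (monotonic_ms() - guest_last_heartbeat(vm_id) > HEARTBEAT_TIMEOUT_MS)
            vm_restart(vm_id);

        usleep(100 * 1000); /* poll every 100 ms */
    }
}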


Arun Subbarao, VP Engineering, LynuxWorks, Inc.: One early adopter has been the U.S. Department of Defense (DoD). This is not surprising, since mission-critical systems are often extremely complex, integrating multiple different operating environments and applications in very small form factor hardware designs. There is also a need to keep highly sensitive environments completely isolated from less sensitive ones, while still allowing authorized personnel appropriate access. Mission-critical applications also have key real-time response and high security requirements. In addition to the DoD, we are seeing government IT, financial, and medical organizations rapidly adopt virtualization technology to help safeguard sensitive data.


Chris Ault: We are seeing increasing deployment of embedded virtualization across all industries, most notably Aerospace & Defense, Industrial, and Networking. There is increasing interest in the Automotive In-Vehicle Infotainment, Mobile, and Medical sectors as well.


One example is using virtualization to add general-purpose operating systems with enhanced graphics interfaces to existing real-time robotic controllers. Another is migrating existing legacy applications from older single-core processors to multi-core CPUs while avoiding the need to ‘port’ application software. By using virtualization, customers can run multiple copies of the single-threaded environment, thus realizing the benefit of the increased performance of the multi-core CPU.


Kim Hartman: One area where we are seeing interest is embedded applications that are adding M2M, enterprise, or cloud services. These added services usually have no real-time requirements and can easily be addressed by leveraging off-the-shelf software on the Microsoft Windows platform. It’s no longer necessary to re-invent everything in the RTOS with custom or redundant hardware subsystems.


What are the key challenges to implementing virtualization? How can developers overcome these challenges?

Arun Subbarao: The key challenges to implementing virtualization are preserving native performance, allowing for higher security, and providing an appropriate device management model. However, this is an area of intense focus for processor vendors, with Intel being a pioneer. The technology advancements afforded by Intel® VT-x provide significant hardware assists that have closed the gap between native operating system performance and that of its virtualized counterpart. The trend toward multi-core computing is a natural technology enabler, allowing multiple operating environments to execute without loss of functionality or performance.


David Kleidermacher: By far the biggest challenge is I/O virtualization, and in particular, there are decisions/configurations in three dimensions: how a peripheral is virtualized/interfaced to physical hardware; how a peripheral is shared; and how quality-of-service and security can be assured across all VMs and all system bus masters. There are many choices and tradeoffs with respect to performance, maintainability/time-to-market, and robustness/security, and this is the place where vendor experience and technology flexibility are absolutely critical. There is no one-size-fits-all.


Intel VT technologies, especially Intel® VT-d and Intel® VT-c, dramatically simplify many use cases. However, here again experience and technology flexibility are critical. Intel VT-x, Intel VT-d, Intel VT-c, and other Intel technologies must be applied the right way depending on the aforementioned tradeoffs.


[Editors’ note: Intel® Virtualization Technology (Intel® VT) for IA-32 Intel® Architecture (Intel® VT-x) traps certain instructions in hardware, relieving the hypervisor of these duties. This greatly reduces the overhead of virtualization.



Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d) adds hardware accelerators that enable secure assignment of I/O devices to specific OSs. This decreases the load on the processor and accelerates data movement.



Intel® Virtualization Technology (Intel® VT) for Connectivity (Intel® VT-c) improves the network performance of virtualized devices.]
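To show where one of these capabilities first surfaces to software, the short C sketch below checks CPUID leaf 1 for the VMX feature flag (bit 5 of ECX), which indicates that the processor implements Intel VT-x. This is a minimal detection example, assuming a GCC or Clang toolchain on an x86 target; note that it only reports processor capability, since whether VT-x is actually usable also depends on firmware/BIOS settings.

/* Detect Intel VT-x (VMX) support via CPUID. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: feature flags. ECX bit 5 reports VMX support. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not available\n");
        return 1;
    }

    if (ecx & (1u << 5))
        printf("Intel VT-x (VMX) is supported by this processor\n");
    else
        printf("Intel VT-x (VMX) is not supported\n");

    return 0;
}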


Chris Ault: Developing a virtualized embedded product presents challenges and risks with regard to configuring partitioned memory regions, allocating CPU cores to partitions, and presenting specific devices to the appropriate virtual partition using the appropriate device-sharing model.
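To make those configuration decisions concrete, here is a simplified, purely illustrative partition table in C. The structure, field names, and values are hypothetical and do not reflect any vendor's actual configuration format; they simply show the kinds of choices involved: memory ranges, core assignments, and device ownership.

#include <stdint.h>

/* Illustrative only: a static partition table of the kind a hypervisor
 * configuration tool might produce. All names and values are hypothetical. */
struct partition_cfg {
    const char *name;               /* guest identifier                */
    uint64_t    ram_base;           /* guest physical memory base      */
    uint64_t    ram_size;           /* guest memory size in bytes      */
    uint32_t    cpu_mask;           /* bitmask of CPU cores assigned   */
    const char *passthrough[4];     /* devices dedicated to this guest */
};

static const struct partition_cfg partitions[] = {
    /* Hard real-time guest: one dedicated core, direct device access. */
    { "rtos",  0x80000000u, 256u << 20, 0x1u, { "can0", "gpio0" } },
    /* General-purpose guest: remaining cores, shared/virtual I/O.     */
    { "linux", 0x90000000u, 768u << 20, 0xEu, { "eth0", "display" } },
};

Fixing these assignments statically, rather than deciding them at run time, is one common way to keep the real-time partition's memory and core allocation deterministic.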


Developers can mitigate these risks and challenges by selecting a software vendor that offers a complete embedded virtualization platform for development: operating systems pre-integrated with a hypervisor, along with development tools and BSPs that are also integrated with the guest operating systems and hypervisor. Point solutions and non-integrated open source software have their place in the software development lifecycle, but embedded virtualization development requires coordinated, integrated products to minimize risk and development difficulties.


What would you say to developers who are worried about the difficulty and complexity of deploying a virtualized solution? Is adding this new software layer going to make their jobs harder?

Kim Hartman: Embedded virtualization done right makes the task easier, not harder. Having a clear understanding of how the solution actually works for you makes the process go smoothly and predictably.


David Kleidermacher: Ultimately, the hypervisor should be deployed like the BIOS – it’s always there and simple to configure; end users are mostly unaware of it. The server virtualization market has proven system virtualization practical, from both a technological and business perspective, to deploy on common hardware; while many embedded SoCs are quite a bit more complicated from the standpoint of I/O choices, fundamentally the hypervisor is a thin layer that can be easily ported and managed. Green Hills has proven this in the embedded space since we first shipped the INTEGRITY Multivisor in 2003. Eight years of experience across a wide range of processor technologies brings a lot of confidence to customers that embedded and mobile virtualization is practical.


Arun Subbarao: There is some level of additional complexity in deploying a virtualized solution. However, it is in no way greater than with the adoption of any other new disruptive technology. The separation kernel and hypervisor software is intended to abstract this complexity and allow developers to take advantage of the enormous flexibility and compelling business advantages this technology has to offer.


If I do need help implementing a virtualized solution, how can I get assistance? What kinds of services do you offer?

Arun Subbarao: We would recommend that developers who are creating their first virtualized solution take advantage of the assistance that software vendors have to offer in this area. Vendors, such as LynuxWorks, have training, support and consulting services to provide additional assistance to enable developers to adopt this technology and become productive in a very short period of time.


Kim Hartman: With over 20 years of experience, TenAsys has been providing solutions that address mixed GPOS and RTOS application needs. Our INtime for Windows RTOS is a para-virtualized solution that has been deployed for 14 years and applied to the most demanding multi-core processing needs.


Chris Ault: Wind River offers industry-specific Professional Services practices, with focused offerings that help developers meet strict market deadlines while keeping development costs down. We also offer extensive training and education services and rapid integration mentoring programs aimed at helping our customers kick-start their development activities. Finally, Wind River’s Online Support website offers a comprehensive knowledge base and various subscription features.


David Kleidermacher: We recommend contacting the Green Hills Software INTEGRITY Secure Virtualization (ISV) business unit experts for education and design advice in this area.


What are the most important trends you see developing in embedded virtualization? What does the future hold for this technology?

Arun Subbarao: The most important trends we see developing in embedded virtualization are rich user experience, miniaturization, and high awareness of security. The sensitivity to security will be the new dominant aspect of the three trends in the near future, given the emerging threats of identity theft and malware attacks. Luckily, virtualization technology can address all these requirements simultaneously. In fact, we believe it is the only technology at the intersection of all these emerging market trends.


Kim Hartman: Multi-core proliferation will continue making more discrete processing services available to embedded applications at lower cost. As it becomes clear that a single SMP approach is not the universal solution, AMP and mixed-OS solutions that emphasize utility become attractive. Embedded virtualization is the natural way to partition these resources for directed control.


Chris Ault: The increasing core count of multi-core CPUs is certainly an important trend, as are advances in hardware accelerators that further assist with the implementation of virtualization. As core counts increase, the effectiveness of SMP operating systems tends to decrease, thus driving more demand and relevance for embedded virtualization with hardware-assist features.


David Kleidermacher: Embedded microprocessors have only recently gained enough hardware assistance features to make system virtualization practical in embedded, resource-constrained electronics. New use cases and benefits are popping up on a daily basis. That this technology will be successful is a foregone conclusion. However, we still need the CPU vendors to improve hardware support, per my earlier comment on I/O virtualization. I have no doubt that Intel will continue to lead in this area.



More information

To learn more about virtualization for embedded devices, visit


For more information on the solutions from our panelists, please visit the following sites:


Wind River Systems is an Associate member of the Intel® Embedded Alliance. Green Hills Software, LynuxWorks, and TenAsys are Affiliate members of the Alliance.



Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine


Follow me on Twitter at @kentonwilliston