Interoperability is just a fancy way of saying that systems have to work together. The concept is simple, but the mechanisms required to make it work are not. Interoperability is achieved through adherence to international and other standards, combined with translation packages that reconcile “nearly identical” communications channels. The first approach is exemplified by Internet standards like TCP/IP, while the second is implemented by systems like the Common Object Request Broker Architecture (CORBA) and its Object Request Broker (ORB). Microsoft Corporation (1) takes a different approach than CORBA, relying instead on its own Distributed Component Object Model (DCOM) and Windows Communication Foundation, although Microsoft has agreed to a gateway standard that translates between CORBA and DCOM. Although CORBA and DCOM were developed for the Information Technology client-server model, embedded systems are closing the complexity gap that once separated them from IT systems.


Interoperability has multiple levels of meaning, including both hardware and software interoperability. Adlink’s (2) Jeff Munch presented an overview of COM Express interoperability in an Intel® Embedded Community blog that also points the way for other hardware interoperability efforts. While hardware is one key piece of the interoperability solution, software represents the bigger total-system challenge.

Using virtualization to achieve interoperability enforces a new discipline on software developers. By adopting virtualization as a fundamental part of the software design process, engineers develop software components that can be combined in different ways to create new products easily. Until recently, virtualization for microprocessor-based systems was touted mainly as a way to control expenses and improve manageability in data centers. While virtualization does those things for data centers, embedded systems have other, more pressing concerns today and for the near future.


Let’s take apart virtualization to see how you might apply its principles to embedded systems. Virtualization comes in multiple flavors:


  • System virtual machines - Sometimes called hardware virtual machines, these share the underlying physical machine’s resources among multiple virtual machines, each running its own operating system. The software layer implementing the virtualization is known as a Virtual Machine Monitor, or hypervisor. A hypervisor can run on bare hardware (a native VM, also called Type 1) or on top of an operating system (a hosted VM, also called Type 2).
  • Process virtual machines - Also called application virtual machines, a process VM runs as an application inside an OS and supports a single process. It is created when the process starts and destroyed when the process exits. Its purpose is to provide a platform-independent programming environment that hides details of the underlying hardware and operating system, allowing a program to execute in the same way on any platform.

  • Emulation of the underlying raw hardware - This approach is also called full virtualization of the hardware. Implementation is done using a Type 1 or Type 2 hypervisor. Each virtual machine can run any operating system supported by the underlying hardware. Users can run two or more different guest operating systems simultaneously. Each guest OS is resident in a separate private virtual computer.

  • Emulation of a non-native system - Virtual machines can also perform the role of a software emulator. Emulating non-native hardware allows operation of software applications and operating systems written for a different processor architecture.

  • Operating system-level virtualization - This approach can be thought of as partitioning: a single physical embedded platform is sliced into multiple small partitions (sometimes called virtual environments (VE), virtual private servers (VPS), guests, or zones). Each partition looks like a real system from the point of view of its operating software.

All of these virtualization approaches must provide abstraction of the physical hardware peripherals and services to the operating system or application. The dividing line between the physical hardware and the system software depends on which type of virtualization is employed. But regardless of the type chosen, virtualization establishes a mindset for designers.


Virtualization benefits:

  • Consolidation
  • Maximizing hardware cycle usage
  • Security
  • Separate development & production platforms
  • Better logical software partitioning
  • Hardware independence


Most fielded systems employing virtualization have been deployed in IT-focused servers, driven primarily by the first two benefits. The financial benefit for server configurations is leading to industry-wide API standardization for non-embedded virtualization systems, and you can expect similar efforts to take hold for embedded systems as a natural outgrowth. In the meantime, virtualization for embedded systems is available today and makes it easier to re-target systems to new and different platforms.




Virtualization tools from RTOS vendors like TenAsys (3), QNX (4), Microsoft Corporation, Green Hills Software (5), and Wind River Systems (6) provide frameworks for developers to create new software structures. Virtualization can be extended to include physically separate hardware systems using software techniques such as remote procedure calls and inter-process communications.

Interoperability can be defined by the mechanism used to communicate between computing components – whether they are realized in a hardware platform or inside a virtual environment. By virtualizing the software components of a system you gain more flexibility and control. For systems that use Ethernet communications, virtualizing the system means creating virtual Ethernet adapters, switches, and other communications support systems. How you accomplish this depends on the virtualization approach that you adopt.


Full virtualization gives you the easiest environment in which to implement interoperation elements. Under full virtualization, software is written against a virtual Ethernet adapter without regard for other software that may require the same resource. Freedom from physical-hardware considerations during design lets programmers develop a software structure that best matches the problem statement.
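
As a minimal sketch of that idea, the fragment below sends a UDP status datagram from a guest application. Whether the adapter beneath the socket is physical or virtual makes no difference to this code; the port number, peer address, and message are illustrative assumptions, and error handling is abbreviated.

  /* Minimal sketch: sending a UDP datagram from inside a guest.
     The code is identical whether the Ethernet adapter underneath is
     physical or virtual. POSIX sockets; error handling abbreviated. */
  #include <arpa/inet.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      int sock = socket(AF_INET, SOCK_DGRAM, 0);
      if (sock < 0) {
          perror("socket");
          return 1;
      }

      struct sockaddr_in peer;
      memset(&peer, 0, sizeof peer);
      peer.sin_family = AF_INET;
      peer.sin_port   = htons(5000);                      /* hypothetical service port */
      inet_pton(AF_INET, "192.168.1.20", &peer.sin_addr); /* hypothetical peer address */

      const char msg[] = "status: ready";
      sendto(sock, msg, sizeof msg, 0,
             (struct sockaddr *)&peer, sizeof peer);
      close(sock);
      return 0;
  }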


Programming is an intellectually complex undertaking that is difficult regardless of the techniques used to implement software. There is no “magic bullet” for embedded programming. Some tools and languages have specific advantages for some programming problems, but no single tool solves every one, and each part of the system programming specification carries tradeoffs. In theory, software developed in ‘C’ is faster to develop than the same program written in assembly language, while assembly language holds out the prospect of greater machine (platform) utilization. The rapid decrease in cost per CPU operation and the dramatic increase in CPU performance, combined with growing system complexity, have shifted the decision point sharply toward higher level languages like ‘C’ and Ada. The same hardware dynamics favor increased use of parallel programming to use multi-core processors effectively. Adopting virtualization as part of your programming bag of tricks encourages better program structure, because modules that are naturally defined by system requirements are easier to code and test than a single undifferentiated mass of code. This is especially important for improving interoperability.


Virtualization helps improve program abstraction. Most popular embedded programming languages lack semantic structures for abstracting concurrency; with a few notable exceptions such as Ada and Java, creating concurrency is left to explicit programming by developers. Virtualization aids the expression of concurrency by embedding inter-process communications and control within the virtualization mechanism. In short, the virtualization mechanism can remove the need for explicit software structure to deal with multiple modules.


Ada intrinsically has language semantics for creating concurrent or parallel programs, but ‘C’ does not. Threading libraries help in this regard, but virtualization abstracts the program structure so that there is better modularity. To use virtualization to its best effect (a threaded sketch of this style follows the list):


  • Avoid side-effect programming at all costs
  • Employ dataflow and data-parallel design techniques
  • Focus on task-centric programming
  • Emphasize reliability
  • Design in architectural efficiency
  • Employ asynchronicity in your designs
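
As a minimal sketch of the task-centric, side-effect-free style these guidelines describe, the fragment below uses POSIX threads: a worker task consumes values from a small queue and produces output without touching shared globals. The queue depth, the values, and the scaling step are illustrative assumptions, and the queue is sized so this demo cannot overflow it.

  /* Task-centric, dataflow-style sketch using POSIX threads.
     Build with: cc demo.c -pthread */
  #include <pthread.h>
  #include <stdio.h>

  #define QUEUE_DEPTH 4

  struct task_queue {
      int items[QUEUE_DEPTH];
      int head, tail, count;
      pthread_mutex_t lock;
      pthread_cond_t not_empty;
  };

  static void queue_put(struct task_queue *q, int value)
  {
      pthread_mutex_lock(&q->lock);
      q->items[q->tail] = value;
      q->tail = (q->tail + 1) % QUEUE_DEPTH;
      q->count++;
      pthread_cond_signal(&q->not_empty);
      pthread_mutex_unlock(&q->lock);
  }

  static int queue_get(struct task_queue *q)
  {
      pthread_mutex_lock(&q->lock);
      while (q->count == 0)
          pthread_cond_wait(&q->not_empty, &q->lock);
      int value = q->items[q->head];
      q->head = (q->head + 1) % QUEUE_DEPTH;
      q->count--;
      pthread_mutex_unlock(&q->lock);
      return value;
  }

  /* The task is a pure transformation of its input; it keeps no hidden state. */
  static void *scale_task(void *arg)
  {
      struct task_queue *q = arg;
      for (;;) {
          int raw = queue_get(q);
          if (raw < 0)                 /* negative value is the shutdown sentinel */
              break;
          printf("scaled value: %d\n", raw * 10);
      }
      return NULL;
  }

  int main(void)
  {
      struct task_queue q = { .lock = PTHREAD_MUTEX_INITIALIZER,
                              .not_empty = PTHREAD_COND_INITIALIZER };
      pthread_t worker;

      pthread_create(&worker, NULL, scale_task, &q);

      for (int i = 1; i <= 3; i++)
          queue_put(&q, i);            /* dataflow: values move, shared state does not */
      queue_put(&q, -1);               /* tell the worker to finish */

      pthread_join(worker, NULL);
      return 0;
  }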


Each of the RTOS companies mentioned above includes unique capabilities in its products, but every virtualization/OS combination offers some type of Inter-Process Communication (IPC). IPC is one of the most powerful tools available for improving modularity. Just as Ada intrinsically supports parallelism through its semantics, IPC extends similar capabilities to all programming languages. IPC can be achieved through a number of mechanisms (a minimal example using one of them follows the list):


  • Files
  • Signals
  • Sockets
  • Message queues
  • Pipes
  • Named pipes
  • Semaphores
  • Shared memory
  • Message passing
  • Memory-mapped files
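
As a minimal sketch of one entry from the list, the fragment below uses an anonymous pipe to pass a message from a child process to its parent. It is POSIX-specific, the message content is invented for illustration, and error handling is abbreviated.

  /* Minimal IPC sketch: parent and child exchange a message over a pipe. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      int fd[2];                       /* fd[0] = read end, fd[1] = write end */
      if (pipe(fd) == -1) {
          perror("pipe");
          return EXIT_FAILURE;
      }

      pid_t pid = fork();
      if (pid == 0) {                  /* child: acts as the producer */
          close(fd[0]);
          const char msg[] = "sensor reading: 42";  /* illustrative payload */
          write(fd[1], msg, sizeof msg);
          close(fd[1]);
          _exit(EXIT_SUCCESS);
      }

      /* parent: acts as the consumer */
      close(fd[1]);
      char buf[64] = {0};
      read(fd[0], buf, sizeof buf - 1);
      printf("received: %s\n", buf);
      close(fd[0]);
      wait(NULL);
      return EXIT_SUCCESS;
  }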


Using virtualization to achieve interoperability encourages better code structure while providing more reusable components for developers. Well-defined software modules that adhere to virtualization principles result in lower error rates and more robust systems. The degree of abstraction used in a specific system is largely up to the embedded systems designers.


Using the advanced techniques that virtualization and expanded parallel program structure provide is not without risk. Choosing too fine-grained a program structure can burden the system with excessive compute cycles devoted to the IPC mechanisms and parallel structures.


For the major systems that influence interoperability, there are simple means to minimize compute overhead while maximizing module reusability:


  • Define modules according to industry-standard protocols’ state diagrams
    • Choose the largest modules that remain self-contained
    • Within those self-contained modules, decompose further to find compute-intensive sections that can benefit from parallel execution
    • Generalize module operation so it applies to multiple standards
    • Choose the persistence of each module
      • Create and destroy the module each time it is used?
      • Load and save selected modules?
    • Consider the number of threads or cores present on the minimally capable hardware platform
    • Remember that too much parallelism can overwhelm the processor(s) with executing virtualization and IPC mechanisms


Virtualization and parallelism together make a powerful programming combination, offering the freedom and flexibility to choose new, more capable multi-core platforms while improving interoperability.


How will you make your next systems design virtualization and parallelism decisions?


More information



To learn more about virtualization for embedded devices, visit




  1. Microsoft® Corporation is an Associate member of the Intel Embedded Alliance
  2. Adlink is an Associate member of the Intel Embedded Alliance
  3. TenAsys is an Affiliate member of the Intel Embedded Alliance
  4. QNX Software Systems, Ltd. is an Associate member of the Intel Embedded Alliance
  5. Green Hills Software, Inc. is an Affiliate member of the Intel Embedded Alliance
  6. Wind River Systems is an Associate member of the Intel Embedded Alliance


Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance