
Set Top Box (STB), television, IVI, and other developers looking to add value to Intel Atom™ embedded systems are increasingly turning to Google* Android OS. In particular, they are leveraging end users’ familiarity with Android’s User Interface and its large volume of apps to give their products an edge. However, many developers face challenges making the transition to Android. For example, embedded devices often employ proprietary designs that do not natively support Android.




American Megatrends (AMI) DuOS (1) helps solve these challenges by allowing Android applications to run on top of Microsoft* Windows* (2). Among other benefits, this approach allows you to integrate Android on proprietary hardware or software platforms. Using DuOS can also save development time and money by avoiding the need to port existing Microsoft Windows-based software to the Android environment. In addition, DuOS enables you to add Android to an already-shipping product as an after-market enhancement.



Since its beginning in 2003, Google Android has maintained its focus on mobile computing, especially smartphones and tablet computers. But as the ease of use of Android phones captured users’ imaginations, corporate users have not-so-secretly yearned to integrate their most-used Android apps with the ubiquitous Microsoft Windows operating system. DuOS Dual OS Environment from AMI is a solution for adding an Android environment to products incorporating Intel Atom embedded processors. DuOS brings the fun and functionality of Android to embedded systems, coexisting seamlessly with Windows or Linux* on the same hardware. DuOS leverages AMI’s Intel Atom processor expertise to achieve the best user experience combined with the security, performance, and extensibility that developers require to make Android a viable value-add for their embedded products.



DuOS cuts across traditional operating system boundaries by combining Microsoft’s Windows (or Linux) and Google’s Android on a single Intel-based platform. Embedded devices are often lumped into a purpose-built category, on the assumption that only the essential drivers and support are included in the embedded product. But DuOS has been developed to support embedded applications in an enterprise environment. Many of the platforms supported by DuOS are PC-based, and many include versions of Windows that are not embedded-specific. While enabling embedded apps, DuOS supplies the features you’d expect in a product designed to work with nearly any version of Windows:


  • Quick to make ready for deployment – very few customizations required, installs readily on nearly any Windows machine
  • Instantaneous hot-key switching between Windows and Android OS environments
  • Small memory footprint and low performance impact
  • Can run multiple instances of Android
  • Non-touch user interfaces accessible from the Android environment
  • Viruses and malware in one environment do not affect the other
  • Compatible with Amazon Appstore for Android




AMI achieves this apparently unlikely marriage of the enterprise Windows OS with the smartphone-oriented Android OS by taking a different tack on making Android operational. Engineers at AMI isolate Windows operation from Android using hypervisor-like functionality. But unlike some other virtualization approaches, AMI relies on the Windows (or Linux) driver software, codecs, and other hardware control to provide Android-compliant input-output capabilities. While it may not be apparent at first glance, using existing Windows drivers brings a substantial advantage to embedded enterprise-class products. Driver software for enterprise hardware is typically well tested by the manufacturer and IT professionals alike. Further, years of fielded applications offer the security of billions of hours of operation in well-documented systems. Android and Android-compatible test environments offer developers one way to verify and validate the system, but for systems containing thousands of third-party software and hardware add-ins, little can compare to actual field experience.


The AMI solution for Windows+Android provides the same degree of security as a native Android system. Of course, Windows retains its own security and safety software measures. In addition, Android applications cannot be affected by Windows malware because Android runs in a separate virtualized environment. Android applications also carry their native protection against other errant Android applications. Developers can configure which files or devices can be accessed from Android, both to control which devices are available to Android and to further secure the system. Android retains its “sandbox” facility that isolates apps from each other.


Android is an ever-evolving environment. The current AMI-supported Froyo version of Android (v2.2) has a long track record, with one major update to functionality. The 2.3 version, code-named Gingerbread:


  • refined the user interface,
  • improved the soft keyboard and copy/paste features,
  • improved gaming performance,
  • added Session Initiation Protocol (SIP) support for Voice Over Internet Protocol (VoIP calls), and
  • added support for Near Field Communication.


AMI is working on a new version of Android. The most recent release of Android, code-named Ice Cream Sandwich, adds:

  • facial recognition unlock to improve Android device security,
  • network data usage monitoring and control,
  • unified social networking contacts,
  • photography enhancements,
  • offline email searching,
  • app folders, and
  • information sharing using NFC.

Installing the Froyo Android environment onto a Windows-based system is straightforward using DuOS. There are very few customizations required to configure a DuOS Android environment for a specific hardware platform, but AMI provides tools specifically to accomplish this task. It is likely that AMI will continue the streamlined approach to installation used for the Froyo version for future versions of Android.


AMI’s productization of Windows-Android integration points the way toward a rapidly evolving personal computing environment. How will your embedded products use this shift in technology?




*Microsoft and Windows are trademarks of Microsoft; Google and Android are trademarks of Google; DuOS is a trademark of AMI.


  1. American Megatrends, Inc. is an Affiliate member of the Intel Embedded Alliance
  2. Microsoft Corporation is an Associate member of the Intel Embedded Alliance



Henry Davis

Roving Reporter (Intel Contractor)

Intel Embedded Alliance

The Android software platform offers a wide range of facilities that can be configured to meet many different tasks. Developing software to operate within the Android environment means writing in Java. Few embedded developers currently use Java for many historic reasons: concern about memory size, program performance, power consumption and more. Most of these concerns have proven to be unfounded, but the new style of embedded application development requires new thinking and new tools.


In this overview of Android software development we’ll explore how to develop a new application, power control, and some Android built-in functions like compressed binary Java code that reduces memory usage. Then we’ll consider COTS products from the Intel Embedded Alliance members that can run Android – regardless of their form factor.


The seventies, eighties, and nineties saw ‘C’ become the dominant language for programming a wide variety of systems. ‘C’ started as a systems-focused language for developing the Unix operating system and associated utilities. Until recently, the university standard for developing programming skills in North America was based on ‘C’. Now universities have adopted Java as the language of choice for teaching new programmers fundamental skills. Embedded systems programming is beginning to adopt Java as well, but Java-friendly operating environments are only just adding capabilities for hard realtime programming. Nevertheless, Java is on the cusp of acceptance as a required language for embedded systems programmers.




Java is the product of decades of development based on threaded-interpretive machines. A Java platform includes an implementation of the Java Virtual Machine (JVM), a byte-coded virtual computer. The fact that the intended environment for Java programs is interpretive gives rise both to the flexibility of the system and to concern over performance. The use of the JVM extends the range of development platforms that may be used to create Java applications.


There are a number of ways to learn Java, including free online tutorials.




Realtime Java is addressed by other online tutorials as well.




As with most embedded programming, code for an embedded system is best run on the target hardware. Developing a Java-based application targeted for Android involves the same decisions as any other embedded project: Will the software be developed initially on an emulator, or on physical hardware with the same or similar capabilities as the final intended platform? What tools are required to create an application delivery package? Can Android Java apps be developed using existing tools?


Android embraces both emulated hardware and real physical hardware, providing flexibility for developers. The choice is a matter of the state of readiness of the final hardware platform; software development often needs to begin before final hardware is available. Emerson Network Power (1) provides a COTS hardware solution that is specifically promoted as a platform for Android-based systems. The Emerson NITX-300 Series Development Board is based on the Intel® Atom™ processor and includes Gigabit Ethernet, SATA, USB, serial ports, touchscreen, LCD, PCI Express x1, and PCI Express Mini Card slots. The NITX-300 is used by Thunder Software Technology Co., Ltd. (Thundersoft) (2) to deliver an embedded applications package for medical tablet computers. The supplied Android software has been adapted to serve the tablet market: called BigDroid, it delivers new display management that handles large-format displays and touchscreen technology.


Advantech’s (3) HIT-W121 is another system that explicitly supports Android. It was featured in a “teardown” article in EETimes that attributed Advantech’s choice of processor to the Intel Atom’s performance, combined with Intel’s engineering support. The HIT-W121 includes an 11.6” touchscreen and Intel’s ICH8 I/O hub. The I/O hub supports PCI Express and the local bus, ACPI power management, and enhanced DMA. It is also available with an expanded number of USB ports, high-definition audio, and a Gigabit Ethernet controller.

The growing popularity of Android for non-telecommunications applications is spawning a new group of service providers. Adeneo Embedded (4) is one of these companies. Adeneo is a full-service contract engineering firm that can assist in developing Android-based systems, starting with evaluation, moving to systems architecture, on to applications development, and finally to fielding the final system. Like Adeneo, ArcherMind Technology (Nanjing) Co., Ltd. (5), iWave Systems Technologies (6), and Mobica Limited (7) also provide engineering services for Android systems using Intel processors.



Developing an Android-based Java application requires creating an “.apk” file that has been signed with a key denoting whether the .apk is for debug or release. Without automation, the development task requires engineers to perform a number of individual steps that are purely mechanical. Tool chain suppliers including Green Hills Software (8), Intel, and Wind River Systems (9) all base their development tool platforms on the Eclipse environment. When using tool chains from any of these suppliers, developers may choose to use a utility supplied with the Android environment to automate the creation of an .apk file from the Java source and specified libraries and non-compiled objects such as images.


Programming in Java is different from ‘C’ because it is completely object oriented and encompasses syntactic and semantic structures that serve to eliminate certain classes of programming errors.
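As a minimal illustration of one such class of errors, consider out-of-range array access: in ‘C’ it silently reads past the buffer, while Java’s runtime checks catch it. The class and data below are invented for the example.

```java
public class SafetyDemo {
    // Sums the first n elements of data. In 'C', passing n larger than the
    // buffer would silently read adjacent memory; Java throws instead.
    public static int sumFirst(int[] data, int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += data[i];   // bounds-checked on every access
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] readings = {3, 1, 4};
        System.out.println(sumFirst(readings, 3));   // prints 8
        try {
            sumFirst(readings, 5);                   // out of range
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("out-of-bounds access caught");
        }
    }
}
```

The error still occurs, but it surfaces as a well-defined exception at the point of fault rather than as memory corruption discovered much later.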


Structurally, Java looks similar to, but different from, ‘C’:


public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}


The ubiquitous “Hello World” program is defined as a class in Java, and the filename must match the class name. By comparison, a ‘C’ source file can be named as the programmer wishes. The Java compiler creates a new file with the extension .class; for the example above, a new file named HelloWorld.class is compiled from HelloWorld.java. Java programs are assembled in packages, but you don’t have to write all the necessary code yourself. Java includes more than 300 packages with thousands of individual functions. And Android has a rich library of functions, already pretested and certified.
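To sketch how packages and the standard library come into play (the package and class names below are invented for illustration):

```java
import java.util.List;

public class Greeter {
    // In a real project this file would carry a package declaration, e.g.
    // "package com.example.greetings;", and live in a matching directory.
    public static String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        // java.util.List comes from one of Java's standard packages --
        // nothing to write yourself, just import and use it.
        for (String n : List.of("World", "Android")) {
            System.out.println(greet(n));
        }
    }
}
```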


Android’s management of virtual devices and the associated API aids the transition from emulated hardware to final hardware during development. Since most common peripherals are already defined as classes, moving from an emulated environment to a delivery platform is simplified, provided that you use the standard defined APIs. Some hardware, like the large-format displays in Thundersoft’s BigDroid tablets, requires extensions to the standard Android display class. But it’s possible to maintain compatibility between standard Android environments and environments using tablet displays by defining the extended classes so that the same API is still used.
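That pattern, extending a class while preserving its API, can be sketched as follows. The class names are invented stand-ins, not Android’s or Thundersoft’s actual classes.

```java
public class DisplayDemo {
    // Stand-in for a standard display class.
    static class Display {
        String render(String content) {
            return "standard:" + content;
        }
    }

    // Extension for large-format tablet displays. Callers use the same
    // render() API, so code written against Display keeps working.
    static class LargeFormatDisplay extends Display {
        @Override
        String render(String content) {
            return "large-format:" + content;
        }
    }

    public static void main(String[] args) {
        Display d = new LargeFormatDisplay();   // standard API, extended behavior
        System.out.println(d.render("chart"));  // prints large-format:chart
    }
}
```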


Java was developed by programmers developing for workstation environments, and the native Java tools reflect that viewpoint. Eclipse-based toolchains provide a convenient mechanism to replace workstation-focused scripting languages while providing all of the control that is intrinsic in the Android SDK.

As in many other systems, power control is best addressed at the software architecture level because that’s where the biggest paybacks can happen. But there’s still low-hanging fruit for Android applications, mostly related to wireless communications and display brightness settings. Many of these choices are made by the Android user, but you can influence them. Apps in the Android Marketplace, like Power Widget, give users the ability to control how their Android-based handheld uses power. But there are other choices that you can make. If the Android device is designed with an AMOLED display, choosing a black background as the default in your app can save up to 85% of the display power compared to a white background. You can also close the application automatically if a logical place exists to do so, further extending battery life. Lastly, design your software to make minimal use of GPS and radio functions, permitting them to be turned off when not in actual use. Users may still select less optimal power configurations, but there’s plenty of room to influence their choices.


Some intrinsic Android design decisions serve to minimize memory usage, both for program storage and during runtime execution. Recall that we talked briefly about the multitude of steps required to compile and prepare a program for execution on an Android platform. Android is based on a byte-code Java virtual machine that interprets the byte-codes making up each program. By compiling to a byte-code format, programs are automatically smaller than the typical native machine code that results if the Java program is compiled to the underlying physical target computer. My experience with byte-coded machines is that byte-coded programs can be one third the size of natively compiled programs. Android takes this a step further: a special tool in the Android SDK translates “normal” Java byte-codes into code for another virtual machine called Dalvik. Dalvik’s register-based architecture (most of its instructions address a file of 16 registers) results in object code that is even smaller and faster than the original Java byte-codes.


Java is gaining more realtime facilities as embedded development demands more reliability and faster development times.


Is Android in your future?




  1. Emerson Network Power is a Premier member of the Intel Embedded Alliance
  2. Thunder Software Technology Co. is a General member of the Intel Embedded Alliance
  3. Advantech is a Premier member of the Intel Embedded Alliance
  4. Adeneo Embedded is an Affiliate member of the Intel Embedded Alliance
  5. ArcherMind Technology (Nanjing) Co., Ltd is a General member of the Intel Embedded Alliance
  6. iWave Systems Technologies is a General member of the Intel Embedded Alliance
  7. Mobica Limited is a General member of the Intel Embedded Alliance
  8. Green Hills Software is an Affiliate member of the Intel Embedded Alliance
  9. Wind River Systems is an Associate member of the Intel Embedded Alliance


Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

Embedded apps for Android are easy to develop, with some of the simpler ones completed in a few short hours; some claim minutes. Android is a dynamically configured software environment, but application software is isolated from lower-level Android hardware platform functionality by virtue of the Java language definition and the use of predefined APIs and libraries. Android uses XML as part of the language environment and relies on logical and physical isolation between applications. The built-in virtualization is required to permit Android to manage the wide array of unique apps that are available. For example, there are an overwhelming 27,000 apps from which to choose, each of which must operate correctly.


All of this software has to operate in concert with the other applications loaded onto the Android appliance. With thousands of apps available for Android-based platforms, how do you ensure correct operation of your applications? Part of the answer lies with the architecture of the Android platform itself. By limiting software’s span of control through well-defined APIs, the environment provides a first defense against faulty software. But commercial app developers want to provide reliable software that delivers the intended function. Provided that the app is targeted at a well-behaved and correct Android system, the problem is fairly straightforward. The testing and validation problem becomes more significant when the totality of the Android environment is considered. Low-level modifications and additions, other middleware, and the Android software platform itself must work as the Android standard specifies. And the application must work as intended in that environment.




The bigger picture of Android testing is daunting: the Android environment includes thousands of validation tests that must be passed to begin the process of ensuring correct app operation, BSP-specific tests must likewise verify that the hardware is operating as designed, hardware test sequences also verify correct component function, and user interface interactions must be tested.


Software unit test, application validation, Linux kernel operation verification, and correct device operation validation are absolute requirements. Wind River goes a long way toward providing the tools necessary to automatically control and manage the application of test sequences to Android-based hardware and the software environment.





Wind River Systems’ (1) Framework for Automated Software Testing (FAST) helps developers determine software compliance, measure performance, and assess stability of mobile devices running the Android environment. The test tool accepts the results of a build, can execute a selection of Wind River-authored tests, invoke existing open source scripts (including scripts that validate third-party software such as the Java runtime environment), and archive the relevant results. FAST can then provide the results in a single report. All of FAST’s operations can be managed in a completely automated way once the appropriate tests, scripts, and configurations are selected.


Using FAST reportedly can save up to 25% of the project development time. Much of the time savings comes from the systematic integration of existing Android test suites, which number in the hundreds and perhaps thousands, with specific tests developed by Wind River as part of the Android FAST environment. FAST incorporates:


  • Android Compatibility Test Suite (CTS)
  • MeeGo Core Test Suite (MCTS)
  • Android Monkey
  • Java instrumentation
  • Wind River Android Device Characterization Suite (ADCS)
  • Hardware Abstraction Layer (HAL)


Other facilities of FAST include integration of tests run under other test frameworks such as:


  • Linux Test Project (LTP)
  • IOZone
  • IPerf


FAST can be augmented by test code created by development teams to test unique hardware, specific application functionality, and quality/reliability issues such as the use of root kits.


Wind River also offers the UX Test Development Kit that can be used to write additional Android-specific tests of the User Interface. While FAST already includes some tests developed by the company using the UX kit, developers can create additional tests using the kit. Engineers can simulate clicking widgets in the user interface, dragging graphical elements, reading the contents of the User Interface elements, and typing entries on a physical or virtual keyboard. Automating this aspect of testing can save considerable time and effort.


Gaining the most utility from the FAST tool requires flexibility in the tool’s user interface. Wind River FAST provides test engineers two different interfaces for controlling automated test execution. One of the methods is based on a web user interface, while the second relies on a command-line interface. Regardless of the interface method selected, test engineers can determine which tests to run and which results to view. The command line interface provides greater flexibility for engineers who wish to integrate FAST’s results with other development tools, including Eclipse-based tools. Tests are easy to configure and run using FAST. Test engineers can create a control file through the command line interface that specifies all aspects of the tests, including the test targets, user running the tests, which tests to run, and test run priority. The tool also provides a regression test report through the web interface. Many of the tests and test results can be scheduled to run at specific times and the results emailed to recipients defined in the control file.


In the next installment on Android, we’ll talk about Android solutions for Intel Architectures and some specific Intel-based hardware platforms using Android.




1. Wind River Systems is an Associate member of the Intel Embedded Alliance



Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

According to personal computer industry watchers such as IDC and Gartner, sales of general-purpose PCs are declining year-on-year, but the decline is being more than made up by the growth of small-form-factor computers, including tablet PCs, smartphones, and purpose-built handheld embedded systems. Systems that were once the domain of science fiction writers have become commonplace in an increasing number of embedded applications like consumer-quality digitally compensated microphones, smart self-calibrating sensors, motor controllers, automotive engine control, and even the telephone systems we use every day.

In the 1950s early IBM analysts predicted that the world’s demand for digital computers would be satisfied with at most a few dozen systems. In the early 1970s the minicomputer seemed destined to take over computing tasks. In the 1980s the mainframe was declared dead. Now there’s a new approach in which the mainframe concept is back – cloud computing.


Cloud computing carries with it increased requirements for remote access and control. As a mainstream technology, cloud computing is viable only because ubiquitous computing is nearly here. The difference between small form factor PCs and handheld embedded systems continues to blur as systems power consumption continues to drop while processing capacity increases. The most visible aspect of this shift in the technology space is in how data is presented to users. Where once the predecessor technology to cloud computing relied on IBM Channel Controllers managing alpha-numeric video terminals, today’s presentation technologies embrace most media forms including video and audio.  Companies like Eurotech (1), ILS Technology (2), and GoS Networks (3) are at the forefront of the addition of cloud computing and embedded systems to the technology mix.




Eurotech offers a complete solution for gaining access to the cloud with its Everyware Cloud products. Everyware Device Cloud is the integration between devices running Everyware Software Framework and the Isidorey device cloud platform. Eurotech’s ESF relies on Wind River Linux to provide a flexible, communications oriented development environment. The Wind River Carrier Grade Operating System Linux (CGOS) includes more than 500 software packages, virtualization based on KVM (kernel-based Virtual Machine), and multi-core offload capabilities. This facility allows Eurotech customers to use the potential of modern multi-core Intel hardware.


Everyware employs the OSGi Alliance software platform based on Linux, which is supported by dozens of information technology companies and embedded systems suppliers. The Eurotech system runs on a virtualized platform using the communications services provided by Wind River’s (4) CGOS. In an earlier blog I commented on the desirability of developing software in a more modern, fully typed language. Eurotech does this by developing their software using Java. Adopting Java as their preferred programming language brings many advantages both to Eurotech and their customers:


  • Java applications run in a virtual machine environment –  improving reliability.
  • Java is a “type safe” applications language, meaning that the larger operating environment is safe from mal-formed code.
  • Java is network centric, a crucial architectural choice for cloud computing environments.
  • Object oriented programming.
  • Built-in memory management.
  • Rapid code development and test.
  • Large software repository of Java code.


Java has recently benefitted from Intel’s research group with the availability of an open source implementation of parallel Java. Although Java currently lacks some of the facilities needed to implement general hard realtime systems, the addition of parallel Java moves the performance barrier back to allow more high-performance signal processing algorithms to operate within the Java environment.
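As a hedged sketch of the data-parallel style this enables, using Java’s own parallel streams rather than Intel’s library (whose API I won’t guess at), here is a moving-average filter parallelized across output samples:

```java
import java.util.stream.IntStream;

public class ParallelFilter {
    // Moving-average filter: each output sample is independent of the
    // others, so the outer loop can be computed in parallel across cores.
    public static double[] movingAverage(double[] x, int window) {
        return IntStream.rangeClosed(0, x.length - window)
                .parallel()
                .mapToDouble(i -> {
                    double sum = 0.0;
                    for (int k = 0; k < window; k++) {
                        sum += x[i + k];
                    }
                    return sum / window;
                })
                .toArray();
    }

    public static void main(String[] args) {
        double[] y = movingAverage(new double[]{1, 2, 3, 4}, 2);
        System.out.println(java.util.Arrays.toString(y));   // [1.5, 2.5, 3.5]
    }
}
```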


Perhaps the most significant benefit of using Java for embedded applications development in the ESF environment is the interpretive nature of Java code. Code can always be translated to native machine language, but employing an interpretive language on a virtual machine encourages a mind-shift to dynamically defined systems. Adding or changing a piece of application code is a simple matter of stopping an application, loading a new instance into the operating environment, and then starting the new instance. According to Eurotech, systems using the ESF have not failed due to an errant program overwriting system memory.
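The stop/load/start pattern rests on Java’s ability to load classes by name at runtime. The sketch below is generic; the interface and class names are invented, and ESF’s actual OSGi-based mechanism is considerably richer.

```java
public class DynamicLoad {
    // The contract an updatable application module implements.
    public interface Task {
        String run();
    }

    // One version of the module; an update would supply a different class.
    public static class TaskV1 implements Task {
        public String run() {
            return "v1";
        }
    }

    // Load an implementation by name at runtime -- no VM restart needed.
    public static Task load(String className) {
        try {
            return (Task) Class.forName(className)
                               .getDeclaredConstructor()
                               .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("cannot load " + className, e);
        }
    }

    public static void main(String[] args) {
        Task t = load("DynamicLoad$TaskV1");   // nested-class binary name
        System.out.println(t.run());           // prints v1
    }
}
```

Swapping in a new module is then just stopping the old instance, loading a class with a different name, and starting it.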




Eurotech’s Catalyst TC card, based on the Intel® Atom™ processor E6xx series, is a highly integrated and flexible embedded module that offers multiple communications, I/O, and multimedia capabilities with low power consumption in a small footprint. The product has been chosen by a supplier to the United States Department of Homeland Security and the Defense Department to host handheld embedded applications using Eurotech software.


ILS Technology offers a machine-to-machine (M2M) platform called deviceWISE. The deviceWISE M2M Application Deployment Kit includes configurable deviceWISE M2M Gateway software, an Intel Atom processor-based M2M gateway hardware platform, and a developer cloud subscription. The software includes a device connection library with an edge processing and event engine, and allows developers to create simple workflows that translate raw data from connected devices into intelligent events. These events are then securely transmitted to applications residing in the cloud or in ad hoc customer locations. ILS’ communications software also works with Wind River VxWorks and Linux products to provide communications protocols within those environments, in addition to the deviceWISE M2M Application Deployment Kit. According to industry analyst firm Frost & Sullivan, simplifying embedded application deployment in a cloud environment is a major technological hurdle; ILS Technology simplifies that task. deviceWISE allows end users to connect device-to-device, device-to-database, and device-to-application. deviceWISE also includes Remote Access Service features that allow end users to create role-based accessibility while retaining control of all information flow and access.




GoS Networks takes a different slant on M2M communications and access to cloud computing. GoS software fits into the link connections of networks and manages network traffic. Motivated by service providers’ need to acquire, manage, maintain, and bill customers, GoS takes a unique approach to Service Quality Assurance: it enables service providers to see traffic as the user experiences it. GoS enforces policies and shapes traffic at the real edge of the network, which in this definition is the client device. GoS manages network congestion with:


  • guaranteed bandwidths
  • enforced bandwidth limits
  • reuse of unclaimed bandwidth
  • separate control of loss and delay priority
  • fair sharing of quality, not just bandwidth
  • network stability under increasing load
  • predictions of loss and delay behavior under saturation
  • live monitoring of delivered quality


GoS manipulates the relationship between data loss, delay and throughput. These three variables are fundamentally interdependent - if one is fixed, the other two adjust to the fixed variable. If two are fixed, the third is determined by default. Using this principle, GoS Technology permits different simultaneous levels of management for different data streams. Some data streams can’t tolerate delay but can accept packet loss without a perceptible drop in quality, while others require that all packets arrive at their destination eventually. GoS permits managing these and other data handling issues.
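One classic mechanism behind this kind of traffic shaping is a token bucket, which trades delay (or loss) against a bandwidth cap. The sketch below is a generic illustration, not GoS’s implementation.

```java
public class TokenBucket {
    private final double ratePerMs;    // bytes of credit added per millisecond
    private final double capacity;     // maximum burst size in bytes
    private double tokens;             // current credit
    private long lastRefillMs;

    public TokenBucket(double bytesPerSecond, double burstBytes, long nowMs) {
        this.ratePerMs = bytesPerSecond / 1000.0;
        this.capacity = burstBytes;
        this.tokens = burstBytes;      // start with a full bucket
        this.lastRefillMs = nowMs;
    }

    // Returns true if a packet of the given size may be sent now; returning
    // false means the shaper must delay or drop the packet, trading delay
    // or loss against the configured bandwidth cap.
    public boolean tryConsume(int packetBytes, long nowMs) {
        tokens = Math.min(capacity, tokens + (nowMs - lastRefillMs) * ratePerMs);
        lastRefillMs = nowMs;
        if (tokens >= packetBytes) {
            tokens -= packetBytes;
            return true;
        }
        return false;
    }
}
```

For example, a 1000-byte/s bucket with a 100-byte burst admits a 100-byte packet immediately, refuses the next byte, and admits another 100-byte packet a second later.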


GoS Networks’ cloud-specific software is GoS 360. Using the software network, operators can ensure secure, reliable delivery of complex cloud-based services by deploying GoS Agent at client premises and GoS Manager in the network. GoS Manager acts as a performance monitor for all connected GoS Agents. This ensures that accurate, real-time information of application performance is available to client devices. Network operators can respond to the data by controlling individual sessions via the embedded GoS Agents.


Eurotech, ILS Technology, and GoS Networks impact cloud computing in different ways: Eurotech and ILS Technology address client-side data presentation and management, while GoS Networks manages dataflow, control, and QoS. Each approach has a role to play in modern cloud computing.


How will you deploy embedded systems within a cloud structure?


To view other community content focused on connectivity, see Connectivity - Top Picks.




  1. Eurotech is an Associate member of the Intel Embedded Alliance
  2. ILS Technology is an Affiliate member of the Intel Embedded Alliance
  3. GoS Networks is a General member of the Intel Embedded Alliance
  4. Wind River is an Associate member of the Intel Embedded Alliance


Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

Real world signals are the basis for stimulus and response in embedded systems. Without real world signal interfaces, embedded processors would have few applications. Much attention has been paid to high data rate sensors, but the sensor continuum spans a wide range of data rates.

Most real world signals, unlike digital signals, are continuous in nature. Continuous (analog) signals can be processed by linear components as part of a control system, but digital processing carries with it significant benefits. These benefits include repeatability, freedom from component drift, ease of calibration, and the ability to adapt signal processing to changing environments. More than thirty years ago, telecommunications engineers recognized the power of digital signals that could represent analog quantities. Even so, digital representation of analog signals was understood by very few engineers when realtime digital signal processing first became available.


Today the landscape has changed dramatically. Ordinary run-of-the-mill processors are fully capable of handling lower bandwidth signals that don’t require substantial signal processing. More modern, higher performance processors, like the Intel® Core™ i7, bring new architectural features to the embedded application space. Engineers can select from Intel processors to meet application needs: Intel® Atom™ processors offer low power operation, giving designers flexibility in power-constrained applications, while Core™ i5 and Core i7 processors provide high performance instructions including Advanced Vector Extensions (AVX). Parallelism through multiple cores and multiple-data instruction extensions like AVX is key to achieving the performance needed for realtime signal processing. Early special-purpose signal processors achieved their realtime performance by using multi-data instructions that performed a dual data fetch, multiply-accumulate, and save in a single cycle. Regardless of the specific processor that you choose, you will be maintaining compatibility with the large code base of the Intel Architecture.


AVX packs a lot of functionality into the new extensions to the Intel Architecture. Much of AVX’s power comes from the vector pipeline width: the execution unit and critical registers are 256 bits wide. AVX’s vector pipeline can execute multiple operations in parallel using a Single Instruction, Multiple Data (SIMD) architecture. Instead of a single Arithmetic Logic Unit (ALU) performing a function on two operands to yield a single result, several ALUs operate on multiple pairs of data simultaneously. By replicating ALUs, AVX can produce several results in the time a standard processor core takes to produce one. This is particularly effective where the same operation is performed many times across a large data set, as in typical digital filtering. The ability to recast sequential operations into multiple parallel sequences is common in DSP algorithms including matrix operations, filter functions, and Fast Fourier Transforms.
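The SIMD idea can be sketched in scalar code (a conceptual model only - real AVX code would use compiler intrinsics or a vectorizing compiler). One "vector" multiply-accumulate applies the same operation to every lane of a 256-bit register at once:

```python
LANES = 8  # a 256-bit AVX register holds eight 32-bit floats

def simd_mul_add(acc, a, b):
    """One SIMD multiply-accumulate step: acc[i] += a[i] * b[i]
    across all lanes 'in parallel' (modeled here with a loop)."""
    assert len(acc) == len(a) == len(b) == LANES
    return [acc[i] + a[i] * b[i] for i in range(LANES)]

# Dot product of two 16-element vectors, eight lanes at a time -
# the inner loop of a digital filter recast as two vector steps:
x = [float(i) for i in range(16)]
h = [1.0] * 16
acc = [0.0] * LANES
for base in range(0, len(x), LANES):      # two vector iterations
    acc = simd_mul_add(acc, x[base:base + LANES], h[base:base + LANES])
result = sum(acc)                          # final horizontal reduction
```

Where a scalar core would need sixteen multiply-accumulate iterations, the eight-lane model needs only two vector steps plus one reduction.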


AVX operates on 256-bit wide registers which can represent either:


  • eight 32-bit single-precision floating point numbers or
  • four 64-bit double-precision floating point numbers


Current Intel AVX implementations retain 128-bit wide integer registers which can represent:


  • two 64-bit integers or
  • four 32-bit integers or
  • eight 16-bit short integers or
  • sixteen 8-bit bytes or characters


Many algorithms for performing realtime signal processing are well suited to single precision fixed point operations; eight 16-bit fixed point values fit within the 128-bit integer registers. AVX extends the data type to 32-bit single precision floating point, so eight concurrent operations are possible, and for some operation sequences like multiply-add, two instructions can execute per cycle. Under these circumstances, this hardware capability allows sixteen operations to proceed concurrently.
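The lane counts above follow directly from the register and element widths - dividing one by the other gives the number of parallel lanes:

```python
def lanes(register_bits, element_bits):
    """Number of parallel SIMD lanes for a given element size."""
    return register_bits // element_bits

# 256-bit AVX floating point registers
assert lanes(256, 32) == 8    # eight single-precision floats
assert lanes(256, 64) == 4    # four double-precision floats

# 128-bit integer registers (current AVX implementations)
assert lanes(128, 64) == 2    # two 64-bit integers
assert lanes(128, 32) == 4    # four 32-bit integers
assert lanes(128, 16) == 8    # eight 16-bit short integers
assert lanes(128, 8) == 16    # sixteen 8-bit bytes or characters
```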




The bandwidth that can be processed by Intel Core processors with AVX has prompted many Intel Embedded Alliance members to offer single board computers as standard products. Member companies Kontron’s (1) VX3030, Advantech’s (2) ARK-3440, RadiSys’ (3) CEQM67, and GE Intelligent Platforms’ (4) SBC324 are a few of the Core i7-based products.


GE Intelligent Platforms has developed a series of embedded hardware platforms that bring the power of Intel’s Advanced Vector Extensions (AVX) to the SBC324 3U OpenVPX rugged single board computer. One of five Core i7-based boards from GE-IP, the SBC324 features a quad-core processor operating at up to 2.1GHz and up to 8GBytes of DDR3-1333 memory. These hardware features make for exceptional performance in size-, weight-, and power-constrained applications. The GE-IP AVX-based boards are targeted at especially harsh environments such as unmanned vehicles. GE-IP plans for the SBC324 include deployment not only in traditional 3U VPX applications such as command/control, but also in Intelligence, Surveillance, Reconnaissance (ISR), radar/sonar, and other signal processing.


The real power behind the SBC324 is software. GE-IP’s AXISLib-AVX is a set of signal and vector processing libraries that includes more than 600 high performance digital signal processing and vector mathematical functions optimized for AVX operation. Each function helps developers maximize system and application performance while minimizing time-to-market. The libraries are designed to complement the hardware power of the AVX-based SBC324 by supporting advanced realtime embedded signal processing applications such as ISR. AXISLib-AVX can operate standalone or as an integral software module within the AXIS Advanced Multiprocessor Integrated Software environment.


Wind River Systems’ (5) Linux® and VxWorks OSes can be used to manage Curtiss-Wright Controls Embedded Computing’s Core i7-based CHAMP-AV8. The board features two quad-core Intel® Core™ i7 processors that combine to produce 269 GFLOPS of peak performance. Harnessing all of this computing power takes some forethought by engineers. Central to the application architecture, the use of multiple cores requires explicit decisions about how the cores and signal processing algorithms will be managed by software. Increasingly, embedded software shifts the responsibility for managing multiple cores and parallelism from the application code to the operating system. Wind River’s VxWorks includes Symmetric Multi Processing (SMP) core reservation, which allows a single core in a multi-core system to be allocated to a single process and isolated from other processes and cores. An additional VxWorks facility called “spinlock” works in the Asymmetric Multi Processing (AMP) configuration to avoid instruction stalls. These capabilities can be critical to ensure hard-realtime operation for signal processing algorithms.
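Core reservation of the kind VxWorks provides has a rough analog in Linux CPU affinity. The sketch below (Linux-specific; os.sched_setaffinity is Python’s standard wrapper for the underlying affinity syscall) pins the current process to core 0 so the scheduler will not migrate it:

```python
import os

def pin_to_core(core_id):
    """Restrict the calling process to a single CPU core (Linux only).
    Analogous in spirit to reserving a core for one process."""
    os.sched_setaffinity(0, {core_id})   # 0 means the current process
    return os.sched_getaffinity(0)       # report the new CPU mask

# Pin this process to core 0; the returned set shows the new mask.
mask = pin_to_core(0)
```

This only constrains one process; an RTOS-grade reservation additionally keeps other processes and interrupts off the reserved core.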



(Intel E6x5 DSP engines in FPGA)


In addition to multi-core programmable processors, Intel provides alternatives for a wide range of signal processing using the Atom E6x5 processor with up to 39 configurable DSP engines - or any other unique accelerator you can define. The E6x5’s multi-chip module packaging controls package capacitance and bond wire inductance by minimizing conductor length and keeping those connections within the same package. The use of an Altera FPGA as part of the E6x5 brings all of the design flexibility of an FPGA to single-package embedded processors. From a systems standpoint, these FPGA-based computing elements may be treated as peripherals that are explicitly invoked by setting registers and the like, or managed through the use of an Asymmetric Multi Processing (AMP) operating system like Wind River’s carrier grade OS. Kontron offers a single board computer featuring an Intel Atom E6x5 processor: the Microspace® MSMT PCIe/104™ SBC includes a 1.3GHz processor, 1GB RAM, and HSMC for custom interfaces, all in a small footprint. The E6x5 processor’s Altera FPGA provides flexible I/O configuration for application-specific elements. Standard and optional I/O on the SBC include USB, audio, graphics, Ethernet, and a realtime clock. All you need to add is your code and FPGA configuration.


Embedded processor choices can simplify your next systems design.


What kind of decision can you make?




  1. Kontron is a Premiere member of the Intel Embedded Alliance
  2. Advantech is a Premiere member of the Intel Embedded Alliance
  3. RadiSys is a Premiere member of the Intel Embedded Alliance
  4. GE Intelligent Platforms is an Associate member of the Intel Embedded Alliance
  5. Wind River Systems is an Associate member of the Intel Embedded Alliance


Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

The original design philosophy for the underlying technology that enables the Internet was founded on providing multiple alternative routings from one computer to another. By re-establishing this original “Internet” approach to communications channels, communications can continue through most outages. Internet routing employs automatic fallback to alternative channels, which assures continuous communications under most circumstances. Establishing redundant physical connections further improves reliability, but it is also possible to bond channels together to achieve faster communications - an approach used by some vendors of V.32 telephony modems to achieve better throughput. Today, a wide variety of physical channels is available to designers of embedded systems.

A basic tenet of systems design is to avoid all single points of failure where practical. Within embedded systems that communicate with other systems, the highest probability of failure lies with the communications channel. For any system that relies on communication with another system, the failure of a sole communications link generally leads to system failure. The answer to this single point of failure is to add another communications link.

Companies like Australia’s NewSat actively deploy backup and emergency communications to supplement their mainline offerings.  Despite the ever-shrinking size of electronics, physical limitations impose minimum footprint sizes for systems that use satellite links.


Systems that include redundant communications links can use all available channels to increase bandwidth. Or, the links may be used in fallback configurations to increase system reliability and availability. The process of using multiple links to increase bandwidth is variously called:

  • Link aggregation
  • Trunking
  • Link bundling
  • Ethernet/network/NIC bonding
  • NIC teaming


For LAN connections, more than one physical port is configured into the bond. But not all bonding schemes increase channel bandwidth. For example, the Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical interface. How the bonded interfaces behave depends on the bonding mode selected in the driver. The Linux bonding driver modes provide either hot standby or load balancing:

  • Round-robin
    • Transmits packets in sequential order from the first available NIC through the last - provides both load balancing and fault tolerance.
  • Active-backup
    • Only one NIC in the bond is active. A different NIC becomes active if the active NIC fails - provides fault tolerance only.
  • XOR
    • Selects the same NIC for each destination MAC address - provides load balancing and fault tolerance.
  • Broadcast
    • Transmits on all NICs - provides fault tolerance.
  • IEEE 802.3ad dynamic link aggregation
    • Creates aggregation groups of NICs that share the same speed and duplex settings - provides load balancing and fault tolerance, but requires an 802.3ad-capable switch.
  • Adaptive transmit load balancing
    • Channel bonding that does not require any special switch support - if the receiving NIC fails, another NIC takes over the MAC address of the failed NIC.
  • Adaptive load balancing
    • The Linux bonding driver overwrites the source hardware address with the hardware address of one of the NICs in the bond - different peers use different hardware addresses.
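To make the mode behaviors concrete, here is a small sketch (a hypothetical simulation, not the Linux bonding driver itself) of how round-robin and active-backup each pick a NIC for an outgoing packet:

```python
def round_robin(nics, packet_index):
    """Round-robin style: rotate through all healthy NICs in turn."""
    healthy = [n for n in nics if n["up"]]
    return healthy[packet_index % len(healthy)]["name"]

def active_backup(nics):
    """Active-backup style: always use the first healthy NIC."""
    for n in nics:
        if n["up"]:
            return n["name"]
    raise RuntimeError("all links down")

nics = [{"name": "eth0", "up": True}, {"name": "eth1", "up": True}]

# Round-robin spreads load across both links ...
rr = [round_robin(nics, i) for i in range(4)]

# ... while active-backup keeps all traffic on one link.
ab = active_backup(nics)

nics[0]["up"] = False          # primary link fails
failover = active_backup(nics)  # the backup NIC takes over
```

Round-robin alternates eth0/eth1 for bandwidth; active-backup stays on eth0 until it fails, then fails over to eth1 - the hot-standby behavior described above.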



Bonding may take place at any of the three lowest OSI layers. Wireless and power line devices are generally bonded at the physical layer (layer 1), while Ethernet links are typically bonded at the data link layer (layer 2). It is also possible to bond at the network layer (layer 3), as with Internet Protocol (IP). Bonding was originally the domain of infrastructure companies using large scale hardware, but advances in CPUs have moved the potential of bonding from near the central office toward the outer edges of the communications network.


Edge Access is a pioneering company in the VoIP business focusing on emergency communications. Edge Access’ equipment was deployed to New Orleans, Louisiana during hurricane Katrina, where it facilitated the first voice call from the devastated area. The electronics required to perform this embedded task was a bit bigger than a full-size PC tower, but with a satellite link the footprint of the complete package is defined by the size of the satellite dish. Edge also manufactures smaller VoIP systems based on other communication links.


Norco (1) has a smaller, lower power approach to assuring reliable communications. The BIS-6623 can provide 3G/4G connections with voice, video, and data. Presented as a way of maintaining mission critical communications across an ever-expanding set of data types in emergency communications, it is a fan-less design based on the Intel® Atom™ E6xx processor.


(BIS-6625 block diagram)




Norco’s BIS-6623 is an example of using commercially available communications channels in a redundant manner to provide critical communications for systems used by emergency services and others. The system employs standard protocol stacks to manage multiple channels simultaneously under Microsoft® Corporation’s (2) Windows® Embedded operating systems. Fallback protocols are implemented in software to select among (or even bond) the available channels.

The hardware platform is available in several standard versions and comes standard with a 1 GHz processor clock rate and 1 GB of DDR2 memory. Alternative standard versions are available, including one based on the 600 MHz Tunnel Creek and an OS-less bare-hardware option.


Norco identifies products from third-party wireless hardware suppliers Huawei and Sierra Wireless to enable wireless communications. The 3G-capable Sierra Wireless AirCard® 503 2-in-1 Data Card provides PC Card and PC Card Express form factors in one package. Redundancy for the BIS-6623 is achieved by loading two or more 3G cards (or USB modems) into the system. If you are using the Windows operating system, device bonding is determined by how you set up the Windows drivers. Keep in mind that although the Norco product is aimed at wireless situations, these compact products also have other connections, including USB and Ethernet.


Bare-hardware options for the BIS-6625 can run any OS that supports Tunnel Creek. So Wind River Systems’ (3) Linux product for telecommunications carrier grade applications can bring high software reliability to the compact Norco form factor.


The Intel Atom processor family enables many small form-factor designs. Intel Embedded Alliance Premiere members Advantech (4), Emerson (5), Kontron (6), and RadiSys (7) all offer a variety of configurations. You can learn more about these and other Alliance members’ Atom-based products at the Alliance membership web site.


Communications channel bonding is increasingly becoming a viable option for embedded systems, but consider the totality of the environment that your system operates within. Single points of failure may exist in switches and routers outside your embedded system.

Can your next embedded system benefit from redundant communications?




  1. Norco is an Associate member of the Intel Embedded Alliance
  2. Microsoft Corporation is an Associate member of the Intel Embedded Alliance
  3. Wind River Systems is an Associate member of the Intel Embedded Alliance
  4. Advantech is a Premiere member of the Intel Embedded Alliance
  5. Emerson is a Premiere member of the Intel Embedded Alliance
  6. Kontron is a Premiere member of the Intel Embedded Alliance
  7. RadiSys is a Premiere member of the Intel Embedded Alliance




Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

Interoperability is just a fancy way of saying that systems have to work together. The concept is simple, but the mechanisms required to make it work are not. Interoperability is achieved through adherence to international and other standards, combined with translation packages that reconcile “nearly identical” communications channels. The first approach is exemplified by Internet standards like TCP/IP, while the second is implemented by systems like the Common Object Request Broker Architecture (CORBA) and its Object Request Broker (ORB). Microsoft Corporation (1) takes a different approach than CORBA, relying instead on its own Distributed Component Object Model (DCOM) and Windows Communication Foundation; however, Microsoft has agreed to a gateway standard to translate between CORBA and DCOM. Although CORBA and DCOM were developed for the Information Technology client-server model, embedded systems are closing the complexity gap with IT systems.


Interoperability has multiple levels of meaning, including hardware and software interoperability. Adlink’s (2) Jeff Munch presented an overview of COM Express interoperability in an Intel® Embedded Community blog that also shows the way for other hardware interoperability. While hardware is one key piece of the interoperability solution, software represents the bigger total systems challenge.

Virtualization of interoperability enforces a new discipline on software developers. By adopting virtualization as a fundamental of the software design process, engineers will develop software components that may be combined in different ways to easily create new products. Until recently, virtualization for microprocessor-based systems was touted as a way to control expenses and improve manageability in data centers. While virtualization does those things for datacenters, embedded systems have other, more pressing concerns today and for the near future.


Let’s take apart virtualization to see how you might apply the principles to embedded systems. Virtualization comes in multiple flavors:


  • System virtual machines - sometimes called hardware virtual machines. They support sharing the underlying physical machine resources between different virtual machines, each running its own operating system. The software layer implementing the virtualization is known as a Virtual Machine Monitor or hypervisor. A hypervisor can run on bare hardware (native VM, also called Type 1) or on an operating system (hosted VM, also called Type 2).
  • Process virtual machines - also called application virtual machines. A process VM runs as an application inside an OS and supports a single process: it is created when the process starts and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that hides the details of the underlying hardware or operating system, allowing a program to execute in the same way on any platform.

  • Emulation of the underlying raw hardware - also called full virtualization of the hardware. Implementation is done using a Type 1 or Type 2 hypervisor. Each virtual machine can run any operating system supported by the underlying hardware, and users can run two or more different guest operating systems simultaneously, each resident in a separate private virtual computer.

  • Emulation of a non-native system - virtual machines can also perform the role of a software emulator. Emulating non-native hardware allows operation of software applications and operating systems written for a different processor architecture.

  • Operating system-level virtualization - can be thought of as partitioning: a single physical embedded platform is sliced into multiple small partitions (sometimes called virtual environments (VE), virtual private servers (VPS), guests, or zones). Each partition looks like a real system from the point of view of its operating software.

All of these virtualization approaches must abstract the physical hardware peripherals and services from the operating system or application. The dividing line between the physical hardware and the systems software depends on which type of virtualization is employed. But regardless of the type chosen, virtualization establishes a mindset for designers.


Virtualization benefits:

  • Consolidation
  • Maximizing hardware cycle usage
  • Security
  • Separate development & production platforms
  • Better logical software partitioning
  • Hardware independence


Most fielded systems employing virtualization have been deployed in IT-focused servers based on the first two benefits. The financial benefit for server configurations is leading to industry-wide standardization of API interfaces for non-embedded virtualization systems. As a natural outgrowth of this standardization, you can expect to see similar efforts take hold for embedded systems. In the meantime, virtualization for embedded systems is here and provides ways to improve systems’ ability to be re-targeted to new and different platforms.




Virtualization tools from RTOS vendors like TenAsys (3), QNX (4), Microsoft Corporation, Green Hills Software (5), and Wind River Systems (6) provide frameworks for developers to create new software structures. Virtualization can be extended to include physically separate hardware systems using software techniques such as remote procedure calls and inter-process communications.

Interoperability can be defined by the mechanism used to communicate between computing components - whether they are realized in a hardware platform or inside a virtual environment. By virtualizing the software components of a system you gain more flexibility and control. For systems that use Ethernet communications, virtualizing the system means creating virtual Ethernet adapters, switches, and other communications support systems. How you accomplish this depends on the virtualization approach that you adopt.


Full virtualization gives you the easiest environment in which to implement interoperation elements. Under full virtualization, software is written for a virtual Ethernet adapter without regard for other software that may require the resource. Relief from physical hardware considerations during the design process gives programmers the freedom to develop a software structure that best relates to the problem statement.


Programming is an intellectually complex undertaking that is difficult regardless of the techniques used to implement software. There is no “magic bullet” for embedded programming. Some tools and languages have specific advantages for some problems, but there isn’t one tool that solves every programming problem, and each part of the systems programming specification carries tradeoffs. In theory, software is faster to develop in ‘C’ than in assembly language, but assembly language carries the prospect of greater machine (platform) utilization. The rapid decrease in cost per CPU operation, simultaneous dramatic increases in CPU performance, and growth in system complexity have shifted the decision point sharply in favor of higher level languages like ‘C’ and Ada. The same hardware dynamics favor increased use of parallel programming to use multi-core processors effectively. Adopting virtualization as part of your programming bag of tricks encourages better program structure, because the modules naturally defined by system requirements are easier to code and test than a single glut of code. This is especially important for improving interoperability.


Virtualization helps improve program abstraction. Many popular embedded programming languages lack the semantic structure to abstract concurrency, and with a few notable exceptions like Ada and Java, creating concurrency is left to explicit programming by developers. Virtualization aids the expression of concurrency by embedding the inter-process communications and control within the virtualization mechanism. In short, the virtualization mechanism can remove the need for explicit software structure to deal with multiple modules.


Ada intrinsically has language semantics for creating concurrent or parallel programs, but ‘C’ doesn’t. Threading libraries help in this regard, but using virtualization abstracts the program structure so that there is better modularity. To use virtualization to best effect:


  • Avoid side-effect programming at all costs
  • Employ dataflow and data-parallel design techniques
  • Focus on task-centric programming
  • Emphasize reliability
  • Design in architectural efficiency
  • Employ asynchronicity in your designs


Each of the RTOS companies mentioned above includes unique capabilities in its products, but every virtualization environment and OS offers some type of Inter-Process Communication (IPC). IPC is one of the most powerful tools available for improving modularity. Just as Ada intrinsically supports parallelism through its semantics, IPC mechanisms extend similar capabilities to all programming languages. IPC can be achieved through a number of mechanisms:


  • Files
  • Signals
  • Sockets
  • Message queues
  • Pipes
  • Named pipes
  • Semaphores
  • Shared memory
  • Message passing
  • Memory-mapped files


Using virtualization to achieve interoperability encourages better code structure while providing more reusable components for developers. Well-defined software modules that adhere to virtualization principles result in lower error rates and more robust systems. The degree of abstraction used in a specific system is largely up to the embedded systems designers.


Using the advanced techniques that virtualization and expanded parallel program structure provide is not without risk. Choosing too fine-grained a program structure can burden the system with excessive compute cycles dedicated to the IPC mechanisms and parallel structures.


For the major systems that influence interoperability, there are simple means to minimize compute overhead while maximizing module reusability:


  • Define modules according to industry standard protocols’ state diagrams
  • Choose maximum-size modules that are self-contained
  • Within the self-contained modules, decompose to find high compute requirements that can benefit from parallel execution
  • Generalize the module operation to apply to multiple standards
  • Choose the persistence of each module
    • Create and destroy each time used?
    • Load and save select modules?
  • Consider the number of threads or cores present on the minimally-capable hardware platform
    • Too much parallelism can overwhelm the processor(s) with executing virtualization and IPC mechanisms


Virtualization and parallelism together make a powerful programming technique. The two can offer the freedom and flexibility to choose new, more capable multi-core platforms while improving interoperability.


How will you make your next systems design virtualization and parallelism decisions?


More information



To learn more about virtualization for embedded devices, visit




  1. Microsoft® Corporation is an Associate member of the Intel Embedded Alliance
  2. Adlink is an Associate member of the Intel Embedded Alliance
  3. TenAsys is an Affiliate member of the Intel Embedded Alliance
  4. QNX Software Systems, Ltd is an Associate member of the Intel Embedded Alliance
  5. Green Hills Software, Inc is an Affiliate member of the Intel Embedded Alliance
  6. Wind River Systems is an Associate member of the Intel Embedded Alliance


Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

Developing software for embedded systems is different from, and arguably more complex than, developing software intended for an Information Technology-focused PC environment. For thirty years the paradigm for Personal Computers followed the progression of mainframe and mini-computer models of computing. These models have evolved to rely heavily on abstracting the operating environment into a common programming interface in which applications are fully separated from the underlying hardware. Where PC software is most often completely divorced from its environment, embedded systems are intimately involved with theirs.


Commercial software development tools have benefited from decades of tool development and millions of man-years of usage by professional programmers. These general purpose tools have a wide range of little-used capabilities that can empower embedded software developers. Single-processor/single-core CPUs are straightforward platforms for development and software debug, and lack some of the complicating aspects of multi-core systems.

Uniprocessors (CPUs with only one processor core) can support multiple threads, but only one thread actually executes at a time. Still, having tools to trace the operation of threaded software aids both understanding how threads work in practice and debugging code written to use threading. Green Hills Software®, Inc (1) offers the MULTI EventAnalyzer as part of its MULTI Integrated Development Environment (IDE). Part of the TimeMachine suite, the EventAnalyzer centers on a graphical display of operating system events, including essential information such as kernel service calls, interrupts, exceptions, and context switches. This information is often used primarily for optimization - the operations that take the most time stand out, so developers can spend effort where it will have the best payback. But the EventAnalyzer can also be a good learning tool for new members of a development team.
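The kind of analysis such a trace display enables can be sketched with plain data (hypothetical event records, not Green Hills’ actual trace format): given logged events and their durations, rank where the time goes so optimization effort lands where the payback is largest:

```python
from collections import defaultdict

def time_by_event(trace):
    """Sum the time spent in each event type from a trace of
    (event_name, duration_us) records, returned longest-first."""
    totals = defaultdict(int)
    for name, duration_us in trace:
        totals[name] += duration_us
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical trace: kernel service calls, interrupts, context switches
trace = [
    ("context_switch", 12), ("irq_timer", 3), ("sem_take", 40),
    ("context_switch", 11), ("sem_take", 38), ("irq_timer", 4),
]
ranked = time_by_event(trace)
# sem_take dominates the trace, so that is where to optimize first.
```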




Being able to capture operating system events is a “nice to have” feature for uniprocessors, but when development moves to multiprocessors or multi-core processors the feature goes from nice to a “must have” capability.


When developing within a threading environment, all drivers and libraries must be “thread safe.” But adding multiple processors places an additional requirement on software: everything must be priority-inversion safe as well. Priority inversion can occur whenever tasks of different priorities contend for a shared resource. It is a scheduling problem in which a higher priority task is indirectly preempted by a lower priority task, effectively inverting the relative priorities of the two tasks. Such situations apparently violate the priority model - high priority tasks should only be prevented from running by higher priority tasks.

      By way of illustrating the problem we’ll use a task called L, with low priority, that uses a resource called R. Assume that L is executing and gains control of resource R. The problem comes when a high priority task H also requires resource R. If H starts after L has acquired R, H has to wait until L relinquishes it. Everything works as expected up to this point, but problems arise if a task M with medium priority starts during this time. Since R is still in use (by L), H cannot run. Since M is the highest priority unblocked task, it will be scheduled before L. Since L has been preempted by M, L cannot relinquish R. So M will run until it finishes, then L will run - at least to the point where it can relinquish R - and only then will H run. Thus a task with medium priority ran before a task with high priority, giving us a priority inversion.


      Green Hills’ EventAnalyzer shows developers the operating system calls graphically, so it’s straightforward to detect this situation by looking just at the task statuses. Wind River Systems® (2) adds another capability, called Simics, to its tool chain.




      Wind River Simics simulates everything from a single processor, system-on-chip (SoC), or board to the most complex system conceivable. Simics can simulate an entire system, including racks of platforms, with each running a different operating system on a different processor architecture. One of the goals of Simics is to allow all developers, testers, and integrators to debug, test, and integrate the system as a single unit rather than working with individual system pieces, as has been the case. Lest you think these facilities are more than embedded development requires, consider large scale embedded systems like telecom central office functions implemented by racks of dedicated embedded systems. Or think of alternative hardware partitioning such as might be found in Digital Signage, in which displays are remote but the embedded computing could be performed in a physically localized fashion.

      Experts at Green Hills, Wind River, and Intel® offer these tips for multi-core development:


      • Consolidate the hardware as much as possible.
      • Employ virtualization.
      • Choose a Symmetric Multiprocessing (SMP) operating system if building on a homogeneous processor base, or an Asymmetric Multiprocessing (AMP) operating system if using specialty processors like Digital Signal Processors.
      • Select libraries that have been optimized for multi-core use – the code will still run correctly when operating on fewer cores.
      • Adopt driver software and OSes that are both thread and priority inversion safe.
      • Develop software that makes maximum use of threading since it can simplify software development and also adapts to a new platform embodying more cores with little effort.
      • Enforce coding standards that minimize the ability to violate priority models while encouraging maximum architectural parallelism consistent with the selected processor family.


      Chances are that your company has already standardized on a specific tool chain. Regardless of the vendor, most tool chains support the Eclipse standard. Green Hills tools embrace Eclipse as part of their framework, and the company recommends Eclipse for extending the capabilities of the EventAnalyzer tool to meet unique requirements. Wind River’s tool chain also uses Eclipse as part of its framework; Wind River has been a strong supporter of the Eclipse effort, contributing software to the industry-wide project.


      Embedded systems are gaining in complexity at a rate faster than implied just by the growth of Integrated Circuit complexity. Developing embedded systems is a challenging undertaking that demands a series of development techniques that are new to the embedded community.

      How will you adapt your tool chain to fit evolving embedded requirements?




      1. Green Hills Software, Inc  is an Affiliate member of the Intel Embedded Alliance
      2. Wind River Systems is an Associate member of the Intel Embedded Alliance


      Henry Davis
      Roving Reporter (Intel Contractor)
      Intel® Embedded Alliance

      How do you solve embedded scalability issues to build physically dispersed, large scale, real world systems?


      Embedded systems were once relatively independent, purpose-built hardware intended to serve a fixed function within a fixed and predictable demand system.  The emergence of organically growing embedded systems like streaming media and the “SmartGrid” system demand scalability on a large scale.




      Design techniques pioneered for large scale computing can be applied to embedded systems. The techniques rely on systems scalability, which is enabled by software structure. Embedded software vendors offer many of the building blocks necessary to create these complex systems.


      In a recent blog I posted about some methods available to embedded developers. But, as with any program designed to solve a specific problem, the best program structure reflects the problem statement. And that is where software structure comes into play. A good programming language should enable one obvious way to create the code and not easily permit many alternatives. Unfortunately, the language of choice for general embedded systems (‘C’) doesn’t inherently funnel the creative process towards one “right” implementation. Software tool chains can be used to augment the language with programming standards and styles. For example, Green Hills Software’s (1) DoubleCheck product is aimed at finding and flagging potential errors in C and C++ programs. DoubleCheck is a tightly integrated adjunct to Green Hills’ C and C++ compilers. It extends traditional static analysis to help catch a slew of errors that can become runtime reliability problems:


      • Potential NULL pointer dereferences
      • Access beyond an allocated area - buffer overflows and underflows
      • Potential writes to read-only memory
      • Reads of potentially uninitialized objects
      • Resource leaks including memory and file descriptor leaks
      • Use of memory that has already been deallocated
      • Out of scope memory usage such as returning the address of an automatic variable from a subroutine
      • Failure to set a return value from a subroutine
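      To make a couple of those categories concrete, here are two patterns in corrected form, with the defect a static analyzer like DoubleCheck would flag noted in the comments (the function names are ours, for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Flagged pattern: returning the address of an automatic (stack)
 * variable. Corrected: allocate on the heap and guard the potential
 * NULL pointer dereference if malloc fails. */
char *dup_string(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)          /* avoid NULL dereference */
        strcpy(copy, s);
    return copy;               /* caller must free() - no resource leak */
}

/* Flagged pattern: a path that reads an uninitialized object or falls
 * off the end of the function without a return. Corrected: every path
 * assigns and returns a value. */
int clamp(int v, int lo, int hi)
{
    int out = v;               /* initialized on all paths */
    if (v < lo) out = lo;
    if (v > hi) out = hi;
    return out;
}
```

      The flawed versions of these routines typically compile without complaint, which is exactly why static analysis earns its keep.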


      As with the Green Hills tool chain framework, this tool may be extended by programming it to recognize uniquely defined structures with their own unique checking requirements.


      Wind River Systems’ (2) Link-Time Lint Checker is also an integrated error-checking tool. The lint facility finds common C programming mistakes at compile and link time. Typical errors flagged include:


      • unused variables and functions
      • missing return statements
      • constants out of range
      • function call mismatches


      Link-time checking finds inconsistencies across modules, which is impossible to do at compile time.


      Maybe it’s time to consider other languages that don’t have the faults inherent in C. Ada is one such language, supported directly by Green Hills Software and through a partnership between Wind River Systems and AdaCore. Ada had its genesis in a US Department of Defense contract starting in 1977. Today, it is the language of choice in many embedded fields including aerospace and other high reliability applications. Ada is a structured, statically typed, imperative, object-oriented programming language. It has strong built-in language support for explicit concurrency, synchronous message passing, protected objects, tasks, and nondeterminism. Synchronous message passing employs a monitor-like construct with additional guards as in conditional critical regions; nondeterminism is accomplished by the select statement. This is a language that you should seriously consider when developing large scale, advanced systems requiring high reliability.




      Referring to an overview level of detail for SmartGrid operations, it should be clear from inspection that the operations environment is conceptually complex. In an earlier blog, a holistic system for ready-mix concrete incorporated some of the elements of the software complexity required for SmartGrid, but for the well-defined problem of managing and controlling a batch concrete plant. One of the main differences between the complexity of a batch plant operation and the ever-growing infrastructure of power distribution and management is the dispersed nature of the American (and other) power grids. The US power “grid” started more than one hundred years ago as a series of ad hoc, local, distributed networks supplying local consumers with relatively small amounts of power. Since then these local distribution networks have been connected together in an expanding series of power distribution cables. There may be social debate about SmartGrid, but the proliferation of residential power generation net-metered to the grid, combined with deregulation of larger scale power generation, requires a smarter mechanism to control supply and demand not only locally, but also regionally and globally.


      SmartGrid points towards a mixture of embedded systems for the US electric infrastructure: systems of varying size, complexity, and architecture. Looking at High Performance Computing (HPC) gives us a look not too far into our future. In its early stages, HPC saw many of the problems that embedded systems are just starting to encounter. Embedded systems are quickly closing the gap between the isolated, small-scale embedded system and HPC – the problems HPC faced a decade ago are our problems today, and today’s HPC problems will be on our doorstep in a few short years. We will need to deal with three broad categories:


      • Efficient use of systems with a large number of concurrent operations (scalability)
      • Reliability with large tightly coupled systems
      • Jitter based on hardware, software, and the applications


      Scalability carries with it an intrinsic requirement for improved reliability of each software component. As the number of components increases, the reliability of each component in isolation becomes critical to the continued operation of the assembled system. Although an individual component, software or hardware, may fail, design techniques are available to permit continued operation in the face of component failure.  Embedded systems can be implemented by custom hardware or collections of industry standard hardware modules combined with scalable software.
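      The arithmetic behind this is unforgiving. If each of n components is independently available with probability r, the whole tightly coupled system is available only with probability:

```latex
P_{\text{system}} = r^{\,n}
\qquad\text{e.g. } r = 0.999,\; n = 4096:\quad
0.999^{4096} = e^{4096\,\ln 0.999} \approx e^{-4.10} \approx 0.017
```

      In other words, an illustrative per-component availability of 99.9 percent leaves a 4096-node system fully up less than 2 percent of the time - which is why both per-component reliability and failure-tolerant design matter.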


      One of the key messages from the experience with large scale systems such as HPC is that virtualization is a key technology for managing complexity, reliability, and multiple hardware platform types. Virtualization is a technique that separates software from the underlying hardware on which it operates. While scalability is possible without virtualization, using virtualization simplifies systems design and offers more options for systems implementation. An approach based on scalability systematically improves effectiveness while minimizing power consumption. Virtualization is a key component of the software architecture: with it, embedded system providers can maintain one code base that supports a continuum of performance and efficiency.


      Operating system jitter is a new concept for many embedded programmers, and indeed for most programmers. OS jitter is related to other unexpected performance degradations in large numbers of computing nodes. In one of the most recent research results on the subject, researchers at Lawrence Livermore National Laboratory found that a computer made up of 4096 elements suffered a 13-fold reduction in throughput based solely on jitter. This fact has implications for future embedded systems. Looking again at the SmartGrid operations block, you can see that there is substantial potential for large numbers of processors configured in computing clusters - which in turn means that for at least this application, we’ll be facing OS jitter issues. Researchers believe that OS jitter can best be managed by:


      • Improving interrupt routing
      • Better user and kernel thread scheduling
      • More intelligent scheduling policies
      • Synchronization of jitter sources through various co-scheduling techniques 


      Virtualization makes many of these systems design decisions changeable with minimal perturbation of the remainder of the system. Vendors of RTOS products like QNX (3) and TenAsys (4) have different takes on what is important in an RTOS, but by employing virtualization as a cornerstone of your systems design you can minimize code rework.

      Although the subject is vast, improving software scalability boils down to a handful of points:


      • Adopt virtualization as a fundamental part of your design process
      • Consider the changing landscape of large scale embedded systems – what lessons are to be learned from them?
      • Choose a language, like Ada, that includes concurrency in the language itself
      • Employ threading
      • Identify what information is required to be used by the embedded system – minimize the span of information
      • Investigate your existing systems execution profile for bottleneck information – sometimes the resulting information is counter-intuitive
      • Evaluate the minimum number of cores that your application requires with a load low enough to NOT impact software development – usually keep loading under 80%
      • Ensure that your drivers and libraries are written for a maximum number of processors, but don’t force the use of more cores than are required


      There’s a place for most every RTOS, embedded programming language, and tool chain in your future. Which will you choose?




      1. Green Hills Software, Inc  is an Affiliate member of the Intel Embedded Alliance
      2. Wind River Systems is an Associate member of the Intel Embedded Alliance
      3. QNX Software Systems, Ltd. is an Associate member of the Intel Embedded Alliance
      4. TenAsys is an Affiliate member of the Intel Embedded Alliance


      Henry Davis
      Roving Reporter (Intel Contractor)
      Intel® Embedded Alliance

      Multi-core processor technology can bring higher systems performance and lower power consumption to a broad range of embedded applications running on distributed computing elements. But with the benefits of multi-core come new challenges and complexity, not just from a hardware perspective but more importantly from the software development task. Many developers find the move from single-core to multi-core systems challenging.  Developing embedded systems to achieve scalability is a particular challenge.  How can developers migrate software between processors with different core counts without rewriting their code?  An even bigger challenge is present in distributed systems, where the processing cores are in physically separate processors.  How can developers harness these physically separate multi-processor distributed resources to work in concert for their system?


      There are alternative approaches to developing software that can be migrated between systems employing processors with differing numbers of cores. Generally, software is developed either using message passing with a Single Program, Multiple Data (SPMD) model, or for shared memory with threads in OpenMP, Threads+C/C++, or Java. Software using message passing generally scales easily, while the shared memory approach is easier to program but has performance limitations.
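      A minimal sketch of the shared-memory/threads side, using POSIX threads in C: each worker sums its own slice of a shared array, with the slice boundaries derived from the thread count, so the same source runs unchanged on one core or many (the names here are ours, for illustration):

```c
#include <pthread.h>

#define NTHREADS 4
#define N 1000000            /* divisible by NTHREADS */

static int data[N];
static long partial[NTHREADS];

/* Each worker sums its own slice of the shared array and writes the
 * result to a private slot - no locks needed because no location is
 * ever written by two threads. */
static void *worker(void *arg)
{
    long id = (long)arg;
    long chunk = N / NTHREADS;
    long s = 0;
    for (long i = id * chunk; i < (id + 1) * chunk; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

long parallel_sum(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < N; i++)
        data[i] = 1;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    long total = 0;
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        total += partial[i];   /* combine after all threads finish */
    }
    return total;
}
```

      The performance limitation mentioned above shows up when threads contend for the same cache lines or locks; the slice-per-thread layout here is the usual way to sidestep it.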





      Some programming languages encourage software architectures that employ parallel execution paths to use parallel hardware resources. Unified Parallel C (UPC) is one such language. Originally targeted at massively parallel mainframe computers, UPC was created based on experience gained from three earlier languages: AC, Split-C, and Parallel C Preprocessor (PCP). UPC combines the programmability advantages of the shared memory programming approach with control over data layout, along with performance gains from the message passing programming paradigm.


      QNX Software Systems, Ltd.’s (1) approach to supporting multi-core and multi-CPU systems is based on the idea of a microkernel. Traditional embedded operating systems are constructed as monolithic software in which every aspect of the OS is loaded whether used or not; depending on the OS chosen, it may not be possible to reduce its memory footprint. The QNX kernel contains support only for CPU scheduling, interprocess communication, interrupt redirection, and timers. All other support runs as a user process - including the special process called “proc,” which performs process creation and memory management in conjunction with the microkernel. QNX achieves this functionality using two key mechanisms: subroutine-call style interprocess communication and a boot loader. The boot loader can load an image containing the kernel and any desired collection of user programs and shared libraries. QNX contains no device drivers in the kernel, which separates much of the machine-specific code from the general OS code. Like many OS network stacks available in the market, QNX’s is based on NetBSD code. QNX supports its legacy io-net manager server and the network drivers ported from NetBSD, along with its own native device drivers.


      QNX’s interprocess communication technique works by sending a message from one process to another and waiting for a reply in one operation called MsgSend in the OS. The message is copied by the OS kernel from the address space of the sending process into the address space of the receiving process. Context switching is streamlined by QNX’s decision to switch control to the receiving process if the receiving process is waiting for the message - without invoking a pass through the CPU scheduler.
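      QNX’s MsgSend call is native to that OS, but the synchronous send-and-wait-for-reply shape can be sketched with plain POSIX pipes between two processes. This is an analogy, not the QNX API - the helper name and the doubling “service” are ours:

```c
#include <sys/wait.h>
#include <unistd.h>

/* One synchronous round trip: the client writes a request and then
 * blocks reading the reply channel - the shape of a single MsgSend
 * call. The forked child plays the server: receive, service, reply. */
int msg_roundtrip(int value)
{
    int req[2], rep[2];
    if (pipe(req) != 0 || pipe(rep) != 0)
        return -1;
    if (fork() == 0) {                    /* server process */
        int v = 0;
        read(req[0], &v, sizeof v);       /* "receive" */
        v *= 2;                           /* service the request */
        write(rep[1], &v, sizeof v);      /* "reply" */
        _exit(0);
    }
    int out = 0;
    write(req[1], &value, sizeof value);  /* "send"... */
    read(rep[0], &out, sizeof out);       /* ...block for the reply */
    wait(NULL);
    close(req[0]); close(req[1]);
    close(rep[0]); close(rep[1]);
    return out;
}
```

      Under QNX the kernel copies the message between address spaces and can hand the CPU straight to the receiver, as described above; the pipe version only mirrors the blocking semantics, not that fast-path scheduling.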


      QNX can be a distributed Operating System due to its microkernel architecture. Using this approach, a logical system may be partitioned across multiple hardware instances, each of which may perform a unique function such as disk access, I/O operation, and network operations without software regard for where the actual operation is taking place.  Each of these operations may be accessed through the message passing mechanism. By taking advantage of advanced inter-process communications techniques, developers can write code that scales across different core counts and even across disparate, networked processors.

      SMP is not the only multi-processing approach that works for embedded systems. TenAsys’ (2) INtime® Distributed RTOS (DRTOS) is a 32-bit RTOS using embedded virtualization technology to partition resources on a multi-core processor platform. The DRTOS enables multiple instances of the INtime RTOS running on a multi-core processor to communicate with each other.


      TenAsys takes a different approach to embedded Multi Processing. Developers work in a delivery platform using a managed Asymmetric Multi Processing (AMP) environment with the ability to distribute an application across several CPUs in a manner similar to SMP.




      TenAsys’ approach recognizes the value in assigning a specific processor to deal with critical real time I/O. In a TenAsys-based design, the critical I/O resources are explicitly dedicated to a specific processor and its associated OS. This relationship is maintained by binding processes to a specific processor with a dedicated connection to the I/O. QNX has a similar facility to bind a specific process to a specific processor using what the company calls “Bound Multiprocessing.” Both forms of binding a process to a processor minimize the chance that a critical I/O channel will be inadvertently starved of processing cycles.
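      On Linux the same binding idea is exposed as CPU affinity. A sketch (bind_to_cpu is our own name, and sched_setaffinity is Linux-specific - INtime and QNX each use their own configuration mechanisms for this):

```c
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process to a single core so a critical I/O handler
 * cannot be migrated off it - the affinity analogue of binding a
 * process to a processor as described above. */
int bind_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);     /* allow exactly one core */
    return sched_setaffinity(0, sizeof set, &set);  /* 0 = this process */
}
```

      Pinning the I/O handler is half the job; keeping unrelated work off that core (by excluding it from other processes’ affinity masks) completes the isolation.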


      A third approach to multi-processing is embodied by a software standard sponsored by an industry consortium. The OpenMP Application Program Interface (API) supports multi-platform shared-memory parallel programming in C/C++ on many platforms, including Unix and Unix-like systems. Since the working group has been mostly focused on data processing systems, the majority of effort has been in that arena. OpenMP has been jointly defined by a group of major computer hardware and software vendors. It is a portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications. A GNU implementation of OpenMP is available for GNU-based tool chains and can be adapted to other tools as well. Of course, unlike the TenAsys and QNX offerings, there’s work involved in implementing an OpenMP system for your embedded application.
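      A small example of the OpenMP model in C: the pragma parallelizes the loop when the compiler is invoked with OpenMP support (for example, gcc -fopenmp) and is silently ignored otherwise, so the same source still runs correctly - just serially - on one core:

```c
/* Sum of elementwise products. The reduction clause gives each thread
 * a private partial sum that OpenMP combines at the end, avoiding a
 * shared-memory race on `sum`. */
long dot(const int *a, const int *b, int n)
{
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += (long)a[i] * b[i];
    return sum;
}
```

      That graceful degradation is one reason OpenMP travels well between development hosts and deployment targets with different core counts.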


      Wind River Systems’ (3) Mark Hermeling asks a pertinent question in a blog that he wrote about AMP vs. SMP. There is no question that programming is easier for SMP-based systems, but AMP clearly has performance advantages under some conditions. Since the particulars of every embedded system are different, the answer to the question is “it depends” - not a surprising answer, but one that provides little guidance. Wind River’s VxWorks OS puts a foot firmly in all three camps. Three camps? VxWorks can operate as a single OS in either SMP or AMP modes, or it can run on top of Wind River’s Hypervisor to provide more options for platform configuration.


      AMP, SMP, and hypervisors. There are powerful arguments for both AMP and SMP, and hypervisors add flexibility and power to both approaches. How will you choose what path is right for you?



      1. QNX Software Systems, Ltd is an Associate member of the Intel Embedded Alliance
      2. TenAsys is an Affiliate member of the Intel Embedded Alliance
      3. Wind River is an Associate member of the Intel Embedded Alliance


      Henry Davis

      Roving Reporter (Intel Contractor)

      Intel® Embedded Alliance

      Multi-core processors bring higher performance and lower power consumption to a broad range of applications. But the development of Next Generation Networks (NGN) relies on multicore processors as part of network elements. NGN is fundamentally about:


      • consolidation of telecommunications standards to permit several transport layers to be merged into a single transport network,
      • a restructuring of the telecommunications infrastructure to remove VoIP switching from the local exchange, and
      • migration of constant bitrate voice data to CableLabs’ PacketCable standard.


      NGN is often referred to as a “carrier grade” or “carrier class” capability. However, “carrier grade” does not have a standard definition. The essence of “carrier grade” comes from the idea that carrier infrastructure systems deliver better than “five nines” (99.999 percent) availability along with carrier-class capabilities. Although there has been a major difference between carrier and residential systems, the generally increased emphasis on software and hardware reliability has reduced the gap between the two classes of products. Capability and testing remain areas of difference.
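      “Five nines” is easier to feel as downtime per year:

```latex
(1 - 0.99999) \times 365.25 \times 24 \times 60 \;\approx\; 5.26 \text{ minutes per year}
```

      About five minutes of unplanned downtime a year, total - a budget that shapes everything from hardware redundancy to software update strategy.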


      In a very real sense, major portions of the telecommunications infrastructure are being distributed throughout the whole system. Part of this distribution is based on increased competition between carriers owning their own infrastructure, but part of it comes from adopting Internet technologies like Internet Protocol (IP) as a backbone of infrastructure systems. Because IP is fundamentally based on dynamic routing of data from source to destination, the telecommunications infrastructure is taking on the same philosophy of distributed systems.


      Consolidation of transport layers in a single embedded system implies the need to handle many different transport layers, perhaps simultaneously. Conceptually, it’s easiest to consider each transport layer operating separately on its own processor. This requirement is most easily met by employing one or more multi-core processors to manage the transport layers. But with the benefits of multi-core come new challenges and complexity, not just from a hardware perspective but more importantly from a software development perspective. The move to multi-core can impact time to market - if you don’t have the right development tools, technologies, training, and support.


      Virtualization is a key design and implementation technique for multi-core processors that enables:


      • Consolidation - reduce multi-core hardware cost and footprint
      • Reliability - build a safe and secure platform
      • Performance - maximize throughput on multi-core systems


      Consolidation means combining several logical systems together on a single hardware platform. Within the context of creating an NGN system, that means permitting different transport layers to become co-resident on the platform. For existing software that task is best handled by Virtualization and multi-core processors like the Intel® Xeon® processor. By using virtualization, each transport layer can reside on its own processor core with inter-process communications managed by a hypervisor. Thus, virtualization serves to isolate the operating software while providing an operating environment that requires the least amount of effort to make the software operational.


      Reliability is both a hardware and software issue. Hardware reliability is a by-product of the consolidation process – fewer integrated circuits in an embedded system mean a longer mean time between failures. Software reliability is a more intentional factor and is quickly becoming a “must have” for standardized software platforms.


      Performance of software operating on a multi-core seems like a given, but there are plenty of opportunities to actually obtain lower performance than on a single core processor.


      Wind River Systems (1) has defined an open platform for NGN systems called the “Carrier Grade Open Platform” (CGOP). CGOP is a software platform that includes operating systems, middleware, application services, and a unique class of carrier grade features.




      One of the aspects of CGOP that sets it apart from residential class equipment lies in the cross-application services offered:


      • Fault management
      • Performance management
      • Security hardening
      • Green support
      • Operations support


      Each of these serves a vital purpose in infrastructure embedded systems. But, the fundamentals of developing NGN rests on the development tools used to create and manage the applications that operate on the CGOP. CGOP gives engineers a head start on development by providing a large collection of software modules that can be put together in new and unique ways to form custom applications. This collection of purpose-designed support modules reduces the time required to develop a new application, or to simply create a replacement system for an already-fielded telecommunications product.


      Software developers gain flexibility with the Wind River approach. CGOP supports Symmetrical Multi-core Processing (SMP), Asymmetrical Multi-core Processing (AMP), and hybrid options. Multiple OSes all run under a hypervisor - keeping memory and peripherals protected from erroneous reads and writes. Virtualization is a powerful concept that defines structured access to needed resources, and allows developers to simplify how multiple concurrently operating applications interact. In addition, virtualization permits the cross-application functions of CGOP to actually work.


      It’s crucial for virtualized NGN systems to be supported by debugging tools, including simulation and system-wide techniques for debugging complex multi-core and multi-OS systems. Standard development tools from Green Hills Software (2) and Wind River offer debug tools for embedded Intel processors. JTAG-based probes rely on on-chip access to internal registers to control and report on software operations. The critical support tool must include multi-core functions to permit debugging and diagnostics of the embedded processor.


      Wind River has integrated VxWorks and Wind River Linux in the CGOP platform. More importantly to developers, the products are integrated into the Workbench, On-Chip debugging, and Wind River Test Management. By integrating these capabilities together, developers are assured of a platform that is already structured to maximize developer productivity. All of these facilities are optimized for multi-core operation.


      CGOP offers engineers creating the Next Generation Network a way to accelerate developments, create more reliable applications, and host them on COTS hardware.


      NGN is coming. How will you respond to the challenge?


      To view other community content focused on virtualization, see “Virtualization – Top Picks.”





      1. Wind River Systems is an Associate member of the Intel Embedded Alliance
      2. Green Hills Software is an Affiliate member of the Intel Embedded Alliance



      Henry Davis
      Roving Reporter (Intel Contractor)
      Intel® Embedded Alliance

      Hardware consolidation is often thought of as a general purpose computing strategy aimed at servers and datacenters. But virtualization-based consolidation can have big impacts in embedded systems, too. Intel multi-core and multi-threaded processors offer unexpected paths to embedded system cost reduction and control using virtualization.


      Virtualization fundamentally provides an ability to decouple and abstract an operating system/application from the physical hardware it resides on - encapsulated in a virtual machine. The technology of virtualization is largely independent of the type of software that operates on the virtual machine.  The concepts of ‘decouple’, ‘abstract’ and ‘virtual’ provide the strength of the approach. Beyond just the technical qualities inherent in the virtualization process, virtualization enables the transformation of workloads so that they are continuously consolidated.


      Hardware consolidation for embedded systems is underway – witness the legacy retargeting program of a ready-mix concrete company. In this case, the consolidation was as much about the obsolescence of the legacy hardware used to make the integrated system function as it was about reduction in hardware complexity.


      Performance, peripherals, and supported software often drive embedded systems decisions. More capable processors make it practical to implement workload consolidation for embedded systems where consolidation can be accomplished.


      Workload consolidation for general purpose computing focused initially on homogeneous workloads, but is now taking on a heterogeneous workload profile.  The most critical portion of workload consolidation management is the process by which applications are selected for consolidation. In an IT environment the deployment strategy is a key part of the decision criteria. For embedded systems the decision matrix may be dictated by a whole host of considerations:


      • Embedded system hardware reliability
      • Proximity of related systems
      • Standards-based requirements for separate hardware platforms
      • Regulatory requirements for medical, aeronautics, financial and other systems
      • Physical security
      • Data integrity and security
      • Risk of loss – financial, data, liability, and others


      Virtualization is one tool that can assist in meeting these wide-ranging requirements while still achieving consolidation. Virtualization provides technical advantages for embedded systems. Virtualization can protect against implementation deficiencies of poor isolation that lead to memory leakage, writes to peripherals not owned by the application, interference of other applications, resource/application scheduling conflicts, and operating system conflicts. For these reasons, many IT systems run each application on its own hardware platform. Virtualization approaches for embedded systems have addressed these issues in a variety of ways.


      Many embedded systems can be partitioned into a User Interface (UI) and a hardware-oriented portion. This partitioning approach allows different engineering approaches for each component. The UI is a “soft” realtime system that can be hosted on an ordinary operating system. Typically the UI runs on Windows or Linux while the hardware-oriented application is controlled by a realtime operating system.


      This logical partitioning has led to several commercial approaches to software and hardware consolidation. Philips medical x-ray equipment employs TenAsys (1) INtime to achieve a workload consolidation that includes Windows and realtime control systems. UI functions like the system console operate in the Windows space, while remote INtime nodes can operate autonomously with CANbus-connected realtime nodes. According to Philips’ software project leader, INtime is used to control machine movement and radiation emission for imaging. Phoenix Contact, the largest supplier of soft PLC systems, also employs the INtime product in its Virtual Logic Controller (VLC) for use in steel mills, automotive assembly, packaging, and material handling. ANCA, a world-leading CNC tool vendor, uses INtime to control its 5-axis CNC machining centers.


      INtime supports Intel processors and all Windows SMP platforms. It provides direct support for realtime USB, Ethernet, and industrial I/O devices. Multi-core processors permit engineers to dedicate a CPU core to realtime processes or to share CPU cycles between INtime and Windows operation.


      New systems may be designed from the start to employ virtualization, partitioning the UI and realtime portions into separate subsystems using the appropriate software development tools. With OS technology that enables Windows and a realtime portion to coexist on the same platform, engineers can create embedded applications that meet hard realtime requirements while delivering a polished user interface. Systems previously partitioned across multiple boards may be consolidated with only a modest increase in processor performance, which benefits the OEM in both inventory and SKU count.


      Modern realtime and Windows systems allow both software development approaches to be used to their best advantage.


      How will you partition future embedded systems to leverage both Windows-based User Interfaces and realtime operation?




      1. TenAsys is an Affiliate member of the Intel Embedded Alliance

      Henry Davis
      Roving Reporter (Intel Contractor)
      Intel® Embedded Alliance

      Embedded systems increasingly include substantial collections of computationally complex protocols to provide superior connectivity for a growing segment of embedded applications. That trend shows no sign of slowing, and in many cases is accelerating as developers find new reasons to enable connectivity within their embedded products. Protocol complexity has been partially mitigated through the use of preconfigured protocol stacks. But even a pre-packaged protocol stack doesn’t eliminate software test, although an already-proven validation suite reduces the burden. Provided that the protocol stack is adequately isolated, pre-configured software stacks can save time and reduce the learning curve for software developers.


      Flawless connectivity is a fundamental expectation of remote or distributed embedded systems. Quality and reliability engineers charged with validating the system before it is shipped can adopt pre-configured validation test suites to be certain that the protocol stack functions as intended. Ironically, for some companies the use of a pre-configured and verified protocol stack can hurt engineers’ ability to diagnose operational problems when they inevitably occur. When this situation happens, the robustness and functionality of the vendor’s programming tools, such as a workbench, become critical to the engineering process.


      Commercial programming workbench tools often offer regression testing and test suite tools for specific communications stacks.  Using these commercial tools can save time and effort while ensuring a reliable stack. Green Hills Software (1) and Wind River Systems (2) offer both a test workbench and pre-certified protocol stacks.  These test suites incorporate experience gained from hundreds of uses by customers in real world situations.  In effect, choosing a commercially available protocol stack carries with it thousands of man-hours applied to the problem of software verification and validation.

      Green Hills Software Networking Solutions Center provides a wide range of software and services focused on communications and communication protocols.  Connectivity of a particular embedded system is often dictated by the environment in use: CAN, Ethernet, or customized protocols. The number and diversity of communication options available to embedded systems can be staggering.




      Which portion of the protocol spectrum you choose depends on the whole system within which the embedded product is used. Simple communications within a private, purpose-designed embedded network may require only an Ethernet layer with a single network layer. Green Hills Software’s network professional services include complete networking system design, integration, debug, optimization, customization, test, validation, training, and enhanced product support. Using these services allows development engineers to deliver higher-quality products with faster time-to-market and at lower development and deployment cost. While using a third-party communications package does have developmental advantages, there are tradeoffs in relying on a third party for debug support during both development and deployment.


      Most Green Hills protocols are delivered in source code and are ready to run on INTEGRITY, INTEGRITY-178B, velOSity and u-velOSity. These protocols and protocol stacks are designed and tuned to use the capabilities of the Green Hills family of operating systems and the MULTI® IDE.




      The MULTI IDE combines a debugger, editor, code browser, configuration manager, and other tools into a single integrated system. The MULTI IDE supports Green Hills Software's optimizing C, C++ and EC++ compilers. MULTI also supports the industry open standard Eclipse system, permitting re-configuration of the MULTI environment and integration of Eclipse-based third-party tools into the development environment.


      Wind River Systems’ Test Management product is a test-execution optimization system for embedded devices. It permits run-time visibility into devices while under test, providing the operational feedback that test engineers need to achieve the highest possible quality in the least amount of time. The system is a scalable test and development environment intended to enhance automation, control, traceability, and feedback. It leverages dynamic instrumentation technology to measure test coverage, map test-to-code traceability, profile performance, enable white-box access, and speed diagnostics of complex devices. All of these capabilities are available at run-time, providing a unique mechanism for test management. Like Green Hills Software, Wind River Systems’ development environment supports the open standard Eclipse facility.


      Communications protocols are best tested and validated by purpose-built tools and pre-packaged test and validation data and control suites. Developing communications systems often requires some way to simulate dataflows and stimulus-response behavior, and an open source network simulator may have a role in your next communications development. Ns-3 is a discrete-event network simulator for Internet systems, targeted primarily at research and educational use but usable in other situations. Ns-3 is free software, licensed under the GNU GPLv2 license, and is publicly available for research, development, and use. It provides specialized support for simulation of TCP, routing, and multicast protocols over wired and wireless networks. Ns-3 has no concept of operating systems as part of its structure, but the package can be integrated into any Eclipse-capable development environment. As with any other Eclipse third-party tool, additional programming may be required to make ns-3 work seamlessly within your specific environment.
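To illustrate the core idea behind a discrete-event simulator like ns-3 (this sketch uses none of the actual ns-3 API), pending events can be kept in a priority queue and dispatched strictly in timestamp order:

```python
# Minimal discrete-event simulation kernel: events are (time, name) pairs
# popped from a heap in timestamp order, regardless of scheduling order.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0    # simulated clock, advanced only by events
        self.queue = []   # min-heap of (fire_time, event_name)
        self.log = []     # record of fired events for inspection

    def schedule(self, delay, name):
        """Schedule `name` to fire `delay` time units from now."""
        heapq.heappush(self.queue, (self.now + delay, name))

    def run(self):
        """Pop and fire events until the queue is empty."""
        while self.queue:
            self.now, name = heapq.heappop(self.queue)
            self.log.append((self.now, name))

sim = Simulator()
sim.schedule(2.0, "packet_arrives")   # scheduled first, fires second
sim.schedule(1.0, "packet_sent")      # scheduled second, fires first
sim.run()
print(sim.log)  # -> [(1.0, 'packet_sent'), (2.0, 'packet_arrives')]
```

A real simulator adds node, channel, and protocol models on top of this kernel, but the event loop is the same idea.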


      Embedded connectivity does not live in an isolated environment. Tools beyond the typical software and hardware development systems are often useful when engineering connected embedded systems. The importance of Internet-based protocols has spawned a continuously improving collection of tools. Network hardware emulators like Candela Technologies’ LANforge ICE provide a reproducible set of impairments against which to test the robustness of embedded systems’ connectivity. LANforge ICE is a network emulator: it can add impairments in a controlled and reproducible manner to aid in testing applications and network equipment that must operate over a network. LANforge supports physical impairments that are difficult or impossible to simulate in software alone: latency, bandwidth limits, jitter, packet loss, packet reordering, and others.
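The impairments such an emulator applies in hardware can be modeled in miniature to see their effect on a packet stream; the parameter values below are hypothetical:

```python
# Toy software model of network impairments (latency, jitter, loss).
# A hardware emulator applies these in real time; this just shows the math.
import random

def impair(packets, base_latency_ms, jitter_ms, loss_rate, seed=42):
    """Return (delivered_packet, delay_ms) pairs after applying impairments."""
    rng = random.Random(seed)          # seeded so runs are reproducible
    delivered = []
    for pkt in packets:
        if rng.random() < loss_rate:   # drop this packet
            continue
        delay = base_latency_ms + rng.uniform(-jitter_ms, jitter_ms)
        delivered.append((pkt, round(delay, 1)))
    return delivered

out = impair(range(10), base_latency_ms=50, jitter_ms=5, loss_rate=0.2)
print(len(out), "of 10 packets delivered")
```

Feeding an application traffic shaped this way (or through real emulator hardware) exposes timeout, retry, and reordering bugs before deployment.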


      PacketStorm Communications offers an IP network emulator and simulator designed to reproduce the unfavorable conditions of IP networks and WANs in a controllable and repeatable lab setting. The company’s Route Analyzers monitor, record, and troubleshoot network routing protocols to maximize network availability by identifying and diagnosing IP routing faults in real time. The Route Analyzer’s Intelligent Error Analysis uses algorithms to provide user-settable alarms for fast diagnostics. Although the intended audience for the Route Analyzer is large-scale datacenters and service providers, it can also aid engineers developing large-scale connected embedded systems that employ typical Internet routing.


      Internet-based communications protocols are becoming ubiquitous in embedded systems, both for remote control/monitoring and for data exchange. As these systems expand in capability and enter more sensitive applications, the robustness of communications protocols takes on a new urgency. With the increased reliance on connectivity comes the need for better and more comprehensive testing and validation. Today these needs can be met by commercially available development tools combined with network emulators and simulators.


      What level of validation do your connected embedded systems require?




      1. Green Hills Software is an Affiliate member of the Intel Embedded Alliance
      2. Wind River Systems is an Associate member of the Intel Embedded Alliance



      Henry Davis
      Roving Reporter (Intel Contractor)
      Intel® Embedded Alliance

      Product managers and systems developers face a dilemma when confronted with the obsolescence of a hardware platform. Historically the choice has been between obsoleting the system and spending time and resources updating and migrating the application to new hardware. Now there is a lower-effort alternative: virtualization. Virtualization can hide missing or changed delivery-platform hardware and processor differences. It may also extend product life without committing to a full re-development.


      Nowhere are the effects of hardware obsolescence seen in a more striking fashion than in large scale military development projects. It’s possible for the software developed for some military systems to span a decade or more in the development process, and outlive three generations of processor developments, including the end of life for the original processor. Today such large scale developments universally adopt a structured set of software rules aimed at minimizing the effort required to target a new or modified hardware architecture.


      But what do you do when you’re faced with an overwhelming amount of legacy software hosted on discontinued hardware? Or the time required to retarget the software exceeds the remaining run of available hardware? Or when legacy code will only work on a specific operating system version under impossible time constraints?


      One of the first legacy programs I ever worked on ran under an IBM 1620 program that simulated an IBM 650 computer. The 650 was the target environment for the actual software. The IBM 1620 code simulated all of the operations and peripherals of the original 650. In effect, the 650 code ran in a specialized “virtual” machine implemented entirely in software.


      Modern virtualization technology can serve the same purpose as that IBM 650 simulation software – extend the useful life of a legacy software application without investing in full rewrites. Hypervisors like that offered by Wind River Systems (1) provide the basic tools required to continue running software with minimal time investment.




      Legacy code falls into several groups:


      1. Code that targeted a specific processor that is no longer available
      2. Code written to exploit specific features of a discontinued or no-longer-supported  Operating System
      3. Code that is useful but does not warrant a full porting effort
      4. Code that must be rehosted as part of a strategy to reduce hardware requirements


      Wind River’s virtual board mechanism enables the porting and paravirtualization of an existing proprietary operating system, or of legacy code that is fully intertwined with the operating system, onto the hypervisor. Virtualization also permits continued use of no-longer-supported operating systems when the specific hardware that would run the code is no longer available. Porting legacy code to a virtualized environment may be as simple as selecting a system like TenAsys (2) eVM®, which addresses the related issue of consolidating hardware platforms. The TenAsys approach combines a version of Microsoft (3) Windows with the company’s realtime OS.


      Selecting a multi-core CPU permits designers to simplify porting legacy code because the chance of timing conflicts is significantly reduced or eliminated. Virtualization is used to abstract the code from the physical environment. For unique processors, each virtual hardware board provides the opportunity to simulate “odd ball hardware” from legacy systems within the virtual board space.


      Embedded hypervisors are implemented with minimal overhead code – a thin layer that partitions hardware into virtual environments, which Wind River calls “virtual boards.” An OS resides inside each virtual board. Virtual boards run in separate address spaces protected in hardware by the Memory Management Unit. A virtual board can run on a single core, run Symmetric Multi-Processing across multiple cores, or be scheduled with other virtual boards on a single core under either a priority-preemptive or time-based schedule. This is accomplished by virtualizing key components of a system:


      • CPU – virtualizing the CPU permits multiple virtual boards to share a single physical core. Alternatively, a virtual board may use a dedicated core or a set of cores.
      • Memory – memory virtualization partitions physical memory so that multiple virtual board partitions each use part of the real memory. This creates an abstraction layer that separates and controls memory access.
      • Devices – Input/Output devices and individual processor hardware can either be dedicated to a single virtual board or virtualized and shared between multiple virtual boards.
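The two scheduling policies described above can be sketched conceptually; this is an illustration, not Wind River's hypervisor API, and the board names are hypothetical:

```python
# Toy sketch of the two virtual-board scheduling policies: time-based
# round-robin slicing, and priority-preemptive selection.

def time_sliced_schedule(boards, slice_ms, total_ms):
    """Time-based policy: each board runs for slice_ms in round-robin order."""
    timeline, t, i = [], 0, 0
    while t < total_ms:
        timeline.append(boards[i % len(boards)])
        t += slice_ms
        i += 1
    return timeline

def priority_preemptive_pick(ready_boards):
    """Priority-preemptive policy: the highest-priority ready board runs."""
    return max(ready_boards, key=lambda b: b["priority"])

# A realtime virtual board sharing one core with a Windows virtual board:
print(time_sliced_schedule(["rtos_vb", "windows_vb"], slice_ms=10, total_ms=40))
# -> ['rtos_vb', 'windows_vb', 'rtos_vb', 'windows_vb']
print(priority_preemptive_pick(
    [{"name": "rtos_vb", "priority": 10}, {"name": "windows_vb", "priority": 1}]
)["name"])  # -> rtos_vb
```

Under the priority-preemptive policy the realtime board always wins the core when it is ready, which is why it suits hard deadlines; time slicing instead guarantees the UI board a predictable share.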


      In all of these virtualization choices, physical devices conceptually are below the hypervisor. It is possible to simulate obsolete hardware operation through software additions controlled by the hypervisor.


      According to Wind River, software test and verification is reduced or eliminated for legacy code running on new platforms using virtualization.

      Green Hills Software (4) maintains a complete business unit charged with aiding customers in adapting to the virtualized environment. A wide range of engineering services in support of virtualization efforts can be contracted to Green Hills. The company’s virtualization offering focuses on the security aspects of code reliability, especially for aerospace and defense applications.


      Virtualization can add years of useful life to legacy applications operating on orphaned processors.


      For which legacy applications could you use virtualization to avoid costly and time-consuming rewrites?




      1. Wind River Systems is an Associate member of the Intel Embedded Alliance
      2. TenAsys is an Affiliate member of the Intel Embedded Alliance
      3. Microsoft® Corporation is an Associate member of the Intel Embedded Alliance
      4. Green Hills® Software is an Affiliate member of the Intel Embedded Alliance


      Henry Davis
      Roving Reporter (Intel Contractor)
      Intel® Embedded Alliance

      If you want to make certain that your embedded system is hack proof, don’t take it out of the box.


      It may sound harsh, but there is truth to that statement. People hack, or try to hack, systems for a variety of reasons: the intellectual challenge, monetary reward, and theft of data are just the tip of the iceberg. Ultimately, the risk associated with loss determines how much effort must be applied to securing the system against malicious hackers.


      Sophisticated hacks are often considered the ultimate scam to thwart. But in fact hackers will often seek out the lowest-technology approach that will break a system. The low-tech hack can be a light designed to fool internal circuitry, an analog of the jimmy used to retrieve keys locked in a car, a barrier that prevents an interlock from closing, or any of dozens of other ingenious tricks, each aimed at causing the system to fail in some manner that benefits the hacker. Anticipating and protecting against physical attacks on an embedded system can be a challenge for developers – and one that’s best left to experienced physical/mechanical designers.


      Every system can be hacked. The question is the level of difficulty and the cost associated with the hack. Physical hacks can be made more difficult by many decisions, including encapsulating electronic circuitry, eliminating obvious physical interlock switches, and creating a protocol for technician access to the inner workings of the system. These and other techniques all serve to increase the difficulty in physically hacking an embedded system.

      Electronic hacking seeks to gain unauthorized access to the system. Generally these hacks fall into categories:


      • Forcing the system to enter into an unanticipated state by entering a non-permissible data sequence.
      • Exploiting known deficiencies in standard software – the vulnerability many hackers focus on most. Perhaps unexpectedly, Open Source is not more susceptible to this type of exploit, because there are often thousands of professional programmers reviewing the code and installing it in production systems. But regardless of the source, the ultimate responsibility for system security lies with the developers.
      • Theft of test equipment or actual embedded hardware aimed at gaining direct access to the internal software to find vulnerabilities.
      • “Social engineering” (theft) of user IDs and pass codes to gain access to the system.
      • Addition of snooping hardware to the installed embedded system to gain information.
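The first category above (forcing an unanticipated state with a non-permissible data sequence) is commonly countered with an explicit allowlist state machine that rejects any transition not deliberately permitted. The states and commands in this sketch are hypothetical:

```python
# Allowlist state machine: every (state, command) pair not listed is
# rejected, so a non-permissible sequence cannot force a hidden state.
ALLOWED_TRANSITIONS = {
    ("idle", "LOGIN"): "authenticated",
    ("authenticated", "READ"): "authenticated",
    ("authenticated", "WRITE"): "authenticated",
    ("authenticated", "LOGOUT"): "idle",
}

def run_session(commands):
    """Process commands in order; abort on the first disallowed transition."""
    state = "idle"
    for cmd in commands:
        nxt = ALLOWED_TRANSITIONS.get((state, cmd))
        if nxt is None:
            return f"rejected: {cmd!r} not permitted in state {state!r}"
        state = nxt
    return "ok"

print(run_session(["LOGIN", "READ", "LOGOUT"]))  # -> ok
print(run_session(["WRITE"]))   # rejected: WRITE is not allowed before LOGIN
```

The key design choice is allowlisting rather than blocklisting: unknown input fails safe instead of falling through to undefined behavior.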


      Broadly speaking, embedded software security falls into groups:


      • Underlying Operating System
      • Security of System updates
      • External communications
      • Data security
      • Authorized personnel access
      • Intrusion detection
      • Application security




      I’ve previously written in these blogs about specific Operating Systems (OSes) and some of the security features that they offer. Security isn’t the only aspect of an OS that’s important. Equally important is the correctness of the OS, because one major class of break-in relies on exploiting errors caused by programming faults in the OS. Choosing an OS that is certified through one of the accepted standards is one hedge against that avenue of entry for a hacker. Aerospace is one industry with a series of certifications for OSes and software development tools. Aerospace-certified Real Time Operating Systems (RTOSes) are available from Green Hills Software, Inc. (1), QNX (2), and Wind River Systems (3). Green Hills offers a specific high-reliability aerospace package based on the INTEGRITY RTOS. Wind River also provides an aerospace platform that conforms to many of the world’s standards for critical avionics systems, including RTCA DO-178B, EUROCAE ED-12B (“Software Considerations in Airborne Systems and Equipment Certification”), IEC 61508, and other related software standards. LynuxWorks® (4) offers several versions of the Linux OS along with LynxSecure™, described briefly in a previous blog. Validation of these OSes ranges from DO-178B to other industry practices.


      The Intel® Atom™ processor family has added E6xx members that include Intel® Virtualization Technology (Intel® VT) in addition to low-power operation. These new Atom family members, along with Intel® Core™ i5 and i7 processors, include advanced features for virtualization and boot-load security previously found only on processors intended for general-purpose computing. These capabilities are quickly becoming requirements for networked embedded systems.


      Microsoft® Corporation (5) has recently added a facility aimed at allowing centralized configuration, management, and updating of embedded devices. Embedded systems that are networked with a computer running System Center Configuration Manager 2007 can be managed by Windows Embedded Device Manager. Embedded Device Manager provides a single management console for Windows Embedded devices. Microsoft’s goal is to improve insight into and control of embedded devices by presenting information unique to embedded systems.

      Device Manager generates collections of embedded devices and aggregates similar devices into groups. Groups may include a wide variety of devices such as thin clients, point-of-service terminals, and digital signage. This function simplifies integration and management of embedded devices within a larger context.


      Solutions to assure data security depend on encryption technology, commonly a public-key/private-key scheme. Whether the encryption uses the 35-year-old 56-bit DES key (generally considered vulnerable) or more advanced encryption like the Advanced Encryption Standard (AES) with keys up to 256 bits, the foundation of the public-key technique rests on a four-step process:


      1. You send your public key to a sender of data
      2. Sender encrypts the data using your public key
      3. Sender sends the data to you
      4. You decrypt the data using your private key.
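The four steps above can be sketched with textbook RSA using deliberately tiny numbers (the classic p=61, q=53 example). Real systems use 2048-bit or larger keys plus padding, so this is for illustration only:

```python
# Textbook RSA with tiny parameters, mapping onto the four steps above.
p, q = 61, 53
n = p * q                  # 3233: the shared modulus in both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent: (n, e) is your PUBLIC key
d = pow(e, -1, phi)        # 2753: (n, d) is your PRIVATE key (Python 3.8+)

def encrypt(m, public):    # step 2: the sender encrypts with YOUR public key
    n, e = public
    return pow(m, e, n)

def decrypt(c, private):   # step 4: you decrypt with YOUR private key
    n, d = private
    return pow(c, d, n)

message = 65                              # a message encoded as a number < n
ciphertext = encrypt(message, (n, e))     # steps 1-3: key sent, data encrypted
assert decrypt(ciphertext, (n, d)) == message
print(ciphertext)  # -> 2790
```

Note that only the public key ever travels over the network; the private exponent d never leaves your system, which is what makes step 4 safe.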


      By comparison, the AES standard can consume ten times the processing required to encrypt a message using the 35-year-old DES (56-bit) standard. AES is practically crack-proof, since a brute-force attack on a single message would require more than a hundred trillion years. Nearly every open source and proprietary operating system includes encrypted communications as part of the package. But if your chosen OS lacks suitable data encryption, it can be remedied by adding an encryption module from an open source project, such as one that can encrypt many things including an entire disk, or JPPF.
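The "hundred trillion years" figure can be sanity-checked with simple keyspace arithmetic; the attacker throughput below is an assumed round number, not a measured one:

```python
# Back-of-envelope brute-force times for 56-bit DES vs 256-bit AES keys,
# at an assumed attacker throughput of 10^12 key trials per second.
KEYS_PER_SECOND = 10**12          # hypothetical attacker capability
SECONDS_PER_YEAR = 3600 * 24 * 365

def brute_force_years(key_bits):
    """Years to try every key of the given length at the assumed rate."""
    return 2 ** key_bits / KEYS_PER_SECOND / SECONDS_PER_YEAR

print(f"DES  (56-bit):  {brute_force_years(56):.1e} years")   # under a day
print(f"AES (256-bit):  {brute_force_years(256):.1e} years")  # ~10^57 years
```

Each extra key bit doubles the search, so the jump from 56 to 256 bits multiplies the work by 2^200, which is why DES is considered broken while AES-256 is far beyond the hundred-trillion-year mark.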


      The complexity of embedded systems is quickly catching up with that of general-purpose computing systems. This step up in complexity will require more capable operating-software management than has historically been required.


      How will you guard your future systems from hack attacks?



      1. Green Hills Software is an Affiliate member of the Intel Embedded Alliance

      2. QNX Software Systems is an Associate member of the Intel Embedded Alliance

      3. Wind River Systems is an Associate member of the Intel Embedded Alliance

      4. LynuxWorks is an Affiliate member of the Intel Embedded Alliance

      5. Microsoft Corporation is an Associate member of the Intel Embedded Alliance


      Henry Davis
      Roving Reporter (Intel Contractor)
      Intel® Embedded Alliance


      More information




      To view other community content focused on security, see “Security – Top Picks.”