
11 Posts authored by: Maury_Wright

As you can read in many posts on the Intel® Embedded Community web site, virtualization is a popular topic because the technology allows a single processor to partition and host two or more separate sets of tasks that have traditionally required multiple processors or boards. Many of the posts have focused on high-end Intel® Architecture (IA) processors, but power-optimized Intel® Atom™ processors can also support virtualization – both processors with and without explicit Intel® Virtualization Technology (Intel VT) support in hardware. Today, let’s consider the type of virtualization implementation that you might deploy on Atom and discuss a methodology for evaluating the level of processor performance required. Indeed, virtualization can maximize system efficiency, and fewer ICs and a smaller system footprint are almost always better, provided the system meets the application requirements.

 

One key application for virtualization is the combination of real-time tasks and an intuitive user interface on one processor for applications such as industrial control. Design teams find it much easier to implement a compelling user interface using a general-purpose operating system such as Windows. But Windows can’t support the real-time requirements of the control application. Virtualization allows a real-time operating system (RTOS) to coexist with Windows on one processor whereas legacy systems have required two separate processors and often two separate boards or modules.

 

You have likely seen the theme I’ve laid out above relative to high-end multi-core processors such as the second-generation Intel® Core™ i5 and i7 processors. But you can combine Windows and an RTOS on Atom processors as well. Indeed, you can even use virtualization on some single-core Atom processors, although dual-core models may offer superior real-time performance. In control applications, performance is measured in terms of how fast the processor can respond to an external event that triggers an interrupt as much as, or more than, the aggregate performance of the processor measured in MIPS or by a benchmark such as Dhrystone.

 

Consider the Atom 400 and 500 processor series announced last year and formerly code-named Luna Pier. Members of the family include the dual-core D510 and D525 and the single-core D410, N425, N450, and N455. None of the family members includes Intel VT support, but all include Intel® Hyper-Threading (Intel HT) capability.

 

In general, hyper-threading or simultaneous multithreading technology allows two separate threads to run simultaneously on a single core. A dual-core processor with Intel HT can therefore execute four threads simultaneously. Embedded design teams can leverage Intel HT technology in multiple ways. You can simply boost application performance by using threaded-programming techniques to add parallelism to your applications, as in the sketch below. I covered the potential of such performance gains in a post on Xeon processors working on an imaging application.
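To make the idea concrete, here is a minimal sketch of that threaded-programming approach, assuming a POSIX environment with pthreads; the array-summing workload and all names are illustrative only, not code from any product discussed here.

/* Minimal illustration of threaded programming on a hyper-threaded or
 * multi-core processor. Assumes a POSIX environment; the workload is
 * purely illustrative. Build with: gcc -O2 -pthread sum.c */
#include <pthread.h>
#include <stdio.h>

#define NELEM   (1 << 20)
#define NTHREAD 4                    /* e.g., a dual-core Atom with Intel HT */

static double data[NELEM];

struct slice { int first, last; double partial; };

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->first; i < s->last; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREAD];
    struct slice sl[NTHREAD];
    double total = 0.0;

    for (int i = 0; i < NELEM; i++)
        data[i] = 1.0;

    /* Carve the array into independent slices - one per hardware thread. */
    for (int t = 0; t < NTHREAD; t++) {
        sl[t].first = t * (NELEM / NTHREAD);
        sl[t].last  = (t + 1) * (NELEM / NTHREAD);
        pthread_create(&tid[t], NULL, sum_slice, &sl[t]);
    }
    for (int t = 0; t < NTHREAD; t++) {
        pthread_join(tid[t], NULL);
        total += sl[t].partial;
    }
    printf("total = %f\n", total);
    return 0;
}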

 

You can also use Intel HT to support virtualization, for instance running Windows on one thread and an RTOS on the other. For example, TenAsys* software supports virtualization on single- and dual-core IA processors that feature Intel HT support. And of course the company supports Intel-VT-enabled processors as well.

 

TenAsys published a white paper last year entitled “Delivering deterministic, real-time performance with a low cost Intel Atom processor-based platform” that focuses on a comparison of D410- and D510-based systems. TenAsys offers a software product called INtime® for Windows that allows the company’s INtime RTOS to run alongside Windows on single- or multi-core IA processors.

 

TenAsys tested the D410 and D510 first in what the company referred to as an unloaded state, where the processor ran only the industrial-control benchmark code. The company measured the response to interrupts as well as the thread response, which it defines as the interrupt-response time plus the time required for the interrupt service routine to start a new thread. That means thread response includes the time it takes the operating-system scheduler to allocate memory and set up the thread. TenAsys then repeated the tests on the single- and dual-core processors with a Windows graphics application running concurrently.
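As a rough user-space analogue of those two measurements (not the TenAsys test code), the sketch below uses a POSIX timer signal as a stand-in for the hardware interrupt: the handler records the "interrupt response" point and releases a worker thread, which records the "thread response" point when the scheduler runs it. Measuring the true event-to-handler latency requires a hardware timestamp, which this sketch cannot provide.

/* Rough analogue of interrupt response vs. thread response. A SIGALRM
 * stands in for the hardware interrupt. Build: gcc -O2 -pthread latency.c
 * (add -lrt on older glibc). */
#include <pthread.h>
#include <semaphore.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static sem_t wake;
static struct timespec t_irq;                 /* set in the "interrupt" handler */

static void handler(int sig)
{
    (void)sig;
    clock_gettime(CLOCK_MONOTONIC, &t_irq);   /* "interrupt response" point */
    sem_post(&wake);                          /* start the service thread   */
}

static void *service_thread(void *arg)
{
    (void)arg;
    struct timespec t_thr;
    while (sem_wait(&wake) == -1)
        ;                                     /* retry if a signal interrupts */
    clock_gettime(CLOCK_MONOTONIC, &t_thr);   /* "thread response" point    */
    long us = (t_thr.tv_sec - t_irq.tv_sec) * 1000000L +
              (t_thr.tv_nsec - t_irq.tv_nsec) / 1000L;
    printf("handler-to-thread latency: %ld us\n", us);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    sem_init(&wake, 0, 0);
    pthread_create(&tid, NULL, service_thread, NULL);

    signal(SIGALRM, handler);
    alarm(1);                                 /* one "interrupt", a second from now */

    pthread_join(tid, NULL);
    return 0;
}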

 

[Figure: TenAsys interrupt-response and thread-response results for the D410 and D510 in unloaded and loaded tests]

 

The above figure summarizes the results. In the loaded case that is typical of a virtualized environment, the dual-core processor performed considerably better. A deep examination of the exact details is beyond our scope here; if you want to know more, please read the entire whitepaper. But the point is that even the single-core, loaded test revealed results that would be suitable in some real-time applications, although the response time would fall short of the requirements of many others.

 

TenAsys has continued to add support for newer Atom family members. After the whitepaper was published, the company announced support for the Atom E6xx series processors, which include both Intel VT and Intel HT support. Embedded teams can use both INtime for Windows and the TenAsys eVM™ for Windows Embedded Virtualization Manager with the Intel-VT-enabled processors. The eVM software allows you to run Windows alongside RTOS products from other software vendors in a virtualized environment.

 

With both TenAsys software products, teams will realize better performance using a processor that supports Intel VT. That doesn’t mean that an Intel-VT-enabled processor is the right choice for every project where a team might use virtualization. It does mean that design teams should evaluate a processor in terms of virtualized performance on code similar to their specific application requirements and relative to other factors such as power consumption, cost, and I/O features.

 

I’ll leave you with a link to a related post that you might find interesting. RadiSys** has also utilized its Hypervisor software with Atom processors and documented sub-10-µsec interrupt response. The Atom family is quite capable in virtualized real-time applications.

 

How do you leverage Intel HT? Have you focused on parallel, threaded code, or have you used it with virtualization software? What kind of interrupt-response requirements have you faced recently? Please share your experience via comments with the Intel® Embedded Community and tell other community followers how you evaluate real-time-response requirements relative to IA processor capabilities.

 

To view other community content focused on virtualization, see “Virtualization – Top Picks.”


 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

 

*TenAsys is an Affiliate member of the Intel® Embedded Alliance

** RadiSys is a Premier member of the Alliance.

Intel® Virtualization Technology (VT) has significantly impacted the way embedded design teams combine many tasks on one processor – whether that processor is a single or multicore design. Running multiple virtualized operating systems on one processor allows one system to replace two or more and adds the ability to partition critical applications on a secure operating system separate from user-interface code and other features that might run on a general-purpose operating system. But virtualization does require a hypervisor layer that can adversely impact performance. Paravirtualization techniques integrated in operating systems can minimize that performance impact.

 

Before we jump into the paravirtualization topic, I’ll provide a quick reading list for anyone wanting to read more about Intel VT and virtualization in general. We’ve covered the topic a lot here on the community.

 

Only the last of those links addresses paravirtualization, so let’s move along toward that topic. The most common virtualization implementations present an operating system with a virtual machine whose software interfaces mimic the underlying hardware. That virtualization layer is typically implemented in a hypervisor and allows a guest operating system to run unmodified. The software virtual machine impacts performance because it must handle software accesses that would go directly to hardware in a non-virtualized environment.

 

Paravirtualization requires that an operating system be modified to run in the virtual environment. Critical elements of the operating system execute directly in the host domain with direct access to the hardware rather than in the virtual domain where access is always limited to the virtual machine.

 

The problem with paravirtualization is obviously the requirement to modify the operating system. That process is certainly time consuming and time-to-market is always a key concern. Moreover, you simply can’t get the access needed to paravirtualize commercial products such as Microsoft* Windows.

Fortunately there is middle ground and emerging technologies that can provide the performance benefits of paravirtualization without the necessity of extensive operating system modifications.

 

Let’s continue the discussion relative to virtualization products and technology from embedded-software specialist Wind River**. The company underscores its target market by labeling the technology Embedded Virtualization, and it is quick to note that the virtualization requirements in embedded systems are quite different from those in the IT and server space. The company’s Wind River Hypervisor is designed to support real-time deterministic response, a small footprint, and maximum application and operating-system performance.

 

Wind River CTO Tomas Evensen authored a white paper called “Multicore challenges and choices: Deciding which solution is right for you.” While the paper frames the discussion around multicore, its focus is virtualization as the technology that gets the most out of a processor when it takes on multiple applications. The paper describes how you can virtualize an entire processor, memory, or specific hardware features of a processor.

The paper notes specific requirements for optimal embedded virtualization. Among those, two stand out. The hypervisor, when possible, maps devices directly into the guest operating system for maximum performance. And, again when possible, the guest operating systems are paravirtualized to run efficiently on the hypervisor.

 

So how does this play out in the real world, and which technologies are there for us when we can’t paravirtualize an operating system? I asked Evensen essentially those questions. He noted that the Wind River Hypervisor can run Windows XP and Windows 7 unmodified. Moreover, he said that Intel® VT has provided a significant performance boost when running such general-purpose operating systems. And design teams like to use them for features such as the embedded user interface and for connection to the IT world. Intel VT is available on all of the first- and second-generation Intel® Core™ processors and on others as well, including some members of the Intel® Atom™ family.

 

Still, Evensen said that there are things you can do to boost Windows performance further. For example, you can replace standard Windows drivers with custom ones. He said that Wind River can dynamically install a network driver designed to work with the Wind River Hypervisor that is faster than what would essentially be an emulation of the network interface with the standard virtualized driver. What he described is basically paravirtualization of the network driver; a rough sketch of the idea follows.
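To see why such a driver is faster, consider this sketch of the front-end half of a paravirtualized network driver. The shared-ring layout and the hv_notify() hypercall are hypothetical, used only to illustrate the general technique; this is not the Wind River interface.

/* Sketch of the guest-side (front-end) half of a paravirtualized network
 * driver. Instead of emulating NIC registers, the guest places packet
 * descriptors in a ring shared with the hypervisor's back end and issues one
 * explicit notification. The ring layout and hv_notify() are hypothetical. */
#include <stdint.h>

#define RING_SLOTS 256

struct tx_desc { uint64_t buf_phys; uint32_t len; uint32_t flags; };

struct shared_ring {
    volatile uint32_t prod;              /* written by the guest (front end)   */
    volatile uint32_t cons;              /* written by the hypervisor back end */
    struct tx_desc desc[RING_SLOTS];
};

/* Hypothetical hypercall: tell the back end that new descriptors are ready. */
extern void hv_notify(int channel);

static struct shared_ring *ring;         /* assumed mapped into both domains */

int pv_net_transmit(uint64_t buf_phys, uint32_t len)
{
    uint32_t prod = ring->prod;

    if (prod - ring->cons == RING_SLOTS)
        return -1;                       /* ring full, try again later */

    struct tx_desc *d = &ring->desc[prod % RING_SLOTS];
    d->buf_phys = buf_phys;
    d->len      = len;
    d->flags    = 0;

    __sync_synchronize();                /* descriptor visible before index update */
    ring->prod = prod + 1;

    hv_notify(0);                        /* one trap instead of many emulated MMIO exits */
    return 0;
}

The point is that the guest batches descriptors in shared memory and traps into the hypervisor once per batch, instead of generating an exit for every emulated register access.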

 

[Figure: The Wind River Hypervisor hosting VxWorks, Wind River Linux, Windows, and other operating systems]

 

Wind River’s own VxWorks and Wind River Linux have paravirtualization features driven deeper into the operating systems to maximize performance in a virtual environment. As the nearby figure shows, the Wind River Hypervisor can host VxWorks, Linux, Windows, and other operating systems. Note that the technology targets a broad range of applications including military & aerospace, medical, industrial, automotive, and communications.

 

What types of performance issues have you faced using virtualization technology? Have you turned to paravirtualization for the mission-critical applications that were part of a larger system design? Fans of the Intel® Embedded Community would greatly appreciate it if you shared your insights. Please comment and let us know whether virtualization is a standard part of your embedded toolbox.

 

 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

 

*Microsoft is an Associate member of the Intel® Embedded Alliance

** Wind River is an Associate member of the Alliance.

 

More information

To learn more about virtualization for embedded devices, visit intel.com/go/embedded-virtualization


Over the past couple of years, numerous articles posted on the Intel® Embedded Community web site have focused on security topics. Today let’s discuss a similar but separate topic – safety. Embedded systems used in the transportation industry, industrial process control, power generation, and similar applications must meet relevant safety standards, primarily IEC 61508. That standard seeks to ensure the safety of people, ranging from the workers who interact directly with a system managing a process to citizens in general. Embedded design teams working on applications with safety requirements can combine operating systems certified to IEC 61508 with redundant Intel® Architecture (IA) processors to quickly deliver systems that meet the most stringent safety requirements.

 

First let’s briefly consider the similarities and differences of safety and security requirements. Both require a compartmentalized approach to protect systems from both inadvertent faults and malicious activity. In security systems, the primary goal is to ensure the reliability of data – financial data or data that might control a military mission. In safety systems, the primary goal is ensuring that a system operates in a fail-safe manner.

 

Some of the same techniques are useful in both safety and security applications. Redundant hardware is often used. And software techniques such as Intel® Virtualization Technology (VT) are used to partition and protect the mission-critical elements of the code.

 

Security requirements are prescribed by the Common Criteria for Information Technology Security Evaluation (called Common Criteria or CC) defined in ISO/IEC 15408. The CC provides a framework of security levels called Evaluation Assurance Levels (EAL) – EAL 1 through EAL 7, with EAL 7 being the most secure. For more background, review the article I posted last year about separation kernels and EAL compliance.

 

Safety requirements are prescribed by the IEC 61508 standard, entitled “Functional safety of electrical/electronic/programmable electronic safety-related systems.” Safety levels are graded on a SIL (Safety Integrity Level) scale that runs from SIL1 to SIL4. SIL3 is considered the highest level that can be achieved using a programmed microprocessor-based system.

 

The safety standard prescribes the need for functional safety that relies on an active system. For example, consider a process-control application that involves a dangerous chemical. A containment vessel that might limit the damage should an overflow occur is considered a passive system. Sensors and check valves that prevent an overflow would be considered an active system.

 

Hardware and software safety elements

 

There are both hardware and software elements involved in designing for safety compliance. We’re going to discuss primarily software – operating systems specifically – in this article. But there is an excellent hardware-centric whitepaper posted on the Intel Embedded Community site by Men Mikro Elektronik GmbH*. The paper is entitled “Implementing safety-critical embedded systems designs,” and it covers the SIL levels in more detail and focuses on strategies for deploying redundant modular systems and meeting safety requirements.

 

On the software side, several embedded-operating-system vendors have products that have been certified to SIL3. And some vendors offer both safety and security certifications in the same product.

 

Consider QNX Software Systems** and the QNX Neutrino Realtime Operating System (RTOS) Certified Plus product. The RTOS has been certified to SIL3 and EAL 4+. The safety certification was performed by conformity-assessment specialist Sira Test & Certification, which is accredited by the United Kingdom Accreditation Service. The key to compliance is the modular microkernel architecture of the RTOS: applications, device drivers, file systems, and network stacks all run in separate memory-protected partitions, as the sketch below illustrates in generic terms.
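As a purely generic illustration of that idea (not the QNX APIs themselves), the sketch below runs a driver as its own memory-protected process and respawns it if it dies, so a fault in the driver cannot corrupt the rest of the system. The io-netdrv executable name is hypothetical.

/* Generic illustration of the microkernel idea: keep a driver in its own
 * address space and restart it if it faults. QNX Neutrino has its own
 * process-manager facilities for this; this POSIX sketch does not attempt
 * to reproduce them. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Hypothetical driver binary running in its own protected process. */
            execlp("io-netdrv", "io-netdrv", (char *)NULL);
            _exit(127);                  /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);        /* block until the driver process dies */
        fprintf(stderr, "driver exited (status %d); restarting\n", status);
        sleep(1);                        /* simple back-off before respawn */
    }
}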

 

QNX has also published a two-part whitepaper series entitled “Building functional safety into complex software systems.” Part 1 of the series covers the basics of safety theory and provides some illustrative real-world examples to explain the concepts. Part 2 digs deeper into actual implementation scenarios. The illustration below is from Part 2 and illustrates Reason’s Model of how faults become failures.

 

[Figure: Reason’s Model of how faults become failures, from Part 2 of the QNX whitepaper series]

 

Green Hills Software*** also offers an SIL3-certified RTOS. The Integrity/velOSity product was certified by the TÜV Nord certification body. The company offers what it calls the Green Hills Platform for Industrial Safety, which it targets specifically at systems developed for the automotive, rail-transportation, nuclear-power, and similar industries. The platform combines the RTOS, middleware, a development and verification tool set, and services including system certification support. The diagram below summarizes the coverage of the platform.

 

[Figure: Coverage of the Green Hills Platform for Industrial Safety]

 

Green Hills also has an excellent web page that covers the safety topic. The Integrity architecture partitions code that runs at different safety levels and allows a single microprocessor to safely run critical and noncritical partitions. velOSity is the kernel that underlies the Integrity platform.

 

The middleware includes network support, graphics and video support, and a file system. Moreover, an embedded database allows for the safe storage and retrieval of complex data using a structured architecture.

 

Green Hills also has partners that contribute to the safety platform. For example, design teams can use Telelogic’s Rhapsody modeling and code-generation tools, Esterel Technologies’ modeling and code-generation tools, and Vector Software’s test and code-coverage tools in safety-critical designs.

 

Has your design team tackled a project requiring SIL certification? How did you approach the problem? Did you rely on a commercial software platform? Please share your experiences with fellow followers of the Intel® Embedded Community via comments. Readers would welcome your insight into the safety issue.

 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

 

* Men Mikro Elektronik GmbH is an Affiliate member of the Intel® Embedded Alliance

** QNX Software Systems is an Affiliate member of the Alliance

***Green Hills Software is an Affiliate member of the Alliance

 

More information


 

 

To view other community content focused on security, see “Security – Top Picks.”

We take Internet connectivity for granted in our PCs, but the fact is that Internet support is coming to embedded systems of all types. Applications from military & aerospace to industrial to medical use the Internet as a way to share data. Security is paramount in such applications. It’s extremely convenient, for example, when a factory system can be managed and updated over the Internet. But such a system connected to the public Internet must be completely secured against any malicious or accidentally damaging code. Embedded designers can use technologies such as Intel® Virtualization Technology (VT) and Intel® Trusted Execution Technology (TXT) to provide such security.

 

Kontron* and Intel have collaborated on a project to develop a security methodology with research being conducted at the University of Applied Science in Deggendorf, Germany. The work is focused on the development of proof-of-concept security-system designs that utilize Intel® Architecture (IA) initiatives such as VT and TXT along with other accepted security mechanisms including the Trusted Platform Module (TPM) technology that’s promulgated by the Trusted Computing Group of which Intel is a member.

 

The research has focused on several scenarios. One is the connection of legacy systems to the Internet. In such a case the studied methodology includes adding a secure system that acts as a firewall to handle the communications. A second scenario focuses on the need for a secure boot chain in a case where the required operating system (OS) doesn’t include a secure-boot mechanism. The researchers are studying VT as a way to add a second secure OS that boots first and then boots the non-secure OS.  A third scenario is focused on highly-secure applications in which separate security and application OSs are always required.

 

The Figure below shows a block diagram of the system that’s being used to research and validate the security capabilities. The design relies on a carrier board with the CPU functionality hosted on a computer-on-module (COM) design. Such a design could use the industry-standard COM Express modular technology, ETX modules originally developed by Kontron and now supported by the ETX Industrial Group, or alternatives such as the Qseven module developed by the Qseven Consortium.

[Figure: Block diagram of the proof-of-concept security system, with TPM ICs on both the carrier board and the COM]

 

The design utilizes TPM ICs on both the carrier board and the COM platform. The embedded designer can determine whether the TPM security should be associated with the carrier or the CPU module. TPM combined with a TXT-enabled CPU allows the system designer to create isolated execution environments or partitions that can be safely accessed remotely by a system with the correct TPM encryption key. TXT is supported across most of the latest IA processors and chip sets, including the second-generation Intel® Core™ processors -- the Intel® Core™ i7, i5, and i3 families.

 

The virtualization layer provides the isolation between the secure OS and either legacy OSs with no security or perhaps an OS with no secure boot capability. The ongoing research is using the open-source Xen hypervisor, and that hypervisor is the only software entity that has direct access to the physical hardware.

 

Kontron has published an initial whitepaper that details the ongoing research. The paper, entitled “Standardized security principles for embedded computing industries,” details the hardware structure of the researchers’ test system as well as the steps in a secure boot chain. The research has utilized a variety of Kontron ETX modules.

 

Embedded design teams can apply the methodology described in the whitepaper with different combinations of hardware and software. Indeed the techniques could be applied to single-board computers that integrate TXT and TPM features.

 

If you want more background on the software side of the topic, there are a number of blogs on the topic including “Virtualization boosts security for Internet-connected devices” written by Felix McNulty.

 

Moreover a number of software companies support VT technology and offer hypervisors capable of securing partitions in a multi-OS system. Green Hills Software** for example offers Integrity Secure Virtualization for its Integrity real-time OS. Likewise Wind River*** offers the Wind River Hypervisor for applications including military & aerospace, industrial, medical, and other applications with a security requirement.

 

Have you used virtualization technology to implement a secure system? Please share your experience with other followers of the Intel® Embedded Community via comments. What would you like to know more about in the security area? And what design challenges are you facing? Community members may surprise you with the help that they can offer.

 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

 

*Kontron is a Premier member of the Intel® Embedded Alliance

**Green Hills Software is an Affiliate member of the Alliance

***Wind River is an Associate member of the Alliance

Many embedded systems are deployed remotely, far from any technical staff that might be able to fix a problem with a malfunctioning computer. Certainly military and aerospace applications rely on remote systems that must operate reliably, but digital signage, industrial, and retail applications also place business-critical systems far from the tech staff. Systems designed for such applications need some ability to recover automatically from faults along with remote-management capabilities. Intel® Architecture (IA) processors can readily support such applications, and manufacturers of modular- and system-level IA-based products that target embedded applications often add reliability layers.

The watchdog timer is among the most commonly used techniques that can allow a remote system to reboot itself upon some type of system fault. Most manufacturers of IA-based computer modules, single-board computers (SBCs), and embedded-targeted systems include a watchdog timer.  For example, the small Advantech* ARK-6310 ruggedized system based on a Mini-ITX board includes a watchdog timer that can be programmed with an interval ranging from 1 to 255 seconds.

 

Design teams typically use the timer to automatically reset a system. A properly functioning system includes a recurring task that resets the timer on a regular basis. A malfunctioning system that hangs on a task and fails to reset the timer allows the timer to count all the way to zero and trigger a system reset. The sketch that follows shows the typical pattern.
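On a Linux-based board that exposes the standard /dev/watchdog interface, the pattern looks roughly like this; the 30-second timeout and the health check are placeholders for an application’s own logic, and vendor-specific watchdogs (such as the one on the ARK-6310) have their own programming interfaces.

/* Typical watchdog "kick" loop, sketched for a board that exposes the
 * standard Linux /dev/watchdog interface. The ioctl names are part of the
 * mainline Linux watchdog API; the timeout and health check are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

static int system_is_healthy(void)
{
    return 1;                            /* replace with real application checks */
}

int main(void)
{
    int timeout = 30;                    /* seconds until the board resets */
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0) {
        perror("watchdog");
        return 1;
    }
    ioctl(fd, WDIOC_SETTIMEOUT, &timeout);

    for (;;) {
        if (system_is_healthy())
            ioctl(fd, WDIOC_KEEPALIVE, 0);   /* pet the dog */
        /* If this loop ever stalls, the timer expires and the board reboots. */
        sleep(timeout / 3);
    }
}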

 

While almost all systems include watchdog functionality, teams may want to add features that enable a combination of preventive maintenance, remote management, and automatic restoration. Intel® Active Management Technology (AMT) supports remote-management capabilities and, as I covered in a recent post, is one of several IA features that can deliver mission-critical reliability.

 

Let’s consider how a couple of third parties have added system reliability features. LiPPERT Embedded Computers**, for example, has a technology suite called LEMT (LiPPERT Enhanced Management Technology) that it is supporting on all of its new SBCs. LEMT is available on the recently announced CoreExpress ECO2 computer-on-module (COM) product, which is based on the CoreExpress standard that was originally developed by LiPPERT and is now promulgated by the Small Form Factor Special Interest Group (SFF-SIG). The ECO2 (pictured below) is based on the Intel® Atom™ E6xx processor family. The company also supports LEMT on its E6xx-based Toucon-TC COM Express module.

 

[Figure: The LiPPERT CoreExpress ECO2 computer-on-module]

 

The LEMT technology is based on the combination of a System Management Controller (SMC) IC and an application-layer program that can be accessed locally or remotely via a network connection. The SMC IC is a microcontroller that combines the power-sequencing functions needed at boot time with the ability to monitor and control elements of the system.

 

The LEMT technology essentially enables preventive actions that can keep a system running reliably. The LEMT application can report operating parameters such as system voltages, watchdog status, current CPU and board temperatures, fan speeds, the maximum temperature over a period of operation, and other data. The generic sketch below shows the kind of sensor data involved.
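As an illustration of the kind of health data such a tool exposes, the sketch below reads a temperature sensor through the standard Linux hwmon sysfs interface. This is not the LEMT API, and the hwmon0/temp1 path is board-dependent.

/* Generic Linux hwmon read - illustrative only, not the LEMT interface.
 * hwmon temperature files report millidegrees Celsius. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/class/hwmon/hwmon0/temp1_input", "r");
    long millideg;

    if (f && fscanf(f, "%ld", &millideg) == 1)
        printf("board/CPU sensor: %.1f C\n", millideg / 1000.0);
    else
        printf("sensor not available on this board\n");
    if (f)
        fclose(f);
    return 0;
}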

 

LEMT works with Windows and Linux systems. It allows a technical team to preempt costly failures or at the very least prepare to replace systems that are vulnerable. Moreover, LEMT combined with AMT will allow a remote team to change system settings and perhaps keep a system operational until it can be replaced.

 

There are also technologies that help a remote team diagnose a failure and perhaps restore system operation after what would be a fatal fault in many cases. Advantech, for example, has a software product that it calls Advantech eSOS – emergency secondary OS for system recovery.  eSOS is a Linux-based secondary OS that is stored in ROM in an embedded system and that is unaffected by any problems that may have impacted the execution of the primary OS. The eSOS doesn’t have the complex set of hardware dependencies that are present in the primary OS and can often be booted by a failed system.

 

In a typical implementation, an eSOS-based system first attempts to reboot when triggered by the watchdog timer. If the boot to the primary OS fails, the system boots into eSOS. The eSOS then performs a hardware analysis on the system and emails a detailed report to the remote technical team.

 

[Figure: The Advantech eSOS recovery sequence]

 

The remote team can connect with the system via telnet or ftp, and attempt to restore system operation. In fact the system will allow a complete restore of a Windows-based operating system. At a minimum, eSOS allows the team to determine the cause of failure and simplify the repair process.

 

The eSOS technology can be used with a variety of Windows operating systems including XP and Windows Embedded.  The technology was initially deployed on the Advantech PCM-9361 SBC and the SOM-5761 COM products. Both are based on the Atom N270 processor. Advantech plans to support other Atom-based products and perhaps other OSs going forward.

 

Please share your experience with remote mission-critical systems with other followers of the Intel® Embedded Community. Your comments will be greatly appreciated. How have you handled remote monitoring and maintenance? What challenges did you face? What do you think about the LiPPERT and Advantech technologies covered here?

 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

 

*Advantech is a Premier member of the Intel® Embedded Alliance

** LiPPERT Embedded Computers is an Affiliate member of the Alliance

Mention the popular slang term app and you probably first think of application programs for smart phones. But the downloadable-app concept will pervade the embedded space as well in scenarios such as auto infotainment systems, home media gateways, and embedded gaming products. Open-source operating-system platforms such as MeeGo and Android will allow embedded design teams to develop extensible systems based on Intel® Architecture (IA) processors. In turn, third-party app developers will build rich ecosystems around such platforms moving forward, as is already happening with Android.

 

IA platforms are ready hosts for open-source platforms such as Android and MeeGo. Intel for instance has an installation guide for installing the Linux-based Android operating system on IA processors. The guide even includes a video focused on an installation that uses an Intel® Atom™ N450 board.

 

MeeGo, meanwhile, is a Linux-based platform that resulted from the combination of Intel’s Moblin initiative, aimed at developing a Linux-based platform for mobile applications, with the similarly targeted Maemo project that Nokia had pursued. Now the efforts have been unified in MeeGo, and The Linux Foundation is shepherding the initiative.

 

While MeeGo will be used in handsets and netbooks, the platform will also find use in a variety of other embedded applications such as automotive infotainment, media phones, and multimedia gateways. The Linux Foundation released version 1.1 of the software in late 2010, and the first MeeGo-based products will come to market in 2011.

 

In addition to the MeeGo Core Software Platform, the development initiative has released several other tools that will speed the work of design teams. For example, there is a beta release of a full SDK (software development kit) for IA processors available for download. The SDK is based on the Qt Creator development environment and includes an emulator among other tools.

 

The community has also released a reference software platform that specifically targets automotive applications, called MeeGo v1.1 for In-Vehicle Infotainment (IVI) systems. The IVI platform targets Atom-based designs and was built to support Automotive Center Console HMI requirements. The platform includes a home display screen and a taskbar. It also includes support for text-to-speech and speech-recognition applications.

 

There are also a number of software and development-support products and technologies emerging, both from Intel and members of the Intel® Embedded Alliance, that can help embedded design teams develop systems based on platforms such as MeeGo and Android.

 

Wind River Systems*, for example, offers development support for both Android and MeeGo. The Wind River Platform for Android includes a Linux core, middleware, an application framework, and a user interface. The company also offers the Android Compatibility Test Suite.

 

Wind River offers a similar platform for MeeGo. And the company has developed a Framework for Automated Software Testing (FAST) for both platforms. FAST helps teams meet tight time-to-market goals and deliver quality software. The FAST suite includes individual test frameworks that are encapsulated within scripts and executed during test runs. The encapsulation, or wrapper in Wind River terms, allows tests to be executed across multiple product platforms.

 

[Figure: The Wind River Framework for Automated Software Testing (FAST)]

 

Virtualization technology may also come into play in MeeGo- or Android-based designs. For example, some products may require the security afforded by a proprietary operating system while relying on MeeGo or Android for hosting a user interface and applications.

 

Green Hills Software** has applied its experience with its Integrity Secure Virtualization software to develop the Green Hills Platform for Secure Mobile Devices. The offering includes the Integrity secure real-time kernel and the virtualization layer. Design teams can host multiple guest operating systems on top of the virtualization layer.

 

Intel, meanwhile, has provided one other missing piece of the MeeGo puzzle via the latest release of the Intel® Embedded Media and Graphics Driver (EMGD). Many of the target MeeGo applications including IVI require graphics and multimedia support. The new version of EMGD supports designs that use the Intel® System Controller Hub US15W, US15WP, US15WPT, or the Intel® Atom™ Processor E6xx Series processors with integrated graphics controllers.

 

How does the concept of extensible apps and a third-party ecosystem fit into your coming project plans? Are you currently using Linux-based operating systems? Please share your experience or opinions on open-source platforms via comments. Fellow followers of the Intel® Embedded Community would appreciate your insight.

 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

Intel first added support for simultaneous multithreading in the Pentium® 4 family of products, calling the technique Intel® Hyper-Threading (HT) technology. HT allows the threads of multithreaded applications to run simultaneously on a core that has multiple parallel execution units. The technology was not present in the Intel® Core™ microarchitecture but reappeared in the Nehalem microarchitecture that is the basis for the newest Xeon® and Core i7, i5, and i3 families. The Nehalem microarchitecture includes four separate instruction decoders that operate in parallel, enabling a high degree of parallel instruction execution on a single core and maximizing the performance of threaded applications.

 

Nehalem-based processors include three identical instruction decoders for simple instructions and one for complex instructions. The processors use the four decoders to support two threads on each core. And Intel offers Nehalem-based processors with two to six cores. That means the high end of the family, the new Xeon 5600 series, can handle 12 threads simultaneously.

 

Embedded design teams can leverage HT technology in several ways. For example, you can use HT for virtualization and run multiple operating systems on one core. Early Atom™ family members included HT support although the microarchitecture only included two parallel instruction paths. I posted a blog last year about how you can use that HT support for virtualization even though the early Atom processors lacked Intel® Virtualization Technology (VT).

 

In the case of the Xeon family, the more likely use of HT is for performance, since the processors already integrate VT support. With HT, design teams can gain performance in compute-intensive applications through threaded code.

 

RadiSys*, for example, performed a case study in conjunction with the Georgia Institute of Technology (Georgia Tech) on a medical imaging application hosted on a Xeon 5500 series system. Applications such as CT, MRI, and ultrasound capture an increasing amount of data that must be processed immediately. For example, CT scans now have sub-millimeter resolution, yet doctors want to process the data in seconds. The case study set out to evaluate how parallel processing could accelerate such imaging applications.

 

Engineers ran benchmark tests on a dual-processor server with quad-core CPUs. The implementation provided eight cores and the ability to process 16 threads simultaneously. The application was a Katsevich algorithm that is used for 3D CT reconstruction. As the figure below illustrates, execution time improved with the number of threads, although, as you might expect, the biggest gain came in going from one to two threads and the benefit diminished as the count grew to 16. Still, the performance gain essentially comes for free, so even the relatively small gains from moving from 8 to 16 threads are worthwhile. You can reproduce the same kind of scaling experiment with the sketch that follows the figure.

 

[Figure: Execution time of the Katsevich 3D CT reconstruction algorithm versus number of threads]
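You can run a similar, if much simpler, scaling experiment on any HT-enabled system with the sketch below. It times a compute-heavy loop at increasing thread counts using OpenMP; it is not the Katsevich reconstruction code from the case study, and the trig workload is only a stand-in for real image math.

/* Simple scaling experiment in the spirit of the case study.
 * Build with: gcc -O2 -fopenmp scale.c -lm */
#include <math.h>
#include <omp.h>
#include <stdio.h>

#define N 20000000

int main(void)
{
    for (int threads = 1; threads <= 16; threads *= 2) {
        double t0 = omp_get_wtime();
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum) num_threads(threads)
        for (int i = 0; i < N; i++)
            sum += sin(i * 0.0001);      /* stand-in for real image math */

        printf("%2d threads: %.3f s (sum=%.2f)\n",
               threads, omp_get_wtime() - t0, sum);
    }
    return 0;
}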

 

RadiSys offers a range of HT-enabled platforms based on Intel processors. Back in March, for example, the company announced the Procelerant RMS420-5520DT embedded server based on the new Xeon 5600 series. The product supports as many as 12 cores in a dual-processor configuration. Medical imaging is a target market for the product along with video streaming, and high-performance test & measurement.

 

Have you experimented with threaded applications and an HT-enabled processor? If so, what kind of performance gain did you measure? If not, what are the obstacles to using HT? Please share your experience with fellow followers of the Intel® Embedded Community.

 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

 

*RadiSys is a Premier Member of the Intel® Embedded Alliance

Embedded system designers today have a plethora of off-the-shelf computing platforms in a variety of form factors that feature multiple cores per processor, sometimes multiple processors per board, and in many instances processors capable of multithreaded operation. But how do embedded design teams easily leverage the available computing resources? Certainly teams can develop parallel structures in carefully coded embedded applications, although many teams lack multithreading expertise. The broad proliferation of Intel Architecture multi-core technology has created momentum across the embedded space, resulting in far simpler development options relative to other processor architectures. For instance, a graphical tool such as National Instruments* LabView allows engineers who lack multithreaded-programming experience to quickly develop applications using a dataflow approach that takes advantage of the multi-core trend.

 

While LabView’s roots are in the test and measurement area, the graphical programming environment finds use today in a variety of embedded applications including real-time control and as a product development and prototyping tool. Teams can use LabView both with hardware offered by National Instruments and with board- and system-level products from a variety of third parties. Teams will find off-the-shelf hardware modules for applications ranging from data acquisition to visualization to communications. For instance, you can easily prototype a cellular base station using LabView software and off-the-shelf hardware modules.

 

National Instruments added multithreading support to LabView 5.0 back in the late 1990’s after the advent of multithreaded processors but before multi-core processors were available. Casey Weltzin, LabView Real Time Product Marketing Engineer, states, “We chose to support multithreading so that we could ensure both a very responsive user interface and very responsive program code.” From inception, LabView has combined a graphical user interface with the ability to create and execute software-based functions that might or might not rely on hardware that connects to the real world.

 

LabView is generally considered a dataflow programming language or environment. Users typically define a program based on inputs flowing into computation functions, with the results driving outputs. Clearly that’s an oversimplification because LabView certainly supports feedback, loops, and the ability to accept user input and display results. But the graphical development tool is generally based on dataflow (see the example screen shot below, which shows part of the graphical program representation in the right portion of the screen and the UI in the left portion).

 

[Figure: Example LabView screen shot with the graphical program representation on the right and the user interface on the left]

Both the internal multithreading support in LabView and the dataflow programming model combine to optimize code execution on multi-core targets. Weltzin points out that the dataflow code-definition process naturally results in users defining parallel software functions that operate on parallel data. And at run time, LabView attempts to find all parts of the code that can run in parallel. Weltzin relates that the LabView team calls these objects lumps, although the industry might call them tasks. LabView identifies the number of available cores, examines the lumps, and breaks the code into the optimum number of threads for the fastest execution.

 

Here are several additional resources if you would like to learn more about the multi-core capabilities in LabView. Learn why dataflow languages map well to multi-core processors. You can get more help on handling multi-core designs with LabView. And the latest revision of LabView offers some dedicated tools to boost multi-core performance.

 

You might also want to peruse some case studies that demonstrate the advantage LabView brings to multi-core designs. The Max Planck Institute in Germany has used LabView and multi-core processors in its fusion research. The result was a factor-of-five improvement in the matrix multiplications required for real-time loop control.

 

Eaton, meanwhile, used a multi-core approach and LabView in a truck transmission testing system. The combination quadrupled the number of data channels that the system can examine in real time.

 

You might notice that both case studies have an inherent real-time component. National Instruments offers LabView Real-Time for applications that require such response. Moreover, you can combine the real-time software and standard LabView in one system using Intel® Virtualization Technology (Intel® VT) and the National Instruments Hypervisor. Weltzin points out that typical implementations dedicate one core to the user interface on standard LabView and the remaining cores to LabView Real-Time.

 

For more information on Intel multi-core technology and processors, you can peruse a broad look at Intel’s multi-core technology and look specifically at multi-core technology for embedded and communication applications.

 

What approach do you take to multi-core designs to get the most out of the available compute resources? Have you used LabView in a multi-core system? Please share your experience via a comment so that fellow followers of the Intel® Embedded Community can learn from it.

 

Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

 

*National Instruments is an Associate member of the Intel® Embedded Alliance.

 

As you may have noticed, code and data security is a hot topic, and one covered by a number of the Intel® Embedded Communications Alliance (Intel® ECA) Roving Reporters. Indeed, security is important in protecting financial transactions, personal data such as medical records, and of course military systems dealing with national security. Today, security is tougher than ever because almost all systems, even embedded ones, connect to networks and the Internet. Moreover, cost concerns result in mixing secure and non-secure applications on the same multi-core and/or multiprocessor systems. The good news is that software technology such as separation kernels, along with new security-centric features integrated in Intel® processors, enables secure system design. Features such as Intel® Trusted Execution Technology (Intel® TXT) and Intel® Virtualization Technology (Intel® VT) -- unique to Intel -- can be invaluable in building secure systems.

 

In this post, I'll dig deep into architectures and technology for maximum security. You might also review two recent Intel ECA blog posts on Intel TXT and Intel VT.

 

Let's start with formal security standards and specifications so you have some background. Almost all security standards derive from military work; the commercial industry finds it easier and cheaper to leverage that work even when commercial systems require lower levels of security. The Common Criteria for Information Technology Security Evaluation (called Common Criteria or CC), defined in ISO/IEC 15408, provides a framework for designers to specify security capabilities and for testing labs to validate them. While conceived for IT applications, CC applies equally to embedded systems.

 

Products evaluated to CC get what is essentially a grade called the Evaluation Assurance Level (EAL) - EAL1 through EAL7. Commercial Operating Systems (OSs) such as some versions of Windows and Linux earn the moderately secure grade of EAL4.

 

Today, the most common approach to meeting CC requirements leverages the architecture called Multiple Independent Levels of Security (MILS). A MILS implementation relies on separation to meet security requirements. Most implementations rely on a separation kernel - a thin layer of software that emulates an environment in which secure applications are isolated as if on dedicated hardware. But the separation kernel can actually mix secure and non-secure OSs and applications. MILS implementations also separate resources, such as memory or I/O, and ensure that secure and non-secure data are never intermixed.

 

LynuxWorks, an Affiliate member of Intel ECA, is one of several companies offering separation kernels based on a MILS architecture. The LynxSecure Separation Kernel also embeds a hypervisor and relies on Intel VT. The diagram below depicts the kernel architecture.

 

 

[Figure: LynxSecure Separation Kernel architecture]

LynxSecure keeps the secure LynxOS in a partition separate from other guest OSs such as Windows or Linux.

 

Steve Blackman, Director of Business Development at LynuxWorks, points out that the separation kernel concept relies on small modular code blocks that can survive scrutiny. Blackman states, "Achieving security relies on a combination of proofs used to prove a higher-level proof."

 

The design team behind LynxSecure developed the kernel based on the Separation Kernel Protection Profile (SKPP) developed by the National Security Agency. According to Blackman, SKPP requires at a minimum that a kernel separate partitions and control communications between the partitions. But SKPP also defines a more advanced implementation called a Least Privileged Separation kernel (LPSK) and LynxSecure was architected on the LPSK model.

 

LPSK introduces separation and control at a more granular level. Specifically LPSK defines the concepts of subjects, resources, and partitions. Executable code is an example of a subject. A resource might be a processor core, a section of memory, a network interface, or a disk drive. In the simplest case, a subject may be equivalent to a partition. Or a more complex partition might include multiple subjects.

 

LynxSecure relies on an XML configuration file to define the relationships between subjects, resources, and partitions. The design team can group objects using the file. For example, you might group one or more subjects along with the specific policies and privileges of those subjects. Blackman suggests that one way you might use such a group is to set scheduling policies for fast interrupt response.

 

LynuxWorks is preparing to submit LynxSecure for evaluation to both EAL7 and the SKPP -- related but separate evaluation criteria.

 

Have you mixed secure and non-secure applications on one system? How did you isolate the sensitive code or data? Do you have experience with the CC and EAL process? Design teams following the Intel ECA community would surely benefit from your comments. Let us know your thoughts.

 

 

 

 

Embedded design teams often have legacy considerations that dictate system design with each revision or evolution. Software is a particular concern. Teams seek to preserve proven algorithms and code even while attempting to take advantage of the latest hardware. And the legacy code may well have been written for a number of loosely-coupled specialized processors either combined on one board or spread across multiple boards. Today's Intel® Architecture multi-core processors, based on the Core or Nehalem microarchitectures, provide a very cost effective platform that can easily replace a dozen or more processors in a legacy system. But how do you migrate the software?

 

Of course there is no simple answer. The design team could rewrite the software from the ground up and likely realize performance gains. Such a redesign, however, adds engineering costs and time to the design cycle, resulting in undetermined but potentially significant opportunity costs. Teams looking to harness the latest processors do have options to maintain the legacy code and still consolidate their design to a single- or dual-socket multi-core design.

 

QNX Software Systems, an Associate member of the Intel® Embedded and Communications Alliance (Intel® ECA), offers one alternative for porting legacy code to a multi-core processor. The company supports a technique called Bound Multiprocessing (BMP) that offers many of the advantages of Symmetric Multiprocessing (SMP) on a multi-core platform, while simplifying the task of getting legacy code to work on the platform.

 

Multi-processor legacy systems often dedicated an instantiation of an operating system (OS) to each processor and dedicated each processor to a specific set of tasks or software processes - a technique often called asymmetric multiprocessing (AMP or ASMP). In a modern SMP system, a single copy of the OS hosts all processes and assigns any process needing service to the next available processor or core. But legacy code designed for a single-processor environment may not run without significant modification in an SMP system. The code may have no shared-memory mechanism and, in real-time implementations, may expect direct access to the processor at all times.

 

BMP offers a compromise between the legacy AMP system and an SMP system. The BMP implementation available in the QNX Momentics Tool Suite relies on a single copy of the OS to handle all processes. But the design team can specify that a process or set of processes, and all associated threads, be dedicated to a particular core, so the legacy code gets the equivalent of a dedicated processor. Meanwhile, SMP-aware code can take full advantage of SMP, with the OS scheduling processes onto the next available execution resource. The sketch below illustrates the core-binding idea in generic terms, and below it you'll see a screen shot of the memory analysis feature in Momentics.
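As a generic illustration of core binding (not the QNX mechanism itself, which is configured through its own runmask facility), the sketch below pins a process to one core with the Linux sched_setaffinity() call so that legacy code behaves as if it owned a dedicated processor while SMP-aware code floats across the remaining cores.

/* Pin this process to core 0 - a generic Linux illustration of the
 * bound-multiprocessing idea, not the QNX BMP implementation. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                   /* bind this process to core 0 */

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pid %d bound to core 0; legacy code runs here undisturbed\n",
           (int)getpid());
    /* ... legacy single-processor code continues as before ... */
    return 0;
}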

 

[Figure: The memory analysis feature in the QNX Momentics Tool Suite]

 

For a thorough look at AMP, BMP, and SMP, you might read the article, "Software migration strategies for multi-core processors" from Embedded Control Europe magazine. In addition, Intel multi-core expert Lori Matassa recently posted an SMP article to the Intel® ECA site.

 

As for your choices in multi-core platforms, Intel has a broad variety of platforms in the company's embedded program with guaranteed availability for many years. The platforms include options optimized for performance and others optimized for low power. The available processors range from single- and dual-core Intel® Core™2 Duo processors to dual- and quad-core Intel® Xeon® processors. And in all cases, the platforms support dual processor sockets for as many as eight cores total. For more information on the Intel embedded platforms, see the Hardware Platforms web page.

 

I'd like to hear other ideas on moving legacy multiprocessor applications to a modern multi-core platform. How did you migrate legacy software and how have you leveraged Multi-Core features? Please share your techniques with the Intel® Embedded Community via comments to this blog.

Generally, Intel® Hyper-Threading technology (Intel® HT technology) seeks to boost performance by making the most efficient use of the multiple execution units in a superscalar processor core. It turns out, however, that Intel HT technology can also be leveraged in a virtualization application -- running both a general-purpose OS (operating system) like Windows and an RTOS (real-time operating system) simultaneously on a single core. Such an implementation on an Intel® Atom™ processor allows the ultra-low-power and low-cost processor to respond to external events with latency under 10 microseconds - achieving what is commonly called hard real-time performance. In such an application, Atom can eliminate the need for a processor such as a PowerPC that's often dedicated to the real-time task.

 

Hyper-threading allows multiple software threads to execute in parallel on a single processor core. Generically, the industry refers to this technology as simultaneous multithreading. Parallel threads execute simultaneously and maximize efficiency by minimizing the time that an execution unit idles while waiting for work due to situations such as a thread waiting for external data.

 

RadiSys, a Premier member of the Intel® Embedded and Communications Alliance (Intel® ECA), has leveraged Intel Hyper-Threading technology in multiple ways. In a future blog post I'll describe how RadiSys leveraged hyper-threading to boost performance in an imaging application on a Nehalem-based system. But in this post, we will focus on how to implement a real-time embedded system such as an industrial controller on a single low-power processor while maintaining the convenience of a general-purpose OS.

 

Real-time industrial controllers often rely on two separate but closely linked computers -- a ruggedized PC running Windows, or perhaps Linux, alongside an embedded system running an RTOS. The Windows system provides the convenience of a GUI for a robust user interface and, of course, support for networks like Wi-Fi and connectivity such as USB. The real-time systems often use processors such as the PowerPC that lack the high-end performance of the x86 family but that can provide the needed low-latency response. RadiSys Director of Software Marketing Linda Xiao states, "We regularly see demand for sub-10-microsecond level latency and sometimes for response in as little as 2 microseconds."

 

Clearly, a single-processor approach would provide cost savings in the hypothetical industrial controller that I've described. Moving to a single processor, however, would mean either losing the convenience of Windows or conceiving a way to run both OSs on the same processor while preserving the fast interrupt response time.

 

RadiSys found the answer in the Hypervisor technology that the company jointly developed with Real Time Systems. RadiSys supplies the Hypervisor as middleware with its OS-9 RTOS, which is designed for hard real-time applications. Real Time Systems GmbH is an Affiliate member of the Intel ECA.

 

Hypervisor is virtualization technology that from a macro view may seem similar to Intel® Virtualization Technology (Intel® VT). Both allow multiple operating systems to run simultaneously with each fully isolated from the other. According to Xiao, however, Intel VT targets high performance applications ranging from servers in the IT space to compute-intensive embedded applications, whereas Hypervisor targets low latency. She states, "It's not about fast but about determinism." In the case of a real-time system, Hypervisor runs Windows on one thread and the RTOS on another.

 

According to Xiao, the Virtualization Manager (VM) software layer in typical virtualized implementations adds jitter to the RTOS response. The Hypervisor approach seeks to minimize the amount of OS code that runs through a VM layer by partitioning software access to processor hardware features whenever possible. You can access more details on the technology in the Hypervisor data sheet. The block diagram below shows how the RTOS is afforded direct access to hardware.

 

 

 

[Figure: RadiSys Hypervisor block diagram showing direct RTOS access to hardware]

Of course, the designers working on industrial control applications also care about preserving their prior investment in software. And many were not using x86 processors in real-time systems. Xiao claims that porting OS-9 to the x86 from the PowerPC took only three days. And the Hypervisor approach preserves compatibility with existing drivers and applications - all executing on the single Atom core.

 

Hyper-Threading Technology and the Hypervisor provide a good match for many applications. The user interface development gets the benefit of Windows and even the graphics accelerators that are widely available. And according to Xiao, Atom provides a good balance of cost, performance, and power consumption. She states, "You don't use Atom for performance, but for system consolidation, cost reduction, and software reuse."

I'd also point out an earlier blog post that covers the use of the Hypervisor on a specific RadiSys multi-core board based on an Intel® Core™2 Duo processor.

 

How have you combined general-purpose and real-time OSs in an application? The many followers of the Intel® Embedded Community would greatly benefit from comments sharing your techniques.