
During the course of this series, we’ll make full use of the Intel Embedded Alliance Program and other IA resources. In this installment, we’ll use Intel design decision tools that help in choosing candidate embedded processors from among Intel architectures. The second installment will explore compiler alternatives and the unique VirtuaLab access to real-world development hardware; mainstream development tools from Intel, Green Hills Software, and Wind River are among those considered. In installment three we’ll evaluate the Atom processor for suitability as the central processor, considering a half dozen commercially available boards and performance for a real-time control application.


In installment four we consider adding a two-way wireless communications channel for monitoring and controlling functions.


For the last several years the popular press has been touting solar photovoltaic (PV) electric as an answer to our electric needs. The idea of getting “electricity for free” is seductive. But the facts are sobering. In 2007 the average home in the US consumed 936 kilowatt-hours (kWh) per month! That’s about 12 top-of-the-line solar photovoltaic panels per house, assuming that sunlight is never blocked from the panels, the sun shines every day, and there are no efficiency losses.  In sunny climates, residential 2 kW peak solar systems will deliver electricity at about $0.35 per kWh, and in a cloudy climate the cost is about $0.78 per kWh. Only the very largest solar installations, around 500 kW, get the cost of electricity under $0.20 per kWh, which in some locations is competitive.
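The arithmetic behind the “about 12 panels” figure is easy to reproduce. Here is a back-of-envelope sketch; the panel rating and effective sun hours are illustrative assumptions of mine, not figures from the article:

```python
# Array sizing under the article's idealized conditions (no shading,
# sun every day, no losses). PANEL_WATTS and SUN_HOURS_PER_DAY are
# illustrative guesses, not datasheet or article values.
MONTHLY_USE_KWH = 936        # average US home, 2007
PANEL_WATTS = 260            # assumed "top-of-the-line" panel rating
SUN_HOURS_PER_DAY = 10       # idealized unobstructed daylight

daily_use_kwh = MONTHLY_USE_KWH / 30
panel_kwh_per_day = PANEL_WATTS / 1000 * SUN_HOURS_PER_DAY
panels_needed = daily_use_kwh / panel_kwh_per_day
print(f"{daily_use_kwh:.1f} kWh/day needs about {panels_needed:.0f} panels")
```

With realistic sun hours and typical system losses, the panel count grows considerably, which is why the idealized figure is a floor, not a plan.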

As you can see from the variability in generated cost per kWh, the controls for a solar-electric system can’t be met with a single design. Most people think of a solar system that augments the electricity delivered from an electric utility through the electric grid, because the vast majority of the population is connected to “the grid.” This type of system requires no energy storage, which vastly simplifies the design and lowers costs. As can be seen in Figure 1, a grid-attached system uses a special inverter to interface directly with the electric grid. In all of these systems, the inverter converts the Direct Current (DC) produced by the solar panels to Alternating Current (AC) in phase with the grid and at the voltage provided to the home – most often 240 Volts (V) in newer homes but sometimes 120V in much older homes.




I live in rural Arizona completely off-the-grid. The nearest power line is 4.9 miles away, making the prospect of connecting to the grid remote at best. This type of solar system can be quite complicated, with multiple energy sources: solar PV, wind-powered generator(s), diesel or gasoline backup generator, hydroelectric, and other less common power sources.  Figure 2, courtesy of Outback Power Systems, illustrates a more complete off-grid residential system. While many soft real-time algorithms are used in a fully configured off-grid system, one of the more computationally intensive is the signal processing required for controlling the speed of the generator.




Modern solar components like those from Outback provide local communications between the components they manufacture and offer control over those same components. Even with these manufacturer-provided monitors, monitoring and controlling all of the pieces that can make up a solar electric generation system still requires a master control system beyond what state-of-the-art solar components provide.

My personal off-grid system is shown in Figure 3. What differentiates it from the system shown in Figure 2 is that there are actually two semi-independent systems. System 1 is the solar electric for my wife’s studio: 5 solar panels, 1 wind generator, an inverter/charger, six 2V batteries, and a 50 amp gasoline generator. System 2 is the system for our house: 2.4 kW of solar panels (12 panels), 2 inverter/chargers, six 2V batteries, and a 10kW diesel generator custom built from a slow-speed Lister engine without any electronic monitoring or controls whatsoever.


What isn’t apparent to most people is the amount of physical monitoring necessary to ensure that electricity is available when it’s needed and in the amounts required. Since my solar components are mounted 50’ away from the house, the inconvenience factor can become overwhelming at night, in the extreme cold, when it’s raining, or in the winter when snow is on the ground. My objective is to create a monitoring/control system that can provide me with details on the state of the system and remotely turn on the generator if it’s needed.
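As a first approximation, the remote generator-start decision reduces to a hysteresis rule on battery voltage. A minimal sketch; the thresholds and the 24 V nominal battery bank are my assumptions, not measurements from the actual system:

```python
# Hysteresis rule for automatic generator control. The voltage
# thresholds below are hypothetical, for a nominal 24 V battery bank.
LOW_BATTERY_V = 23.4   # start the generator below this
RECHARGED_V = 27.2     # stop it only once the bank has recovered

def generator_command(battery_volts: float, generator_running: bool) -> bool:
    """Return True if the generator should be running."""
    if battery_volts < LOW_BATTERY_V:
        return True
    if generator_running and battery_volts < RECHARGED_V:
        return True   # keep charging: hysteresis prevents short-cycling
    return False
```

The hysteresis band matters: starting and stopping at a single threshold would short-cycle the generator every time a load dipped the bank voltage.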

Looking at the system as currently defined, there are three subsystems that have real-time components, plus a variable bandwidth demand for two-way radio communications. Given the complexity and potential future expansion of the solar components, the control system will require a processor. But the choice is broad: within the Intel lineup there are 15 processor families to select from:


To narrow the choices to a more manageable set, we need to find the lowest-power alternatives. After all, this system is intended to control an off-grid system. Using the Intel® System Design feature, only the Atom processors are recommended for power consumption of under 5 watts. This narrows the selection to a total of six processors.
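The shortlisting step amounts to filtering the processor list on a power criterion. A sketch with a placeholder candidate list; the TDP values are rough illustrations, not actual Intel datasheet figures:

```python
# Hypothetical candidate list; names and TDP values are placeholders
# for illustration, not datasheet numbers.
processors = [
    ("Atom Z510", 2.2),
    ("Atom Z530", 2.3),
    ("Atom N270", 2.5),
    ("Core 2 Duo SU9300", 10.0),
    ("Xeon L5518", 60.0),
]
MAX_TDP_WATTS = 5.0   # off-grid power budget for the controller

candidates = [name for name, tdp in processors if tdp < MAX_TDP_WATTS]
```

The same pattern extends naturally to multiple criteria (cost, I/O, temperature range) by adding conjuncts to the filter predicate.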


Having selected six candidate processors, we now need to verify that the processors are capable of performing the necessary algorithms. This development task must be completed quickly, with minimal investment in hardware and software before the final selection is made. In a later blog we’ll use VirtuaLab to perform the selection with minimal investment of both time and money.


If you were choosing a candidate embedded processor, what criteria would you use? Can we use one criterion for processor evaluation?

Cell-phone subscriptions are reaching saturation, causing revenue from mobile voice services to plateau. However, mobile-data revenue is still in the early-adopter phase and is ramping up. Figure 1 below shows IDC estimates for mobile voice and data revenue growth: voice revenue growth is stalling, while mobile broadband data service revenue grew well between 2008 and 2009, with significantly more growth predicted for 2010 and beyond.



Figure 1: IDC predicts little mobile voice services revenue growth but good growth for mobile broadband data services revenue


Consequently, the communication industry recognizes that mobile broadband services will be the engine for future revenue growth, and it is now developing suitable infrastructure to better handle those services. Existing 3G (third-generation) networks are increasingly strained by growing data traffic, so operators worldwide have announced plans to move from 3G to higher-speed LTE (long-term-evolution) networks, which deliver data rates in excess of 100 Mbps and promise a fivefold increase over 3G HSPA+ (high-speed-packet-access) networks. However, these network upgrades alone won't deliver sufficient data throughput to guarantee good service to all users based on real customer-traffic patterns.


Bandwidth improvement alone is not sufficient. Additional technologies such as DPI (deep packet inspection) will be required to ensure that prioritized and managed traffic optimizes the mobile user’s experience. DPI expands effective network capacity by managing and optimizing data traffic. It goes far beyond the simple examination of IP (Internet Protocol) headers to determine traffic routing, examining packet contents to determine the actual use of each packet. Not all packets are equal: is it an e-mail, Web, video, or P2P (peer-to-peer) packet? Some packets (such as video packets) demand short delivery latency, while user experience suffers little from short delays in e-mail and HTML Web packet delivery.
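A toy illustration of the idea: match the first payload bytes against known application signatures. The signature table here is a deliberately simplified placeholder, nothing like a production DPI engine, which uses rich pattern sets, flow state, and statistical heuristics:

```python
# Simplified payload-signature matching for illustration only.
# Signature list is a tiny hypothetical sample.
SIGNATURES = [
    (b"GET ", "web"),
    (b"\x13BitTorrent protocol", "p2p"),
    (b"EHLO ", "email"),
]

def classify(payload: bytes) -> str:
    """Guess the application class from the first payload bytes."""
    for signature, app in SIGNATURES:
        if payload.startswith(signature):
            return app
    return "unknown"
```

Once a packet is classified, the network can apply the appropriate queueing and latency policy instead of treating all traffic as generic Web traffic.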


Developers of 3G and LTE networks employ a single shared data channel for all subscribers in a given cell. When these networks were first designed, carriers assumed that users would employ mobile data primarily for “bursty” activities such as Web surfing and e-mail. A shared data channel provides the same high-bandwidth pipe for video downloads, voice, e-mail, and static HTML Web page traffic. Shared data channels perform poorly for the large sustained transfers of streamed data such as video.


High-bandwidth streams can overfill the communication channel beyond capacity, resulting in dropped packets and long latencies for all cellular traffic. With the growth in mobile data use, video and P2P exchanges, with their large sustained transfer requirements, are increasingly common. According to a forecast by Cisco (see Figure 2), video and P2P traffic currently accounts for 60% of all data and will grow rapidly.




Figure 2: Cisco predicts that video service traffic will continue to grow rapidly.


In the early days of the Internet, IP headers clearly designated the target application but most traffic today simply looks like Web traffic based on header information. By using DPI, networks can develop a better understanding of how customers are using the service and can deliver packets using the right QOS (quality of service) criteria for better end-user experience.


DPI technology is simple in concept but complex in practice. Conceptually, inspecting a packet to determine subscriber and application type and then acting on that information looks easy. However, network line rates and rapidly evolving applications add complexity. Based on present data rates, packet rates are already staggering. One 10-GbE (gigabit-Ethernet) channel can carry 30 million minimum-sized packets/sec. Even with a realistic traffic profile and 200-byte packets, a 10-GbE channel carries 10 million packets/sec. At that speed, there’s only 100 nsec to receive and inspect each packet, determine its application, modify it if necessary, and forward it to the next node in the network. A 3-GHz, single-core processor can only execute 300 instructions in that amount of time, which is not enough to receive the packet let alone inspect and process it. This harsh reality is driving the rapid adoption of multicore, multithreaded processors for packet inspection. A multicore processor running at a modest 1-GHz clock rate can handle the workload by using multiple threads to attack the problem.
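The per-packet budget arithmetic can be reproduced directly. This sketch assumes both directions of a full-duplex 10-GbE link and 20 bytes of per-frame overhead (preamble plus inter-frame gap), assumptions of mine that land close to the round figures above:

```python
# Per-packet time budget on a 10-GbE link. Full-duplex (both directions)
# and 20 bytes of frame overhead are assumptions, not from the article.
LINK_BPS = 10e9          # one direction of the link
OVERHEAD_BYTES = 20      # preamble + inter-frame gap per frame

def packets_per_sec(frame_bytes: int, directions: int = 2) -> float:
    """Packets/sec across a full-duplex link at the given frame size."""
    wire_bits = (frame_bytes + OVERHEAD_BYTES) * 8
    return directions * LINK_BPS / wire_bits

pps_min = packets_per_sec(64)              # ~29.8M pps, minimum-sized frames
pps_avg = packets_per_sec(200)             # ~11.4M pps, 200-byte packets
budget_ns = 1e9 / pps_avg                  # ~88 ns per packet
instructions_3ghz = 3e9 * budget_ns * 1e-9 # ~264 instructions at 3 GHz
```

Under a hundred nanoseconds and a few hundred instructions per packet is the budget that a DPI engine must live within, which is why the problem is partitioned across many cores and threads.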


ATCA board vendors are starting to release multicore server blades for LTE network development that incorporate multiple Intel® Xeon® multicore embedded processors based on the Nehalem architecture. For example, Continuous Computing’s multicore XE60 Dual Nehalem Blade (see Figure 3) sockets two high-performance Intel® Xeon® 5500 series processors, permitting the board to carry as many as eight processor cores in two quad-core processors. The XE60 also accommodates as much as 64Gbytes of memory in eight DDR3 DIMM sockets and as many as four hard drives directly attached through on-board SAS/SATA interfaces.




Figure 3: Continuous Computing’s XE60 Dual Nehalem Blade supports as many as eight processor cores and 16 threads


Figure 4 shows a block diagram of GE Fanuc’s GEFIP A10200 ATCA Dual Nehalem Blade single-board computer. It too combines two Intel® Xeon® 5500 series processors for as many as 16 threads running on eight on-board cores using two quad-core processors. Each processor has direct access to four DDR3 DIMM sockets that can hold as much as 64Gbytes of DRAM in total.




Figure 4: GE Fanuc GEFIP A10200 Dual Nehalem ATCA Single-Board Computer


By using DPI, network operators can create and administer service plans with different delivery terms and rate structures that will attract a wider subscriber base. Operators can optimize some low-data-rate service plans for Web surfing and e-mail sessions using only 64-kbps channels with a tight bandwidth cap on faster traffic. Another service plan might offer 250-kbps, YouTube-style video streaming but limit high-definition, multi-megabit video streaming. Yet another service plan might be designed to attract multiplayer-game enthusiasts, offering low latency for gaming packets. Corporations might choose a premium service package with traffic priority for e-mail, CRM (customer-relationship management), and other corporate applications. A service for P2P users could offer unlimited bandwidth during off-peak hours but tightly cap P2P bandwidth during peak-usage times. By offering diverse plans and fee structures, network operators can manage network traffic while attracting more customers, which will accelerate the subscriber and ARPU (average revenue per user) growth they perpetually seek.
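Mechanically, plan tiers like these are typically enforced with a per-subscriber rate limiter such as a token bucket. A minimal sketch using the 64-kbps figure from the hypothetical Web/e-mail plan above; the burst allowance is my assumption:

```python
class TokenBucket:
    """Classic token-bucket rate limiter: tokens accrue at rate_bps up
    to burst_bits; a packet passes only if enough tokens remain."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate_bps = rate_bps
        self.burst_bits = burst_bits
        self.tokens = burst_bits     # start with a full bucket
        self.last_time = 0.0

    def allow(self, packet_bits: int, now: float) -> bool:
        elapsed = now - self.last_time
        self.tokens = min(self.burst_bits,
                          self.tokens + elapsed * self.rate_bps)
        self.last_time = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False                 # drop, queue, or deprioritize

# 64-kbps plan with an assumed 2-second burst allowance
plan = TokenBucket(rate_bps=64_000, burst_bits=128_000)
```

Each service tier simply gets a bucket with different rate and burst parameters, and time-of-day plans (like the off-peak P2P example) swap parameters on a schedule.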


Are you designing systems for LTE networks or any other applications that make good use of multicore processing? What have you discovered about system design with multicore processors?


Note: Continuous Computing and GE Fanuc are Associate members of the Intel® Embedded Alliance.




Mike Coward, “Deep Packet Inspection Optimizes Mobile Applications,” Continuous Computing



Steve Leibson

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

In a December 9, 2009 E-cast on “Point-of-Care Terminals, Lab Equipment, Diagnostic and Therapeutic Devices: Learn How These Low-Power Medical Devices and Others Benefit from Intel® Architecture,” sponsored by the Intel® Embedded Alliance, a panel of guests presented trends and ideas in medical designs using long-life Intel embedded processors, followed by a Q&A session.


Don Dingee, editorial director of Embedded Computing Design, moderated the event for OpenSystems Media. Panelists presenting were:

  • Lisa Miller, Market Development Manager, Intel Embedded & Communications Group
  • Joseph Chung, Product Manager, Medical Computing Products, Advantech
  • Jack London, Product Manager, Kontron Embedded Modules Division

Don opened the E-cast with ideas from a December 2009 interview with Rick Cnossen, from Intel’s Digital Health Group and the Continua Health Alliance. Cnossen speaks of the trend toward connected healthcare devices and solutions, and not only among healthcare providers trying to reduce costs and improve care.  “Connected health solutions also empower individuals to play a greater role in their own health and fitness by giving them the tools, knowledge and motivation required for the success of preventive measures,” he said. Don pointed out that standards are shaping up around Ethernet, Wi-Fi, Bluetooth, USB, and ZigBee as the networking technologies.


Lisa Miller of Intel identified several macro trends: a shortage of healthcare professionals, rapidly rising costs of healthcare, demographic shifts toward aging populations and those with chronic conditions, and the broader adoption of electronic health records (EHRs). “Continuous [patient] monitoring will become more prevalent, and that’s going to translate into real-time, robust, two-way communication pathways between an individual and a healthcare practitioner. Devices are going to contain more intelligence, so they can scale and provide efficiency … They will also have more intelligence so they can be used by less-trained practitioners to help address that shortage.”  Lisa reinforced the point of connected, portable devices helping improve care, and how Intel Architecture processors can help build those devices.


Joseph Chung of Advantech, a Premier member of Intel® Embedded Alliance, opened by discussing how electronic health records are changing the way solutions are designed and used, with billions of dollars flowing into EHRs over the next five years. Connected devices will be critical to putting records, patients, and healthcare practitioners together. He also spoke of the trend toward more mobile solutions with wireless connectivity, including those for computerized physician order entry (CPOE), and the need for security, such as that called for by HIPAA, to go along with this connectivity. He talked about how Advantech designs to meet IEC and FDA regulations. He then outlined examples of mobile clinical and bedside devices designed by Advantech, looking at connectivity, user interface, and battery characteristics – and how these solutions are enabled by Intel technology.


Jack London of Kontron, a Premier member of the Alliance, followed by describing the challenges in designing these new medical devices: new software requirements, higher resolution images and faster frame capture rates, long development and approval processes for devices, and security, integrity, and privacy requirements. He then outlined the digitization of healthcare, and a range of different applications emerging with more intelligent technology inside. He showed an interesting matrix of applications versus enabling Intel processor technology, and presented several proof points of those applications using Kontron modules based on Intel architecture processors.


After these overviews, the Q&A portion of the E-cast began. Following are excerpts from that session.

To see and listen to the entire event:

Register to view the archived event, with slides and audio:

Download the podcast with audio only:




DD: What’s really the better solution to deliver value into the medical market: is it bedside terminals, is it mobile clinical assistants – what are customers telling you?

LM:  I think it really depends on specifically what you are looking for. Bedside terminals offer value in being a “triple play” type of solution, where you can provide patient education, patient entertainment, as well as a clinical aspect. Mobile clinical assistants are really a personal device for a clinician to help enable EMR.

JC: Most of our customers are looking at mobile solutions, for field diagnosis and mobile in-facility applications.


DD: Most hospital or clinic IT systems are running on Intel Architecture. Ethernet and Wi-Fi can integrate hardware, but there’s more than just hardware connectivity – what are you seeing as far as integrating these devices with the rest of the hospital IT systems?

LM: A lot of the integration is based on software. Where I was coming from was if you’ve got more of a common hardware base, your validation and verification may be superior. When you have Intel Architecture, you’re going to get the same implementation of that architecture every time you see it. With Intel Architecture, you’ve got a lot of people using it, testing it, probing it – the probability of finding any issues or vulnerabilities that might exist, and finding them earlier, is higher. From a software perspective, with the common hardware architecture, your investment is somewhat lessened in terms of trying to integrate things together.


DD: Let’s let Kontron and Advantech go a little deeper into that question. We’ve got a question on ETSI compliance, and how maybe it would require recertification of a system depending on what’s going on in a specific application. How does Kontron support customers going through something like ETSI compliance?

JL: We try to keep pace with our customers and walk through their certification process with them, providing information about particular modules, whether it has to do with power consumption, EMI, connectivity, whatever is needed from a system standpoint. Sometimes it can take a couple iterations, but we want to work with the customer to provide the maximum information for a successful certification.

DD: Advantech has a viewpoint as well …

JC: We have customers doing, for example, anesthesia, infusion pumps, ventilators – we help them with passing ETSI approval by modifying some circuitry, and providing technical documentation. We also have solutions for EMC with FCC Class B and IEC certifications.


DD: What are you seeing as the operating system trend in medical devices?

LM: There’s a lot of Windows-based devices, there’s a lot of Linux, and more RTOS is coming on. When you’re talking life critical, an RTOS is probably the direction you’re going to go. When you’re talking about a very rich user interface, you may see something else. It depends on the specific application.

JC: Still the majority of our customers are using Windows Embedded, and some customers are trying to look into Windows 7 for embedded. About 30% of our customers are using Linux, and something like Moblin or Fedora.

JL: We’re seeing a lot in Windows-based applications. We can support Linux and things like QNX as well, but the largest percentage of our customers is in the Windows environment.


DD: We know some of the big iron, like MRI machines, big expensive pieces of gear, have very, very long lifecycles – even longer than 10 years. But I imagine for the smaller mobile, lower powered gear, there is some trend toward shorter lifecycles. Talk about what you are seeing and how you are supporting those trends.

JC: Most of our customers are still looking for longevity, at least 5-7 years, and we select components including those off the Intel embedded roadmap. We do see more and more customers looking for longer lifetimes, and after 5 years we need to consider lifetime buys of components.

JL: We look at the component selections, and key off the Intel embedded roadmap to pick a processor to ensure long life. In parallel, there are other components on our boards, and we partner with other strategic suppliers, stable suppliers who have been around for a long time and also understand the medical environment and the need to support long life. We can extend the life of a product usually beyond component availability through a couple different methods: for example, through our program management, we’re able to extend life upwards of 7-10 years. It has to be a very disciplined effort to ensure long life. We are seeing a faster turnover because of the rapid pace of change in the technology, and for this reason we go with computer-on-module (COM) technology so the computer can scale with the application as the processor performance improves.

LM: There needs to be a balance for designers and manufacturers of medical devices between taking advantage of the latest technology and protecting the investment made in developing a particular device on that platform, and all the regulatory obstacles they need to overcome in getting to market. In some instances, the trend is getting longer, 10+ years. In some instances, where it may not be specifically a clinical device, like something not regulated by the FDA, you’ll see significantly shorter lifecycles.




As healthcare becomes more digital, and more and more devices become connected, the trends looked at today will become the reality of tomorrow’s low power medical devices. Designers can look to Intel and their Alliance partners for solutions and expertise in implementing these devices.


What are your ideas on this topic? We welcome comments and discussion on trends, needs, and unique applications in the medical field.


