
Market Applications


As factory infrastructures transition to cloud compatibility and the number of interconnected intelligent devices grows, a host of company benefits and new business opportunities become possible. For example, the large volumes of data generated by a fully connected enterprise can be used to predict failures, simplify management, reduce support costs, and even generate new revenue streams from smart services. In addition, with cloud-based communications, device manufacturers can offer a variety of after-sales services, including remote product updates, failure detection, and on-site repairs, to reduce customer support costs and staffing. However, to reach the goal of a fully unified manufacturing facility, design teams must overcome a variety of equipment fragmentation problems caused by the incompatible connectivity, management, and security technologies employed in an assortment of embedded devices.
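To make the failure-prediction idea concrete, here is a minimal sketch, not any vendor's algorithm: it flags readings that drift sharply from their recent trailing average, the kind of signal a predictive-maintenance service might act on. The sensor values, window size, and threshold are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=5, sigmas=3.0):
    """Flag readings more than `sigmas` standard deviations away
    from the trailing window: a minimal predictive-maintenance signal."""
    flags = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        if abs(samples[i] - mean(base)) > sigmas * stdev(base):
            flags.append(i)
    return flags

# Hypothetical vibration readings (mm/s); the last sample spikes
vibration = [0.80, 0.82, 0.79, 0.81, 0.80, 0.80, 2.90]
print(flag_anomalies(vibration))  # [6]
```

In a real deployment the flagged indices would feed a maintenance queue or a cloud-side alerting service rather than a print statement.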


To address the fragmentation and product interoperability issues in the intelligent systems market, Intel recently announced the Intel® Intelligent Systems Framework. The framework is a set of standard practices for hardware and software development that simplifies connecting, managing, and securing embedded devices and data in a standardized, reliable, and scalable manner. With this new framework in place, designers can eliminate the detailed analysis and verification of connections and functionality as new devices are added to a network. The basic capabilities are provided by software and middleware from Intel, Wind River, and McAfee, plus the hardware features of supported processors including Intel® Xeon® processors, 2nd and 3rd generation Intel® Core™ processors with Intel® vPro™ technology, and Intel® Atom™ processors. To verify adherence to the standard practices outlined in the framework, Intel validates hardware and software components from members of the Intel® Intelligent Systems Alliance to build an inventory of Intelligent Systems Framework-Ready Products.


Alliance member Eurotech offers a number of industrial-grade products targeting factory applications that have been tested for readiness with the Intel® Intelligent Systems Framework. For example, the Eurotech Helios Programmable Edge Controller provides a flexible hardware platform with enhanced wired or wireless connectivity that enables simplified data acquisition and integration (See figure 1). The controller can be used to collect data from multiple distributed devices, summarize or condense the readings, and then transmit the information to a central processing center. The Helios platform is powered by an Intel® Atom™ Z5xx processor at 1.1 to 1.6 GHz and is programmable to run customer-specific application software on the Wind River Linux 3.0, Windows Embedded Standard, or Windows CE 6.0 operating systems. Connectivity choices allow users to select wired Gigabit Ethernet or pre-certified wireless modules for cellular, Bluetooth, and other protocols, depending on technology or carrier preferences. Other I/O options include a multimedia card for display and audio, USB 2.0 and serial ports, and a GPS receiver plus antenna.
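The collect, condense, and transmit pattern described above can be sketched in a few lines. This is an illustrative model only, not Eurotech code; the sensor names and summary fields are invented for the example.

```python
from statistics import mean

def condense_readings(readings):
    """Summarize raw samples from distributed devices into one compact
    record suitable for transmission to a central processing center."""
    return {
        sensor: {
            "min": min(samples),
            "max": max(samples),
            "avg": round(mean(samples), 2),
            "count": len(samples),
        }
        for sensor, samples in readings.items()
    }

# Hypothetical raw samples gathered by an edge controller
raw = {
    "line1_temp_c": [71.2, 71.9, 73.4, 72.1],
    "line1_vibration_mm_s": [0.8, 0.9, 2.6, 0.7],
}
summary = condense_readings(raw)
print(summary["line1_temp_c"]["max"])  # 73.4
```

Condensing at the edge like this is what keeps upstream bandwidth and cloud storage costs down: four samples per sensor shrink to one summary record.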




Eurotech has also developed proprietary technologies that are compatible with the Intel® Intelligent Systems Framework and further simplify intelligent device development. From the communications perspective, the Everyware Cloud is a software platform that provides a simplified method to connect cloud-ready devices to enterprise systems and applications (See figure 2). The platform provides low-bandwidth, cloud-based communications, storage, and analysis capabilities, allowing designers to lower network costs with an open, optimized protocol for device data transport. All the platform features are accessible through a standard application programming interface (API), so that remote machine-to-machine (M2M) data can be easily integrated with information technology applications. The Everyware Cloud technology enables customers to make on-the-spot business decisions based on real-time data coming from the intelligence at the edge of the network.


The Everyware Software Framework (ESF) is another Eurotech technology that provides an integrated hardware and software infrastructure to enable application-ready platforms (See figure 3). ESF is an inclusive software framework that puts a layer between the operating system and the customer application, with industry-standard interfaces that shorten custom development time, simplify coding, and deliver software that can be easily ported from one Eurotech hardware platform to another. Device drivers are approached as services, implemented using simple APIs instead of complex code, removing the need to modify low-level operating system code or configuration files. Because ESF is based on standardized open software backed by large industry companies, developers can easily port existing applications from third parties to their own devices.
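The driver-as-a-service idea can be sketched generically as a service registry that hides driver details behind a small interface; applications ask for a service by name and never touch low-level code. This sketch is purely illustrative and not ESF's actual API; all class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class SensorService(ABC):
    """A device exposed as a service: applications call this API and
    never modify OS-level driver code or configuration files."""
    @abstractmethod
    def read(self) -> float: ...

class MockTemperatureService(SensorService):
    """Stand-in implementation; a hardware-backed driver would register
    under the same interface, leaving application code unchanged."""
    def read(self) -> float:
        return 72.5

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name: str, service: SensorService):
        self._services[name] = service

    def get(self, name: str) -> SensorService:
        return self._services[name]

registry = ServiceRegistry()
registry.register("temperature", MockTemperatureService())
print(registry.get("temperature").read())  # 72.5
```

Because the application only depends on the `SensorService` interface, porting it to another platform means registering a different implementation, not rewriting the application.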


The Eurotech Helios Programmable Edge Controller and software technologies discussed here are excellent examples of industrial products already available from alliance members that are compatible with the Intel® Intelligent Systems Framework. These products allow intelligent systems developers to immediately start extracting value from M2M data and bypass the lengthy verification of interconnectivity, system security, and remote management. You can keep up with the latest products and articles on the Intel® Intelligent Systems Framework at the Top Picks webpage.  If you are ready to start an intelligent industrial automation project requiring remote management and security, please share your questions and comments with fellow followers of the Intel® Embedded Community.


Learn More

Solutions in this blog:

Related topics:

Eurotech is an Associate member of the Intel® Intelligent Systems Alliance.

     Contact Eurotech>>


Warren Webb
OpenSystems Media®, by special arrangement with the Intel® Intelligent Systems Alliance

Between two and three hundred people attended the ETSI Network Functions Virtualization (NFV) meeting in Santa Clara last month. As expected, there was a strong turnout from the telecom service providers who started the NFV initiative in late 2012, while many other companies were also represented such as network equipment manufacturers, system integrators, software companies and processor suppliers.

The overall impression was that key service providers are strongly committed to NFV, with the potential of interesting new business opportunities for a range of other players in an ecosystem that will have to include providers of hardware, software and services.

In terms of the business benefits of NFV, most of the discussion to date appears to be around the CAPEX and OPEX savings that can be achieved through the virtualization of functions that have traditionally been implemented as stand-alone, dedicated, fixed-function equipment.

One use case that’s frequently discussed is a “virtual CPE”, with an architecture that relies on a very simple CPE (modem, switch and antenna) with all services relocated to virtual network appliances on a central server. Another example is the cloud RAN (Radio Access Network) concept, in which the eNodeB at the antenna is replaced by a low-cost Remote Radio Head, with digitized RF data backhauled over fiber to a centralized Baseband Unit pool in which the baseband processing is performed. Finally, Evolved Packet Core (EPC) functions for mobile networks can be run in Virtual Machines (VMs) on generic server platforms.

Each of these scenarios promises to reduce both CAPEX and OPEX for service providers, though there’s still healthy debate as to whether a simple substitution of virtualized functions for dedicated equipment really reduces costs enough, once new operational expenses are factored in.

I was surprised not to hear more at the ETSI meeting about how NFV can enable new services, especially cloud-based applications, which would deliver real value to subscribers. These would seem to be the key to increasing Average Revenue per User (ARPU) and driving top-line growth in the P&L, rather than just reducing expenses through virtualization. On reflection, though, carriers probably view new services and applications as their proprietary secret sauce and not the kind of topic that they want to talk about in a room full of their competitors.

At this point in the evolution of NFV, the working groups and “expert groups” are mainly focused on network management and orchestration topics. This makes perfect sense given the extreme complexity of legacy telecom networks and the challenges of migrating these control-related systems to a completely new software-based architecture. And, of course, the concept of “Carrier Grade reliability” reflects expectations that we all have, as subscribers with the freedom to switch providers as soon as our contracts are up.

Before too long, though, we’ll need to talk about a fundamental data plane issue that must be addressed in order for any NFV implementation to be cost-effective.

Packet processing is a key function that dominates the processing workload for many telecom network subsystems. These include the CRAN and CPE solutions mentioned above, as well as several key EPC functions such as the Mobility Management Entity (MME), Packet Gateway, Security Gateway and Serving Gateway. In the case of traditional non-virtualized equipment, standard Operating System networking stacks provide poor performance for packet processing because of overheads and latencies within the kernel, so many equipment suppliers have adopted solutions such as 6WIND’s software, which solves that problem through a fast path architecture.

Virtualized implementations present additional challenges for packet processing. Virtualizing hundreds of CRAN, CPE or complex EPC functions on a single server requires high performance network bandwidth to be distributed to the virtualized functions. Standard virtualized architectures that work well for computing applications are unable to deliver the required performance for these demanding network applications. Bottlenecks in the hypervisor, the virtual switch and the VMs themselves can degrade overall networking performance by an order of magnitude for virtualized functions when compared to physical implementations. This is a potential showstopper in terms of the cost-effectiveness of NFV architectures, for which cost-per-subscriber will be a critical metric.

Fortunately, this problem can be solved. Through the use of techniques such as IO Virtualization (IOV), virtual NIC (vNIC) drivers and direct VM-to-VM (VM2VM) communication, as well as a fast path data plane implementation, it’s possible to bypass the bottlenecks. Advanced software solutions can deliver the high networking performance that’s required in order to achieve the cost reductions that are the basis of the NFV concept.
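A toy cost model, with entirely hypothetical numbers, shows why amortizing fixed per-packet overheads (kernel crossings, hypervisor exits, virtual switch hops) through batching or bypass matters so much for virtualized packet processing:

```python
def total_cost_us(packets, fixed_overhead_us, batch_size, work_us=0.05):
    """Toy model: each batch pays one fixed overhead (e.g. a kernel
    crossing or VM exit); each packet pays its own processing work.
    All numbers are illustrative, not measured."""
    batches = -(-packets // batch_size)  # ceiling division
    return batches * fixed_overhead_us + packets * work_us

# One million packets, 2 us of fixed overhead per crossing (hypothetical)
per_packet = total_cost_us(1_000_000, fixed_overhead_us=2.0, batch_size=1)
batched    = total_cost_us(1_000_000, fixed_overhead_us=2.0, batch_size=32)
print(round(per_packet / batched, 1))  # 18.2
```

Even this crude model shows an order-of-magnitude gap, which is consistent with the degradation described above: when the fixed overhead dominates the per-packet work, eliminating or amortizing it is where almost all of the performance comes from.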

As an example of this, at Open Networking Summit during the week before the recent NFV meeting, 6WIND demonstrated a solution that delivers a 10x improvement in the performance of the standard Open Virtual Switch (OVS), which performs the critical switching function required to deliver high-bandwidth network traffic to VMs.

So while most of the NFV attention and conversations are currently centered on management and orchestration issues, let’s not forget that sooner or later we’ll need to make sure the data plane performance challenges are addressed too. This will be a critical factor in ensuring that NFV architectures do indeed deliver the CAPEX and OPEX reductions that are promised. And then we’ll have low-cost, high-performance platforms for the delivery of innovative new services for subscribers, which can be the basis of increased service provider revenues.


As industrial designers incorporate remote, fully interconnected factory equipment to reduce costs and simplify maintenance, the exposure to data disruptions and cyber attacks becomes apparent. According to the 2013 Internet Security Threat Report published by Symantec Corporation, manufacturing was the most-targeted sector in 2012 with 24 percent of all attacks, compared with 15 percent in 2011. In addition, the research detected a 42 percent overall increase in cyber attacks in 2012 compared to 2011. Similarly, Verizon recently released the 2013 Data Breach Investigations Report (DBIR), which found that manufacturing is the most-targeted industry (33 percent) in attacks motivated by industrial espionage. In these attacks, cyber thieves were trying to access intellectual property, product designs, and trade secrets. The DBIR analyzes data from 19 organizations, covering more than 47,000 reported security incidents and 621 confirmed data breaches from the past year. The manufacturing sector is also one of 18 critical infrastructure sectors established under the United States Department of Homeland Security National Infrastructure Protection Plan to enable assessment of national, cross-sector critical infrastructure protection and resilience programs.


To deal with these potential disruptions, designers are devising techniques to protect or harden important and vulnerable elements of the industrial infrastructure. In fact, security precautions have changed the basic design goals for many industrial embedded devices. Designers are no longer motivated simply to produce the simplest, lowest-cost device for each project but must now strengthen systems with faster, more capable processors, secure data storage, and tamper-proof communications to protect the software structure and data while executing the manufacturing function. To assist in this new industrial design approach, Intel has devised an overall framework combined with state-of-the-art architecture to provide multiple security strategies that designers can employ to defend individual devices and the overall plant infrastructure.

One of the most recent security enhancement announcements is the Intel® Intelligent Systems Framework, which is used to simplify connecting, managing, and securing embedded devices (See figure 1). The framework eliminates the man-hours spent analyzing and verifying interconnections and secure operation as new devices are added to an existing network. It combines processor architecture, operating systems, and other software to create ready-to-run, secure, and interoperable platforms for intelligent systems. Addressing security, the framework provides flexible recipes using scalable, off-the-shelf elements for platform, software, and data protection. For example, the firmware BIOS must be based on the Unified Extensible Firmware Interface (UEFI) specification version 2.1 or greater to enable Extensible Firmware Interface (EFI) Development Kit support for measured boot and secure boot. The framework also includes software and operational middleware from Wind River, McAfee, and Intel for secure communications and manageability without sacrificing performance.



The framework leverages system processors with Intel® vPro™ Technology (Intel® vPro) providing built-in hardware support for remote management, virtualization, and platform security functions. This technology allows designers to activate, reconfigure, and if necessary, deactivate a remote embedded system. Intel® vPro™ technology capabilities are embedded in hardware so they can be accessed and administered independently from the operating system and software applications. Intel® vPro™ includes several features that provide hardware support for platform security. Intel® Active Management Technology (Intel® AMT) delivers certificate-based security allowing remote access to the embedded system for management and security tasks to minimize attack opportunities along with complete isolation for rapid repair of compromised systems. Intel® Virtualization Technology (Intel® VT) increases system performance and reliability with hardware support for virtualization software to safely separate and isolate security-critical applications running on the same system. In addition, Intel® Trusted Execution Technology (Intel® TXT) protects embedded devices against rootkit and other system level attacks with an industry-standard Trusted Platform Module device to store key encryption components and protected data.


A low risk way to take advantage of the security features of Intel® Intelligent Systems Framework and Intel® vPro™ technologies is to integrate products that have been certified or tested for readiness. For example, Avalue subsidiary BCM Advanced Research has announced a series of industrial computers that are designed to support the framework, Intel® AMT and Intel® TXT technologies.  Based on Mini-ITX form factor motherboards, the BI255-67QMD series is equipped with the Intel® QM67 Express Chipset and supports Intel® 2nd generation Mobile Core™ i7, Core™ i5, Core™ i3 or Celeron® processors (See figure 2).



Intel® Intelligent Systems Alliance members also provide a variety of off-the-shelf boards and systems that support the Intel® Intelligent Systems Framework and Intel® vPro™ technologies. For example, the Advantech AIMB-273 Mini-ITX motherboard and the Portwell PCOM-B219VG COM Express module are framework-ready products supporting industrial embedded applications. These products allow industrial designers to start new projects with pre-tested hardware and software components to ensure that devices and systems include security provisions for built-in threat management. If you are ready to start an industrial project requiring a protected infrastructure, please share your questions and comments with fellow followers of the Intel® Embedded Community.


Learn More

Solutions in this blog:


Related topics:


Advantech is a Premier member of the Intel® Intelligent Systems Alliance. Contact Advantech>>

Portwell is a Premier member of the Intel® Intelligent Systems Alliance. Contact Portwell>>

Avalue-BCM is an Associate member of the Intel® Intelligent Systems Alliance. Contact Avalue>>

Wind River is an Associate member of the Intel® Intelligent Systems Alliance. Contact Wind River>>

McAfee is an Associate member of the Intel® Intelligent Systems Alliance. Contact McAfee>>


Warren Webb
OpenSystems Media®, by special arrangement with the Intel® Intelligent Systems Alliance

This week at a customer event co-located with Interop, Dell and Intel announced that Dell OEM Solutions is now a Premier member of the Intel® Intelligent Systems Alliance. Troy Smith, Director of Intel® Intelligent Systems, welcomed the company to its new role, saying “As a member of the Alliance, Dell OEM has been a valuable partner in supporting Intel’s efforts to enable smart, connected systems. We are excited to promote Dell OEM to our highest level of membership and look forward to growing our collaboration with them.”


As a Premier member, Dell now has privileged opportunities to work with Intel through executive, architectural, joint marketing, and sales relationships. In addition, Dell OEM gains the highest level of access to Intel design and development expertise, helping align its technical roadmaps and programs with Intel’s. By optimizing the entire development lifecycle of Intel® architecture-based products, Dell OEM offers reduced risk and lower development costs, improved access to leading-edge solutions, and faster time to market.


Intel and Alliance members like Dell OEM are focused on three key areas:


  • Systems: Billions of devices are connecting to the Internet, each other and the cloud, transforming the industry from fixed function and isolated embedded systems to a new category of intelligent systems. These systems are secure, connected, managed and demand greater performance.
  • Experiences: Intel and the Alliance view 2013 as the year of accelerated transition to intelligent systems. As connectivity becomes even more pervasive, intelligent systems will continue to demand more performance to bring richer experiences and become more fundamental to our daily lives.
  • Data: Intel and Alliance members are helping to simplify the deployment of the Internet of Things (IoT) to enable customers to shift investment from interoperability to extracting greater value from their data.


This three-pronged approach will lead to lower development costs, easier integration, and greater ability to enable innovative services. The end goal is an easier, faster path to making IoT and Big Data a reality.


Dell OEM is a division of Dell that specializes in helping global customers accelerate their time to profit with customizable tier-1 OEM technologies developed by their dedicated team of experts. Dell supports OEMs who build a wide array of their own hardware-based products and need high-quality, stable and well-engineered technology to run their solutions. Dell OEM markets to a wide range of industries, including aerospace and defense, telecommunications, healthcare, industrial, retail, and consumer interaction. Customers trust Dell OEM to deliver world-class hardware and services to serve as the backbone of their solution.


Dell OEM was most recently an Associate member of the Alliance. The Alliance is made up of four tiers, with the highest being “Premier,” reflecting the closest alignment with Intel. From modular components to market-ready systems, Intel and the 250+ Member companies of the Alliance provide the performance, connectivity, manageability and security developers need to create smart, connected systems. There are four Premiers today, including Dell OEM.


This is an exciting and pivotal time for the industry. Watch this space, as well as Alliance publications, particularly the Embedded Innovator magazine and newsletter series, for all the great things to unfold from this latest addition to the Premier tier of the Alliance.

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Editor-In-Chief Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

I love this disclaimer that is often included in data sheets for computers: “No computer system can provide absolute security under all conditions.” How true this statement is, yet many military and aerospace computing platforms are embedded in mission-critical applications that must be secure. Intrusion by interlopers can bring a mission down, and the increasing mobility of electronic platforms makes the challenge even more daunting. What are the major security issues designers need to be concerned with when developing embedded platforms, and how can you reduce the risk of security intrusions?


Computers are used in military and aerospace applications ranging from IT to UAVs. The IT applications use server technology with well-established security protection built into the architecture, but a UAV contains many embedded computing platforms that cannot use much of that IT technology. In past years, protecting real-time embedded platforms was a lower priority because each system was isolated; now, with more and more data being transmitted and stored electronically, security is of paramount importance in embedded computing platforms. As platforms become more intelligent and connected, often via wireless networks that are especially vulnerable, the concern for security has gained even more prominence.


Security Issues

Security is such a huge topic that it is hard to decide where to begin. I won’t be able to completely do the topic justice but at least I would like to be sure that your eyes are open to the basics. Risk is everywhere, hence the disclaimer at the introduction. Just when you think you might have a secure system, someone finds a vulnerability and brings down your platform. Short of a totally isolated system built with the highest degree of caution, a secure system is nothing but a dream. But understanding the types of security threats can at least help set some priorities and manage the risk.

In his book Security for Ubiquitous Computing (John Wiley & Sons), Frank Stajano uses a well-established taxonomy to subdivide computer security threats into three categories, according to whether they threaten confidentiality, integrity, or availability. Breaking security into these elements makes the evaluation of potential solutions easier and more effective.


Stajano describes confidentiality as the property that is violated whenever information is disclosed to an unauthorized principal. That may be a person or another computing device, either is relevant. Integrity is violated whenever the information is altered in an unauthorized way. It may be altered at a host or in transit between devices. Availability is the property of a system which always honors any legitimate requests by authorized principals. It is violated when an attacker succeeds in denying service to legitimate users, typically by using up all the available resources.
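An integrity violation in Stajano's sense can be demonstrated with a one-line fingerprint check. This sketch uses a plain hash, which detects alteration but not a deliberate attacker who can also replace the fingerprint; guarding against that requires a keyed MAC or a digital signature. The message contents are hypothetical.

```python
import hashlib

def digest(data: bytes) -> str:
    """Fingerprint a message; any alteration changes the digest."""
    return hashlib.sha256(data).hexdigest()

message = b"valve_7: OPEN"          # hypothetical control message
fingerprint = digest(message)       # computed before transmission

tampered = b"valve_7: SHUT"         # altered in transit
print(digest(tampered) == fingerprint)  # False: integrity violated
```

The same comparison run on the unaltered message returns True, which is exactly the property Stajano's integrity category protects.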



Security Risk Management.


Security is critical from the hardware through the layers of software all the way to the end application. Each is important to ensure the most secure system possible. To manage your risk, you must be sensitive to security threats through the entire system architecture. Security starts at the processor. A foundation or root-of-trust must be established to provide security services upon which to build a robust security environment. Intel vPro technology was first introduced in 2006 and has been upgraded with several new features since then. It offers a strong foundation upon which to build a secure system. Much has been published on the Intel vPro technology and I leave that to you for further research.


Technology Overview

  • Intel® Trusted Execution Technology (Intel® TXT): Establishes a hardware-based root of trust to defend against software attacks at launch.
  • Intel® Virtualization Technology (Intel® VT): Works with Intel TXT to deliver built-in protection against malware and rootkit attacks.
  • Intel® OS Guard: Delivers automatic “blanket” protection to defend against escalation-of-privilege attacks.
  • Intel® Identity Protection Technology (Intel® IPT) with public-key infrastructure (PKI): Protects access points by working with authentication solutions to support hardware-based storage of tokens or certificates inside the platform.
  • Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI): Encrypts data up to four times faster without slowing performance or interfering with user productivity.
  • Intel® Secure Key: Generates high-quality random numbers to enhance encryption for online transactions.
  • Intel® Anti-Theft Technology: Offers tamper-resistant security to detect potential theft, with the capability to automatically disable PCs.
  • Intel® Active Management Technology (Intel® AMT): Provides remote support for proactive threat management and for diagnosing, isolating, and repairing an infected PC, regardless of operational state.
  • Intel® Platform Protection Technology with BIOS Guard: Provides authentication and protection against BIOS recovery attacks; BIOS updates are cryptographically verified to keep malware out of the BIOS.
  • Intel® Platform Trust Technology and Intel® Boot Guard: Designed to work with Microsoft* Windows* 8, Intel Platform Trust Technology supports secure and measured boot, while Intel Boot Guard protects against boot block-level malware.

The next layer of defense is at the BIOS level. The threat of attacks on the BIOS is growing, with reports of intrusions at the BIOS level becoming more common. The National Institute of Standards and Technology (NIST) has released security guidelines, NIST SP 800-147, that set standards for updating the BIOS, the point at which the security threat is greatest, by requiring authentication of BIOS update mechanisms.
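The gate that SP 800-147 mandates, flash nothing whose authenticity cannot be verified, can be sketched as follows. Real firmware update flows use vendor asymmetric signatures (e.g., RSA) verified by an immutable root of trust; this sketch substitutes a keyed MAC purely to keep the example self-contained, and the key and image contents are hypothetical.

```python
import hashlib
import hmac

# Hypothetical provisioned secret; real BIOS updates verify an RSA
# signature against an OEM public key burned into the platform.
OEM_KEY = b"oem-provisioned-secret"

def sign_image(image: bytes) -> bytes:
    return hmac.new(OEM_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, tag: bytes) -> str:
    """Authenticated-update gate: flash only if the tag verifies."""
    if not hmac.compare_digest(sign_image(image), tag):
        return "rejected"
    return "flashed"

good = b"BIOS v2.1 image"
print(apply_update(good, sign_image(good)))                 # flashed
print(apply_update(b"malicious image", sign_image(good)))   # rejected
```

Note the use of a constant-time comparison (`hmac.compare_digest`) rather than `==`, which avoids leaking information about the expected tag through timing.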


BIOS providers have taken the security challenge seriously and offer suites of products providing multiple levels of security. They support the latest Intel vPro technology, which allows users to manage, inventory, diagnose, and repair their systems in efficient, remote, and streamlined ways, all without compromising system security. The BIOS providers support the NIST SP 800-147 guidelines, and they offer multiple other security options to protect flash and other storage devices. Users prefer to keep as much of the security responsibility as possible at the hardware and BIOS level because that is where the defense is strongest.


Phoenix Technologies’ Phoenix SecureCore offers an impressive suite of security support, while American Megatrends (AMI) features UEFI Secure Boot for enhanced platform security in its UEFI-compliant Aptio V BIOS products.


Operating System

Operating systems play many roles in providing increased levels of security. The most recent advancement became feasible with the introduction of multi-core processors, which make it possible to run multiple operating system instances on a single processor. This has led to hypervisor architectures that can protect key elements of the software environment, and real-time operating system suppliers have made hypervisors a key part of their product portfolios.


Green Hills Software is an excellent example of how important security is to an operating system supplier. The company has a dedicated business unit, INTEGRITY Security Services (ISS), to address the growing need for more secure software. ISS packages the INTEGRITY operating system with a toolkit to provide reliable and authenticated security software that can ensure these ubiquitous embedded devices remain secure.


ISS toolkits are based on the Green Hills proprietary, Federal Information Processing Standards (FIPS)-compliant Cryptographic Toolkit. The ISS Cryptographic Toolkit meets the latest government standards and also provides the underlying FIPS-compliant cryptographic primitives for use with a comprehensive complement of security protocols. The toolkits are designed to be small, scalable, and certifiable by the US Government. Support is available for Windows, Linux, VxWorks, INTEGRITY, and other operating systems on Intel processors.


ISS focuses on addressing:

  • Authentication - the process of ensuring that users, devices and software on a network are correctly identified.
  • Authorization - grants users and devices the right to access resources and perform specified actions.
  • Network Access Control - mechanisms that limit access to the network to authenticated and authorized devices, software and users.
  • Confidentiality - using ciphers to transform data to make it unreadable to anyone except those authorized and authenticated to view the data.
  • Integrity - checking mechanisms are designed to detect unauthorized changes to transmitted data through the lifecycle of a device, software and data.
  • Remote Management - a method to monitor, update and manage remotely manufactured and fielded devices.
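The distinction the first two bullets draw, authentication establishes who a principal is, while authorization decides what that principal may do, can be sketched with a minimal grant table. The principals, resources, and actions below are hypothetical and for illustration only.

```python
# Toy authorization table: which (already authenticated) principals
# may perform which actions on which resources.
GRANTS = {
    ("field_device_42", "telemetry", "write"),
    ("ops_console", "telemetry", "read"),
    ("ops_console", "firmware", "update"),
}

def authorized(principal: str, resource: str, action: str) -> bool:
    """Network-access-control style check: deny by default,
    allow only explicitly granted triples."""
    return (principal, resource, action) in GRANTS

print(authorized("ops_console", "firmware", "update"))      # True
print(authorized("field_device_42", "firmware", "update"))  # False
```

The deny-by-default shape of the check is the important part: a device that can authenticate still cannot, say, push firmware unless a grant explicitly says so.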


Delivering end-to-end security solutions

A connected battlefield is critical to our Armed Forces' success. As multiple classifications of data are transmitted to more devices and vehicles on the battlefield, the requirement for a high-assurance, portable encryption toolkit becomes critical.


Attacks on computers and networks continue to proliferate in spite of extensive software approaches to prevent these attacks. Establishing a strong digital identity for both the user and the computer system through hardware-based security is a significant step beyond software-only strategies.

The defense community has made cybersecurity a top priority. As defense programs push to use more COTS, their dependence on Intel® Intelligent Systems Alliance members and their products only grows. Security never rests as defense systems shift to intelligent systems: new breaches are discovered, and technology shifts to address them.


I would love to hear more about your experiences as a system designer and what you have done or plan to do to manage security risks.



Learn More


Solutions in this blog:


Related topics:


American Megatrends (AMI) is an Affiliate member of the Intel® Intelligent Systems Alliance.

    Contact American Megatrends>>
Green Hills Software, Inc. is an Affiliate member of the Intel® Intelligent Systems Alliance.

    Contact Green Hills Software>>
Phoenix Technologies is an Affiliate member of the Intel® Intelligent Systems Alliance.

    Contact Phoenix Technologies>>

Jerry Gipper

OpenSystems Media®, by special arrangement with Intel® Intelligent Systems Alliance

Editorial Director, OpenSystems Media, VITA Technologies

With consumers expressing a greater desire to watch content from mobile gadgets like smartphones and tablets while cruising down the road, the automobile is fast becoming a state-of-the-art mobile device. Today you can find displays mounted in the center console, in or above the dashboard, on the backs of the front seats, and/or overhead in entertainment panels. According to Staci Palmer, general manager of the automotive solutions division at Intel, about 40 percent of the estimated 80 million cars shipped in 2012 had some form of integrated in-vehicle infotainment (IVI) system to run computer applications for entertainment, information, or Internet connectivity. Palmer expects that by 2020 that rate will reach 75 percent, or around 85 million IVI systems.

These IVI systems will have to handle various data streams including high definition multimedia and 3D graphics, wirelessly transmitted information connecting the car to local area or wide area (the "cloud") networks and Advanced Driver Assistance System (ADAS) data showing lane-departure warnings and other content (e.g., vehicle system checks) that impacts safety.

All of this will require the aid of advanced Intel® processors. For that reason, industry analyst organizations such as IHS iSuppli predict that automotive IVI systems are heading toward a PC-like architectural approach in which more functionality depends on a powerful main central unit. In that sense the auto will start to resemble a display-heavy aircraft cockpit, with multiple screens for the driver and passengers. iSuppli and other industry watchers see software's role as a differentiator among brands, with some applications previously implemented in hardware reconfigured instead into simpler programs reliant on a heavily centralized unit marked by strong processing power, significant memory, and an effective human-machine interface.


This last item should not be discounted. All of the computing "horsepower" that can be brought to bear will come to naught unless drivers can interact with these systems in a natural and intuitive way. That is precisely the goal of the Intel® Perceptual Computing campaign: providing direct, natural human-to-computer interfaces encompassing next-generation voice command and control and gesture recognition. Using technologies such as voice recognition, text-to-speech, and speech-to-text, drivers will be able to operate infotainment, navigation, and other vehicle controls simply by speaking, without taking their eyes off the road or their hands off the wheel. Advances in voice-recognition technology will allow drivers to use spoken words to manipulate an increasing number of functions in the future, including controlling the HVAC system and sending text messages.
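Once an utterance has been transcribed, the application still has to map phrases to vehicle actions. The sketch below illustrates that dispatch step with a hypothetical command table; the phrase names and responses are invented for illustration and are not the Nuance or Intel SDK API, which provides its own grammar support:

```python
# Hypothetical command table; a real IVI system would register these
# phrases with the speech SDK's grammar facilities instead.
COMMANDS = {
    "set temperature": lambda arg: f"HVAC set to {arg} degrees",
    "play": lambda arg: f"Playing {arg}",
    "navigate to": lambda arg: f"Routing to {arg}",
}

def dispatch(utterance: str) -> str:
    """Match the longest registered phrase prefix; the remainder is the argument."""
    text = utterance.lower().strip()
    for phrase in sorted(COMMANDS, key=len, reverse=True):
        if text.startswith(phrase):
            return COMMANDS[phrase](text[len(phrase):].strip())
    return "Command not recognized"

print(dispatch("Set temperature 21"))   # HVAC set to 21 degrees
print(dispatch("Navigate to airport"))  # Routing to airport
```

Matching the longest phrase first keeps "navigate to" from being shadowed by any shorter overlapping command that might be added later.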


The Intel® Perceptual Computing SDK 2013 Beta 2 with support for Nuance Dragon Assistant Core speech recognition is now available as a free download. Voice is firmly in Nuance's wheelhouse: more than 70 million cars and 50 million portable navigation systems are equipped with Nuance speech technologies. According to an IMS Research report, more than half of new automobiles in 2019 will integrate voice recognition, up from 37 percent in 2012, as car manufacturers increasingly seek safer ways for drivers to interact with navigation, music, or phone calls.


Developers will now be able to add voice commands, short sentence dictation, and text-to-speech synthesis to their applications (Fig. 1). Eye tracking and gesture recognition also are in the near future. Cameras mounted in the cabin can see where passengers are sitting in the car and, more importantly, where they're looking. If the driver takes his or her eyes off the road, the car would recognize that and could sound a warning.



Figure 1. Speech Use Cases and Technologies for In-Vehicle Infotainment


The Intel® Perceptual Computing SDK APIs can be used with the Creative Interactive Gesture Camera Developer Kit, employing a small, lightweight, USB-powered depth-sensor camera tuned for close-range interactivity (meaning it can pick up the movement of each finger). The kit includes an HD webcam, depth sensor, and built-in dual-array microphones for capturing and recognizing voice, gestures, and images.


On the hardware side, using Intel® Atom™ processors as part of an IVI solution enables automotive developers to create a single, common hardware and software platform that:


  • Reduces development costs
  • Simplifies design complexity
  • Speeds time to market
  • Enables more choice and innovation in software
  • Simplifies product upgrades
  • Offers strong interoperability and compatibility with software
  • Enables the use of low-power components that fit into small-footprint boards

For example, Axiomtek's NANO830 is built on the Cedar Trail platform, pairing a dual-core Intel® Atom™ processor N2000/D2000 series with the Intel® NM10 Express chipset to provide premium sound and advanced features such as multiple audio streams. With a footprint of just 120 x 120 mm, it gives system developers more opportunities for scalable, ultra-compact embedded applications, and its low power consumption enables fanless designs. The NANO830 offers IVI designers a strong choice for graphics-intensive, higher-performance systems. Designed to support the Intel® Atom™ processor D2550 running at 1.86 GHz, it integrates the Intel® Graphics Media Accelerator 3650 engine to support HD-rich content and provides an array of display interfaces including DisplayPort, VGA, and 24-bit dual-channel LVDS with dual-view support. Moreover, it supports a DDR3 SO-DIMM with a maximum of 4 GB of memory.


Similarly, Eurotech's Catalyst CV (CPU-300-10) module delivers dual-core Intel® Atom™ processor N2x00 performance at CPU speeds up to 1.6 GHz and extensive multimedia capability, all while maintaining a very low power profile. On this platform, Eurotech delivers up to 4 GB of DDR3 DRAM, high-speed PCIe lanes, and USB ports. Also employing the Intel NM10 Express chipset, the Catalyst CV is a compact 67 x 100 mm module for applications such as IVI where superior graphics performance is of high value. The Catalyst CV supports not only traditional graphical interfaces such as VGA and LVDS but also offers HDMI, full 1080p, DisplayPort with optional configuration, and Blu-ray capabilities to support demanding graphical applications.


We mentioned earlier that iSuppli anticipates software being a differentiator among brands in IVI systems. One example comes from another Intel Alliance partner, Clarinox. Its Clarinox Blue solution is an embedded Bluetooth protocol stack that can run multiple simultaneous profiles with multiple smartphones. This combination of profiles enables a state-of-the-art automotive Bluetooth infotainment solution: accessing the phone book of the driver or the passengers, streaming music to the car stereo system from any of the phones, displaying (or reading via text-to-speech) incoming SMS messages on the console, and making and receiving calls while announcing the caller's details. The stack also addresses the main issues engineers confront in Bluetooth projects: lack of flexibility, high complexity, and difficulty of debugging.


What's next? At IDF 2013 in Beijing, Intel Sr. Technical Marketing Engineers Chris Norman and Horace Li noted that Intel Alliance partner Red Flag was working on a Tizen 2.0-based IVI solution (Tizen is an open-source, standards-based HTML5 platform for IVI and portable device applications). According to Norman and Li, the Red Flag Software release will include a traffic sign recognition engine, a voice recognition engine, navigation, and other proprietary components.

Finally, and without revealing too much, we can safely say that devices built around the soon-to-be-released 4th generation, 22-nm Intel® Core™ processor family (formerly codenamed "Haswell") will offer significant media and 3D performance improvements and will facilitate development of facial recognition, gesture interaction, and voice assistance. Stay tuned!


Learn More

Solutions in this blog:

Axiomtek NANO830

Eurotech Catalyst CV

Clarinox Blue


Related topics:

Axiomtek is an Associate Member of the Intel® Intelligent Systems Alliance

Eurotech is an Associate Member of the Intel® Intelligent Systems Alliance

Clarinox is a General Member of the Intel® Intelligent Systems Alliance

Red Flag Software is an Affiliate Member of the Intel® Intelligent Systems Alliance

Murray Slovick

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

On the surface, a digital signage system seems simple. You have an embedded computer designed to be a media player, a monitor to display images and video, and software to enable users to create and manage their digital content. What’s difficult is when you try to design a digital signage network with tens, hundreds or thousands of these systems and attempt to connect them to the back end or cloud for data collection and management. It’s so complicated that developers and system integrators end up doing a lot of the heavy lifting in figuring out how to find interoperable components, connect them, and make the entire signage network manageable and secure.


This duplicated effort makes little sense for technology as sought-after as digital signage. Why should everyone have to reinvent the wheel when so many retailers, airports, museums, and other organizations are purchasing these systems?


This is the problem that Intel and many Intel® Intelligent Systems Alliance members decided to solve when they came up with the Intel® Intelligent Systems Framework that I discussed in a previous blog. Now one of those members, Arrow Electronics, a global distributor of electronic components and computer products, has taken this a step further and is offering a comprehensive framework-ready ingredient store for edge-to-enterprise digital signage systems based on Alliance offerings.


Headquartered in Melville, New York, Arrow serves as a supply channel partner for more than 600 suppliers and 200,000 original equipment manufacturers and commercial customers through more than 225 sales facilities and 23 distribution centers in 39 countries. It has the scale, product diversity and knowledge base to be a powerful ally in making a big play in the digital signage space.


Recently, Arrow was part of a webinar with Intel and Kontron entitled "A Framework for the Internet of Things," where Arrow talked about its Intel Intelligent Systems Framework stack and its application in the digital signage space. By collecting all the components in one place and ensuring all the pieces fit and work together, Arrow saves integrators from having to do work at a micro level. This means more time to focus on tailoring the end application to the customer to ensure the right technology fit, solution, and ability to scale.


Arrow carries edge-to-cloud ingredients based on Alliance offerings that include interconnects, SBCs, box-level products, aggregation gateways, operating systems, connectivity, cloud services, and displays (see Figure 1). In addition, Arrow offers extensive technical and value-added services to help reduce time to market, lower total cost of ownership, and enhance the overall competitiveness of a solution. This includes help with system architecture, operational efficiency, standard procedures, data collection and analysis, data revenue models, ROI, and other needs a developer, integrator, or end customer might have.



Figure 1. Arrow makes assembling digital signage systems from COTS components easy by basing solutions on Intel® Intelligent System Framework products and then providing value-add services covering selection, installation and integration.


Through framework-ready Alliance solutions, Arrow provides a common base of connected, managed, and secure capabilities that simplify the Internet of Things transformation across the digital signage value chain (see Figure 2). Whether your concerns are the fine points of connectivity, security, or manageability, Alliance members providing framework-ready solutions have already done the homework.



Figure 2. Arrow’s Intel® Intelligent System Framework Stack gives developers a fast track for assembling digital signage solutions that connect back to the enterprise or cloud.



Taking this a step further, Arrow brings together the necessary component groups—sensors, embedded intelligence for collecting sensor information, secure wired and wireless connectivity, operating systems, middleware that bridges the system onto the IP network, display technology, content creation software, and more. This means if you’re wondering something like how to add cellular connectivity for remote manageability, it’s already been worked out and is ready to go. Instead of having to focus on such basics, developers and system integrators can work on customization, such as applications needed by a client and the scripts for connecting to the enterprise.


This kind of one-stop shopping for matched ingredients using COTS components really shortens the design window. And at the end of the day, that’s what it’s all about—getting your solution in place quickly, efficiently, with minimum design time, so you can move on to the next project. The example Arrow gives in their presentation is the system integrator who came to them on a short deadline and wanted to assemble a fully designed intelligent digital signage solution to display at a trade show. The solution designed by Arrow using framework-ready components met the deadline and included:


  • Multiple options for hardware and other components through the Alliance
  • Bundled cloud and data services provided with the hardware
  • Multiple new SKUs supported in manufacturing
  • Ability to achieve fast time to revenue with minimal design risk


One thing not mentioned in the Arrow presentation is that the use of framework-ready media players and boards ensures the ability to deliver a key requirement for most large digital signage systems today: viewership metrics and targeted content. Such players and boards can run the Intel Audience Impression Metrics Suite (Intel® AIM Suite), an anonymous viewer analytics (AVA) software product. Easily deployed in digital signage systems, Intel AIM Suite installs on nearly any device based on Intel® processors and utilizes off-the-shelf consumer web camera sensors and cloud-based reporting. By providing the ability to recognize viewer gender and approximate age bracket, Intel AIM Suite enables delivery of viewer-targeted content and can collect viewership data such as dwell time and user interactions to report back on advertising effectiveness.
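To make the metrics idea concrete, here is a small sketch of the kind of aggregation an AVA back end performs over anonymous viewer events. The event fields and sign names are invented for illustration; Intel AIM Suite's actual data model and reporting interface are not shown here:

```python
from collections import defaultdict

# Hypothetical anonymous viewer events: (sign_id, gender, age_bracket, dwell_seconds)
events = [
    ("lobby-1", "female", "25-34", 12.0),
    ("lobby-1", "male",   "35-44",  4.5),
    ("lobby-1", "female", "25-34",  8.5),
    ("gate-7",  "male",   "18-24",  3.0),
]

def summarize(events):
    """Aggregate impressions and average dwell time per sign and demographic."""
    buckets = defaultdict(list)
    for sign, gender, age, dwell in events:
        buckets[(sign, gender, age)].append(dwell)
    return {
        key: {"impressions": len(d), "avg_dwell_s": sum(d) / len(d)}
        for key, d in buckets.items()
    }

report = summarize(events)
print(report[("lobby-1", "female", "25-34")])
# {'impressions': 2, 'avg_dwell_s': 10.25}
```

Reports like this are what let an operator prove which content earned attention from which audience, and on which sign.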


What excites me about what Arrow has done is how it has built an entire store around the Intel Intelligent System Framework and what it can do for making digital signage systems something you can put together from off-the-shelf components. This is definitely taking the Internet of Things a full step further. No longer do you have to do deep thinking on the things themselves. Instead, comprehensive solutions with all the pieces are available. If this is the way all systems are going, then sometime in the future, instead of the Internet of Things, we’ll be talking about the Internet of Solutions.


Learn More

Solutions in this blog:

  • Arrow Integration Services


Related topics:

  • Interoperability - Top Picks (blogs, white papers, and more)

  • Retail - Top Picks (blogs, white papers, and more)

  • Digital Signage - Top Picks (blogs, white papers, and more)



Kontron is a Premier member of the Intel® Intelligent Systems Alliance. McAfee, Microsoft and Wind River are Associate members of the Alliance. Arrow is a General member of the Alliance.


Mark Scantlebury

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Associate Editor, Embedded Innovator magazine

In a recent post, we summarized a number of announcements that 6WIND recently made involving key industry partners. We also announced that 6WIND will lead other major industry initiatives to bring compelling solutions for the industry to accelerate networking and telecom transformation.


6WIND is proud to announce the establishment of the dpdk.org open-source project to accelerate development of Intel® Data Plane Development Kit (Intel® DPDK) architecture-based networking and telecoms products.


In telecom, enterprise and cloud applications, the Intel® DPDK, which is a set of optimized libraries and drivers provided to OEMs under a BSD licensing model, is an increasingly viable foundation for packet processing. However, the development of a complete networking and telecom application requires strong expertise.



As a major contributor to the Intel® DPDK, 6WIND has developed valuable enhancements to the software including crypto support via AES-NI, virtualization support, Mellanox 40GbE ConnectX®-3 EN series and Cavium Nitrox® support.


6WIND also developed strong expertise in leveraging the Intel® DPDK as a result of optimizing its 6WINDGate™ data plane solution (comprising virtual switch acceleration, IP forwarding, virtual routing, IPsec, firewall, Large Scale NAT, TCP termination etc.) to run at maximum performance on the Intel® DPDK platform. 6WIND also provides professional services and technical support to developers. As a result, 6WIND enables use of Intel® DPDK in Virtual Network Appliances, Software-Defined Networks and Network Function Virtualization.


At dpdk.org, engineers can find libraries, drivers, documentation, a development mailing list, a git repository, and support relevant to high-performance packet processing using the Intel® DPDK software.


The dpdk.org open-source project is active now and openly available to any developers using the Intel® DPDK. It will provide an openly-accessible repository for released versions, while encouraging innovative collaboration and enabling participating companies to offer specialized support.
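The performance technique at the heart of the Intel® DPDK is polling packets from lockless rings in bursts, rather than taking an interrupt per packet. The sketch below illustrates that burst-dequeue pattern in Python for readability; it is a conceptual model only, not the DPDK's actual C API:

```python
from collections import deque

class PacketRing:
    """Minimal receive ring illustrating DPDK-style burst polling.
    A sketch only: the real Intel DPDK uses lockless, fixed-size rings in C."""

    def __init__(self, size=8):
        self.size = size
        self.q = deque()

    def enqueue(self, pkt) -> bool:
        if len(self.q) == self.size:
            return False              # ring full: packet dropped, no blocking
        self.q.append(pkt)
        return True

    def dequeue_burst(self, burst: int) -> list:
        """Pull up to `burst` packets per call, amortizing per-packet overhead."""
        out = []
        while self.q and len(out) < burst:
            out.append(self.q.popleft())
        return out

ring = PacketRing()
for i in range(5):
    ring.enqueue(f"pkt{i}")

print(len(ring.dequeue_burst(4)))  # 4
print(len(ring.dequeue_burst(4)))  # 1
```

Amortizing the fixed cost of each poll across a burst of packets is a large part of how DPDK-based applications reach line rate on general-purpose cores.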

By Elizabeth Press, Global Business Intelligence Lead for Telecom, Media & Entertainment and Web


Mobile operators are under pressure to prove that their services are superior to those of over-the-top (OTT) and other non-traditional challenger service providers. Many operators, especially in Western Europe, are investing in a rich communications suite (RCS) as a means of positively differentiating their services through sticky content and evolved communications abilities such as enhanced instant messaging, video calling, document sharing, and social media connectivity.

Application modernization will be critical for technology vendors and mobile operators as they adopt RCS. Many mobile operators have grown through mergers and acquisitions, creating hundreds of duplicate applications on various infrastructures. Rationalization and modernization of legacy applications will be important for deploying RCS efficiently and cost-effectively. Mobile operators have been offering RCS in the form of enterprise software packages as well as downloadable applications. Moreover, applications supporting RCS run both in cloud-hosted environments and on-premise. A solid application modernization strategy will need to take all of these considerations into account.

How RCS will create a competitive advantage for mobile operators:

  • As LTE networks get rolled out, operators will look to recoup their investments by targeting additional value-added services and niche products to a growing base of LTE end users. Voice over LTE (VoLTE) will become an important offering. Mobile video services will become the hook, attracting users and enabling operators to compete with challenger voice over IP (VoIP) providers.
  • The first wave of operators, mostly in Western Europe, have launched RCS services and are bundling them into their full service offerings. RCS strengthens operators’ platforms and allows them to compete as a premium service provider.
  • The ability to integrate value-added services into a strong communications platform is necessary for satisfying next-generation user demand. End users expect seamless interactive abilities and a broad choice of applications. Application Programming Interfaces (APIs) will be made available to third-party vendors so they can offer their services to end users through the RCS platform, meeting growing expectations.

Interested in learning more about Dell Application Modernization Services? Please click here to contact our team.

Follow me on Twitter at @ElizabethAtDell.

By Marcel Moelands, OEM Enterprise Lead EMEA, Dell OEM Solutions

When I read about the telecommunications market, I notice several converging trends.

For instance, LTE (Long Term Evolution, or 4th generation networks) is setting a new standard for transporting voice calls. Mobile phone users are accessing rich content, which in turn creates a balancing act between over-the-top (OTT) traffic and players on one side and carriers looking to monetize OTT on the other.

The proliferation of smart devices has led to a new ecosystem of machine-to-machine platforms and applications, which contributes to data growth explosion.

Evolution in Telecommunications happens faster with each generation.  The cool factor of fashionable features, generational performance leaps in handset equipment, increased bandwidth and ease of ‘getting connected’ all contribute to the speed of evolution.

This pace is unlike that of traditional network functions like SMS relays, content delivery, and support functions like OSS (Operations Support) and BSS (Business Support), which serve individual purposes and aren't scalable.

As a result, the Telecom Cloud is increasingly appealing. It allows one shared pool of resources capable of running ‘softwarized’ network and support functions while meeting the service level agreements (SLA) of the Telecom network.

It's like the difference between a buffet restaurant and having a personal gourmet chef with a fridge full of ingredients. The buffet restaurant offers you what has been prepared for the day, no more, no less. The chef can make you whatever you want, in specific portions, whenever you like (and more efficiently).


The “Cloud” remains a word with many definitions and implied uses.  There are many different cloud deployment models and, it can be argued, not all are really leveraging the cloud.

True cloud deployments should meet several criteria. I would describe the ideal (API) cloud as: controlled, scalable, flexible and pay-per-use resources offering a fully-transparent platform for virtual machine and software deployments.

Key players in the Telecom market envisioned this a long time ago and have been vocal on such cloud-oriented topics as the socialization of communities through technology adoption, the future growth of networked devices and the introduction of the Telecom cloud infrastructure.


Dell's cloud offering is very broad, consisting of Public Cloud Services, on- and off-premise Private Cloud Solutions and Services, Converged Infrastructure Solutions, Cloud Infrastructure Design, Software, Consultancy, Security, and integration platforms (Boomi).

These are supported by a range of products, engagement practices, reference designs and programs to offer our customers the best experience in their paths to transformation. The common philosophy amongst the offerings is the Open Standards based approach to deliver capable and affordable solutions that suit our customers’ needs.

Open Source and OpenStack

Back to looking at the Telecommunications market, especially the equipment providers in this market.

I foresee a large-scale adoption of open source components on standards-based hardware to offer Software-as-a-Service solutions and network components as well as turnkey Infrastructure-as-a-Service solutions. Carriers will eventually leverage existing network, business and operations- support assets, allowing them to further enhance offerings.

The choice of leveraging open source platforms makes a lot of sense as it allows flexibility without affecting the ubiquitous nature of a true cloud. OpenStack is a great example of this. OpenStack is on its 6th release and offers all of the building blocks required for the Telecom cloud, especially when taking the ecosystem of value-add software vendors into account. OpenStack is already widely used in test plans and conceptual designs despite its absolute need for teams of skilled software engineers to create the environments.

Cloud Hardware

Making the right choices when selecting hardware is important as hardware will greatly influence operational expenses. Dell has been a very active player in datacenter solutions for years. More specifically, Dell leads in market share in the hyperscale server category (according to IDC).

With the experience gained in designing datacenter and high-performance servers, Dell started designing and delivering server products specifically featured for cloud deployments with its PowerEdge C products. Additional components, such as Force10 and PowerConnect networking, complete the offering for end-to-end, turnkey cloud infrastructure solutions.


Alongside the development of cloud server products, a dedicated team was assembled with the sole purpose of developing enablers for OpenStack design and deployment, such as reference designs and integration of cloud-based block storage, software-defined networking, and big data solutions based on Hadoop. Today, this group of experts is represented on the OpenStack Foundation Board of Directors and has submitted components to the OpenStack community such as the Crowbar software framework.

Rob Hirschfeld offers a non-technical overview of how OpenStack offers a unique combination of features for building hyperscale clouds.

Application overhaul

For OEM customers in the process of moving to the cloud, there are some unique considerations for a successful deployment. From designing for elasticity and scalability to ensuring fault-tolerance, license and infrastructure management, security,  multi-tenant readiness and more, Dell’s OEM Solutions team works closely with the broader Dell teams to deliver revenue-generating end-to-end cloud solutions.

Have you initiated any cloud deployments in your organization? If not, is it something you’re considering over the next 12-18 months?

High performance embedded computing (HPEC) has made tremendous leaps in performance and capability, with each new implementation adding to the advancements. Processors like the 3rd generation Intel® Core™ i7 are making HPEC a powerful and cost-effective solution, eliminating the compute performance bottleneck of previous generations. However, a major challenge for HPEC platforms remains: "How do I get more high-bandwidth data to the processors?" Applications such as radar, sonar, and other signal processing systems generate gigabytes of data that demand high-GFLOP computational power.





This week Kontron took a major step towards solving this problem with its HPEC proof-of-concept (PoC) platform, which delivers 40 Gigabit Ethernet (GbE) equivalent data transfer rates via PCI Express 3.0. The platform delivers breakthrough I/O data bandwidth in a small footprint for next-generation embedded radar/sonar applications. The PoC platform is based on VPX, which is known for its high performance, harsh environment capability, and small size. Designed as a complete system, the Kontron HPEC PoC integrates its 3U VPX SBC VX3044, the VPX PCIe switch VX3905, and the high-end L2/L3 Ethernet switch VX3910 into one platform.

The Kontron PoC is unusual for its use of PCIe instead of 10 GbE, which is a popular serial fabric option for high performance computing platforms. Ethernet has the benefit of widespread adoption and excellent software support for the TCP/IP protocol. However, some applications require the even higher throughput available through serial fabrics like PCIe 3.0, but such solutions have been hindered by their programming challenges and more limited support for communication protocols.


What sets the Kontron platform apart is the use of PCIe 3.0 to deliver 40GbE-equivalent data transfer rates using common TCP/IP protocols. The combination of PCIe 3.0 and TCP/IP is achieved through Kontron's VXFabric* middleware, which implements the TCP/IP protocol over the PCIe infrastructure to boost transmission bandwidth to nearly 40 GbE speeds. This allows the I/O data bandwidth to match up nicely with the capabilities of 3rd generation Intel Core i7 processors on the VPX blades while running a well-established transfer protocol, minimizing software impact during system development, and enabling the quick transfer of legacy applications to the new platform with little or no modification. This technology makes it easier to fully utilize the processing potential of the 3rd generation Intel Core i7. Routing of the PCIe fabric is provided by the Kontron VX3905, one of the industry's first PCIe 3.0 VPX switches. This switch roughly doubles the per-lane throughput compared to PCIe 2.0, providing a major performance boost.



With VXFabric, the use of standard communication protocols (TCP/IP or UDP/IP) based on the socket API protects the application software investment. Legacy software can operate now, and new software based on TCP/IP is ensured support for years to come. OEMs and developers can enjoy an optimized total cost of ownership (TCO) and have a direct migration path from the applications they deploy today. VXFabric addresses fast, low-latency, peer-to-peer communication between compute nodes within a chassis, and can deliver up to 4.2 gigabytes per second (GB/s) of data throughput (over PCI Express) between VPX boards in a rack.
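The reason legacy code ports so easily is that the socket API hides the transport. The sketch below runs over ordinary loopback TCP, but the application-level calls (connect, sendall, recv) are exactly the ones that would ride VXFabric over PCIe unchanged; only the layer beneath the sockets differs:

```python
import socket

def echo_over_sockets(payload: bytes) -> bytes:
    """Standard socket API code. The transport underneath (Ethernet, loopback,
    or a fabric like VXFabric over PCIe) is transparent to the application."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # ephemeral port on loopback
    server.listen(1)
    client = socket.create_connection(server.getsockname())
    conn, _ = server.accept()
    client.sendall(payload)              # "sensor" node sends a data frame
    received = conn.recv(len(payload))   # "processing" node receives it
    conn.sendall(received)               # echo back to confirm delivery
    result = client.recv(len(payload))
    for s in (client, conn, server):
        s.close()
    return result

assert echo_over_sockets(b"radar frame") == b"radar frame"
```

This transport transparency is precisely what lets a chassis swap Ethernet for PCIe 3.0 underneath without touching application code.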


In addition to the high performance interconnection between blades, there is the tremendous amount of processing power provided by the 3rd generation Intel Core i7-3612QE processor (4M cache, 4 execution cores, 8 threads) with its integrated graphics core, the Intel® HD Graphics 4000. The Intel HD Graphics 4000 provides 16 graphics execution units (EUs) that produce a noticeable improvement in 3D performance of as much as 2x. This combination, in the 22-nm process, yields low power consumption, and the integration enables even smaller packages, helping to increase the blade functional density that is so important to HPEC.


HPEC applications require high GFLOP performance. This is achieved with the Core/GPU combination of the 3rd generation Intel Core i7-3612QE processor and Intel HD Graphics 4000. This low voltage processor has a very favorable GFLOPS/watt ratio with the Intel® Advanced Vector Extensions (Intel® AVX) technology.


Graphics computing offers unprecedented application performance by offloading compute-intensive portions of the application to the Intel HD Graphics 4000 execution units, while the remainder of the code still runs on the CPU cores. The CPU cores are optimized for serial processing, while the graphics EUs are more efficient for parallel processing. Many radar and sonar applications can be broken down into serial and parallel algorithms that take advantage of this combination. The graphics EUs provide a massively parallel processing subsystem that can handle many threads and large data sets in parallel, boosting GFLOPS to high-performance levels.
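The partitioning idea can be sketched in miniature. Below, an element-wise stage (the kind of embarrassingly parallel work that maps to many EUs) is fanned out across workers, while a dependent accumulation stays serial; this is a toy Python illustration of the split, not OpenCL offload to the graphics engine:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_stage(samples, workers=4):
    """Element-wise work (e.g. windowing or squaring samples) has no
    cross-element dependencies, so it fans out across workers cleanly."""
    def chunk_work(chunk):
        return [x * x for x in chunk]           # embarrassingly parallel
    n = max(1, len(samples) // workers)
    chunks = [samples[i:i + n] for i in range(0, len(samples), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(chunk_work, chunks)
    return [x for part in results for x in part]

def serial_stage(values):
    """A dependent accumulation stays on the serially-optimized CPU path."""
    total = 0.0
    for v in values:
        total += v
    return total

print(serial_stage(parallel_stage([1, 2, 3, 4])))  # 30.0
```

In a real radar or sonar pipeline the parallel stage would be an FFT or filter kernel dispatched to the EUs via OpenCL, and the serial stage would be the control and decision logic left on the cores.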


Looking forward, the performance story will get even better with the 4th generation Intel® Core™ processor family. These chips introduce the Intel® Advanced Vector Extensions (Intel® AVX) 2.0 instruction set, which doubles peak floating-point throughput, enabling a quad-core mobile-class processor to achieve up to 307 GFLOPS at 2.4 GHz. The graphics engine is also upgraded, offering another 352 GFLOPS of raw performance through OpenCL 1.2 programming, more than doubling overall compute potential while adding only a few watts of power consumption.
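The 307 GFLOPS figure follows directly from the peak-throughput arithmetic; a quick sketch (the FLOPs-per-cycle counts are the standard single-precision figures for AVX and AVX2-with-FMA cores):

```python
def peak_gflops(cores, ghz, flops_per_cycle):
    # Theoretical peak = cores x clock (GHz) x FLOPs retired per cycle.
    return cores * ghz * flops_per_cycle

# Intel AVX: 8-wide single-precision add + multiply = 16 FLOPs/cycle/core.
avx_peak = peak_gflops(4, 2.4, 16)
# Intel AVX2 adds fused multiply-add, doubling that to 32 FLOPs/cycle/core,
# which yields the quoted 307 GFLOPS for a quad-core part at 2.4 GHz.
avx2_peak = peak_gflops(4, 2.4, 32)
```
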


The Kontron PoC platform is also notable for integrating a wealth of hardware and software that simplifies development. The PoC platform includes a Linux distribution, diskless node support, and parallel workload management software. Also integrated are compilers, optimized FFT benchmark code samples, and a stress test application framework for benchmarking. Computer health management is enabled through a chassis management board, system-wide PBIT, and power/performance management at the system level.



Processor Graphics add GFLOPs to HPEC


The goal of the Kontron HPEC PoC is to help developers dramatically streamline the process from design to field deployment of next-generation radar and sonar systems that are expected to make a tremendous jump in processing power and bandwidth.


The VPX-based PoC is a flexible design that can be specifically optimized for the most demanding applications. The configurable PCI Express switch fabric interconnect can be routed in the ways most appropriate for the data transfer needs of HPEC platforms. Legacy application support via TCP/IP makes the platform even more attractive, leaving it to the imagination of designers to utilize the 10x increase in bandwidth. Based on mainstream IT technology (TCP/IP, PCIe, Intel® processors), the Kontron HPEC PoC is also developed to address the U.S. military's smart procurement initiatives, which put in place more rapid and agile purchasing processes.


Related Topics


Kontron is a Premier member of the Intel® Intelligent Systems Alliance.

     Contact Kontron>>


Jerry Gipper

OpenSystems Media®, by special arrangement with Intel® Intelligent Systems Alliance

Editorial Director, OpenSystems Media, VITA Technologies

By Jeffrey Volkert, OEM Solution Consultant BeNeLux, Dell OEM Solutions

Today’s dynamic healthcare environment relies heavily on the latest technology to increase the speed and accuracy of patient diagnosis and treatment. Physicians are now required to interact with a wide range of end-user devices to access their medical applications. Whether on a Dell Precision workstation with locally hosted applications or a Latitude tablet with applications running in the cloud, healthcare professionals may have to wait before using a specific application at an endpoint, delaying medical treatment.

For these physicians it would be most beneficial to have an environment where they can access their applications from any device with a single sign-on, reducing time-consuming actions during the course of a normal workday.

Dell’s Mobile Clinical Computing (MCC) solution addresses many of today’s key hospital challenges:

  • Limiting the amount of time clinicians and nurses spend searching for available endpoints, as well as the time spent accessing and then re-accessing applications
  • Providing a single sign-on instance that clinicians and nurses can use to authenticate seamlessly against endpoints and multiple applications
  • Providing quick and secure access to patient information on demand using proximity and/or biometric authentication methods
  • Reducing and limiting the time IT staff spend managing client systems in their environment

In recent trials held in Europe, Dell has shown that the MCC solution can deliver:

  • A productivity gain of up to 215 minutes per user per week (a 9% improvement)
  • Economic value of that productivity of up to £10,000 per user per annum
  • Appropriate information security that remains workable for users
  • Improved patient safety, quality of care, and patient satisfaction


I think you would agree with me that the latter is the most important one.

The Dell Mobile Clinical Computing Solution is a desktop virtualization and identity access management solution built on top of the DVS Enterprise 6020 Solution and powered by the latest Dell 12th generation servers. The Dell MCC Solution is offered in both rack and blade server configurations, with an option for storage on Dell EqualLogic or Compellent solutions, which allow for automatic load balancing, tiering, snapshots, and replication.

Dell PowerEdge 12th generation servers are designed with more memory capacity and more integrated I/O (input/output) than the previous generation and can provide up to 18 times more Microsoft® SQL Server® transactions per second when using PowerEdge Express Flash Storage solid-state drives (SSDs) (directly connected to the CPU and memory bus).

Healthcare environments require end-user devices to be up and running 24/7, as there is no timeframe on patient care. Dell Wyse thin clients have no moving parts that can break down, such as fans or hard drives. Wyse thin clients have a lifecycle twice that of a traditional PC, reducing validation and transition costs. An added bonus is that with a thin client environment, centralized IT management becomes possible, ensuring optimal system performance.

A solution is never complete without the right support for your environment. Dell offers customers Dell ProSupport for MCC. ProSupport for MCC helpdesk and field support technicians are highly trained specialists knowledgeable about every component of the lab-validated MCC solution configurations, including enterprise hardware and software technology, endpoint devices, desktop virtualization software, and identity access management software. These technicians are certified in VMware vSphere and View, Imprivata, and Wyse technology, and receive ongoing training.

To learn more contact our OEM team.

This week, 6WIND announced extensions to the 6WINDGate™ networking software that deliver a 10x performance improvement for virtual switches. These new capabilities address three critical scalability challenges faced by service providers operating public/private cloud data centers as well as telecom infrastructure architected around Network Functions Virtualization (NFV) concepts.

The first scalability challenge relates to the number of Virtual Machines (VMs) per server blade. This VM density is increasing rapidly, leveraging on-going improvements in the performance of the x86 processors used on those blades. Today, a typical server blade in a service provider data center hosts at least 50 VMs, with that number expected to grow to hundreds within a few years.

Because of this growth in the number of VMs running on each server blade, the data center network needs to expand beyond its current limit at the Top-of-Rack (ToR), to a model where a virtual switch on each server blade is used to distribute the increasing volume of network traffic to virtualized applications. This function is typically implemented using the open-source Open vSwitch (OVS) or an equivalent proprietary virtual switch.

The second scalability challenge is the network bandwidth required by VMs. With the constant growth in rich media applications, individual VMs can require sustained network bandwidth of 1Gbps or more. As VM density increases, this bandwidth requirement can quickly outstrip the capacity of a standard virtual switch, constraining either the number of VMs that can be instantiated on a blade or the performance seen by the users of those VMs.

The final major challenge is a result of evolving traffic patterns within data centers. In traditional client-server data centers, network traffic was predominantly “North-South”: from the Internet, through the core switch and aggregation layer, to the ToR switch and then to the server blades. Within multi-tenant Web 2.0 data centers, however, VM mobility and VM sprawl cause the majority of traffic to be “East-West”: between VMs located on different physical server blades.

This growth in East-West traffic means that high-bandwidth VM-to-VM communication (VM2VM) is mandatory. To isolate and secure VM2VM, however, requires secure tunneling services, running on the virtual switch, that extend beyond the basic Layer 2 features provided by a typical virtual switch. Within a cloud, these ensure that only specific VMs can access the applications and data owned by other VMs. Between clouds, policies are enforced to ensure that data and traffic for one cloud is not visible to another.

Adding to the overall security- and networking-related workload that must now be supported on server blades is the increasing trend towards overlay network technologies, which avoid the 4,094-tunnel limitation of traditional VLANs. Data centers are now adopting the VXLAN and NVGRE protocols, whose 24-bit IDs allow for 16 million tunnels.
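The 16-million figure comes straight from the 24-bit segment ID. As a sketch, here is the 8-byte VXLAN header layout from RFC 7348, with the VNI packed into its 24-bit field:

```python
import struct

FLAG_VNI_VALID = 0x08  # the "I" flag in the VXLAN header

def vxlan_header(vni):
    # 8-byte VXLAN header (RFC 7348): flags(8) / reserved(24) /
    # VNI(24) / reserved(8). The 24-bit VNI is what allows ~16M segments,
    # versus the 12-bit VLAN ID's 4,094 usable values.
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!BBHI", FLAG_VNI_VALID, 0, 0, vni << 8)

def parse_vni(header):
    # Recover the VNI from the last 4 bytes (upper 24 bits).
    return struct.unpack("!I", header[4:8])[0] >> 8
```

A 24-bit ID space holds 2^24 = 16,777,216 values, hence "16 million tunnels."
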


The new extensions to the 6WINDGate networking software provide solutions to these three challenges, delivering a data plane solution that achieves 5x - 10x acceleration for the baseline Layer 2 switching function.


Thanks to this increase in Layer 2 switching performance, data center operators can achieve the increases in VM density that are enabled by on-going improvements in the performance of server processors. They can also deliver high network bandwidth to individual VMs, addressing the performance needs of users running streaming media applications or other bandwidth-hungry workloads.

At the same time, 6WINDGate delivers high performance on the necessary secure tunneling protocols such as IPsec, GRE, NVGRE, VLAN and VxLAN.

Within the virtual switch, the 6WINDGate data plane monitors configuration messages from the virtual switch control plane to its data plane, intercepts the appropriate packets, and processes them within the 6WINDGate fast path, thereby achieving the 5x to 10x performance acceleration.

These high-performance switching, tunneling and security functions are achieved with no changes required to the virtual switch software itself.

6WINDGate is also fully-compatible with OpenFlow and completely transparent to the applications running on the platform, which do not need to be recompiled or re-verified in order to work with this high-performance solution.

We will be demonstrating the use of 6WINDGate to accelerate the standard Open vSwitch, in partnership with Big Switch Networks and Ixia, in our booth number 401 at the Open Networking Summit in Santa Clara, CA from April 15th through 17th. We hope to see you there!

If the lights in your home have ever flickered briefly during a storm—and didn’t then stay out—a lot of Smart Grid technology kicked in between the time the lights blinked out and the few hundred milliseconds later when they came back on again.


During that brief time an operation control center detected the loss of power to your area and redirected power from another substation to compensate, perhaps diverting power from a third source to compensate for the additional load on the second substation. When the break was repaired, the transformer replaced, or the breaker in your local substation automatically reset after a lightning strike, the control center automatically brought your substation back online and rebalanced the loads between all substations. In that way the Smart Grid is said to be self-healing, though humans will always have to repair line breaks and blown transformers.


Figure 1: Electrical power distribution and transmission.


There are two types of substations: primary and distribution (Figure 1). Primary substations work on the supply side, taking power from a variety of primary sources—hydroelectric, solar, wind, geothermal, and nuclear—and putting it out on the grid. This involves synchronizing highly variable inputs such as solar—which is clearly only available during the day—with wind power, which peaks at night. The substations must also regulate the loads on the power sources, which may vary considerably in capacity.


For each primary substation there may be dozens of distribution substations, which work on the demand side, ensuring load sharing between residential, industrial, and transportation end users. When a substation starts nearing its peak capacity it signals the control center to bring other sources online to get it through peak demand, avoiding the ‘rolling blackouts’ that preceded the Smart Grid.
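The peak-demand handoff described above can be sketched as a simple threshold rule. This is a hypothetical model for illustration (the 90% trigger threshold and megawatt figures are invented, not drawn from any utility's practice):

```python
def backup_needed_mw(load_mw, capacity_mw, threshold=0.9):
    # When a substation's load crosses `threshold` of its rated capacity,
    # it signals the control center for enough backup supply to bring the
    # local load back under the threshold; below it, no request is made.
    limit = threshold * capacity_mw
    return max(0.0, load_mw - limit)

# A 100 MW substation at 95 MW load asks the control center to bring
# 5 MW of other sources online to ride through peak demand.
request = backup_needed_mw(95.0, 100.0)
```
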

The Smart Grid works because substations can all communicate with each of the elements under their control, sending that information back to a master control center that controls all the substations. IEC 61850 is the IEC standard for substation automation, replacing a myriad of proprietary protocols whose lack of interoperability delayed the advent of the Smart Grid.


On the Level

There are three different levels in Smart Substation architecture: the Station Level, the Bay Level, and the Process Level. Advantech provides numerous Intel-based IEC 61850 certified Smart Substation solutions in each of these areas. Its UNO-4600 series Substation Automation Computers can operate as HMI/SCADA, Terminal (serial-port) Servers, Protocol or Communication Gateways, Cyber Security Servers (UTM), and Substation/Networking Recorders.


At the Station Level the Advantech UNO-4683 provides the communication gateway between the remote control center and all the environmental monitoring and control devices at the substation; it also provides cyber security for the substation. The UNO-4683 Automation Computer is based on an Intel® Core™ i7 running at 2.0 GHz with 4 GB of DDR3 SDRAM. It provides two RS-232/422/485 isolated serial ports with automatic flow control; 2 x 10/100/1000Base-T and 4 x 10/100Base-T Ethernet ports; and six USB 2.0 ports with three domain I/O expansions.


At the Bay Level (Figure 2) the Advantech UNO-4673A protocol server provides a data gateway between intelligent devices and the station-level controller. The UNO-4673A is based on a 1.66 GHz dual-core Intel Atom processor with 2 GB of DDR2 SDRAM. Sitting on the Ethernet backbone the Advantech UNO-4672 acts as a network recorder and analyzer, passing device data back up to the station level. The UNO-4672 is powered by either an Intel® Pentium® M running at 1.4 GHz or an Intel® Celeron® M at 1.0 GHz, each with 1 GB of on-board DDR DRAM.


Figure 2: Substation automation at the Bay Level.


Finally, at the Process Level either the Advantech UNO-4671A (Intel® Atom™ D510 @ 1.66 GHz) or UNO-4673A (dual-core Intel® Atom™ D510 @ 1.66 GHz) acts as an Intelligent Electronic Device (IED) that continuously monitors the status of transformers, circuit breakers, and switch gears, warning of excessive temperature, vibration, leakage or other issues that could cause device failure.


Getting Smarter

When the lights go out they don’t just blink for everyone—sometimes they go out for hours. The basic design of the electrical power grid is over 100 years old, and it’s only gradually being computerized. Most utilities have begun to automate the restoration process by installing supervisory control and data acquisition (SCADA) systems that monitor and control line reclosers and switches, but the system is still a long way from being completely automated. Smaller cities and other customers are usually connected to their local substation by a single radial feeder. Outages to these feeders are called in by a customer to the control center, which then dispatches a person to the area to manually restore power to customers.

Implementation of automated devices such as SCADA-enabled switches and line reclosers would cut outages. Distribution circuits could also be sectionalized with SCADA-operated devices between each section. Open points that connect to other circuits could be replaced with SCADA-enabled switches. Then in the event of a failure the system could automatically isolate the problem, opening adjacent switches and rerouting power to unaffected sections by closing connections to adjacent circuits.
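The isolate-and-reroute sequence above can be sketched against a toy model of a sectionalized radial feeder. Everything here is hypothetical (section names, switch numbering, and the single normally open tie switch are invented for illustration):

```python
def isolate_and_restore(sections, faulted):
    # Hypothetical radial feeder: sections listed in order from the
    # substation, separated by SCADA-operated switches sw_0..sw_n, with a
    # normally open tie switch to an adjacent circuit at the far end.
    i = sections.index(faulted)
    return {
        "open_switches": ["sw_%d" % i, "sw_%d" % (i + 1)],  # isolate fault
        "fed_from_source": sections[:i],   # upstream stays on the substation
        "fed_from_tie": sections[i + 1:],  # re-fed by closing the tie switch
        "dark": [faulted],                 # only this section awaits a crew
    }

# Fault in section "B" of a four-section feeder: "A" stays fed from the
# substation, "C" and "D" are re-fed from the adjacent circuit.
plan = isolate_and_restore(["A", "B", "C", "D"], "B")
```
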

The Smart Grid is getting smarter, and substation automation is the key element to its success. Advantech already has a wide range of Intel-based products that can provide a complete, automated solution. It’s just a matter of time before the Smart Grid all comes together and your lineman will need to find another line of work.


Solutions in this blog:

Related topics:


Advantech is a Premier member of the Intel® Intelligent Systems Alliance.


John Donovan
Roving Reporter (Intel® contractor), Intel® Intelligent Systems Alliance
Low-Power Design
Follow me on twitter: @jdonovan43

New machine-to-machine (M2M)-based cloud applications for home monitoring can help patients manage their health conditions at home, while giving healthcare providers access to up-to-date patient information. But these professional monitoring devices have consumer-like demands. They need to be easy to set up, connect, and use. They need to protect patients’ sensitive personal health information. And developers of these systems need to be able to quickly respond to market changes with new products, using standards-based platforms and network protocols. Our roundtable discussion explores how the Intel® Intelligent Systems Framework supports remote monitoring systems that deliver the management, connectivity, and security features required for effective patient care. Our roundtable participants are:




Intel Roving Reporter: The market for remote medical monitoring devices is heating up, putting additional pressure on developers to get to market faster with competitive products. How can you help them get there?

Satish Ram, Kontron: Kontron provides a suite of application-ready platforms that are Intel Intelligent Systems Framework (ISF) ready, offering various connectivity, scalability, and security options as well as meeting healthcare-specific compliance requirements (PCI, HIPAA, EN50134, etc.). Our human-machine interface (HMI) and M2M products are examples of such products targeted at the healthcare vertical. These platforms come with a development environment that is already pre-integrated to connect to other ISF-ready systems.



The Kontron M2M Developer Kit is a compact platform that integrates its COM Express® mini Pin-out Type 10 compatible COMe-mTT family of Computer on Modules (COMs) based on the Intel® Atom™ processor E640 1 GHz. The external USB port simplifies use of M2M SDKs, and offers sufficient storage for M2M smart service applications, middleware and OS on the 4 gigabyte (GB) internal MicroSD card. The kit also supports a full range of wireless interfaces, protocols and network topologies.


Mike Ueland, Telit Wireless Solutions: Telit offers an extensive portfolio of quality cellular, short-range, and GNSS modules, available in over 80 countries. By supplying business-scalable products that are interchangeable across families, technologies, and generations, Telit is able to keep development costs low and uniquely protect customers' design investments. Telit provides customer support, design-in expertise through its sales and support offices, and offers a global distributor network of wireless experts in over 30 Telit-designated competence centers that work with customers to develop concepts and ideas to realize their solutions.


Telit offers a form factor family concept, allowing developers to develop one application for different markets. M2M modules belonging to a family have the same form factor and functionalities—the same size and shape, same pin-outs, and same AT command set. Design once, deploy globally. The advantage for remote medical monitoring device developers and electronics manufacturers is that all modules in a family are interchangeable. When the time comes, developers can easily replace family modules with successor products, such as in a switch from 2G to 3G. The unified form factor reduces development effort, time, and costs, protecting investment in core designs.



Steve Popovich, Digi International: Digi provides several easy-to-integrate connectivity products and services used for connecting medical devices and sensors in thousands of hospitals and home healthcare applications. We have commercial medical device customers that create prototype remote monitoring systems in just a few days.


RR: What do designers need to know about connectivity standards—from wireless to cellular—as they develop these devices that may be sold for use anywhere?

Satish Ram, Kontron: Solutions designers need to become familiar with WPAN, WLAN, and WWAN connectivity options and the current adoption trends towards Bluetooth Low Energy for medical devices. In addition, for use of cellular WWAN solutions, designers need to ensure compliance with regulatory certifications such as FCC, CE, PTCRB, GCF or others depending on the country and deployment regions. Also, wireless data services can vary significantly from one country to another as do wireless frequency bands for cellular networks. Kontron can help customers navigate this complex ecosystem and make it easier for deploying solutions quickly.


Mike Ueland, Telit Wireless Solutions: There are a number of requirements to consider, including speed, coverage, deployment plans, costs, data plans and hardware, carrier relations, and the expected product lifecycle. In the majority of cases, the cellular module is the most important component in the process of integrating M2M into electronic devices like those used for remote medical monitoring.


Cellular data transmission speeds range from 2G, where GPRS is rated at a few tens of Kbps, up to the current release of LTE (4G) at 100 Mbps. There are a number of trade-offs: the higher the connection speed you require, the higher the price of the module and the associated connectivity data plans. Today, 3G enjoys nearly as much geographic and population coverage as 2G, and it will soon have a larger footprint than GSM as U.S. carriers start sunsetting their 2G service. Therefore, 3G will remain the long-term "everywhere" cellular coverage for many more years.


Steve Popovich, Digi International: One of the cool benefits of our iDigi connectivity architecture is that it supports all of the domestic and international wireless standards, with little or no wireless experience required of the developer. Developers using our iDigi architecture need only create the business application, without concern for the details of which wireless technology a deployment needs. For example, we have healthcare customers that create a single business application incorporating several different wireless connectivity technologies to economically support multiple unique end-customer use cases.


RR: Since these remote devices are being used by consumers, not healthcare or technical professionals, what are some strategies you advise developers to use to make sure these are easy to set up, connect, maintain, and use?

Satish Ram, Kontron: Focusing on ease of use and ‘ready out of the box’ solutions is key for consumer adoption. Leveraging frameworks such as ISF will help developers ensure seamless integration to various subsystems and automate much of the functionality to improve user experience. As an example, home healthcare devices should automatically detect new devices and ‘pair’ with them without any user intervention. So, the consumer can purchase a BLE weight scale that auto-connects and displays the data on his/her smartphone with a simple downloaded app.


Mike Ueland, Telit Wireless Solutions: There are development tools that can make it easier for developers to design applications. Many firms have been successful using the Agile software development process. The Agile process promotes adaptive planning and evolutionary development and design, with an iterative approach that encourages faster responses and flexibility to change. The Agile manifesto establishes that developers, in their approach to application development, limit the amount of work in progress to reduce the impact of sudden business changes.


Customer collaboration and the user experience with the application are critical in the development process; therefore, continuous customer and stakeholder involvement is required. To deliver on this key aspect, Agile establishes smaller, faster development cycles that deliver functioning software sooner, which in turn limits the total body of work in progress. This has the effect of testing hypotheses about required functionality much earlier than other methods, allowing developers and business sponsors to quickly prove or disprove them. The bottom line is that developers need to be strategic about developing applications for the consumer.


Steve Popovich, Digi International: Our iDigi Manager Pro services provide an almost infinitely scalable connectivity platform for our customers. They enable our customers to automate the remote configuration, commissioning, and ongoing management of millions of devices. Via our iDigi APIs, developers can also elegantly integrate our features into custom applications.


RR: Security is always an issue when discussing the management of personal health information. How are you helping developers address this in their products?

Satish Ram, Kontron: Security is of paramount importance in many applications, and especially so in healthcare applications. Kontron’s products leverage the ISF security framework components, including Trusted Platform Module (TPM) and WindRiver Secure Remote Management (SRM) software, to ensure data privacy and security.


Mike Ueland, Telit Wireless Solutions: Cellular data communication is inherently very secure—substantially more so than traditional data communication. In addition to the security already built into cellular data, we provide the remote medical monitoring device integrator with a comprehensive set of IP protocols designed to enhance this security. A key element of this toolkit is the embedded SSL protocol available in all our modules. The use of SSL over cellular IP data, augmented by best practices in two-way challenges and other identification, authentication and security measures ensures that personal health information is communicated in the most secure manner possible.
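The host-side counterpart of the embedded SSL described above can be sketched with Python's standard `ssl` module. This is an illustrative configuration, not Telit's implementation; modern deployments use TLS, and the specific settings below are the sort an integrator would pin down before shipping a monitoring device:

```python
import ssl

def monitoring_client_context():
    # Assumed baseline for a device-to-backend TLS connection: refuse
    # legacy protocol versions, verify the server's identity, and require
    # a certificate chaining to a trusted root.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no legacy SSL/TLS 1.0/1.1
    ctx.check_hostname = True                     # verify server identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # require a trusted cert
    return ctx

ctx = monitoring_client_context()
# ctx.wrap_socket(sock, server_hostname=...) would then secure the upload.
```
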


Steve Popovich, Digi International: Digi provides several security options across both wired and wireless products and services. For example, even our most basic wireless modules integrated inside medical devices support strong encryption methods that prevent data from being read by network analyzers. For security purposes, medical device and patient identification information is only combined beyond the connectivity services platform.




Learn More


Solutions in this blog:

•            Intel Intelligent Systems Framework

•            Kontron M2M Smart Services Developer Kit

•            Telit Wireless Solutions HE910 Family

•            Telit Wireless Solutions GE910 Family

•            Digi International x2E Gateway

•            Digi International iDigi Manager Pro



Related topics:

•            Medical - Top Picks (blogs, white papers, and more)

•            Connectivity - Top Picks (blogs, white papers, and more)



Kontron is a Premier member of the Intel® Intelligent Systems Alliance.

    Contact Kontron>>

Digi International is an Associate member of the Alliance.

    Contact Digi>>

Telit Wireless Solutions is an Associate member of the Alliance.

    Contact Telit>>


Cheryl Coupé

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Freelance technology writer & editor (scribo.net)
