
The elevator pitch for Network Functions Virtualization (NFV) is simple: why make massive investments in custom equipment when standard servers can do the job – and with far better flexibility? The reality, of course, is much more complicated. To deploy an NFV solution, developers must answer questions including:

 

  • Which servers should I use?
  • What is the right operating system (OS) and virtualization solution?
  • How do I virtualize switching and packet processing?
  • How can I ensure sufficient performance, availability, reliability, and cost-efficiency?

 

Two new NFV platforms from Red Hat* and Wind River* address these questions with highly-integrated OpenStack* packages that are optimized for high performance and rapid deployment. Each platform combines a variety of software and hardware from the Intel® Internet of Things Solutions Alliance – let’s start by taking a look at the NFV platform from Red Hat, a General member of the Alliance.

 

Red Hat*/6WIND* NFV Foundation

As shown in Figure 1, Red Hat’s offering is built on Red Hat Enterprise Linux* and Red Hat Enterprise Virtualization. These components provide the fundamental reliability and efficiency needed to decouple network functions from the underlying hardware. Red Hat Enterprise Linux OpenStack* Platform controls the overall solution and provides an interface for a wide variety of network orchestration software tools.

 


Figure 1. The Red Hat*/6WIND* NFV platform is optimized for reliability and performance.

 

To ensure optimal performance, the solution incorporates the 6WINDGate* networking stack from Affiliate member 6WIND. Compared to standard Linux* distributions, this software delivers a 10x performance boost to both data plane applications and the virtual switch. For the data plane, 6WINDGate uses the Data Plane Development Kit (DPDK) library to process network packets outside the Linux kernel. This fast-path processing can deliver over 10 million packets per second per core on the latest Intel® Xeon® processors. (See the Red Hat/6WIND brief for more details.)
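
To make the fast-path idea concrete, below is a minimal sketch of a DPDK poll-mode receive loop. It is not 6WINDGate code – the port number, ring sizes, and the empty processing step are placeholder assumptions, and error handling is kept to a minimum – but it shows the essential pattern of polling the NIC directly and bypassing the kernel networking stack entirely.

    /* Minimal DPDK receive-loop sketch (generic illustration, not 6WINDGate).
     * Assumes one NIC port is already bound to a DPDK-compatible driver.
     * Build against libdpdk, e.g.: gcc main.c $(pkg-config --cflags --libs libdpdk) */
    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NUM_MBUFS   8191
    #define MBUF_CACHE  250
    #define RING_SIZE   1024
    #define BURST_SIZE  32

    int main(int argc, char **argv)
    {
        uint16_t port_id = 0;                 /* assumed: first bound port */
        struct rte_eth_conf port_conf = {0};  /* default port configuration */
        struct rte_mbuf *bufs[BURST_SIZE];

        if (rte_eal_init(argc, argv) < 0)
            return EXIT_FAILURE;

        /* Packet buffers live in a hugepage-backed memory pool. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
                NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                rte_socket_id());
        if (pool == NULL)
            return EXIT_FAILURE;

        /* One RX queue and one (unused here) TX queue, then start the port. */
        if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0 ||
            rte_eth_rx_queue_setup(port_id, 0, RING_SIZE,
                    rte_eth_dev_socket_id(port_id), NULL, pool) != 0 ||
            rte_eth_tx_queue_setup(port_id, 0, RING_SIZE,
                    rte_eth_dev_socket_id(port_id), NULL) != 0 ||
            rte_eth_dev_start(port_id) != 0)
            return EXIT_FAILURE;

        for (;;) {
            /* Poll the NIC directly: no kernel, no interrupts, no copies. */
            uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb_rx; i++) {
                /* ...classification or forwarding would happen here... */
                rte_pktmbuf_free(bufs[i]);    /* this sketch simply drops */
            }
        }
        return 0;
    }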

 

For switching, the 6WINDGate Open vSwitch Acceleration module runs through 6WIND’s fast path architecture. It delivers 200 Gbps of throughput transparently, with no need to recompile the standard Open vSwitch or change anything in the OpenStack platform, the Kernel-based Virtual Machine (KVM)/QEMU hypervisor, or the OpenFlow controller. (The 200 Gbps performance was recently verified on an HP ProLiant server – see 6WIND’s product brief for details. HP is an Associate member of the Alliance.)

 

To demonstrate their platform’s value, Red Hat and 6WIND recently created a vEPC orchestration proof-of-concept (PoC), illustrated in Figure 2. This PoC runs on PowerEdge* C6220 servers from Premier member Dell. Other collaborators on the PoC include Intel, Cyan, and Connectem. It is worth noting that the Dell PowerEdge C6220 offers a number of carrier-grade features, such as certification to NEBS Level 3 and ETSI standards (including extended operating temperature range and shock resistance) and DC power supplies. Thus, the PoC comprises deployment-ready components from the hardware all the way up to the service orchestration layer.

 


Figure 2. Key elements of the vEPC PoC Stack.

 

This PoC nicely demonstrates how developers can deal with the complexities of NFV. I strongly recommend reading the related white paper on the Intel® Network Builders site – it describes the challenges and the solutions of vEPC in considerable detail and is well worth your time.

 

Wind River* Carrier Grade Communications Server

Wind River has also developed its own fully integrated NFV solution, the Wind River* Carrier Grade Communications Server. As illustrated in Figure 3, Wind River has built this platform up over time, offering progressively higher levels of functionality over the last couple of years. The company’s latest offering integrates Wind River Open Virtualization with Wind River Linux, an optimized real-time Kernel-Based Virtual Machine (KVM++), carrier grade enhancements to OpenStack, and DPDK-accelerated vSwitch technologies. The solution is designed specifically for NFV deployment and to integrate seamlessly with existing networks.

 


Figure 3. Wind River’s NFV platform builds on previous offerings.

 

The most obvious benefit of the platform is its high level of integration. But what really makes the platform noteworthy is its emphasis on carrier grade reliability. According to Wind River, the software supports “six 9s” reliability, translating to downtime as low as 30 seconds a year. It is hard to overstate the importance of this claim – an NFV design is only useful if it delivers carrier-grade reliability, and working out reliability issues can be a huge development hurdle. Thus, the availability of six 9s reliability in a pre-integrated NFV software platform is a major asset.
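
As a quick back-of-the-envelope check on that claim, “six 9s” means 99.9999 percent availability, which works out to roughly half a minute of unplanned downtime per year:

    (1 − 0.999999) × (365 × 24 × 3,600) seconds ≈ 31.5 seconds of downtime per year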

 

The Wind River Carrier Grade Communications Server is optimized for and running on HP ProLiant servers. Given the platform’s emphasis on carrier-grade reliability, it is worth noting that the HP ProLiant line itself offers carrier grade features, including NEBS and ETSI certifications and extended lifecycles. It is also worth noting that 6WIND used this same HP server line to demonstrate its Open vSwitch performance. This fact illustrates the fundamental value of NFV, namely the ability to use standard servers and software with a high level of interoperability.

 

Returning to Wind River’s NFV platform, the reliability of the software is complemented by its high performance. Particularly notable is the accelerated virtual switch software, which delivers 20x the performance of the standard Open vSwitch with no hardware acceleration required. This switch can deliver 12 million packets per second to guest virtual machines (VMs) using only two processor cores on an industry-standard server platform, in a real-world use case involving bidirectional traffic. What’s more, this cutting-edge performance is achieved using up to 33% fewer CPU resources than alternative solutions – see Wind River’s press release for details.
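
To put those numbers in perspective, assume (purely for illustration – the press release does not state a clock speed) a 2.4 GHz core clock. Each of the two cores then handles about 6 million packets per second, leaving a budget of only a few hundred clock cycles per packet:

    12,000,000 packets/s ÷ 2 cores = 6,000,000 packets/s per core
    2,400,000,000 cycles/s ÷ 6,000,000 packets/s = 400 cycles per packet

That tight budget helps explain why kernel-bypass techniques such as DPDK are central to these platforms.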

 

A Foundation for the Future

The Red Hat/6WIND and Wind River NFV platforms address some of the most pressing issues facing telecom developers today. By providing highly integrated foundations for virtualized equipment, these platforms promise to dramatically lower time-to-market while delivering industry-leading performance, efficiency and reliability. And it’s clear that these companies are fully committed to NFV, meaning that we can expect even greater integration and higher levels of performance in the future. I can’t wait to see what’s next!

 


Dell OEM is a Premier member of the Intel® Internet of Things Solutions Alliance. Hewlett-Packard and Wind River Systems are Associate members of the Alliance. 6WIND SA is an Affiliate member and Red Hat is a General member.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Internet of Things Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

As the march towards Network Functions Virtualization (NFV) continues, many telecom equipment manufacturers (TEMs) are revisiting the role of Deep Packet Inspection (DPI). On one hand, NFV presents new opportunities to optimize performance and revenue with DPI. On the other hand, the complexity of NFV can make these benefits harder to achieve.

 

In this blog, I will highlight some of the challenges of DPI in an NFV environment. I will also examine a reference architecture that uses NFV solutions from the Intel® Internet of Things Solutions Alliance. I will show how this architecture overcomes key design challenges and realizes the benefits of NFV.

 

What Makes Virtualized DPI Different?

As I discussed in my last blog, NFV has the potential to improve network flexibility and cost structures, allowing operators to innovate more quickly and efficiently. However, the transition from fixed-purpose hardware to virtualized applications requires careful attention to availability, performance, and scalability.

 

The scope and magnitude of these challenges is motivating TEMs to seek third-party software for their virtualized designs. Heavy Reading recently issued a detailed study on virtualized DPI which found that:

 

  • More than 90 percent of respondents said ETSI NFV would affect next-generation product design, and more than half said the availability of standardized virtualized network function components (VNFCs) would likely lead them to source more third-party components, including the proposed ETSI DPI VNFC.
  • Two thirds of vendors now believe DPI is a must-have technology.
  • The proportion of vendors choosing to source DPI from a third party is gradually rising, and a majority of those doing so prefer to use a pure-play supplier of DPI components.

 

In response to these trends, Associate Alliance member Wind River and Affiliate members Tieto and Qosmos recently teamed up with Intel to build an NFV DPI reference architecture. This design demonstrates how to overcome the common challenges associated with virtualized DPI (vDPI). The team has written up a detailed white paper explaining their work. I will touch on just a few points here; I recommend reading the paper for more details. And if you are new to the topic, check out the Qosmos/Intel NFV backgrounder for a good intro.

 


Figure 1. The reference architecture shows how to implement virtualized DPI.

 

Carrier-Grade Availability

Availability is the cornerstone of any telecom solution. Thus, the reference architecture starts with high-reliability components like carrier-grade Wind River* Linux*. These carrier-grade components are coupled with high-availability design techniques such as redundant virtual machines (VMs) with failover capabilities.

 

Of course, reliability and redundancy are hardly new concepts for telecom design. What may be less obvious is the way performance can impact availability. Virtualized designs must contend with issues like context switching that can impact availability. Thus, NFV designs must maximize performance not only to achieve sufficient throughput but also to ensure availability.

 

Optimized Performance

The reference design tackles performance issues with a variety of design techniques. These start at the hardware level, where the Intel® Xeon® processors, Ethernet controllers, and Ethernet switch are all optimized for virtualization – and in the case of the switch, specifically optimized for virtualized networking. (More background on the switch here.) The key to this optimization is minimizing hypervisor overhead. Context switches are computationally expensive, and introduce unwelcome timing variability. Thus, the Intel hardware has been equipped with a variety of virtualization accelerators that offload hypervisor tasks, leading to considerably improved performance and determinism.

 

Building on this hardware foundation, the reference architecture uses the Data Plane Development Kit (DPDK) to accelerate packet processing across the virtual switch and within various applications. DPDK uses the Data Direct I/O (DDIO) capability of Intel Xeon processors to make data packets directly available in the CPU cache for processing, which avoids time-consuming memory read operations. I haven’t seen performance specs for this reference architecture, but similar designs can achieve 200 Gbps through the Open vSwitch – an impressive number, to say the least.

 

Moving up the software stack, the Tieto IP (TIP) Stack is also optimized for virtualized environments. For example, the TIP Stack implements Single Root I/O Virtualization (SR-IOV), which eliminates hypervisor involvement in data movement by giving each VM independent memory, interrupts, and DMA streams – all of which can be directly accessed by the Intel® Ethernet controllers.
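
On a standard Linux host, SR-IOV virtual functions (VFs) are typically created through the kernel’s sriov_numvfs interface before being handed to individual VMs. The sketch below shows that step in C; the interface name and VF count are placeholder assumptions, and this is not part of the TIP Stack itself – just a generic illustration of how the VFs that a VM accesses directly come into being.

    /* Hedged illustration: request SR-IOV virtual functions from the PF driver
     * via sysfs. Interface name and VF count are placeholders; requires root. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *path = "/sys/class/net/enp3s0f0/device/sriov_numvfs";
        const int num_vfs = 4;                /* assumed number of VFs */

        FILE *f = fopen(path, "w");
        if (f == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }
        /* Writing N asks the physical-function driver to spawn N VFs,
         * which can then be assigned to guest VMs. */
        if (fprintf(f, "%d\n", num_vfs) < 0) {
            perror("fprintf");
            fclose(f);
            return EXIT_FAILURE;
        }
        fclose(f);
        printf("Requested %d SR-IOV VFs via %s\n", num_vfs, path);
        return 0;
    }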

 

Zooming a bit further out, there is the larger question of how a system can minimize VM-to-VM communications through the hypervisor. And here at last we hit upon the importance of DPI in NFV. By deploying DPI in front of incoming traffic, flows can be tagged to convey the protocol and application ID to other nodes (Figure 2). By identifying the needed processing up front, this approach eliminates the need to pass flows between VMs. In addition, it allows nodes like the Traffic Detection Function (TDF) to be stateless, simplifying their design.

 


Figure 2. DPI can be used to tag incoming flows.
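
To make the classify-once-and-tag idea concrete, here is a hypothetical sketch in C. The structures, IDs, and the toy classification rule are all invented for illustration – this is not the Qosmos ixEngine API – but it shows how a single DPI pass can produce a compact tag that stateless downstream nodes act on without re-inspecting payloads.

    /* Conceptual sketch of "classify once, tag the flow" (illustrative only). */
    #include <stdint.h>
    #include <stdio.h>

    struct flow_key {                 /* classic 5-tuple identifying a flow */
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  ip_proto;
    };

    struct flow_tag {
        uint16_t l7_protocol_id;      /* e.g. a numeric code for HTTP, SIP, RTP */
        uint16_t application_id;      /* e.g. a specific video or VoIP service */
    };

    /* Stand-in for the DPI engine: inspects the first packets of a flow once. */
    static struct flow_tag classify_flow(const struct flow_key *key,
                                         const uint8_t *payload, size_t len)
    {
        (void)payload; (void)len;     /* real DPI would parse the payload */
        struct flow_tag tag = { .l7_protocol_id = 80, .application_id = 1001 };
        if (key->dst_port == 5060) {  /* toy rule: treat as SIP signalling */
            tag.l7_protocol_id = 5060;
            tag.application_id = 2002;
        }
        return tag;
    }

    /* Downstream node (e.g. a TDF) only looks at the tag -- it stays stateless. */
    static void apply_policy(const struct flow_tag *tag)
    {
        printf("proto=%u app=%u -> apply matching QoS/charging rule\n",
               (unsigned)tag->l7_protocol_id, (unsigned)tag->application_id);
    }

    int main(void)
    {
        struct flow_key key = { .src_ip = 0x0A000001, .dst_ip = 0x0A000002,
                                .src_port = 40000, .dst_port = 5060,
                                .ip_proto = 17 };
        struct flow_tag tag = classify_flow(&key, NULL, 0);
        apply_policy(&tag);
        return 0;
    }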

 

The reference architecture implements this flow classification with the Qosmos DPI VNFC. Based on the popular Qosmos ixEngine*, this VNFC enables many functions, including quality of service (QoS) management, traffic shaping, throttling, and gating. In addition to supporting high availability and performance, the DPI VNFC is critical to enabling scalability, as we will see in our final section.

 

Flexibility and Scalability

One of the main selling points for NFV is the ability to rapidly scale and adapt to new needs. DPI is key here. Information extracted by a DPI VNFC can be passed to the orchestrator/analytics system, which in turn can trigger VMs to scale capabilities up or down as needed. Of course, this requires that the DPI engine itself be scalable. In particular, it must be possible to add new instances of the VNFC without dropping packets or disrupting existing DPI instances.

 

One way the reference architecture addresses these issues is by separating DPI from the Packet Data Network Gateway (PDN-GW). This simplifies DPI scaling and enables increases in capacity without impacting the PDN-GW.

 

It is also worth noting that each DPI VM can be independently configured to meet specific needs. For example, a particular DPI instance could be configured to classify 100 selected apps at 40 Gbps with a particular latency window. To support such a configuration, the orchestrator can provide an appropriate number of CPU cores and memory. A different VM with different needs might receive greater or lesser resources.

 

Powering NFV with DPI

Overall, the reference architecture demonstrates three key points:

 

  • DPI has the potential to significantly extend the benefits of NFV.
  • The challenges of implementing DPI in an NFV context can be mitigated with third-party software.
  • Collaboration between third-party NFV software suppliers is essential for addressing the many complexities of NFV.

 

I expect to see much more progress on NFV software in the coming months. I look forward to bringing you the latest coverage as events unfold!

 

 


Wind River Systems is an Associate member of the Intel® Internet of Things Solutions Alliance. Tieto and Qosmos are Affiliate members of the Alliance.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Internet of Things Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

The telecom industry is clearly excited about Network Functions Virtualization (NFV), but this excitement has been accompanied by major concerns. One big worry is the risk involved. To realize the benefits of NFV, the industry must move to a mixed supplier environment – one where systems from different providers work together smoothly. Service providers are understandably skeptical about this possibility. Given the history of the industry, how can they be sure that multi-vendor solutions can achieve interoperability?

 

Wind River is answering this question with its Titanium Cloud ecosystem. The ecosystem is designed to ensure interoperability with the Wind River* Titanium Server, a highly integrated NFV platform (Figure 1). The ecosystem includes hardware partners like Kontron and HP as well as software providers like Brocade, GENBAND, Metaswitch Networks, Nakina Systems, and Overture. Because these hardware and software products have been pre-integrated and validated with Wind River’s NFV solutions, service providers and telecom equipment manufacturers (TEMs) can confidently select them for deployment in live networks.

 


Figure 1. Wind River* Titanium Server is highly integrated.

 

I recently had a conversation about the program with Charlie Ashton, senior director of business development for networking solutions at Wind River. Charlie pointed out that a full NFV solution requires many components, like Operations Support Systems (OSS) and service orchestration (Figure 2). He explained that the goal of the program is to provide a complete, end-to-end stack that provides all of these components – and gives service providers and TEMs the ability to choose from multiple vendors.

 


Figure 2. NFV requires many components.

 

To achieve this goal, Wind River is working closely with its partners to ensure interoperability.

On the software side, engineers from Wind River and its partner companies are working together closely to validate correct operation on the NFV platform, and to optimize the software where possible. Similarly, Wind River is working closely with its hardware partners to validate and optimize the NFV platform on the target hardware.

 

The ecosystem has already lined up an impressive array of hardware. For example, Wind River has pre-validated the Kontron Symkloud MS2910, which is designed with 10G OpenFlow redundant switching. This high-density 2RU modular platform packs multiple Intel® processors with dedicated memory and storage, and is optimized for stacking multiple platforms for massively scalable virtual applications. Notably, the Symkloud platform can share virtual resources across physical nodes without additional management software. Kontron recently collaborated with fellow Titanium Cloud members GENBAND and Wind River to demonstrate an NFV-enabled Session Border Controller.

 

Wind River also validated and optimized its software for the HP ProLiant DL380p and DL360p servers, with the BL460c to follow. HP recently collaborated with Brocade and Wind River to create a series of proofs of concept (PoCs) targeted at ETSI NFV’s Reliability and Availability Work Group. These demos featured seven instances of Brocade’s Vyatta vRouter 5400 running in different configurations on HP DL360p rack mount servers.

 

The collaboration between HP and Wind River also extends to the newly announced HP OpenNFV Program. This program is designed to help the telecommunications industry accelerate innovation and launch services faster, more easily and more cost-effectively. There is considerable overlap between the membership of HP and Wind River’s programs – HP’s partners include not only Wind River but also 6WIND, Brocade, GENBAND, Intel, the Israel Mobile & Media Association, Mellanox Technologies, Spirent, and SK Telecom.

 

As part of their partnership, HP and Wind River announced plans to develop NFV solutions that combine Wind River Titanium Server with HP Helion OpenStack*. This combination of technologies is expected to provide communications service providers (CSPs) with an open-source-based cloud platform that meets reliability requirements and accelerates NFV deployments.

 

In the midst of all of these announcements, it is worth noting that Wind River Titanium Server is now commercially available, so you can put the platform to work immediately. It is also worth noting that the Wind River NFV platform was previously known as Wind River* Carrier Grade Communications Server (Wind River* CGCS). If you’ve been scratching your head wondering why you hadn’t heard of the platform before, now you know – it has a new name!


I also think these announcements are noteworthy because they strongly suggest that NFV is transitioning from an exciting idea to a solid technology. It will be interesting to see how the industry continues to develop and mature in the coming months.

 


Kontron is a Premier member of the Intel® IoT Solutions Alliance. Hewlett-Packard and Wind River Systems are Associate members. 6WIND is an Affiliate member.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® IoT Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

As high-performance embedded computing (HPEC) becomes a mainstay of military computing, the industry is increasingly interested in adapting Linux cluster architectures from the IT world. These architectures have obvious appeal for their low cost and high performance, but they are not generally designed for deterministic performance. Thus it is not clear that these architectures are suitable for real-time defense applications.

 

Members of the Intel® Internet of Things Solutions Alliance (Intel® IoT Solutions Alliance) have been hard at work answering this question. For example, Associate member GE Intelligent Platforms recently published an excellent white paper that investigates interrupt response times and Message Passing Interface (MPI) latencies using OpenMPI on three different versions of Linux*. The Linux builds were provided by Concurrent Real-Time and Red Hat (both General members) and are all based on Red Hat* Enterprise Linux* 6.3:

  • Standard Red Hat* Enterprise Linux* 6.3
  • Red Hat* Enterprise MRG Realtime
  • Concurrent RedHawk Linux*

 

 

All three kernels were tested on the GE-IP DSP280, a dual-socket, eight-core platform based on the Intel® Core™ i7-2715QE processor. To characterize these platforms, a series of measurements was made with and without a background CPU load. The tests found that properly tuned systems could achieve interrupt response latencies of less than 10 µs and MPI latencies of around 1 µs, which is sufficient for many embedded applications.

 

Even standard Red Hat Enterprise Linux met soft real-time criteria with some tuning, such as disabling hyperthreading and isolating the real-time application on a specific core. However, interrupt latencies could still exceed 120 µs, making this standard distribution unsuitable for hard real-time applications.

 

In contrast, GE achieved highly deterministic behavior with the other flavors of Linux. For example, it got consistent sub-10 µs interrupt response times on Concurrent RedHawk Linux by applying real-time priority to the critical application, and by “shielding” the core running that application (see Figure 1). Red Hat* Enterprise MRG Realtime was not far behind, with optimized response times below 15 µs. These fast, reliable response times are an excellent fit for hard real-time applications.

 


Figure 1. Concurrent RedHawk Linux* can provide real-time determinism. Note that all results for the “CHRT+CORE3” optimization (light blue) are under 10 µs.
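
For readers who want to experiment with similar tuning on their own Linux systems, the sketch below shows the two host-side knobs described above – pinning a process to a shielded core and giving it a real-time scheduling class – using standard Linux calls. The core number and priority are placeholder assumptions (core 3 simply echoes the paper’s “CHRT+CORE3” label), and this is a generic illustration rather than GE’s benchmark harness; full shielding also involves steps such as the isolcpus= boot parameter. Root privileges are required.

    /* Pin the current process to an isolated core and switch to SCHED_FIFO. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 1. Pin this process to CPU core 3 (assumed to be shielded/isolated). */
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(3, &mask);
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return EXIT_FAILURE;
        }

        /* 2. Switch to the SCHED_FIFO real-time class at a high priority. */
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return EXIT_FAILURE;
        }

        printf("Running with SCHED_FIFO priority %d on core 3\n",
               sp.sched_priority);
        /* ...latency-critical work would run here... */
        return 0;
    }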

 

Interestingly, MPI latencies were not a major problem on any of the kernels. Simply assigning real-time priority to the critical application produced low latencies and low jitter on all three versions of Linux. This leads to a second question. If OpenMPI is good enough for hard real time, should developers consider other options?

 

As it happens, GE has another great white paper on this very topic. This paper compares OpenMPI to GE’s AXISFlow in an impressively detailed and even-handed analysis, and summarizes the pros and cons of both options on several key criteria:

 

Programming – GE argues that OpenMPI is overly complex, with 300+ APIs that may have hidden side effects. In comparison, AXISFlow has only 42 APIs for a user to learn. On the other hand, OpenMPI’s main APIs are conceptually simple, and they hide many details from the developer. This makes it easier to get started than with AXISFlow, which exposes more interfaces to the developer.
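
To illustrate how little code the core OpenMPI APIs require, here is a minimal ping-pong latency sketch. The message size and iteration count are arbitrary, and this is a generic example rather than the benchmark GE used; run it across two ranks (for example, mpirun -np 2 ./pingpong).

    /* Minimal MPI ping-pong latency sketch (generic, not GE's benchmark). */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 10000
    #define MSG_BYTES 64

    int main(int argc, char **argv)
    {
        char buf[MSG_BYTES] = {0};
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {                       /* rank 0 sends, then waits */
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {                /* rank 1 echoes it back */
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("approximate one-way latency: %.2f us\n",
                   (t1 - t0) / ITERS / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }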

 

Hardware Mapping – OpenMPI was designed for homogenous HPC systems with one MPI task per processor. AXISFlow was designed for heterogeneous embedded systems using multiple threads of execution on each core and multiple interconnects between CPUs. Thus, GE sees AXISFlow as a better fit for HPEC systems.

 

Performance – According to GE’s tests, OpenMPI offers lower latency for smaller message sizes and when there is no conflict with other CPU tasks. In contrast, AXISFlow outperforms in multi-tasking applications that need determinism (see Figure 3).

 


Figure 3. Relative performance of the MPI alternatives depends greatly on the conditions at hand.

 

Distributions – As the name says, OpenMPI is an open standard and is thus available for free – or with paid support from a variety of vendors. In contrast, AXISFlow is proprietary.

 

This last point deserves a bit of exploration. OpenMPI is popular not only in the IT world but increasingly in embedded applications. GE itself just added OpenMPI to its AXIS 6.0 design suite, which already supported AXISFlow. As another example, General member Mercury Systems supports the API in its Performance Enhanced OpenMPI/OFED.

 

On the other hand, there are plenty of popular proprietary solutions, like Curtiss-Wright Continuum IPC and the Gedae Software Development Platform. (Curtiss-Wright and Gedae are General members of the Alliance.) A full analysis of HPEC architectures is not complete without careful consideration of these alternatives – something that is well beyond the scope of this blog.

 

Having said that, I think the overall picture is clear: Linux clusters are unquestionably suitable for military HPEC applications. This is an important development, because it allows embedded developers to leverage the cost and performance advantages of these architectures. I have no doubt that we will see IT-like Linux clusters continue to make inroads into military applications as the HPEC market matures.

 

 

GE Intelligent Platforms is an Associate member of the Intel® IoT Solutions Alliance. Gedae, Inc., Concurrent Real-Time, Curtiss-Wright Controls Defense Solutions, Mercury Systems, Inc., and Red Hat are General members of the Alliance.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® IoT Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Digital signage is already a vital tool for retailers, boosting sales by up to 33 percent. Now businesses can make their signage even more effective with Intel® Retail Client Manager (Intel® RCM). This updated content management system (CMS) adds features including:

 

  • Remote Control, for browser-based management of both ad content and signage hardware
  • Audience Analytics, which enables signs to sense their audience, deliver custom messages, and measure the results
  • Localization, with new support for Spanish, Japanese, Russian, Simplified Chinese, German and Portuguese

 

These new features can help boost sales, cut costs, and simplify operations. Best of all, it’s easy to get started with Intel RCM thanks to support from Arrow, Avnet, and Seneca, all Associate members of the Intel® Internet of Things Solutions Alliance (Intel® IoT Solutions Alliance).

 

Before we get into the new features, a quick review of Intel RCM is in order. As shown in Figure 1, the basic components include:

 

  • Intel® RCM Campaign Manager, for creating and updating content
  • Intel® RCM Service, which distributes content and stores all related information (such as play history)
  • Intel® RCM Player, which displays content on the screen

 

Together, these components enable non-technical personnel to create, distribute, and manage multiple campaigns across digital signage, kiosks, point-of-sale (POS) terminals, and other digital touch points. (See our earlier coverage for more details.)

 


Figure 1. Intel RCM enables distribution across a network.

 

With that background in mind, let’s take a peek at the new features:

 

Remote Control

The new Remote Control interface gives operators a number of “in-band management” capabilities. Most notably, they can watch what is playing on up to six screens in real time (Figure 2). Operators can also remotely power off or restart a media player, or take direct control of any player. For platforms equipped with Intel® Active Management Technology (Intel® AMT), operators can also exercise “out-of-band management” functions like waking up a device or resetting the BIOS.

 


Figure 2. Remote Control employs a browser-based interface.

 

These remote management tools can be a major money saver. According to Global Retail Insights (an IDC Company), remote management can reduce support costs by approximately $400 per device per year. What’s more, these tools can reduce downtime by enabling remote technicians to troubleshoot malfunctioning screens – thus ensuring that audiences get maximum exposure to ad campaigns.

 

Intel® RCM Audience Analytics

One of the main advantages of digital signage is its flexibility. A given sign can support many different campaigns, switching between campaigns as needed to meet various business goals.

 

The new Audience Analytics feature in Intel RCM takes full advantage of this flexibility by allowing signs to adapt to the audience. When a sign is paired with a digital camera, Audience Analytics can detect audience demographics like gender and age, allowing digital touch points to deliver the most relevant message to the audience at hand. For example, a sign could advertise sport drinks to young males and diet soda to mature women.

 

Just as importantly, Audience Analytics enables a sign to measure the success of each campaign. Specifically, the software can measure interactions including dwell time, time of day, and viewed content. Figure 3 illustrates how this data can be used to evaluate the success of different campaigns.

 


Figure 3. Audience Analytics can measure views and dwell time.

 

It is worth noting that most of the new capabilities in Intel RCM were previously available in separate product offerings. For example, the features of Audience Analytics were offered in the Intel® Audience Impression Metrics Suite (Intel® AIM Suite). However, Intel RCM better integrates these features, improves the user experience, and makes signage networks easier to control, analyze, and maintain.

 

Getting Started with Intel® RCM

Intel RCM is available from Arrow, Avnet, and Seneca, as well as a variety of other distributors. In addition to offering Intel RCM as stand-alone software, several distributors are pre-packaging it with media players for easier deployment. A good example of this latter approach is the Seneca HDN illustrated in Figure 4. This small form factor media player offers a turnkey signage solution with features including:

 

  • High-performance, low-power 3rd or 4th generation Intel® Core™ processors
  • Support for Intel® AMT (and thus support for the full suite of Remote Control features)
  • Front USB port to expand I/O capability and simplify servicing
  • Serial port for central management and monitor control
  • Enhanced thermal efficiencies for extreme environments

 


Figure 4. The Seneca HDN is a turnkey media player.

 

Many other players have been bundled with Intel RCM, or certified for use with Intel RCM. For a full list, see the Solutions Directory. Here you will find solutions for small signs, large signs, and even multi-screen installations. And as always, don’t hesitate to contact me for more details!

 


 

Arrow - North America, Avnet - North America, and Seneca are Associate members of the Intel IoT Solutions Alliance.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® IoT Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Secure software design depends on six key strategies. Wind River just published an article that explains these strategies and provides advice on how to implement them, citing practical applications of Wind River VxWorks, Wind River Hypervisor, and the 3rd generation Intel® Core™ processor family.

 


 

For more articles like this, subscribe to the Embedded Innovator. Subscribers receive a bi-annual magazine and quarterly newsletter with the latest industry trends and design ideas. For more on securing connected devices, see intel.com/go/embedded-security.

 

 

 

Wind River is a Premier member of the Intel® Intelligent Systems Alliance.

 

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Testing for security flaws is difficult because it fundamentally differs from the functional testing performed on most applications. Most functional testing involves “positive testing” to ensure correct responses to expected inputs. In contrast, security testing revolves around “negative testing,” which subjects the system to unexpected inputs and conditions to see if it will fail (Figure 1). The inputs needed for this negative testing can’t easily be derived from requirement documents alone.

 


Figure 1. Security testing focuses on the unexpected and unplanned.

 

To make things more challenging, security vulnerabilities may not be apparent outside a real-time environment. Consider the denial-of-service (DoS) attack as a simple example. These attacks saturate a system with inputs (typically network traffic) so that the system resources are overloaded and cannot function as intended. The key point about DoS attacks is that each individual input may appear to be a valid, expected event. It’s only when you accumulate the tasks in real time that the problem becomes visible.

 

Security testing is also hindered by the fact that testing tools are typically designed for developers, not testers. For example, code coverage tools used by developers are often too “heavy,” complicated, and slow for quality assurance (QA) testing. What’s more, these tools typically require instrumented code rather than production code. On the other hand, QA-oriented testing tools often treat code as a black box, making it difficult to determine exactly where the security vulnerabilities are.

 

These problems are exacerbated for projects that employ agile or iterative development, where code is updated frequently, often close to the launch date. Simply testing the entire code base on every revision can be impractical under these circumstances, yet testers must maintain comprehensive coverage to ensure that updates to one part of the code have not exposed new vulnerabilities in other functions.

 

Wind River Test Management was designed to address challenges like these. It provides four capabilities that are critical to catching security flaws:

  • Code coverage analysis of real-time production code
  • Automated performance regression testing
  • Binary code change analysis to minimize re-testing and flag suspicious code
  • The ability to simulate attacks through both pinpoint fault injection and “brute force” fuzz testing

 

Let’s start with code coverage. Unlike typical code coverage tools, Wind River Test Management does not require a special build. This is a critical distinction, because most coverage methods work by inserting special instructions into the code. This instrumentation can cause a large performance hit. In addition to slowing testing—potentially stretching run times from hours to days—instrumentation can cause time-sensitive tests to fail, mask problems, or introduce bugs.

 

In contrast, Wind River Test Management takes production binary code—no special builds required—and analyzes the structure to instrument the code on the fly. Once a section of code has been covered, the instrumentation for that code is removed. Thus, the performance impact of instrumentation is minimized—in principle, there is no performance hit to steady-state execution.
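
To show why removing instrumentation after first execution keeps steady-state overhead near zero, here is a purely conceptual sketch in C. It expresses the idea with source-level “one-shot” probes; Wind River Test Management actually works on unmodified production binaries, so treat this only as an illustration of the principle.

    /* Conceptual sketch: one-shot coverage probes that disarm after the first
     * hit, so only the first execution of each block pays any cost. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_BLOCKS 3
    static bool covered[NUM_BLOCKS];
    static bool armed[NUM_BLOCKS] = { true, true, true };

    static inline void probe(int block_id)
    {
        if (armed[block_id]) {          /* only the first execution pays the cost */
            covered[block_id] = true;
            armed[block_id] = false;    /* "remove" the instrumentation */
        }
    }

    static int work(int x)              /* toy function under test */
    {
        probe(0);
        if (x > 0) { probe(1); return x * 2; }
        probe(2);
        return -x;
    }

    int main(void)
    {
        for (int i = -2; i < 3; i++)
            work(i);
        for (int b = 0; b < NUM_BLOCKS; b++)
            printf("block %d covered: %s\n", b, covered[b] ? "yes" : "no");
        return 0;
    }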

 

This low-impact approach makes it practical to measure code coverage on production code. This is critical for negative testing, because non-covered code is inherently suspicious. Unexecuted functions might be “dead code” that contains security vulnerabilities. For example, developers can forget to remove test or debug APIs. Leaving this code in is a bad idea. Debug code is usually subject to less scrutiny—and thus more likely to have holes—when the development team assumes it will be removed.

 

It’s also fairly common for developers to leave debug code in intentionally, e.g., to aid in diagnostics after shipping. This is also a bad practice because hackers can take advantage of this back door. Finally, unexecuted functions could be malicious code inserted by a disgruntled coder, an infected development machine, etc.

 

Next, let’s consider performance profiling. Wind River Test Management’s real-time testing capabilities enable realistic performance profiling. Among other benefits, this feature enables testers to identify performance degradation across builds. Such anomalies can indicate that good code has been replaced with malware. To illustrate the significance of this feature, consider Stuxnet. This worm overwrote code that was supposed to monitor motor speeds, replacing it with code that oscillated the motors between 2 Hz and 1 kHz. The design of the worm made it difficult to detect in the field, but oscillating motors requires significantly more processor cycles than just monitoring them. Thus, performance regression can help testers identify malicious code.

 

Code change analysis is another key feature. Wind River Test Management automatically inspects the binary code to identify which sections changed between builds. The system can use this “binary build differencing” and its structural code analysis to determine which test cases must be re-run, and to flag tests that may need to be enhanced to verify the changed code (see Figure 2). This information helps minimize testing on new builds, which is critical to achieving comprehensive coverage. It can also alert testers to unauthorized changes. For example, suppose functions A, B, and C all change, but only A and B are on the authorized change list. This throws up a red flag – why did C change? Has the code been compromised?

 


Figure 2. Automated change-driven testing.

 

Finally, the tool includes features that help simulate attacks, including both pinpoint and brute-force methods. The pinpoint method uses fault injection to mimic a hacker forcing the system into an edge condition. For example, fault injection can be used to simulate a DoS attack that fills memory, causing the system to crash and leave vulnerable files on disk. Rather than force the tester to actually fill memory, the tool lets testers simulate the condition by injecting fault signals into the system (Figure 3). These faults could include conditions like memory full, database full, or no network connection. Because the tool gives testers white-box visibility, they can watch how the system responds to make sure any failures are graceful and do not expose vulnerabilities.

 


Figure 3. Fault injection can simulate an attack.
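
The idea of injecting a fault signal instead of creating the real condition can be illustrated with a small, hypothetical allocation wrapper. The code below is not how Wind River Test Management injects faults; it simply shows a test harness flipping a “memory full” switch and checking that the code under test fails gracefully.

    /* Illustrative-only fault injection via an allocation wrapper. */
    #include <stdio.h>
    #include <stdlib.h>

    static int simulate_out_of_memory = 0;    /* toggled by the test harness */

    static void *test_malloc(size_t size)
    {
        if (simulate_out_of_memory)
            return NULL;                      /* injected fault: allocation fails */
        return malloc(size);
    }

    /* Code under test: must fail gracefully when allocation is refused. */
    static int store_record(const char *data)
    {
        char *copy = test_malloc(256);
        if (copy == NULL) {
            fprintf(stderr, "store_record: out of memory, record dropped\n");
            return -1;                        /* graceful: no crash, no leak */
        }
        snprintf(copy, 256, "%s", data);
        /* ...persist the record... */
        free(copy);
        return 0;
    }

    int main(void)
    {
        store_record("normal case");          /* expected path */
        simulate_out_of_memory = 1;           /* inject the fault */
        store_record("memory-full case");     /* negative test */
        return 0;
    }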

 

Wind River Test Management’s Security Pack extends attack simulation with fuzz testing licensed from Codenomicon. This technology deliberately feeds the software malformed inputs. Unlike fault injection, which requires knowledge of fault conditions, fuzz testing doesn’t require knowledge of the code. Instead, it uses a brute-force approach in an attempt to locate unknown risks.
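
As a rough illustration of the brute-force approach, here is a toy fuzz loop that hammers a small parser with random, malformed buffers. It is a generic sketch – not the Codenomicon technology in the Security Pack – but it captures the pattern: generate unexpected inputs, then verify that the target never crashes, hangs, or reads out of bounds.

    /* Toy brute-force fuzz loop (illustrative only). */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Example code under test: parses a length-prefixed message. */
    static int parse_message(const uint8_t *buf, size_t len)
    {
        if (len < 2)
            return -1;                          /* too short */
        size_t payload_len = buf[0] | (buf[1] << 8);
        if (payload_len > len - 2)
            return -1;                          /* length field lies: reject */
        /* ...process buf[2 .. 2 + payload_len)... */
        return 0;
    }

    int main(void)
    {
        uint8_t buf[512];
        srand(12345);                           /* fixed seed: reproducible runs */

        for (long iter = 0; iter < 1000000; iter++) {
            size_t len = (size_t)(rand() % (int)sizeof(buf));
            for (size_t i = 0; i < len; i++)
                buf[i] = (uint8_t)rand();       /* malformed input */
            parse_message(buf, len);            /* must not crash or hang */
        }
        printf("fuzz run complete\n");
        return 0;
    }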

 

The bottom line for testers is that security is an increasing concern for embedded software, but test software is often poorly suited to detecting vulnerabilities. Wind River Test Management provides an alternative approach that both addresses security concerns and solves many common problems in QA testing.

 

 

Of course, even the best testing may not reveal every vulnerability. That's why it is wise to choose a hardware platform with built-in security features. One such platform is the 2nd generation Intel® Core™ processor family, which features the Intel® vPro™ suite of hardware-assisted security and management technologies. These technologies include:

 

  • Intel® Active Management Technology (Intel® AMT), which provides remote diagnosis and repair
  • Intel® Trusted Execution Technology (Intel® TXT), which supplies security protection over and above ordinary software solutions
  • Intel® Virtualization Technology (Intel® VT), which enables secure task separation, e.g., for systems that mix open-source and high-security software.

 

Together, these technologies provide a foundation for proactive threat deterrence – an approach that stops threats before they breach your system and isolates compromised systems so the damage is contained. See my recent Intel® vPro™ overview for details.

For more on securing connected devices, see intel.com/go/embedded-security.

 

Wind River is an Associate member of the Intel® Embedded Alliance.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Software development isn’t for wimps. Today’s developers have to deal with an intensifying pace of innovation, ever-growing code bases, and increasingly complex hardware. One of the best ways to deal with these challenges is to focus on software interoperability. A highly interoperable platform can make it much easier to start coding, migrate legacy code, and future-proof software. For example, a platform with a strong history of interoperability will gather OS, tool, and middleware support that can make it much easier to start coding.

 

Intel® architecture (IA) processors are an excellent example of a platform that emphasizes software interoperability, as well as a great example of a well-supported platform. In the embedded space, IA processors are supported by the Intel® Embedded Alliance, whose 200-plus members collaborate closely with Intel to create optimized software, tools, and services that can speed and simplify software developments.

 

To see how the Alliance ecosystem support benefits software development, I recommend the white paper Five Keys to Software Success. This paper shows how developers can:

 

  • Take advantage of solutions for GUIs, connectivity, business integration, security, cloud computing, and more.
  • Apply pre-integrated solutions to shorten time to market.
  • Use virtualization to preserve legacy software.
  • Employ tuning and debugging tools for faster development and better results.
  • Engage integrators and services companies to speed up product delivery.

 

Let’s explore a few of these points in detail, starting with integrated solutions. The growing complexity of embedded applications is leading more developers to look for off-the-shelf, commercial software components that they can use as the foundation of their design. The benefit of this approach is that it can save developers from doing a lot of redundant, non-value-added work. However, it can still be a major challenge to get all of these components integrated and working smoothly on your chosen hardware. In fact, just getting your hardware to boot can be a major challenge!

 

Fortunately, the Alliance offers a number of integrated solutions that make it easier to start development. For example, Wind River has partnered with several board vendors to create out-of-the-box starter kits that integrate the key hardware, software, and tools you need to get started (see Figure 1). It’s worth noting that these kits do more than simply package components together – instead, they pre-integrate the components so that you can start development in a matter of minutes.

 


Figure 1. The Wind River Embedded Development Kits offer a fully configured solution to help developers begin development quickly.

 

Another major hurdle for embedded software is the need to incorporate legacy code. Virtualization can make this challenge significantly easier. This technology enables multiple OSs to run on a single hardware platform, creating interesting possibilities for software migration and hardware consolidation. For example, you could use virtualization to combine legacy software written for a single-threaded OS with new software written to take advantage of multi-core IA processors. To dive deeper into this topic, check out my recent round-up of virtualization articles.

 

Regardless of your starting point, testing and debugging are huge parts of most embedded projects – so much so that they are usually the most time-consuming element of the development process. This is a particularly important challenge for open-source projects, which often contain thousands of packages. Developers can overcome this challenge with automated testing tools for open source. Whatever OS you choose, you can also take advantage of a broad range of software tools and JTAG debuggers from Alliance members like Arium, Green Hills Software, Lauterbach, Macgraigor Systems, and Wind River.

 

Finally, let’s take a moment to consider the future – more specifically, the merits of future-proofing your code. Most designs undergo multiple updates after their initial deployment to accommodate new technology and customer demands. For example, designs are often updated with new features over time. Software changes are easier to deal with when you base your design on a flexible software framework that uses standards-based interfaces, as explained in the Eurotech article Future Proofing through Portable Software. It’s also a good idea to follow programming best practices, as outlined in our blog Structure Software to Ease Porting.

 

The links I’ve listed here only scratch the surface of what the Alliance has to offer. To learn more about building interoperable, standards-based solutions, visit intel.com/go/embedded-interoperability.

 

 

 

 

Eurotech and Wind River are Associate members of the Intel® Embedded Alliance. Green Hills Software, Lauterbach, and Macgraigor Systems are Affiliate members of the Alliance.

 


Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Advanced IVI systems must deliver a “living room on wheels” infotainment experience along with driver assistance and instrumentation, all while staying within budget. Green Hills Software has just published an article showing how to meet these needs with a new platform that consolidates IVI features on a single ECU, flexibly combining full-featured OSs with a high-reliability RTOS.

 

For more articles like this, subscribe to the Embedded Innovator. Subscribers receive a bi-annual magazine and quarterly newsletter with the latest industry trends and design ideas. Subscribe today!

 

To learn more about virtualization for embedded devices, visit intel.com/go/embedded-virtualization

 

Green Hills Software is an Affiliate Member of the Intel® Embedded Alliance.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

M2M applications can be overwhelmingly complex and difficult to deploy. ILS Technology has just written an article explaining how a new kit simplifies matters by integrating client hardware and software, cloud services, enterprise integration, and more. With this kit, cloud connectivity is possible in a matter of hours and production deployment in a matter of weeks.

 

For more articles like this, subscribe to the Embedded Innovator. Subscribers receive a bi-annual magazine and quarterly newsletter with the latest industry trends and design ideas. Subscribe today!

 

For more on extending the Internet to embedded devices, see intel.com/go/embedded-connectivity

 

 

 

 

ILS Technology is an Affiliate Member of the Intel® Embedded Alliance.

 

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Most embedded devices are updated over time to accommodate new technology and customer demands. As Eurotech explains in its latest article, these changes are easier to deal with when you base your design on a flexible software framework. This article looks at a framework that uses standards-based interfaces to simplify coding as well as porting between Intel® Atom™ processor-based systems.

 

For more articles like this, subscribe to the Embedded Innovator. Subscribers receive a bi-annual magazine and quarterly newsletter with the latest industry trends and design ideas. Subscribe today!

 

To learn more about building interoperable, standards-based solutions, visit intel.com/go/embedded-interoperability

 

 

 

 

Eurotech Inc is an Associate Member of the Intel® Embedded Alliance.

 

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Deep Packet Inspection (DPI) solutions for LTE must deliver both high performance and high flexibility. 6WIND shows how specialized packet processing software meets the need in its latest Embedded Innovator article.

 

Today’s security markets are seeking multi-function solutions that consolidate a full suite of networking and security functions in a single platform. NORCO shows how to deliver such a solution using Intel® Virtualization Technology in its latest Embedded Innovator article.

 

You can access more articles like this by subscribing to the Embedded Innovator.  Subscribers receive a quarterly newsletter and an annual magazine that bring you the latest industry trends and design ideas.  Get your subscription today!

 

For more on building flexible networking solutions, see intel.com/go/embedded-consolidation

 

 

6WIND is an Affiliate Member of the Intel® Embedded Alliance.

 

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

 

Follow me on Twitter: @kentonwilliston

To ensure applications operate as expected in a virtualized environment, software architects need a platform optimized for virtualization—one that allows assignment of hardware resources to specific operating systems. Wind River shows how Intel® Virtualization Technology meets this requirement in its latest Embedded Innovator article.

 

You can access more articles like this by subscribing to the Embedded Innovator.  Subscribers receive a quarterly newsletter and an annual magazine that bring you the latest industry trends and design ideas.  Get your subscription today!

 

To learn more about virtualization for embedded devices, visit intel.com/go/embedded-virtualization

 

 

Wind River is an Associate Member of the Intel® Embedded Alliance.

 

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

 

Follow me on Twitter: @kentonwilliston

Machine-to-machine (M2M) communications is an emerging concept for connecting embedded devices to one another and to back-end systems using open technology such as cellular networks. This connectivity enables smart services that can lower costs, raise revenues, and improve service for end customers.

 

To get a better understanding of this technology, I asked a panel of industry experts to share their perspectives. Here’s what I learned.

 

 

What is M2M communications? How does this technology differ from traditional approaches to networking embedded devices?

 

Fred Yentz, President and CEO of ILS Technology: I think Peggy Smedley defined M2M quite succinctly for Connected World Magazine’s July/August issue when she wrote, “M2M provides machines with the ability to communicate data, connecting people, devices, networks, and everyday objects, while interpreting much-needed information that can be acted upon in a timely manner.”

 

Kevin Rhoads, Vice President Embedded Products, Kontron: The simplest definition for M2M is any technology that supports wired or wireless communication between machines. M2M streamlines communications for a network of machines that traditionally would have to route information back to a central hub for analysis and then be rerouted into another system or PC.

 

Peter Thompson, Chief Scientist, GoS Networks: The term 'M2M' is very broad, but the main sense in which it is being used today is to mean autonomous communications between embedded devices (smart meters, refrigerators, cars etc.) and some central platform, across a telecoms operator's network. Traditional approaches used only local interconnections and local management, and thus had no economy of scale.

 

Kurt Hochanadel, Product Manager, Eurotech Inc.: M2M communications happens transparently to the people or systems that consume the data. The technology is moving away from strictly embedded computing and connectivity into more of an IT solution that allows data to move freely to consumers and applications.

 

Traditional approaches were very focused and distinct, with protocols, interfaces, applications, and delivery methods tied to a specific vertical market. M2M communication in general is building common standards so devices can communicate information more freely.

 

 

What are the benefits of M2M? What applications are taking advantage of this technology, and how?

 

Kurt Hochanadel: M2M benefits depend on the application, but in essence the benefits of relaying actionable data in near real-time cost-effectively throughout an ecosystem of consumers allow for increased efficiencies and services. M2M allows companies to be more situationally aware, so they can take actions as an agile organization based on near real-time data. M2M also allows companies to manage their devices, configure and provision them, and track those enterprise assets remotely and cost effectively.

 

Applications such as Telehealth, Asset Management, Location-based Services, Security and most any telemetry applications are the initial benefactors of this technology. For instance, a security company is using the Device Cloud offering from Eurotech to monitor and manage cash transfers, to effectively “follow the money” for greater accountability to their banking customer.

 

Kevin Rhoads: M2M offers increased computing, networking and data aggregation efficiencies. The advantages of M2M networks are that collected data becomes actionable data that can be used in real time. Industries that can truly leverage M2M capabilities include:

  • Industrial automation -- overall equipment effectiveness (OEE) is frequently used as a KPI (key performance indicator) by manufacturers
  • Building automation, water and wastewater management -- equipment fault, detection and diagnostics
  • Energy – analytics can be used so a business can view the cost, carbon footprint of their energy usage
  • Medical – keeping accurate track of patient information from in-home care to hospital care or assisted living

 

Fred Yentz: The objective is to create additional business value for machine owners, machine makers and 3rd party service providers. Today, M2M is allowing applications such as performance based services and predictive maintenance solutions to expand rapidly. For example, equipment service routes can be modified to address only the field assets which are in need of support or service.

 

Peter Thompson: Another benefit of M2M is economies of scale: one platform can manage a large number of embedded devices; one operator (e.g., a cellular network) can manage different types of devices; one provider (e.g. a utility company) can manage devices connected to a number of different operators. Further benefits are: mobility (if connecting via a cellular network); and separation of concerns (the operator provides the network and makes sure it works while the M2M provider focuses on the application). Early applications include vehicle fleet management and energy management, but many more will follow.

 

 

Is M2M the same thing as embedded cloud computing? How are these concepts related?

 

Kurt Hochanadel: M2M is not entirely the same thing as embedded cloud computing. Embedded cloud computing, or the Device Cloud as Eurotech refers to it, is one way to accomplish M2M using IT centric software tools and frameworks.

 

M2M is the broader term, since it encompasses all types of M2M applications, including the so-called traditional approaches. Embedded cloud computing approaches M2M with a more IT-centric mindset. In essence, embedded cloud computing is a subset of M2M.

 

 

 

Fred Yentz: Cloud computing can be leveraged in an M2M solution. Any well-designed M2M solution will balance edge-based intelligence with centralized intelligence and control. The centralized intelligence can reside in a private network, a public network, or the cloud, depending on the needs of the customer. For example, our M2M Application Platform is capable of hosting applications in the cloud such as SCADA, HMI, ERP, or PLM applications from a variety of third-party vendors, but the solution is also available in a customer-located managed-service model for those not ready to move to a full cloud implementation.

 

Kevin Rhoads: Cloud computing is just one technology that allows M2M communication between devices. The growth for M2M connected computing is accelerating as OEMs launch smart services that utilize cloud computing to communicate and aggregate data.

 

 

What should developers look for in an M2M solution? What role does Intel technology play in enabling these solutions?

 

Kurt Hochanadel: Enterprises want their M2M communications system to be IT-centric, low-cost, flexible, open source, data agnostic, and application agnostic. IT developers are looking for security, scalability, redundancy, configurability, flexibility – everything they demand of a traditional IT data center. To meet these goals, device developers should look for application portability for code reuse, a software framework for simplicity of development, built-in applications, and testing tools.

 

Intel plays a role in providing chipset functionality at a cost affordable to the M2M market. Low-power, long-life devices need low-power, long-life processors and chipsets to support M2M applications, which Intel provides. Intel’s role in promoting a new M2M paradigm is also very valuable.

 

Peter Thompson: Reliability and flexibility are important, especially since many M2M applications will have a long replacement cycle. Intel technology offers the capacity to grow the infrastructure and add new capabilities as time goes on.

 

Kevin Rhoads: OEMs should look for production-ready solutions that help them accelerate smart-services deployment opportunities. For example, Kontron is providing M2M intelligent devices, as well as the infrastructure building blocks that move M2M data from the point of collection, through the cloud, to the point of aggregation and decision making.

 

A major benefit of selecting an open Intel Architecture platform is the rich ecosystem of software partners that have already developed solutions supporting M2M smart services. Intel has a support partner for every aspect an OEM would need, from the OS through middleware; software companies offer proven operating systems and the horizontal hosting services needed for provisioning and updating an M2M system.

 

Fred Yentz: Business and technical innovators should look for a few key attributes when evaluating M2M solutions. These include an integrated solution with compute and communications resources on board, as well as one that can quickly tie into your existing back-end systems. This combination of hardware, software, and services defines the ILS Technology “M2M Deployment Kit.” The kit combines the Intel M2M gateway with ILS Technology’s deviceWISE M2M agent, a deviceWISE M2M Application Platform subscription, and a broadband subscription from one of the ILS Technology carrier partners, enabling a solution proof point to be completed within days.

 

 

M2M solutions can provide infrastructure, applications, and even field devices as a service. This can greatly speed deployment, but it also gives developers less control over the system details. What would you say to developers who are worried about relinquishing control?

 

Peter Thompson: The essence of M2M is to pass responsibility for the communications infrastructure to someone else and to concentrate on developing the application.

 

Kurt Hochanadel: From an IT perspective, if administrators and programmers are no longer responsible for a data center for a specific M2M application because they are using the Device Cloud, they do give up control of things like server purchasing, maintenance, programming, and other data center functions. Frankly, few enterprises, if any, can put anywhere near the resources into securing and building their data centers that cloud companies like Amazon, Google, and Microsoft can. What the enterprise IT team gains is the ability to focus on what brings the most value to the organization: what interfaces they can build to internal systems, how they can make the best use of the data they’re bringing in through the Device Cloud, and how they’re saving money for their company.

 

Kevin Rhoads: Each individual developer will have a distinct value proposition in the M2M deployment of smart services. The systems that are deployed include hardware gateways, the OS, M2M software, security features, and transactional revenue from device-to-cloud and cloud-to-client business models. Control of the system details will be dictated by the total solution proposed to each customer.

 

 

If a developer wants to explore the possibilities of M2M, where can they start? How can you help developers get up to speed on this new technology?

 

Kurt Hochanadel: Device developers want to work with M2M reference platforms as a starting point, which Eurotech is working with Intel and others to provide. The reference platform offers several I/O options to start with, along with a software framework and a Wind River operating system on reference hardware.

 

For developers who are ready to explore the benefits of the Device Cloud as a way to perform M2M communications, Eurotech offers an Intel-based development kit that includes a 90-day trial of Eurotech’s Everyware Device Cloud for connectivity between the device and the enterprise. Developers can quickly try their application on the device and share data over the device cloud.

 

Fred Yentz: I would recommend picking up a deviceWISE M2M Deployment Kit. It’s like a pilot in a box that allows you to get your asset connected and have a glimpse into the potential of full-scale M2M solutions in hours or days rather than weeks or months.

 

Kevin Rhoads: Kontron has developed a white paper to help designers better understand M2M market trends and usage models, available hardware and software platforms, industry challenges, and how Kontron’s M2M solutions can facilitate development of Internet smart-services applications. This white paper can be downloaded at us.kontron.com/whitepaper-M2M.

 

Kontron also has additional details on all these subjects on its website us.kontron.com/simplify-and-speed-your-entry-into-the-m2m-marketplace.

 

Peter Thompson: You can also find joint whitepapers we’ve created with Intel at our website www.gosnetworks.com.

 

 

More information

 

For more on extending the Internet to embedded devices, see intel.com/go/embedded-connectivity

 

For more information on the solutions from our panelists, please visit the following sites:

 

 

Kontron is a Premier member of the Intel® Embedded Alliance. Eurotech is an Associate member of the Alliance. ILS Technology is an Affiliate member of the Alliance. GoS Networks is a General member of the Alliance.

 

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

 

Follow me on Twitter at twitter.com/#!/kentonwilliston

Virtualization is one of the hottest technologies in embedded design. By enabling multiple operating systems (OSs) to run on a single hardware platform, virtualization opens up powerful new design options. To get a better understanding of this technology, I asked a panel of industry experts to share their perspectives. Here’s what I learned.

 

Why does virtualization matter for embedded applications? How does it benefit embedded applications?

Chris Ault, Senior Product Manager, Virtualization, Wind River: Some of the benefits of virtualization in embedded systems include the opportunity and ability to a) consolidate and reduce the number of system components, b) migrate legacy apps to new hardware, c) separate or isolate safe or secure applications from standard applications, and d) innovate rapidly by adding new functionality alongside the existing application and operating system stack. New functionality can be delivered by using the OS most appropriate for that functionality.

 

Embedded virtualization also offers the opportunity to separate the lifecycles of the isolated partitions; one OS can be upgraded with no impact to the other.

 

David Kleidermacher, CTO, Green Hills Software: There are many use cases driving adoption of embedded virtualization. However, there are some common benefits across verticals, including:

 

  • Consolidation: saving size, weight, power, and cost
  • IP protection: pushing proprietary software and data outside of licensing-encumbered operating systems (GPL, Google)
  • Mixed-criticality operation: running a rich general-purpose OS alongside hard real-time, security-critical, and/or safety-critical functions
  • Openness and extensibility: running any OS personality at any time – Linux, Android, Windows, RTOS, etc. You can think of hypervisors as providing the ultimate open environment.

 

Kim Hartman, VP of Marketing and Sales, TenAsys: One has to look at the overall embedded application as being made up of a mixture of different functions. Embedded virtualization allows a system to be built using a blend of application-appropriate operating systems, instead of having to rely upon any single OS to serve all needs. Furthermore, a successful embedded virtualization initiative incorporates the reuse of existing application and OS intellectual property without requiring modification or a port of field-proven code.

 

Where are you seeing your customers employ virtualization? What applications are using virtualization, and why?

David Kleidermacher: Some examples we are seeing at Green Hills Software include:

  • In-vehicle infotainment: consolidate general purpose head unit (e.g., MeeGo or Android), real-time (e.g. cluster displays), and/or instant-on/safety-critical (e.g., rear-view camera)
  • Software-defined radio: real-time/safety-critical alongside Linux
  • Networking/telecom: consolidate control and data plane; use virtualization as a means of achieving higher availability, e.g., detect guest OS failure and restart the VM (see the sketch after this list)
  • Health: isolate and secure sensitive patient information per HIPAA on mobile devices, e.g., electronic medical records management by physicians
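
As a hedged illustration of the failure-detection point in the networking/telecom bullet above – using open-source libvirt/KVM rather than any panelist's product, which would expose its own health-monitoring interfaces – the sketch below polls a guest and restarts it if it has stopped. The domain name is a hypothetical placeholder, and production code would use libvirt's event interface instead of polling.

    # Hedged sketch: detect guest OS failure and restart the VM, using the
    # open-source libvirt Python bindings with KVM/QEMU. "dataplane-vm" is a
    # hypothetical placeholder guest name.
    import time
    import libvirt

    DOMAIN_NAME = "dataplane-vm"            # placeholder guest name

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.lookupByName(DOMAIN_NAME)

    while True:
        if not dom.isActive():              # guest has shut down or crashed
            print("Guest %s is down; restarting" % DOMAIN_NAME)
            dom.create()                    # start the defined-but-inactive domain
        time.sleep(5)                       # simple polling for illustration only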

 

Arun Subbarao, VP Engineering, LynuxWorks, Inc.: One early adopter has been the U.S. Department of Defense (DoD). This is not surprising, since mission-critical systems are often extremely complex, integrating multiple different operating environments and applications in very small form factor hardware designs. There is also a need to keep highly sensitive environments completely isolated from less sensitive ones, while still allowing authorized personnel appropriate access. Mission-critical applications also have key real-time response and high security requirements. In addition to the DoD, we are also seeing government IT, financial, and medical organizations rapidly adopt virtualization technology to help safeguard sensitive data.

 

Chris Ault: We are seeing increasing deployment of embedded virtualization across all industries, most notably Aerospace & Defense, Industrial, and Networking. There is increasing interest in the Automotive In-Vehicle Infotainment, Mobile, and Medical sectors as well.

 

One example is using virtualization to add general-purpose operating systems with enhanced graphics interfaces to existing real-time robotic controllers. Another is migrating existing legacy applications from older single-core processors to multi-core CPUs while avoiding the need to ‘port’ application software. By using virtualization, customers can run multiple copies of the single-threaded environment, thus realizing the benefit of the increased performance of the multi-core CPU.
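
As a rough illustration of that core-partitioning idea, and outside of any particular hypervisor, the Linux-only Python sketch below pins each instance of a single-threaded workload to its own core; an embedded virtualization platform achieves the same effect by assigning cores to virtual machines in its configuration. legacy_workload() is a hypothetical stand-in for existing application code.

    # Hedged sketch: dedicate one core per single-threaded workload instance.
    import os
    from multiprocessing import Process

    def legacy_workload(core):
        os.sched_setaffinity(0, {core})     # Linux-only: restrict this process to one core
        print("instance pinned to core", core)
        # ... the existing single-threaded processing loop would run here ...

    if __name__ == "__main__":
        workers = [Process(target=legacy_workload, args=(c,)) for c in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()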

 

Kim Hartman: One area where we are seeing interest is embedded applications that are adding M2M, enterprise, or cloud services. These functions usually have no hard real-time needs and can easily be addressed by leveraging off-the-shelf software on the Microsoft Windows platform. It’s no longer necessary to re-invent everything in the RTOS with custom or redundant hardware sub-systems.

 

What are the key challenges to implementing virtualization? How can developers overcome these challenges?

Arun Subbarao: The key challenges to implementing virtualization are preserving native performance, allowing for higher security, and providing an appropriate device management model. However, this is an area of intense focus for processor vendors, with Intel being a pioneer. The technology advancements afforded by Intel® VT-x provide significant hardware assists that have closed the gap between native operating system performance and its virtualized counterpart. The trend toward multi-core computing is a natural technology enabler that allows multiple operating environments to execute without loss of functionality and performance.

 

David Kleidermacher: By far the biggest challenge is I/O virtualization. In particular, there are decisions and configurations in three dimensions: how a peripheral is virtualized and interfaced to physical hardware; how a peripheral is shared; and how quality of service and security can be assured across all VMs and all system bus masters. There are many choices and tradeoffs with respect to performance, maintainability/time-to-market, and robustness/security, and this is where vendor experience and technology flexibility are absolutely critical. There is no one-size-fits-all.

 

Intel VT technologies, especially Intel® VT-d and Intel® VT-c, dramatically simplify many use cases. However, here again experience and technology flexibility are critical. Intel VT-x, Intel VT-d, Intel VT-c, and other Intel technologies must be applied the right way depending on the aforementioned tradeoffs.

 

[Editors’ note: Intel® Virtualization Technology (Intel® VT) for IA-32 Intel® Architecture (Intel® VT-x) traps certain instructions in hardware, relieving the hypervisor of these duties. This greatly reduces the overhead of virtualization.

 

 

Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d) adds hardware accelerators that enable secure assignment of I/O devices to specific OSs. This decreases the load on the processor and accelerates data movement.

 

 

Intel® Virtualization Technology (Intel® VT) for Connectivity (Intel® VT-c) improves the network performance of virtualized devices.]
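
As a practical aside – a hedged sketch that assumes a Linux development host and is not part of any panelist's toolchain – the snippet below checks whether a platform advertises Intel VT-x via the 'vmx' CPU flag, and whether an IOMMU (the kernel-visible face of Intel VT-d, used for direct device assignment) is active.

    # Hedged helper for a Linux host: check for the VT-x 'vmx' CPU flag and for
    # active IOMMU groups (present when VT-d or an equivalent IOMMU is enabled).
    import os

    def has_vtx():
        with open("/proc/cpuinfo") as f:
            return any("vmx" in line.split() for line in f if line.startswith("flags"))

    def iommu_group_count():
        path = "/sys/kernel/iommu_groups"
        return len(os.listdir(path)) if os.path.isdir(path) else 0

    if __name__ == "__main__":
        print("VT-x (vmx flag):", has_vtx())
        print("IOMMU groups:", iommu_group_count())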

 

Chris Ault: Developing a virtualized embedded product presents challenges and risks with regard to configuring partitioned memory regions, allocating CPU cores to partitions, presenting specific devices to the appropriate virtual partition using the appropriate device-sharing model, and so on.

 

These risks and challenges can be mitigated by selecting a software vendor that offers a complete embedded virtualization platform for development – one that consists of operating systems pre-integrated with a hypervisor, along with development tools and BSPs that are also integrated with the guest operating systems and hypervisor. Point solutions and non-integrated open source software have their place in the software development lifecycle, but embedded virtualization development requires coordinated and integrated products in order to minimize risk and development difficulties.

 

What would you say to developers who are worried about the difficulty and complexity of deploying a virtualized solution? Is adding this new software layer going to make their jobs harder?

Kim Hartman: Embedded virtualization done right makes the task easier, not harder. Having a clear understanding of how the solution actually works for you makes the process go smoothly and predictably.

 

David Kleidermacher: Ultimately, the hypervisor should be deployed like the BIOS – it’s always there and simple to configure, and end users are mostly unaware of it. The server virtualization market has proven that system virtualization is practical to deploy on common hardware, from both a technological and business perspective; while many embedded SoCs are quite a bit more complicated from the standpoint of I/O choices, fundamentally the hypervisor is a thin layer that can be easily ported and managed. Green Hills has proven this in the embedded space since we first shipped the INTEGRITY Multivisor in 2003. Eight years of experience across a wide range of processor technologies gives customers a lot of confidence that embedded and mobile virtualization is practical.

 

Arun Subbarao: There is some level of additional complexity in deploying a virtualized solution. However, it is no greater than with the adoption of any other new disruptive technology. The separation kernel and hypervisor software is intended to abstract this complexity and allow developers to take advantage of the enormous flexibility and compelling business advantages this technology has to offer.

 

If I do need help implementing a virtualized solution, how can I get assistance? What kinds of services do you offer?

Arun Subbarao: We would recommend that developers who are creating their first virtualized solution take advantage of the assistance that software vendors have to offer in this area. Vendors, such as LynuxWorks, have training, support and consulting services to provide additional assistance to enable developers to adopt this technology and become productive in a very short period of time.

 

Kim Hartman: With over 20 years of experience, TenAsys has been providing solutions that mix GPOS and RTOS application needs. Our INtime for Windows RTOS is a para-virtualized solution that has been deployed for 14 years and applied to the most demanding multi-core processing needs.

 

Chris Ault: Wind River offers industry-specific Professional Services practices, with focused offerings that help developers meet strict market deadlines while keeping development costs down. We also offer extensive training and education services and rapid integration mentoring programs aimed at helping our customers kick-start their development activities. Finally, Wind River’s Online Support website offers a comprehensive knowledge base and various subscription features.

 

David Kleidermacher: We recommend you contact the Green Hills Software INTEGRITY Secure Virtualization (ISV) business unit experts for education and design advice in this area.

 

What are the most important trends you see developing in embedded virtualization? What does the future hold for this technology?

Arun Subbarao: The most important trends we see developing in embedded virtualization are rich user experience, miniaturization, and high awareness of security. The sensitivity to security will be the new dominant aspect of the three trends in the near future, given the emerging threats of identity theft and malware attacks. Luckily, virtualization technology can address all these requirements simultaneously. In fact, we believe it is the only technology at the intersection of all these emerging market trends.

 

Kim Hartman: Multi-core proliferation will continue making more discrete processing services available to embedded applications at lower cost. As the industry acknowledges that a single SMP approach is not the universal solution, AMP and mixed-OS solutions that emphasize utility become attractive. Embedded virtualization is the natural way to partition these resources for directed control.

 

Chris Ault: Increasing core counts for multi-core CPUs is certainly an important trend, as well as advances in hardware accelerators that further assist with the implementation of virtualization. As core counts increase, the effectiveness of SMP operating systems tends to decrease, thus driving more demand and relevance for embedded virtualization with hardware-assist features.

 

David Kleidermacher: Only recently have embedded microprocessors had enough hardware-assist features to make system virtualization practical in resource-constrained embedded electronics. New use cases and benefits are popping up on a daily basis. That this technology will be successful is a foregone conclusion. However, we still need the CPU vendors to improve hardware support, per my earlier comment on I/O virtualization. I have no doubt that Intel will continue to lead in this area.

 

 

More information

To learn more about virtualization for embedded devices, visit intel.com/go/embedded-virtualization

 

For more information on the solutions from our panelists, please visit the following sites:

 

Wind River Systems is an Associate member of the Intel® Embedded Alliance. Green Hills Software, LynuxWorks, and TenAsys are Affiliate members of the Alliance.

 

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

 

Follow me on Twitter at twitter.com/#!/kentonwilliston