
Intel® System Studio Webinar Series | Intel® Developer Zone

 

Join us for an engaging series of hands-on webinars covering:

  • Learn how to apply the newest Intel® System Studio features to real-world development challenges.
  • Learn how to build and optimize imaging, in-vehicle infotainment (IVI), long-term evolution (LTE) reference design, and surveillance applications on Intel® architecture using different components of Intel® System Studio.
  • Get a close look at software-based power analysis solutions on Intel® architecture-based systems.
  • See how the Intel® System Debugger can be used to debug a kernel module on a Yocto*-built Linux* target.
  • Examine how the C++ offload compiler enables you to fully utilize Intel® processors through easy access to the GPGPU capabilities of Intel Graphics Technology. 

 

Webinar Details

Accelerate Development for Embedded, Mobile, and the Internet of Things

10/29/2014 9:00 AM PST

Presenter:
Robert Mueller

 

Join us for an in-depth look at software development for embedded, mobile, and the Internet of Things. Learn how to apply the newest Intel® System Studio features to real-world development challenges. We’ll look at examples and demos of how these features speed up and streamline the development cycle based on the MinnowBoard MAX and Intel® Atom™ Processor E3800 platform.

New Intel System Studio features include:

  • Support for the Microsoft Windows* target platforms for build, performance analysis, and energy profiling
  • Use of Intel® Processor Trace (Intel® PT) on the brand-new 5th generation Intel® Core™ M Processor to more easily identify runtime issue root causes
  • Tight Intel System Studio integration into Wind River Workbench* for a seamless development, debug, and analysis experience when targeting the latest Intel® architecture designs running Wind River Linux*
  • Compiler integration with the latest Android* NDK and tools support for the very latest Android OS features

Build and Optimize Embedded, Mobile, and Internet of Things Applications

11/05/2014 9:00 AM PST

Presenter:
Naveen Gv

Learn how to build and optimize imaging, in-vehicle infotainment (IVI), long-term evolution (LTE) reference design, and surveillance applications on Intel® architecture using different components of Intel® System Studio.
  • Perfect printer and image processing performance on Intel architecture
  • Enhance IVI application performance and CPU utilization
  • Boost long-term evolution (LTE) application performance
  • Optimize OpenCV* based video surveillance applications

Optimize for Energy Efficiency: Software-Based Power Analysis

11/12/2014 9:00 AM PST

Presenter:
Kevin O’Leary

 

Take a close look at software-based power analysis solutions on Intel® architecture-based systems. A key consideration when tuning for power is whether the system is waking up unnecessarily and what is causing it to wake from a low-power state. Another factor is whether your system is operating at a high frequency—thereby consuming power at an accelerated rate. We will examine several system power issues that can greatly degrade a system and discuss potential solutions. We’ll cover power analysis on both Android* and Windows* systems. Learn the different power and frequency states of Intel® architecture-based systems, as well as software techniques for power tuning.
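As a simple illustration of the first step in this kind of analysis, a toy helper (hypothetical function name and sample data) can aggregate per-process wakeup rates and rank the worst offenders; in practice you would collect these numbers with a power-profiling tool rather than hard-code them:

```python
# Sketch: rank processes by wakeups/sec to find what is keeping the
# system out of deep sleep states. The sample data is hypothetical;
# a real analysis would use measurements from a power profiler.

def rank_wakeup_sources(samples, top_n=3):
    """Aggregate (process, wakeups_per_sec) samples and return the
    top_n offenders, highest rate first."""
    totals = {}
    for process, wakeups in samples:
        totals[process] = totals.get(process, 0.0) + wakeups
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

samples = [
    ("timer_app", 120.0), ("media_service", 45.5),
    ("timer_app", 98.0), ("idle_daemon", 2.1),
]

for process, rate in rank_wakeup_sources(samples):
    print(f"{process}: {rate:.1f} wakeups/sec")
```

A process that tops this ranking is the first candidate for coalescing timers or lengthening polling intervals so the CPU can stay in low-power states longer.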

Increase Insight into Kernel Module Development with JTAG Debugging

11/19/2014 9:00 AM PST

Presenter:
Paul Farquhar

 

A JTAG debugger can be a great tool in kernel module development. Advantages include halting the complete system during debugging (not only the thread being debugged), and making it easy to examine the entire CPU and memory.

We’ll look at how the Intel® System Debugger is used to debug a kernel module on a Yocto*-built Linux* system. After loading the module symbols, we’ll debug at the source-code level. As a target, we will use the Intel® Quark™-based Galileo platform, often used in Internet of Things (IoT) solutions. We’ll use the open-source software OpenOCD* to connect to the JTAG probe.

Using a JTAG debugger is easier than you might expect and the benefits far outweigh the learning curve. Learn the steps necessary to prepare for JTAG debugging with the Intel System Debugger and the hardware support requirements.
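As a rough sketch of what such a session can look like (the interface and board config file names, module name, and load address are illustrative; check your OpenOCD installation and probe for the exact files), the flow is: start OpenOCD against the JTAG probe, read the module's load address on the target, then attach GDB:

```shell
# Start OpenOCD with configs for your JTAG probe and the Quark target
# (file names illustrative; see OpenOCD's interface/ and board/ dirs):
openocd -f interface/ftdi/flyswatter2.cfg -f board/quark_x10xx_board.cfg

# On the target, find where the kernel loaded the module's code:
cat /sys/module/mymod/sections/.text        # e.g. 0xf80e6000

# Attach GDB to OpenOCD's GDB server (port 3333 by default) and load
# the module's symbols at that address, then set a breakpoint:
gdb vmlinux \
  -ex 'target remote localhost:3333' \
  -ex 'add-symbol-file mymod.ko 0xf80e6000' \
  -ex 'break mymod_probe'
```

Because the whole system halts at the breakpoint, you can then inspect registers and kernel memory without the target racing ahead.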

Maximize Graphics Processor Compute Power for General Purpose Computing

12/03/2014 9:00 AM PST

Presenter:
Anoop Madhusoodhanan Prabha

 

Processor graphics hardware occupies almost 30% of the processor silicon real estate. This makes it all the more important to expose these computation units to developers for general-purpose computing and unlock the idle GFLOPS in Intel® Graphics Technology. We’ll examine how the C++ offload compiler enables you to fully utilize Intel® processors through easy access to the GPGPU capabilities of Intel Graphics Technology. This compute offload feature is accomplished using the C/C++ Intel® Cilk™ Plus parallel programming paradigm with small extensions for kernel and data offload.

While it’s essential to provide compelling heterogeneous programming models for general-purpose computing, it’s also critical to simplify porting existing C/C++ applications for efficient execution on processor graphics. The Intel Cilk Plus programming model helps express the potential parallelism and vectorization opportunities in the source code. This capability is available on Win32*, Win64* and Linux64* (Ubuntu* 12.04 and SLES* 11) platforms.
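As an illustrative sketch of what this offload syntax can look like (a hypothetical kernel; it requires the Intel® C++ Compiler with graphics offload support and will not build with a standard compiler):

```cpp
// Illustrative sketch only: hypothetical function; requires the
// Intel C++ Compiler with Intel Graphics Technology offload support.
#include <cilk/cilk.h>

void scale_image(float *dst, const float *src, float gain, int n)
{
    // Offload the loop to the processor graphics; pin() keeps the
    // arrays in shared physical memory so no copy is needed.
    #pragma offload target(gfx) pin(dst, src : length(n))
    _Cilk_for (int i = 0; i < n; ++i)
        dst[i] = gain * src[i];
}
```

The same `_Cilk_for` loop also vectorizes and parallelizes on the CPU when the offload pragma is absent, which is what makes porting existing C/C++ loops comparatively straightforward.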

Earlier in the week, on October 28, the latest release of Intel® System Studio was launched.

It is Intel's solution for system software developers working on mobile and embedded systems, including the Internet of Things.

It is a complete system software development tool suite, consisting of compilers, libraries, analyzers, and debuggers, including the Intel® System Debugger. It is designed for cross-development and integrates with existing environments and IDEs for various embedded Linux* flavors and for Windows*.

Intel® System Studio 2015 adds exciting new features such as:

New Platform Support For Faster Time-To-Market

  • Windows* target support to optimize Windows* powered Embedded Applications
  • Low-overhead processor trace in new Intel® Core™ M processor for easier defect isolation
  • Android 64-bit ready

Enhanced Tools Usability For Improved Developer Productivity

  • Improved Eclipse* IDE and Wind River* Workbench integration for more efficient development
  • OpenOCD* JTAG support for cost-efficient in-depth debugging of Intel® Quark based systems
  • Improved BIOS/EFI debugging for reliable and optimized system boot

Additional Enhancements For Software Power And Performance Optimization

  • Latest Intel® C++ Compiler & libraries for better device & application performance
  • Compute-intensive code offloading to Intel® HD Graphics for fast image and signal processing
  • New Energy Profiler for Windows* targets for better power optimization

To find out more, please follow the links below.

There is also an engaging new series of hands-on webinars covering:

  • Learn how to apply the newest Intel® System Studio features to real-world development challenges.
  • Learn how to build and optimize imaging, in-vehicle infotainment (IVI), long-term evolution (LTE) reference design, and surveillance applications on Intel® architecture using different components of Intel® System Studio.
  • Get a close look at software-based power analysis solutions on Intel® architecture-based systems.
  • See how the Intel® System Debugger can be used to debug a kernel module on a Yocto*-built Linux* target.
  • Examine how the C++ offload compiler enables you to fully utilize Intel® processors through easy access to the GPGPU capabilities of Intel Graphics Technology.

At the recent SDN and OpenFlow World Congress in Dusseldorf, I was invited to give a talk about the cost of downtime in telecom networks and how this financial impact might be affected by NFV. This seemed to be a topic of wide interest, at least within the NFV-focused audience at the event. So in this post, I’ll summarize some of the information that I covered and suggest how we as an industry can address this challenge.


In October 2013, Heavy Reading published a comprehensive analysis titled “Mobile Network Outages and Degradations”. You can download a short version here; it provides excellent information on this topic.

The report contains some thought-provoking numbers, starting with the fact that network outages cost service providers approximately $15 billion a year, generally representing between 1% and 5% of their annual revenue. That’s a massive impact on their P&Ls, especially at a time when network infrastructure costs are exploding because of the growth in video traffic while per-subscriber revenues are flat to declining.

 

There’s a fascinating chart in the report that illustrates how many “major” outages are suffered by service providers in a typical year. While 27% of operators said they average only one to three major outages per year, as many as 12% suffer between 15 and 20, while 20% suffer more than 20. Clearly, major outages are not infrequent events.


It’s also interesting to read about the financial impact of these network outages. The largest impact is the increase in subscriber churn and of course it’s always more expensive to acquire new customers, especially high-revenue enterprises, than to retain existing ones. Other significant impacts are the operational expenses to fix the problems as well as the loss of ability to capture revenue from billable services. Slightly lower in terms of direct financial impact, but still significant, are the cost of refunds paid directly to customers and, inevitably, the legal costs relating to Service Level Agreement (SLA) issues.


With this report being published in October 2013, it’s safe to assume that it reflects traditional physical infrastructure incorporating a negligible amount of network virtualization. The networks from which these numbers were derived would have been based on fixed-function, vertically-oriented equipment, typically developed by a Telecom Equipment Manufacturer (TEM) employing their proprietary technology at every level of the architecture. Evolved over many years, this physical infrastructure typically delivers six-nines (99.9999%) reliability, which enables the services running on it to deliver the five-nines (99.999%) uptime expected by customers, and especially the high-revenue enterprises with stringent SLAs.


So what happens when the industry moves to NFV and we start replacing this fixed-function equipment with horizontally-oriented, multi-vendor solutions based on open hardware and software standards? From the perspective of service reliability, NFV has the potential to make the situation a lot worse (although there is a way to solve it).


As an example of new challenges, the services provided by NFV-based infrastructure will be delivered by Virtual Network Functions (VNFs). In some cases these will be virtualized implementations of existing software and in others they will be brand new applications. Either way, though, they will lack the proven track record of the applications running in today’s physical infrastructure, they will incorporate the added complexity of virtualization and we can be sure that they will fail more often.


Similarly, a core principle of NFV is the dynamic reallocation of VMs across servers, racks and data centers. This brings improved operational efficiency and enables seamless scale-up and scale-down of applications as traffic patterns change. It also increases the number of potential failure points.


Likewise, the traffic flows through new, virtualized systems will be complex and extremely hard to debug, even with the advent of innovative testing and monitoring applications that themselves run as VNFs. Regardless of the sophistication of these new tools that are in development, it’s a safe bet that when outages do occur that require manual intervention, it will take a lot longer to debug them, at least in the early years of NFV.


So how do we address this problem and ensure that service providers can maintain the traditional, expected level of service uptime?


The key is that, even with the move to NFV, the network infrastructure needs to provide the six-nines reliability that enables it to detect and respond to both hardware and software problems quickly enough that the services can maintain five-nines uptime. This is what’s meant by “Carrier Grade” reliability and it requires the implementation of a number of critical functions, such as:

  • At least 500km geographical redundancy for continued operation in natural disaster scenarios, such as earthquakes;
  • The detection of failed Virtual Machines (VMs) in less than one second, with automatic restart and no silent failures;
  • A deterministic interrupt latency of 10µs or less in the hypervisor, allowing the virtualization of CPE and access functions;
  • Automatic restart and recovery from host failures;
  • A fully-redundant, auto-synchronized network control plane;
  • Accelerated live VM migration to ensure minimal downtime during planned maintenance;
  • Telecom-grade AAA (Authentication, Authorization and Accounting) security;
  • A host of other complex features too numerous to list here.
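The sub-second VM failure detection requirement, for example, is conceptually a heartbeat watchdog. Here's a minimal Python sketch (hypothetical names, and greatly simplified relative to a real Carrier Grade implementation, which must also drive automatic restart and guard against silent failures):

```python
# Sketch: heartbeat-based VM failure detection. Each VM reports a
# heartbeat timestamp; any VM whose last heartbeat is older than the
# timeout is flagged as failed.

def failed_vms(last_heartbeat, now, timeout=0.5):
    """Return VMs whose last heartbeat is older than `timeout` seconds.
    A 0.5 s timeout leaves margin inside the 1 s detection budget."""
    return sorted(vm for vm, t in last_heartbeat.items() if now - t > timeout)

heartbeats = {"vm-fw": 100.00, "vm-media": 100.41, "vm-dpi": 99.30}
print(failed_vms(heartbeats, now=100.45))   # → ['vm-dpi']
```

The hard part in production is not this check but everything around it: a sub-second reporting interval with low overhead, distinguishing a hung VM from a slow one, and restarting the workload without losing state.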


You can’t achieve these challenging requirements by starting from enterprise-class software that was originally developed for IT applications. This type of software usually achieves only three-nines (99.9%) reliability, equivalent to a downtime of almost nine hours per year. That’s only one-thousandth of the reliability that’s needed for telecom.
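The arithmetic behind these "nines" is easy to verify: annual downtime is simply (1 - availability) x hours per year. A quick sketch:

```python
# Annual downtime implied by an availability level ("the nines").
HOURS_PER_YEAR = 365.25 * 24  # 8766 hours

def downtime_hours_per_year(availability):
    return (1.0 - availability) * HOURS_PER_YEAR

for label, a in [("three nines", 0.999), ("five nines", 0.99999),
                 ("six nines", 0.999999)]:
    print(f"{label}: {downtime_hours_per_year(a):.3f} hours/year")
```

Three nines works out to about 8.77 hours of downtime per year (the "almost nine hours" above), five nines to roughly five minutes, and six nines to about half a minute.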


Fortunately for the industry, a full Carrier Grade NFV infrastructure solution is now commercially available and was demonstrated at SDN and OpenFlow World Congress, with a great reception from service providers, TEMs and analysts. This is the kind of solution that’s required to ensure that the OPEX benefits of NFV aren’t wiped out by the financial impact of network outages resulting from the complexity of this new architectural concept.

At the recent Intel Developer Forum in San Francisco, there was lots of discussion about the tradeoffs associated with various approaches to virtual switching. In this post, we’ll outline the pros and cons of the most common solutions and show that it’s possible to meet aggressive performance targets without compromising on critical system-level features.


Virtual switching is a key function within data centers based on Software Defined Networking (SDN) as well as in telecom infrastructure that leverages Network Functions Virtualization (NFV).  In the NFV scenario, for example, the virtual switch (vSwitch) is responsible for switching network traffic between the core network and the virtualized applications or Virtual Network Functions (VNFs) that are running in Virtual Machines (VMs). The vSwitch runs on the same server platform as the VNFs and its switching performance directly affects the number of subscribers that can be supported on a single server blade. This in turn impacts the overall operational cost-per-subscriber and has a major influence on the OPEX improvements that can be achieved by a move to NFV.


Because switching performance is such an important driver of OPEX reductions, two approaches have been developed that boost performance while compromising on functionality: PCI Pass-through and Single-Root I/O Virtualization (SR-IOV). As we’ll see, though, the functions that are dropped by these approaches turn out to be critical for Carrier Grade telecom networks. Fortunately, there is now an alternative solution that provides best-in-class performance as well as these key functions, so the compromises turn out to be unnecessary.


PCI Pass-through is the simplest approach to switching for NFV infrastructure. As explained in detail here, it allows a physical PCI Network Interface Card (NIC) on the host server to be assigned directly to a guest VM. The guest OS drivers can use the device hardware directly without relying on any driver capabilities from the host OS.


Using PCI Pass-through, you can deliver network traffic to the Virtual Network Functions (VNFs) at line rate, with a latency that is completely dependent on the physical NIC. But NICs are mapped to VMs on a 1:1 basis, with no support for the sharing of NICs between VMs, which prevents the dynamic reassignment of resources that is a key concept within NFV. Each VM requires a dedicated NIC that can’t be shared, and NICs are significantly more expensive than cores as well as being less flexible. 
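Under KVM with libvirt, for example, PCI Pass-through is typically expressed as a hostdev entry in the guest's domain XML (a sketch; the PCI address is illustrative):

```xml
<!-- Sketch: assigning host PCI NIC 0000:03:00.0 directly to a guest
     in a libvirt domain definition (address is illustrative). -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Note that the guest now owns that physical device outright, which is exactly why the NIC cannot be shared with any other VM.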

 

This white paper provides a good explanation of the concept behind Single-Root I/O Virtualization (SR-IOV). Basically, SR-IOV, which is implemented in some but not all NICs, provides a mechanism by which a single Ethernet port can appear to be multiple separate physical devices. This enables a single NIC to be shared between multiple VMs.


As in the case of PCI Pass-through, SR-IOV delivers network traffic to the Virtual Network Functions (VNFs) at line rate, typically with a latency of 50µs, which meets the requirements for NFV infrastructure. With SR-IOV, a basic level of NIC sharing is possible, but not the complete flexibility that enables fully dynamic reallocation of resources. NIC sharing also reduces net throughput, so additional (expensive) NICs are typically required to achieve system-level performance targets.


For NFV, though, the biggest limitations of PCI Pass-through and SR-IOV become apparent when we consider features that are absolute requirements for Carrier Grade telecom networks:

  • Network security is limited since the guest VMs have direct access to the network. Critical security features such as ACL and QoS protection are not supported, so there is no protection against Denial of Service (DoS) attacks.
  • These approaches prevent the implementation of live VM migration, whereby VMs can be migrated from one physical core to another (which may be on a different server) with no loss of traffic or data. Only “cold migration” is possible, which typically impacts services for at least two minutes.
  • Hitless software patching and upgrades are impossible, so network operators are forced to use cold migration for these functions too.
  • It can take up to 4 seconds to detect link failures, which impacts required link protection capabilities.
  • Service providers are limited in their ability to set up and manage VNF service chains. Normally, a chain would be set up transparently to the VNF (perhaps by an external orchestrator); when the VNF owns the interface (as with PCI Pass-through or SR-IOV), it must be involved in the setup and management of the chains, which is complex at best and infeasible at worst.


For service providers who are deploying NFV in their live networks, neither PCI Pass-through nor SR-IOV enable them to provide the Carrier Grade reliability that is required by telecom customers, namely six-nines (99.9999%) service uptime.


Fortunately, though, the telco-grade Accelerated vSwitch (AVS) within Wind River’s Titanium Server software provides a solution to this problem.



In terms of performance, AVS delivers 40x the performance of the open-source Open vSwitch (OVS) software, switching 12 million packets per second, per core (64-byte packets) on a dual-socket Intel® Xeon® Processor E5-2600 series platform (“Ivy Bridge”) running at 2.9GHz. AVS performs line-rate switching using significantly fewer processor cores than OVS, enabling the remaining cores to be utilized for running VMs that comprise revenue-generating services. (See this earlier post for more on the benefits of this high-performance virtual switching.)
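A back-of-the-envelope check of these numbers (assuming 10 GbE ports, which the post does not state explicitly): line rate with minimum-size 64-byte frames is about 14.88 Mpps, so at 12 Mpps per core AVS needs roughly 1.24 cores per port.

```python
# Back-of-envelope check of the switching numbers quoted above,
# assuming 10 GbE ports. Each 64-byte frame occupies 64 + 20 bytes
# on the wire (preamble + inter-frame gap).
line_rate_bps = 10e9
frame_bits = (64 + 20) * 8
line_rate_mpps = line_rate_bps / frame_bits / 1e6
print(f"10 GbE line rate: {line_rate_mpps:.2f} Mpps")      # ≈ 14.88

# At the quoted 12 Mpps per core:
print(f"cores per 10 GbE port: {line_rate_mpps / 12:.2f}")  # ≈ 1.24
```

Every core freed from switching in this way is a core available for revenue-generating VNFs, which is the OPEX argument the post is making.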


At the same time, AVS provides the Carrier Grade features that are absent from the other two solutions that we’ve discussed:

  • ACL and QoS protection, providing protection against DoS attacks and enabling intelligent discards in overload situations.
  • Full live VM migration with less than 150ms service impact, instead of the limited “cold migration” option.
  • Hitless software patching and upgrades.
  • Link protection with failover in 50ms or less.
  • Fully isolated NIC interfaces.


With AVS now available as part of Wind River’s NFV infrastructure platform, there’s no need to forsake critical Carrier Grade features in order to meet performance targets. AVS delivers performance that is equivalent to or better than either PCI Pass-through or SR-IOV, while at the same time enabling service providers to achieve telco-grade reliability as they progressively deploy NFV in their networks.


As always, we welcome your comments on this topic. And if you’re going to be at SDN & OpenFlow World Congress in Dusseldorf in October, please do stop by the Wind River booth to see a demonstration of our NFV infrastructure solution and talk about your requirements in this area.
