
The elevator pitch for Network Functions Virtualization (NFV) is simple: why make massive investments in custom equipment when standard servers can do the job – and with far better flexibility? The reality, of course, is much more complicated. To deploy an NFV solution, developers must answer questions including:


  • Which servers should I use?
  • What is the right operating system (OS) and virtualization solution?
  • How do I virtualize switching and packet processing?
  • How can I ensure sufficient performance, availability, reliability, and cost-efficiency?


Two new NFV platforms from Red Hat* and Wind River* address these questions with highly integrated OpenStack* packages that are optimized for high performance and rapid deployment. Each platform combines a variety of software and hardware from the Intel® Internet of Things Solutions Alliance – let’s start by taking a look at the NFV platform from Red Hat, a General member of the Alliance.


Red Hat*/6WIND* NFV Foundation

As shown in Figure 1, Red Hat’s offering is built on Red Hat Enterprise Linux* and Red Hat Enterprise Virtualization. These components provide the fundamental reliability and efficiency needed to decouple network functions from the underlying hardware. Red Hat Enterprise Linux OpenStack* Platform controls the overall solution and provides an interface for a wide variety of network orchestration software tools.



Figure 1. The Red Hat*/6WIND* NFV platform is optimized for reliability and performance.


To ensure optimal performance, the solution incorporates the 6WINDGate* networking stack from Affiliate member 6WIND. Compared to standard Linux* distributions, this software delivers a 10x performance boost to both data plane applications and the virtual switch. For the data plane, 6WINDGate uses the Data Plane Development Kit (DPDK) library to process network packets outside the Linux kernel. This fast-path processing can deliver over 10 million packets per second per core on the latest Intel® Xeon® processors. (See the Red Hat/6WIND brief for more details.)
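To put that number in context, a back-of-the-envelope cycle budget shows why kernel bypass matters. The 2.5 GHz clock below is an assumed figure for illustration, not a number from the brief:

```python
# Rough per-packet cycle budget for fast-path packet processing.
# The 2.5 GHz clock rate is an illustrative assumption, not a measured spec.
CLOCK_HZ = 2.5e9          # assumed Intel Xeon core clock
PACKETS_PER_SEC = 10e6    # 10 million packets/sec/core, per the Red Hat/6WIND figures

cycles_per_packet = CLOCK_HZ / PACKETS_PER_SEC
print(f"Cycle budget per packet: {cycles_per_packet:.0f} cycles")
# -> Cycle budget per packet: 250 cycles
```

A few hundred cycles is far too little for a trip through the kernel network stack, which is exactly why DPDK moves packet processing into user space.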


For switching, the 6WINDGate Open vSwitch Acceleration module runs through 6WIND’s fast path architecture. It delivers 200 Gbps throughput transparently, without recompiling the standard Open vSwitch or changing anything in the OpenStack platform, the Kernel-based Virtual Machine (KVM)/QEMU hypervisor, or the OpenFlow controller. (The 200 Gbps performance was recently verified on an HP ProLiant server – see 6WIND’s product brief for details. HP is an Associate member of the Alliance.)


To demonstrate their platform’s value, Red Hat and 6WIND recently created a vEPC orchestration proof-of-concept (PoC) illustrated in Figure 2. This PoC runs on PowerEdge* C6220 servers from Premier member Dell. Other collaborators on the PoC include Intel, Cyan, and Connectem. It is worth noting that the Dell PowerEdge C6220 offers a number of carrier-grade features such as certification to NEBS Level 3 and ETSI standards (including extended operating temperature range and shock resistance) and DC power supplies. Thus, the PoC comprises deployment-ready components from the hardware all the way up to the service orchestration layer.



Figure 2. Key elements of the vEPC PoC Stack.


This PoC nicely demonstrates how developers can deal with the complexities of NFV. I strongly recommend reading the related white paper on the Intel® Network Builders site– it describes the challenges and the solutions of vEPC in considerable detail and is well worth your time.


Wind River* Carrier Grade Communications Server

Wind River has also developed its own fully-integrated NFV solution, the Wind River* Carrier Grade Communications Server. As illustrated in Figure 3, Wind River has built this platform up over time, offering progressively higher levels of functionality over the last couple of years. The company’s latest offering integrates Wind River Open Virtualization with Wind River Linux, optimized real-time Kernel-Based Virtual Machine (KVM++), carrier grade enhancements to OpenStack, and DPDK accelerated vSwitch technologies. The solution is designed specifically for NFV deployment, and to seamlessly integrate with existing networks.



Figure 3. Wind River’s NFV platform builds on previous offerings.


The most obvious benefit of the platform is its high level of integration. But what really makes the platform noteworthy is its emphasis on carrier grade reliability. According to Wind River, the software supports “six 9s” reliability, translating to roughly 30 seconds of downtime a year. It is hard to overstate the importance of this claim – an NFV design is only useful if it delivers carrier-grade reliability, and working out reliability issues can be a huge development hurdle. Thus, the availability of six 9s reliability in a pre-integrated NFV software platform is a major asset.
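The arithmetic behind a “six 9s” claim is easy to verify:

```python
# Sanity check on "six 9s" availability: expected downtime per year.
AVAILABILITY = 0.999999                  # "six 9s"
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~31.6 million seconds

downtime = (1 - AVAILABILITY) * SECONDS_PER_YEAR
print(f"Annual downtime: {downtime:.1f} seconds")
# -> Annual downtime: 31.6 seconds
```

For comparison, “five 9s” (99.999 percent) already allows more than five minutes of downtime a year, so each additional nine is a tenfold tightening of the budget.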


The Wind River Carrier Grade Communications Server is optimized for and runs on HP ProLiant servers. Given the platform’s emphasis on carrier-grade reliability, it is worth noting that the HP ProLiant line itself offers carrier-grade features, including NEBS and ETSI certifications and extended lifecycles. It is also worth noting that 6WIND used this same HP server line to demonstrate its Open vSwitch performance. This illustrates the fundamental value of NFV: the ability to use standard servers and software with a high level of interoperability.


Returning to Wind River’s NFV platform, the reliability of the software is complemented by its high performance. Particularly notable is the accelerated virtual switch software, which delivers 20x the performance of the standard Open vSwitch with no hardware acceleration required. This switch can deliver 12 million packets per second to guest virtual machines (VMs) using only two processor cores on an industry-standard server platform, in a real-world use case involving bidirectional traffic. What’s more, this cutting-edge performance is achieved using up to 33% fewer CPU resources than alternative solutions – see Wind River’s press release for details.


A Foundation for the Future

The Red Hat/6WIND and Wind River NFV platforms address some of the most pressing issues facing telecom developers today. By providing highly integrated foundations for virtualized equipment, these platforms promise to dramatically lower time-to-market while delivering industry-leading performance, efficiency and reliability. And it’s clear that these companies are fully committed to NFV, meaning that we can expect even greater integration and higher levels of performance in the future. I can’t wait to see what’s next!


Learn More

Contact featured members:


Solutions in this blog:


Related topics:


Dell OEM is a Premier member of the Intel® Internet of Things Solutions Alliance. Hewlett-Packard and Wind River Systems are Associate members of the Alliance. 6WIND SA is an Affiliate member and Red Hat is a General member.


Kenton Williston

Roving Reporter (Intel Contractor), Intel® Internet of Things Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

As the march towards Network Functions Virtualization (NFV) continues, many telecom equipment manufacturers (TEMs) are revisiting the role of Deep Packet Inspection (DPI). On one hand, NFV presents new opportunities to optimize performance and revenue with DPI. On the other hand, the complexity of NFV can make these benefits harder to achieve.


In this blog, I will highlight some of the challenges of DPI in an NFV environment. I will also examine a reference architecture that uses NFV solutions from the Intel® Internet of Things Solutions Alliance, and show how this architecture overcomes key design challenges to realize the benefits of NFV.


What Makes Virtualized DPI Different?

As I discussed in my last blog, NFV has the potential to improve network flexibility and cost structures, allowing operators to innovate more quickly and efficiently. However, the transition from fixed-purpose hardware to virtualized applications requires careful attention to availability, performance, and scalability.


The scope and magnitude of these challenges are motivating TEMs to seek third-party software for their virtualized designs. Heavy Reading recently issued a detailed study on virtualized DPI which found that:


  • More than 90 percent of respondents said ETSI NFV would affect next-generation product design, and more than half said the availability of standardized virtualized network function components (VNFCs) would likely lead them to source more third-party components, including the proposed ETSI DPI VNFC.
  • Two thirds of vendors now believe DPI is a must-have technology.
  • The proportion of vendors choosing to source DPI from a third party is gradually rising, and a majority of those doing so prefer to use a pure-play supplier of DPI components.


In response to these trends, Associate Alliance member Wind River and Affiliate members Tieto and Qosmos recently teamed up with Intel to build an NFV DPI reference architecture. This design demonstrates how to overcome the common challenges associated with virtualized DPI (vDPI). The team has written a detailed white paper explaining their work; I will touch on just a few points here and recommend reading the paper for more details. And if you are new to the topic, check out the Qosmos/Intel NFV backgrounder for a good intro.



Figure 1. The reference architecture shows how to implement virtualized DPI.


Carrier-Grade Availability

Availability is the cornerstone of any telecom solution. Thus, the reference architecture starts with high-reliability components like carrier-grade Wind River* Linux*. These carrier-grade components are coupled with high-availability design techniques such as redundant virtual machines (VMs) with failover capabilities.


Of course, reliability and redundancy are hardly new concepts in telecom design. What may be less obvious is the way performance can affect availability: virtualized designs must contend with overheads such as context switching, which can erode the timing margins that availability depends on. Thus, NFV designs must maximize performance not only to achieve sufficient throughput but also to ensure availability.


Optimized Performance

The reference design tackles performance issues with a variety of design techniques. These start at the hardware level, where the Intel® Xeon® processors, Ethernet controllers, and Ethernet switch are all optimized for virtualization – and in the case of the switch, specifically optimized for virtualized networking. (More background on the switch here.) The key to this optimization is minimizing hypervisor overhead. Context switches are computationally expensive, and introduce unwelcome timing variability. Thus, the Intel hardware has been equipped with a variety of virtualization accelerators that offload hypervisor tasks, leading to considerably improved performance and determinism.


Building on this hardware foundation, the reference architecture uses the Data Plane Development Kit (DPDK) to accelerate packet processing across the virtual switch and within various applications. DPDK leverages the Intel® Data Direct I/O (DDIO) capability of Intel Xeon processors to make data packets directly available in the CPU cache for processing, avoiding time-consuming memory reads. I haven’t seen performance specs for this reference architecture, but similar designs can achieve 200 Gbps through the Open vSwitch – an impressive number, to say the least.


Moving up the software stack, the Tieto IP (TIP) Stack is also optimized for virtualized environments. For example, the TIP Stack implements Single Root I/O Virtualization (SR-IOV), which eliminates hypervisor involvement in data movement by giving each VM independent memory, interrupts, and DMA streams – all of which can be directly accessed by the Intel® Ethernet controllers.


Zooming a bit further out, there is the larger question of how a system can minimize VM-to-VM communications through the hypervisor. And here at last we hit upon the importance of DPI in NFV. By deploying DPI in front of incoming traffic, flows can be tagged to convey the protocol and application ID to other nodes (Figure 2). By identifying the needed processing up front, this approach eliminates the need to pass flows between VMs. In addition, it allows nodes like the Traffic Detection Function (TDF) to be stateless, simplifying their design.



Figure 2. DPI can be used to tag incoming flows.
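The tagging idea can be sketched conceptually in a few lines of Python. The signature table, field names, and application IDs below are invented for illustration – a production DPI engine such as the Qosmos ixEngine classifies against thousands of protocol signatures:

```python
# Conceptual sketch of DPI-based flow tagging: classify once at the network
# edge, then attach protocol and application IDs so that downstream VMs can
# process the flow statelessly. All names and IDs here are hypothetical.
SIGNATURES = {
    (6, 443): ("TLS", 101),   # (IP protocol, dst port) -> (protocol, app ID)
    (6, 80):  ("HTTP", 102),
    (17, 53): ("DNS", 103),
}

def classify(flow):
    """Return a copy of the flow dict tagged with protocol and application ID."""
    key = (flow["proto"], flow["dst_port"])
    protocol, app_id = SIGNATURES.get(key, ("UNKNOWN", 0))
    return {**flow, "protocol": protocol, "app_id": app_id}

tagged = classify({"proto": 6, "dst_port": 443, "src": "10.0.0.1"})
print(tagged["protocol"], tagged["app_id"])
# -> TLS 101
```

The key design point is that classification happens once, up front; every node after the DPI stage reads the tag instead of re-inspecting the payload.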


The reference architecture implements this flow classification with the Qosmos DPI VNFC. Based on the popular Qosmos ixEngine*, this VNFC enables many functions, including quality of service (QoS) management, traffic shaping, throttling, and gating. In addition to supporting high availability and performance, the DPI VNFC is critical to enabling scalability, as we will see in our final section.


Flexibility and Scalability

One of the main selling points for NFV is the ability to rapidly scale and adapt to new needs. DPI is key here. Information extracted by the DPI VNFC can be passed to the orchestrator/analytics system, which in turn can trigger VMs to scale capabilities up or down as needed. Of course, this requires that the DPI engine itself be scalable. In particular, it must be possible to add new instances of the VNFC without dropping packets or disrupting existing DPI instances.


One way the reference architecture addresses these issues is by separating DPI from the Packet Data Network Gateway (PDN-GW). This simplifies DPI scaling and enables increases in capacity without impacting the PDN-GW.


It is also worth noting that each DPI VM can be independently configured to meet specific needs. For example, a particular DPI instance could be configured to classify 100 selected apps at 40 Gbps with a particular latency window. To support such a configuration, the orchestrator can provide an appropriate number of CPU cores and memory. A different VM with different needs might receive greater or lesser resources.


Powering NFV with DPI

Overall, the reference architecture demonstrates three key points:


  • DPI has the potential to significantly extend the benefits of NFV.
  • The challenges of implementing DPI in an NFV context can be mitigated with third-party software.
  • Collaboration between third-party NFV software suppliers is essential for addressing the many complexities of NFV.


I expect to see much more progress on NFV software in the coming months. I look forward to bringing you the latest coverage as events unfold!



Learn More

Contact featured members:


Solutions in this blog:


Related topics:


Wind River Systems is an Associate member of the Intel® Internet of Things Solutions Alliance. Tieto and Qosmos are Affiliate members of the Alliance.


Kenton Williston

Roving Reporter (Intel Contractor), Intel® Internet of Things Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

The telecom industry is clearly excited about Network Functions Virtualization (NFV), but this excitement has been accompanied by major concerns. One big worry is the risk involved. To realize the benefits of NFV, the industry must move to a mixed supplier environment – one where systems from different providers work together smoothly. Service providers are understandably skeptical about this possibility. Given the history of the industry, how can they be sure that multi-vendor solutions can achieve interoperability?


Wind River is answering this question with its Titanium Cloud ecosystem. The ecosystem is designed to ensure interoperability with the Wind River* Titanium Server, a highly integrated NFV platform (Figure 1). It includes hardware partners like Kontron and HP as well as software providers like Brocade, GENBAND, Metaswitch Networks, Nakina Systems, and Overture. Because these hardware and software products have been pre-integrated and validated with Wind River’s NFV solutions, service providers and telecom equipment manufacturers (TEMs) can confidently select them for deployment in live networks.



Figure 1. Wind River* Titanium Server is highly integrated.


I recently had a conversation about the program with Charlie Ashton, senior director of business development for networking solutions at Wind River. Charlie pointed out that a full NFV solution requires many components, like Operations Support Systems (OSS) and service orchestration (Figure 2). He explained that the goal of the program is to provide a complete, end-to-end stack that delivers all of these components – and gives service providers and TEMs the ability to choose from multiple vendors.



Figure 2. NFV requires many components.


To achieve this goal, Wind River is working closely with its partners to ensure interoperability.

On the software side, engineers from Wind River and its partner companies are working together closely to validate correct operation on the NFV platform, and to optimize the software where possible. Similarly, Wind River is working closely with its hardware partners to validate and optimize the NFV platform on the target hardware.


The ecosystem has already lined up an impressive array of hardware. For example, Wind River has pre-validated the Kontron Symkloud MS2910, which is designed with redundant 10G OpenFlow switching. This high-density 2RU modular platform packs multiple Intel® processors with dedicated memory and storage, and is optimized for stacking multiple platforms for massively scalable virtual applications. Notably, the Symkloud platform can share virtual resources across physical nodes without additional management software. Kontron recently collaborated with fellow Titanium Cloud members GENBAND and Wind River to demonstrate an NFV-enabled session border controller.


Wind River also validated and optimized its software for the HP ProLiant DL380p and DL360p servers, with the BL460c to follow. HP recently collaborated with Brocade and Wind River to create a series of proofs of concept (PoCs) targeted at the ETSI NFV Reliability and Availability Work Group. These demos featured seven instances of Brocade’s Vyatta vRouter 5400 running in different configurations on HP DL360p rack-mount servers.


The collaboration between HP and Wind River also extends to the newly announced HP OpenNFV Program. This program is designed to help the telecommunications industry accelerate innovation and launch services faster, more easily and more cost-effectively. There is considerable overlap between the membership of HP and Wind River’s programs – HP’s partners include not only Wind River but also 6WIND, Brocade, GENBAND, Intel, the Israel Mobile & Media Association, Mellanox Technologies, Spirent, and SK Telecom.


As part of their partnership, HP and Wind River announced plans to develop NFV solutions that combine Wind River Titanium Server with HP Helion OpenStack*. This combination of technologies is expected to provide communications service providers (CSPs) with an open-source-based cloud platform that meets reliability requirements and accelerates NFV deployments.


In the midst of all of these announcements, it is worth noting that Wind River Titanium Server is now commercially available, so you can put the platform to work immediately. It is also worth noting that the Wind River NFV platform was previously known as Wind River* Carrier Grade Communications Server (Wind River* CGCS). If you’ve been scratching your head wondering why you hadn’t heard of the platform before, now you know – it has a new name!

I also think these announcements are noteworthy because they strongly suggest that NFV is transitioning from an exciting idea to a solid technology. It will be interesting to see how the industry continues to develop and mature in the coming months.


Learn More

Contact featured members:


Solutions in this blog:


Related topics:


Kontron is a Premier member of the Intel® IoT Solutions Alliance. Hewlett-Packard and Wind River Systems are Associate members. 6WIND is an Affiliate member.


Kenton Williston

Roving Reporter (Intel Contractor), Intel® IoT Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

As high-performance embedded computing (HPEC) becomes a mainstay of military computing, the industry is increasingly interested in adapting Linux cluster architectures from the IT world. These architectures have obvious appeal for their low cost and high performance, but they are not generally designed for deterministic performance. Thus it is not clear that these architectures are suitable for real-time defense applications.


Members of the Intel® Internet of Things Solutions Alliance (Intel® IoT Solutions Alliance) have been hard at work answering this question. For example, Associate member GE Intelligent Platforms recently published an excellent white paper that investigates interrupt response times and Message Passing Interface (MPI) latencies using OpenMPI on three different versions of Linux*. The Linux builds were provided by Concurrent Real-Time and Red Hat (both General members) and are all based on Red Hat* Enterprise Linux* 6.3.



All three kernels were tested on the GE-IP DSP280, a dual-socket, eight-core platform based on the Intel® Core™ i7-2715QE processor. To characterize these platforms, a series of measurements was made with and without a background CPU load. The tests found that properly tuned systems could achieve interrupt response latencies of less than 10 µs and MPI latencies of around 1 µs, which is sufficient for many embedded applications.


Even standard Red Hat Enterprise Linux met soft real-time criteria with some tuning, such as disabling hyperthreading and isolating the real-time application on a specific core. However, interrupt latencies could still exceed 120 µs, making this standard distribution unsuitable for hard real-time applications.


In contrast, GE achieved highly deterministic behavior with the other flavors of Linux. For example, it got consistent sub-10 µs interrupt response times on Concurrent RedHawk Linux by applying real-time priority to the critical application, and by “shielding” the core running that application (see Figure 1). Red Hat* Enterprise MRG Realtime was not far behind, with optimized response times below 15 µs. These fast, reliable response times are an excellent fit for hard real-time applications.



Figure 1. Concurrent RedHawk Linux* can provide real-time determinism. Note that all results for the “CHRT+CORE3” optimization (light blue) are under 10 µs.


Interestingly, MPI latencies were not a major problem on any of the kernels. Simply assigning real-time priority to the critical application produced low latencies and low jitter on all three versions of Linux. This leads to a second question. If OpenMPI is good enough for hard real time, should developers consider other options?


As it happens, GE has another great white paper on this very topic. This paper compares OpenMPI to GE’s AXISFlow in an impressively detailed and even-handed analysis, and summarizes the pros and cons of both options on several key criteria:


Programming – GE argues that OpenMPI is overly complex, with 300+ APIs that may have hidden side effects. In comparison, AXISFlow has only 42 APIs for a user to learn. On the other hand, OpenMPI’s main APIs are conceptually simple, and they hide many details from the developer. This makes it easier to get started than with AXISFlow, which exposes more interfaces to the developer.


Hardware Mapping – OpenMPI was designed for homogenous HPC systems with one MPI task per processor. AXISFlow was designed for heterogeneous embedded systems using multiple threads of execution on each core and multiple interconnects between CPUs. Thus, GE sees AXISFlow as a better fit for HPEC systems.


Performance – According to GE’s tests, OpenMPI offers lower latency for smaller message sizes and when there is no conflict with other CPU tasks. In contrast, AXISFlow outperforms in multi-tasking applications that need determinism (see Figure 3).



Figure 3. Relative performance of the MPI alternatives depends greatly on the conditions at hand.


Distributions – As the name says, OpenMPI is an open standard and is thus available for free – or with paid support from a variety of vendors. In contrast, AXISFlow is proprietary.


This last point deserves a bit of exploration. OpenMPI is popular not only in the IT world but increasingly in embedded applications. GE itself just added OpenMPI to its AXIS 6.0 design suite, which already supported AXISFlow. As another example, General member Mercury Systems supports the API in its Performance Enhanced OpenMPI/OFED.


On the other hand, there are plenty of popular proprietary solutions, like Curtiss-Wright Continuum IPC and the Gedae Software Development Platform. (Curtiss-Wright and Gedae are General members of the Alliance.) A full analysis of HPEC architectures is not complete without careful consideration of these alternatives – something that is well beyond the scope of this blog.


Having said that, I think the overall picture is clear: Linux clusters are unquestionably suitable for military HPEC applications. This is an important development, because it allows embedded developers to leverage the cost and performance advantages of these architectures. I have no doubt that we will see IT-like Linux clusters continue to make inroads into military applications as the HPEC market matures.


Learn More

Solutions in this blog:


Related topics:


GE Intelligent Platforms is an Associate member of the Intel® IoT Solutions Alliance. Gedae, Inc., Concurrent Real-Time, Curtiss-Wright Controls Defense Solutions, Mercury Systems, Inc., and Red Hat are General members of the Alliance.


Kenton Williston

Roving Reporter (Intel Contractor), Intel® IoT Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Digital signage is already a vital tool for retailers, boosting sales by up to 33 percent. Now businesses can make their signage even more effective with Intel® Retail Client Manager  (Intel® RCM). This updated content management system (CMS) adds features including:


  • Remote Control, for browser-based management of both ad content and signage hardware
  • Audience Analytics, which enables signs to sense their audience, deliver custom messages, and measure the results
  • Localization, with new support for Spanish, Japanese, Russian, Simplified Chinese, German and Portuguese


These new features can help boost sales, cut costs, and simplify operations. Best of all, it’s easy to get started with Intel RCM thanks to support from Arrow, Avnet, and Seneca, all Associate members of the Intel® Internet of Things Solutions Alliance (Intel® IoT Solutions Alliance).


Before we get into the new features, a quick review of Intel RCM is in order. As shown in Figure 1, the basic components include:


  • Intel® RCM Campaign Manager, for creating and updating content
  • Intel® RCM Service, which distributes content and stores all related information (such as play history)
  • Intel® RCM Player, which displays content on the screen


Together, these components enable non-technical personnel to create, distribute, and manage multiple campaigns across digital signage, kiosks, point-of-sale (POS) terminals, and other digital touch points. (See our earlier coverage for more details.)



Figure 1. Intel RCM enables distribution across a network.


With that background in mind, let’s take a peek at the new features:


Remote Control

The new Remote Control interface gives operators a number of “in-band management” capabilities. Most notably, they can watch what is playing on up to six screens in real time (Figure 2). Operators can also remotely power off or restart a media player, or take direct control of any player. For platforms equipped with Intel® Active Management Technology (Intel® AMT), operators can also exercise “out-of-band management” functions like waking up a device or resetting the BIOS.



Figure 2. Remote Control employs a browser-based interface.


These remote management tools can be a major money saver. According to Global Retail Insights (an IDC Company), remote management can reduce support costs by approximately $400 per device per year. What’s more, these tools can reduce downtime by enabling remote technicians to troubleshoot malfunctioning screens – thus ensuring that audiences get maximum exposure to ad campaigns.


Intel® RCM Audience Analytics

One of the main advantages of digital signage is its flexibility. A given sign can support many different campaigns, switching between campaigns as needed to meet various business goals.


The new Audience Analytics feature in Intel RCM takes full advantage of this flexibility by allowing signs to adapt to the audience. When a sign is paired with a digital camera, Audience Analytics can detect audience demographics like gender and age, allowing digital touch points to deliver the most relevant message to the audience at hand. For example, a sign could advertise sport drinks to young males and diet soda to mature women.
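As a toy sketch of this selection logic (the demographic buckets and campaign names are invented for illustration; in practice the pairings are configured in Intel RCM Campaign Manager):

```python
# Toy sketch of audience-adaptive campaign selection.
# Demographic buckets and campaign names are hypothetical.
CAMPAIGNS = {
    ("male", "young"):    "sport-drink-promo",
    ("female", "mature"): "diet-soda-promo",
}
DEFAULT_CAMPAIGN = "storewide-sale"  # fallback when no targeted match exists

def pick_campaign(gender, age_group):
    """Return the campaign best matched to the detected demographic."""
    return CAMPAIGNS.get((gender, age_group), DEFAULT_CAMPAIGN)

print(pick_campaign("male", "young"))    # -> sport-drink-promo
print(pick_campaign("female", "young"))  # -> storewide-sale
```

A default campaign matters in practice: detection is probabilistic, and the sign must always have something sensible to show.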


Just as importantly, Audience Analytics enables a sign to measure the success of each campaign. Specifically, the software can measure interactions including dwell time, time of day, and viewed content. Figure 3 illustrates how this data can be used to evaluate the success of different campaigns.



Figure 3. Audience Analytics can measure views and dwell time.


It is worth noting that most of the new capabilities in Intel RCM were previously available as separate product offerings. For example, the features of Audience Analytics were offered in the Intel® Audience Impression Metrics Suite (Intel® AIM Suite). However, Intel RCM better integrates these features, improving the user experience and making signage networks easier to control, analyze, and maintain.


Getting Started with Intel® RCM

Intel RCM is available from Arrow, Avnet, and Seneca, as well as a variety of other distributors. In addition to offering Intel RCM as stand-alone software, several distributors are pre-packaging it with media players for easier deployment. A good example of this latter approach is the Seneca HDN illustrated in Figure 4. This small form factor media player offers a turnkey signage solution with features including:


  • High-performance, low-power 3rd or 4th generation Intel® Core™ processors
  • Support for Intel® AMT (and thus support for the full suite of Remote Control features)
  • Front USB port for expanded I/O capability and simplified servicing
  • Serial port for central management and monitor control
  • Enhanced thermal efficiencies for extreme environments



Figure 4. The Seneca HDN is a turnkey media player.


Many other players have been bundled with Intel RCM, or certified for use with Intel RCM. For a full list, see the Solutions Directory. Here you will find solutions for small signs, large signs, and even multi-screen installations. And as always, don’t hesitate to contact me for more details!


Learn More

Contact featured member:

Solutions in this blog:

Related topics:


Arrow - North America, Avnet - North America, and Seneca are Associate members of the Intel IoT Solutions Alliance.


Kenton Williston

Roving Reporter (Intel Contractor), Intel® IoT Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

Intel is actively contributing code to enable open-source system software debug of the Intel® Quark SoC, working through the Open On-Chip Debugger (OpenOCD) project.


With OpenOCD* 0.8.0, the Intel® Quark SoC X1000 is officially supported.


Required software:


• Linux* host system

• Quark-patched OpenOCD


• Eclipse (Indigo tested) with the CDT plugin installed (Main + Optional Features)

• Quark Kernel compiled with debug symbols

• Git

Required hardware:


• An OpenOCD-supported JTAG debugger.




For a complete set of supporting documentation, please visit the website for your specific JTAG hardware. The board has been tested with the following JTAG debuggers:


• TinCanTools* FLYSWATTER2


• Olimex* ARM-USB-OCD-H


The following pin adapter was used to connect the JTAG debugger to the Quark board:


Patching and building OpenOCD*


To enable Quark support, you must apply a patch to the OpenOCD source code and then build it. The following packages are required for the build:



• git

• libtool

• automake


In addition, to use a JTAG pod with an FTDI/FT2232 chip (like the Flyswatter2) you must install the related development library, using a command like:

$ sudo apt-get install libftdi-dev

Check out the OpenOCD source code, create a branch, and apply the Quark patch using the following commands:


$ git clone git:// openocd-code

$ cd openocd-code

Configure and build OpenOCD:


$ ./bootstrap

$ ./configure --enable-ft2232-libftdi --enable-maintainer-mode

$ make

It is not strictly necessary to install OpenOCD. The binary and configuration files can be used from the build/source tree directly if desired. However, it is recommended to perform this additional step:

$ sudo make install

JTAG USB device access


By default, non-root users won’t have access to JTAG pods connected via USB. To run OpenOCD from a non-root account, you must grant write access to the proper /dev/bus/usb entry every time a device is connected. The process can be automated by adding a udev rule. Simply create a text file in the rules directory:

$ sudo vim /etc/udev/rules.d/99-openocd.rules


# Example only: the IDs depend on the JTAG device

SUBSYSTEM=="usb", ATTR{idVendor}=="0403", ATTR{idProduct}=="6010",MODE="0666"

SUBSYSTEM=="usb", ATTR{idVendor}=="15ba", ATTR{idProduct}=="002b",MODE="0666"


Launching OpenOCD*

The first step to enable source-level debug is to connect your JTAG pod to the board and run OpenOCD, selecting the correct interface and board configuration files. The example below uses a Flyswatter2 JTAG debugger.

$ openocd -f interface/flyswatter2.cfg -f board/quark_x10xx_board.cfg


Launching GDB*


GDB documentation is available here:

It is possible to perform source level debug using GDB by connecting to the OpenOCD internal GDB server, which answers on port 3333 by default. OpenOCD must be running as shown in the previous section.

Run GDB and connect to the OpenOCD internal GDB server. Load the debug info from a debug-compiled Quark kernel vmlinux file.


For the kernel built, the commands are:


$ gdb


(gdb) target remote localhost:3333

(gdb) monitor halt

(gdb) symbol-file /PATH/TO/MY_KERNEL_BUILD_DIR/clanton_linux_<version>/work/vmlinux


A detailed whitepaper on how to use GDB* with OpenOCD* targeting debug on Intel® Quark SoC X1000 can be found at

The Intel® JTAG Debugger included with Intel® System Studio 2014 Update 2 now supports, in its latest version, Linux* hosts (64-bit x86_64 Linux* systems running Ubuntu* 12.04, 13.04 or Fedora* 17, 18), targeting:


  • Intel® Atom™ Processor C2xxx
  • Intel® Quark™ SoC X1000
  • Intel® Atom™ Processor E38xx, Z3680, Z37xx
  • 3rd and 4th generation Intel® Core™ and Intel® Xeon® processors


The software package dependencies are:

  • Mono Runtime 2.10.8 or higher
  • The following packages are required:
    • Ubuntu*: mono-complete
    • Fedora*: mono-basic, mono-core, mono-data, mono-extras, mono-locale-extras
  • libusb 0.1.12 or higher
  • fxload 0.0.20020411 or higher


The Intel® ITP-XDP BR3 device can be ordered through

  • Select the Debug Tools product category and add ITP-XDP BR3 to your cart.


Other new features on the Linux* host include:


  • Execution Trace using Intel® Processor Trace (Intel® PT) and Real-Time Instruction Trace (RTIT)


The Intel® JTAG Debugger now supports execution trace using two new hardware capabilities, Intel PT and RTIT.
These new hardware capabilities give the user increased trace depth over previous technologies, with little to no impact on target execution speed.


Support for these capabilities is limited to specific processors and/or platforms; for more information, please contact customer support.


  • Support of BIOS flashing for 4th generation Intel® Core™ processor
  • Support for 3rd generation Intel® Core™ processor target


For detailed debugger usage, please check out:


Release Notes

Getting Started Guides

User Guides



What's New in Intel® System Studio 2014 Update 2

  1. Intel® VTune™ Amplifier
    1. Sampling collector improvements for Android* target
  2. Intel® JTAG Debugger
    1. Support for Intel® Atom™ Processor C2xxx, E3xxx, Z32xx, Z33xx, as well as 4th generation Intel® Core™ processors with Linux* hosted debugger.
    2. Support of BIOS flashing for 4th generation Intel® Core™ processor
    3. Support for 3rd generation Intel® Core™ processor target
    4. Support for latest generation Intel® Atom™ Processor Z3460 and Z3480.

What's New in Intel® System Studio 2014 Update 1

  1. Intel® IPP support for the Intel® Quark SoC has been added, with a subset of the full Intel® IPP function set
  2. Intel® C++ Compiler for Android* and embedded OS Linux* updated
  3. Intel® VTune™ Amplifier
    • Updated version of Intel® VTune™ Amplifier with support for remote software-based algorithm analysis (Basic Hotspots, Concurrency, Locks and Waits) on embedded Linux* target systems
    • New analysis type “TSX Exploration” for 4th generation Intel® Core™ processors.
    • Support for external data collection launched from the VTune Amplifier with the custom collector target configuration option or -custom-collector command line option.
    • Android 64 bit kernel support (not 64-bit user space)

The new Intel® System Studio released in mid-February combines support for the Intel® Quark SoC with a highly optimized compiler and the XDB Intel® JTAG Debugger, offering performance tuning, in-depth instruction and event tracing, power and performance analysis, and first-class build tools with hand-tuned assembly-level optimizations specifically for the Intel® Atom™ Processor E38xx.

It targets cross-development for Yocto Project* 1.5, Wind River Linux* 5 and 6, and the Wind River Intelligent Device Platform.

Both Windows* and Linux* hosts are supported, with the Intel compiler and library set integrated into an Eclipse IDE.

Check it out at

1. Build Applications for Wind River* Linux* 5 OS Target

To find out more about Wind River* Linux* 5 and how to install the required Wind River* Linux* toolchain, go to

The required environment variables that need to be set using the source command are:



where <install_dir> is the installation path for the Wind River* Linux* software development toolkit, e.g. /export/SDK, and <target> is <ia32> or <intel64>.

The compiler option specifying the cross-compilation is

icc  -platform=wrl50 my_source_file.c

icpc -platform=wrl50 my_source_file.c

The corresponding environment file is located in <install-dir>/bin/<target>

where <install-dir> is /opt/intel/ and target is <ia32> or <intel64>.

2. Build Applications for Wind River* Linux* 6 OS Target

To find out more about Wind River* Linux* 6 and how to install the required Wind River* Linux* toolchain, go to

After installation is completed, set PATH, WR_SYSROOT and WR_GNU_PREFIX environment variables to match cross-compilation tools directories:

32-bit target:

export PATH=<install_dir>/x86_64-linux/usr/bin/i586-wrs-linux:$PATH

export WR_SYSROOT=<install_dir>/qemux86

export WR_GNU_PREFIX=i586-wrs-linux-

64-bit target:

export PATH=<install_dir>/x86_64-linux/usr/bin/x86_64-wrs-linux:$PATH

export WR_SYSROOT=<install_dir>/qemux86-64

export WR_GNU_PREFIX=x86_64-wrs-linux-


When compiling for a Wind River* Linux* 6 target, specify the -gnu-prefix= and --sysroot= options.

To compile a C source file (for example, my_source_file.c) for Wind River* Linux* 6 target, use a command similar to the following:

icc -gnu-prefix=$WR_GNU_PREFIX --sysroot=$WR_SYSROOT my_source_file.c

3. Build Applications for Yocto Project* Target

The Intel® System Studio installer will automatically integrate the Intel® C++ Compiler with an existing Application Development Toolkit (ADT) installation for Yocto Project* 1.5.

Please refer to the Yocto Project* Application Development Toolkit User’s Guide at for details on the Application Development Toolkit Usage.

To be able to use the Application Development Toolkit with the Intel® C++ Compiler it has to be installed using the Application Development Toolkit tarball and installer script available at More detailed information on this can be found at

The Yocto Project* distributions and board support packages can be found at

Yocto Project* documentation can be found at

The required environment variables that need to be set using the source command are:



The compiler option specifying the cross-compilation is

icc -platform=yl<version> my_source_file.c

icpc -platform=yl<version> my_source_file.c




The corresponding environment file is located in <install-dir>/bin/<target>,

where <install-dir> is /opt/intel/ and target is <ia32> or <intel64>.


4. Compiler Usage for Intel® Quark Processor

When using the Intel® C++ Compiler, the following two compiler options are recommended:

-mia32 -falign-stack=assume-4-byte


-mia32 :

            tells the compiler to generate code for IA-32 architecture

-falign-stack=assume-4-byte :

            tells the compiler to assume the stack is aligned on 4-byte boundaries. The compiler can dynamically adjust the stack to 16-byte alignment if needed. This will reduce the data size necessary to call routines.

Intel® System Studio is now optimized and tested for Tizen* IVI. It provides a compiler that takes advantage of the latest optimizations for the Intel® Atom™ Processor E38xx, including Intel® Streaming SIMD Extensions 4.2 (Intel® SSE 4.2) and the latest in memory access, cache utilization, and instruction pipeline optimizations.

Tizen* IVI software stack components and C/C++ applications can be optimized and tuned using the Intel® VTune™ Amplifier for Systems in conjunction with the Sampling Enabling Product (SEP).

Intel® System Studio now adds full support for the Android* developer to its feature set. Everything from system software debug via JTAG to device driver and shared object build and optimization with the Intel® C++ Compiler, the Intel® VTune™ Amplifier for Systems, and the Intel® Energy Profiler is available as part of one comprehensive development studio.

You can find out more at

Below are some detailed whitepaper articles that highlight the depth and breadth of the feature set:

Intel® System Studio now supports UEFI BIOS debug not just through JTAG (UEFI BIOS Debugging), but also through the EFI Development Kit 2.x (EDK II).

To find out more about the Intel® System Debugger included with Intel® System Studio, have a look at the in-depth article below:

The combination of rapidly growing data traffic on both wireless and fixed line networks and relatively slow growth in revenue has forced service providers to optimize network performance using policy enforcement systems in both mobile and fixed line networks. Policy enforcement uses deep packet inspection (DPI) to identify applications and traffic flows. At the same time, service providers are planning to implement network functions virtualisation (NFV), a new approach that uses virtual platforms to enable a more flexible network.


In this blog I am going to explore the benefits of using virtual platforms to implement DPI and policy enforcement using software development kits from 6WIND and Qosmos that have been optimized for Intel® Xeon® processors and Intel® Data Plane Development Kit (Intel® DPDK). 6WIND and Qosmos are Associate members of the Intel® Intelligent Systems Alliance. The 250-plus members of the Alliance collaborate closely with Intel to create hardware, software, tools, and services to help speed intelligent systems to market.


Policy Enforcement in 3G/LTE Networks

Figure 1 shows the key elements in the combined 3G/LTE network. The LTE eNodeB base stations are connected directly into the enhanced packet core (EPC) that includes the Serving Gateway, PDN Gateway and MME. The 3G NodeB base stations are connected through the radio network controller (RNC) to the packet core (SGSN/GGSN). These elements are being implemented on virtual platforms as the industry moves to NFV.


Policy Enforcement in 3g-LTE Networks.png

Figure 1. Policy Enforcement in 3G/LTE Networks (Source: Earlswood Marketing)


Policy enforcement in 3G/LTE networks is handled by the Policy and Charging Rules Function (PCRF) and Policy and Charging Enforcement Function (PCEF). The PCRF allows operators to dynamically control policies for bandwidth, charging and other functions for each subscriber and application. The PCEF implements these policies through the GGSN and PDN Gateway. Similar functions are used for policy enforcement in fixed line systems.


DPI Processing Platforms

DPI processing platforms have changed significantly over the last few years. Dedicated hardware has been replaced by multicore processors running on industry standard COTS platforms. This in turn has opened up the opportunity to use virtualized platforms for DPI and policy enforcement with applications running on virtual machines that are connected through virtual switches. This approach is being standardized by ETSI as Network Functions Virtualisation (NFV). There are many companies working on solutions for NFV including several that, like 6WIND and Qosmos, are supporting the CloudNFV initiative.


The development of the Intel® DPDK and the introduction of the Intel® Xeon® processor E5-2600 v2 have significantly increased the performance of virtual platforms. The 6WINDGate packet processing software and Qosmos ixEngine DPI software take full advantage of the Intel DPDK and Intel Xeon processors.


6WINDGate Packet Processing Solution

6WINDGate is a data plane processing software solution for multicore and virtualized platforms. The solution includes a full suite of layer 2-4 protocols for networking applications, works with commercial and open-source Linux distributions, and has full support for standard Linux APIs. 6WINDGate can be used to deliver best-in-class performance for mobile infrastructure, network appliances and data center networks.


6WINDGate in NFV.jpg

Figure 2. 6WINDGate in NFV (Source: 6WIND).


6WINDGate has full support for industry-standard hypervisors with no impact on system performance. Figure 2 shows 6WINDGate used in an NFV solution. Each virtual machine running within the hypervisor supports a virtual network function (VNF) with a Linux operating system and the 6WINDGate networking stack. The virtual machines are connected through an open virtual switch, which is also accelerated by the 6WINDGate networking stack.


The virtual machines and virtual switch run on standard COTS or server platforms using x86 processors. 6WINDGate uses the Intel DPDK and is optimized for Intel processors such as the Intel Xeon processor E5-2600 v2. When used to accelerate the standard Open vSwitch (OVS), 6WINDGate typically delivers a 10x improvement in switching performance and a 3x improvement in virtual machine density over a purely Linux-based implementation.


Qosmos ixEngine DPI Solution

The Qosmos ixEngine is a DPI software development kit that supports real-time layer 7 IP flow analysis and application identification. The solution uses stateful inspection and heuristic analysis to extract metadata and content information. The ixEngine will extract over 6000 types of information for more than 1000 protocols. The Qosmos ixEngine includes software libraries and tools that are easily integrated into existing solutions, and is designed to work on a range of hardware platforms and virtual environments. The solution works with the 6WINDGate multicore network stack and takes advantage of the Intel DPDK.


DPI-in-Traffic-Shaping_no caption.jpg

Figure 3. Qosmos ixEngine in Policy Enforcement (Source: Qosmos).


Figure 3 shows the Qosmos DPI technology being used for policy enforcement on a virtual switch. The solution runs within a hypervisor on an Intel Xeon processor-based platform, bypassing the Linux kernel via the Intel DPDK to accelerate connections for the virtual applications and virtual switch communication. An external PCRF controls the policy enforcement application that is pre-integrated into the hypervisor. The traffic shaping function uses information in the PCRF-configured flow table to transmit, queue, drop, or tag packets as required to enforce policy.
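To make the enforcement step concrete, here is a minimal Python sketch (purely illustrative, not Qosmos or PCRF code; the application names and actions are made up) of a PCRF-style table mapping DPI-classified applications to shaping actions:

```python
# Hypothetical sketch of PCRF-driven policy enforcement (not Qosmos code).
# The PCRF configures a flow table mapping DPI-identified applications
# to enforcement actions; the data plane applies one action per packet.

# Actions the traffic shaper can take, per the description above
ACTIONS = {"transmit", "queue", "drop", "tag"}

# Example policy table: application (from DPI classification) -> action
policy_table = {
    "voip":  "transmit",   # latency-sensitive, forward immediately
    "video": "queue",      # shape through a queue
    "p2p":   "drop",       # blocked by operator policy
}

def enforce(app: str) -> str:
    """Return the enforcement action for a DPI-classified application."""
    action = policy_table.get(app, "tag")  # unknown traffic is tagged for analysis
    assert action in ACTIONS
    return action
```

In a real deployment the table keys would be far richer (subscriber, bearer, flow 5-tuple) and the PCRF would update entries dynamically.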


Virtualized Solutions

The combination of the 6WINDGate packet processing and Qosmos ixEngine DPI solutions running on virtual platforms is enabling high performance DPI and policy enforcement for NFV environments. These solutions are just part of the developments being made to ease the shift from dedicated systems to virtual platforms in the telecom infrastructure.


Learn More

Contact Featured Alliance Members:

Solutions in this blog:

Related topics:


6WIND and Qosmos are Associate members of the Intel® Intelligent Systems Alliance.


Simon Stanley

Roving Reporter (Intel® Contractor), Intel® Intelligent Systems Alliance

Principal Consultant, Earlswood Marketing

Follow me on Twitter: @simon_stanley

Software Defined Networking (SDN) is changing the way companies develop solutions for data center and network infrastructure. The separation of control plane and data plane in SDN forces developers to use a more modular approach to system and software development. Open standards defined for SDN such as OpenFlow and OpenStack allow service providers and equipment manufacturers to use systems and building blocks from different suppliers. The challenge is finding the right building blocks to minimize time to market, ensure interoperability and maximize system performance.


In this blog I am going to explore the benefits of using the Wind River Open Network Software (ONS). ONS is a comprehensive switch software environment for developing data center and network infrastructure solutions that support SDN. Wind River Systems is an Associate member of the Intel® Intelligent Systems Alliance. The 250-plus members of the Alliance collaborate closely with Intel® to create hardware, software, tools, and services to help speed intelligent systems  to market.


The Importance of SDN


SDN makes the network simpler and easier to manage by separating the data plane and control plane functions. This separation together with the use of open interfaces allows carriers and service providers to source network elements from a variety of vendors or a single vendor as they prefer. This gives carriers and service providers opportunities to manage costs and deliver new services.


SDN switches and other data plane systems route packets using a flow table. The flow tables are managed by controllers that communicate with the switches through a secure channel defined by the OpenFlow protocol. The controllers can be managed by a centralized orchestrator through a cloud operating system such as OpenStack that controls large pools of networking, storage, and compute resources.
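Conceptually, each switch applies the highest-priority matching entry from its flow table. The sketch below (illustrative Python only, not an OpenFlow implementation; the field names and actions are invented) shows the idea, including the table-miss entry that punts unknown traffic to the controller:

```python
# Illustrative sketch of OpenFlow-style flow matching (not a real implementation).
# Each flow entry matches on header fields (absent field = wildcard) and carries
# an action; entries are checked in priority order, as an OpenFlow switch would.

flow_table = [
    # (priority, match fields, action)
    (200, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "output:2"),
    (100, {"ip_dst": "10.0.0.5"},                "output:1"),
    (0,   {},                                    "controller"),  # table-miss entry
]

def lookup(packet: dict) -> str:
    """Return the action of the highest-priority matching flow entry."""
    for _prio, match, action in sorted(flow_table, key=lambda e: -e[0]):
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

# Web traffic to 10.0.0.5 hits the specific entry; other traffic to the same
# host falls through to the lower-priority entry; everything else goes to the
# controller, which may then install a new flow entry.
```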


Wind River Open Network Software (ONS)


The Wind River Open Network Software (ONS) is a complete network switch software environment to deliver highly optimized control plane software for SDN solutions including top of rack switches, micro server platforms and network infrastructure elements. Wind River ONS is available with extensive support including professional services to develop customer specific hardware and run-time solutions.


Wind River ONS New.png

Figure 1. Wind River Open Network Software (ONS).


The Wind River ONS is a modular design, as shown in Figure 1. The four main blocks cover L1, L2, L3, and open switch functionality. Below are the ONS core services and switch adapter API; above are the database services, object model, and management API. The Wind River ONS supports a range of network protocols including OSPF, BGP, Spanning Tree, and Link Aggregation (LAG), as well as access lists and advanced tunneling. The solution supports OpenFlow and OpenStack for SDN implementations.


The ONS object model is defined in an XML descriptive language, and the database covers every element that the management API needs to access. The management API supports XML-RPC, SNMP, and other management applications. The Wind River application development kit (ADK) allows engineers to develop additional applications such as MPLS. The ADK includes the development environment and tools, libraries, sample code, and documentation.


The switch adapter API connects the ONS core services through a switch adapter to either the switch silicon or the SimSwitch hardware simulator. The switch adapter block is specific to the physical switch silicon being used. Wind River will supply source code or run-time code for a number of different hardware platforms including the Intel Seacliff Trail reference platform. Support for additional platforms is available through Wind River professional services. The SimSwitch hardware simulator allows developers to build the system software before the switch hardware is available.


Intel Seacliff Trail Reference Platform


Wind River ONS has been ported to 10 different hardware platforms including the Intel Seacliff Trail Reference Platform shown in Figure 2. The reference platform includes an Intel® Ethernet Switch FM6700 device, dual power supplies, and an AMC module. The Intel Ethernet Switch FM6700 device integrates a FlexPipe frame processor that is used to support SDN applications. The device will support up to 4000 OpenFlow 12-tuple table entries. The FlexPipe frame processor is programmed through the ONS Switch Adapter.


SCT2 - Smaller.jpg

Figure 2. Intel Seacliff Trail Reference Platform


The Seacliff Trail Reference Platform AMC slot has a single AMC module with Intel® Xeon® processor E3-1105C and Intel® Communications Chipset 89xx Series. The Intel Xeon processor E3-1105C has four 1GHz hyperthreaded 64-bit cores that run the Wind River ONS software.


Building Blocks for SDN

The Wind River ONS is one of a number of building blocks already available for SDN. By using off-the-shelf solutions designed to support open interfaces and protocols, and taking advantage of hardware reference platforms such as Seacliff Trail, developers can quickly bring new SDN-capable systems to market. The shift towards SDN is accelerating, driven by the use of open interfaces and the widespread support from both service providers and equipment manufacturers.


Learn More

Solutions in this blog:


Related topics:


Wind River Systems is an Associate member of the Intel® Intelligent Systems Alliance.


Simon Stanley

Roving Reporter (Intel® Contractor), Intel® Intelligent Systems Alliance

Principal Consultant, Earlswood Marketing

Follow me on Twitter: @simon_stanley

The introduction of Intel® Advanced Vector Extensions (Intel® AVX) in 2011 brought significantly improved vector processing to each generation of the company's processors, which made these units even more popular platforms for signal and image processing. Recently, Intel AVX 2.0 was introduced in the Haswell microarchitecture for further performance boosts, including:


  • Fused multiply-add (FMA) instructions that double peak floating-point throughput to 307 GFLOPS (billion floating-point operations per second) at 2.4 GHz in a quad-core 4th generation Intel® Core™ processor
  • Extension of most integer instructions to 256 bits for two-times higher peak integer throughput
  • Doubling fixed point arithmetic throughput
  • New vector gather, shift, and cross-lane permute functions enable more vectorization and more efficient loads and stores, resulting in fixed- and floating-point algorithm improvements
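To see why FMA doubles peak throughput, note that a single 256-bit FMA instruction performs a multiply and an add on each of eight single-precision lanes. A scalar Python illustration of what one such instruction computes (the operand values are arbitrary):

```python
# What one 8-wide single-precision FMA instruction computes, unrolled in scalar
# Python for illustration: d[i] = a[i] * b[i] + c[i] across all eight lanes.
a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
b = [0.5] * 8
c = [1.0] * 8
d = [ai * bi + ci for ai, bi, ci in zip(a, b, c)]
# 16 flops (8 multiplies + 8 adds) from a single instruction, versus 8 flops
# for a plain 8-wide multiply or add: hence the doubled peak throughput.
```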




Haswell microarchitecture improvements also contribute to greater performance in signal and image processing applications. These changes include:


  • Having the memory pipeline perform two loads and a store operation on each cycle
  • Doubling L1 cache bandwidth to 96 bytes per cycle (64 byte read plus 32 byte write)
  • Doubling L2 cache bandwidth to 64 bytes per cycle

These upgrades, plus the internal Last Level Cache, 320 GB per second Ring Bus, and DDR3 dual-channel memory (peak memory bandwidth of 25.6 GB/sec at 1,600 MHz), ensure the processor is constantly "fed" to maximize performance.
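The headline figures above can be sanity-checked with back-of-the-envelope arithmetic (the core count, FMA port count, and lane widths below are assumptions based on the quad-core Haswell configuration described in the text):

```python
# Sanity-check the quoted peak numbers for a quad-core 2.4 GHz Haswell part.

# Peak single-precision FLOPS: 4 cores x 2.4 GHz x 2 FMA ports per core
# x 8 float lanes (256-bit AVX) x 2 flops per FMA (multiply + add)
gflops = 4 * 2.4 * 2 * 8 * 2        # = 307.2 GFLOPS, matching the 307 figure

# Peak DDR3 bandwidth: 1600 MT/s x 8 bytes per transfer x 2 channels
gb_per_s = 1600e6 * 8 * 2 / 1e9     # = 25.6 GB/s peak for dual-channel DDR3-1600

print(gflops, gb_per_s)
```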



Intel AVX 2.0 is most beneficial for applications that are CPU-bound, as well as those that spend significant time in vectorizable loops with:


  • Iteration count ≥ vector width (i.e. ≥ 8 integers, 8 floats, or 4 doubles)
  • Integer arithmetic and bit manipulation (i.e. video and image processing)
  • Floating point operations that make use of FMAs (i.e. linear algebra)
  • Non-contiguous memory access (i.e. those that can use the new gather and permute instructions)

Benchmarking Intel AVX 2.0 processing performance

N. A. Software (NAS) develops and licenses radar algorithms and low-level DSP libraries, including VSIPL (Vector, Signal, and Image Processing Library), which supports multithreading and is typically used on large multicore and shared-memory systems, allowing scalable performance for large problems.


The company has introduced an optimized Intel AVX 2.0 VSIPL designed for complex vector multiply operations, sine/cosine, and split complex FFTs. This library is standalone code with no third-party dependencies, which allows it to be readily recompiled for any operating system while leveraging the benefits of the Intel AVX 2.0 instruction set.


NAS recently used VSIPL to benchmark Intel AVX 2.0. The results showed Intel AVX 2.0 can run functions at upwards of twice the speed of the initial version of Intel AVX (see below).


Intel AVX Image.png


The following figure provides greater detail of the first item in the previous table, the 1D FFT using split complex data:


GFLOPS Rate.png


David Murray, NAS technical director, notes "the significant speedups when using the new Intel AVX 2.0 instruction set. These speedups come from the FMA instructions." In a separate, wider-ranging DSP study, he notes an average speedup across 774 DSP operations for Intel AVX 2.0 compared to Intel AVX. "You see a large increase in performance for operations using integer data or short integer data because the Intel AVX 2.0 instruction set contains a wider range of eight-way SIMD [Single Instruction Multiple Data, i.e. the same operation on multiple data sets] vector operations. There is also a large increase in performance with float operations because the Intel AVX 2.0 instruction set contains eight-way SIMD fused multiply-add instructions. While some of the performance speedups with double precision data are due to our algorithm improvements, the integer and float speedups are down to the Intel AVX 2.0 instruction set," Murray says.


NAS also did an Intel AVX 2.0 benchmark with the company's SARMTI (Synthetic Aperture Radar and Moving Target Indication) advanced radar processing algorithm. SARMTI extracts high-resolution data on slow and fast moving objects directly from a synthetic aperture radar image, eliminating the need for a separate moving target (Doppler) radar. Here again, AVX 2.0 showed notable speed improvements of 1.26 to 1.52 times that of Intel AVX. For a similar benchmarking study, Murray reports, "Our SARMTI application speeds up by between 1.33 and 1.42 on Haswell when compared to Ivy Bridge."


"We can supply Intel AVX 2.0 versions of all our products," Murray concludes. "We have made this investment because the solution, both hardware and software, is significantly faster. However, we still supply Intel AVX and other legacy technology where we have customer requirements. Customers can be slow in changing their hardware selection, and programs can be tied to a hardware decision for many years to come."


Learn More

Contact Featured Alliance Member:

Solutions in this blog:

Related topics:



NA Software is an Affiliate member of the Intel® Intelligent Systems Alliance.


Rick DeMeis

Roving Reporter (Intel Contractor), Intel Intelligent Systems Alliance

Follow me on Twitter: @rickdemeis

Big data, multimedia communications, and cloud-based services are forcing service providers to deploy intelligent network infrastructures. At the core of these infrastructures are intelligent network systems with the latest multicore processors. The software running on these multicore processors needs to support control plane processing and complex packet processing functions while delivering optimal performance and fast time to market.


In this blog I am going to explore the benefits of using the Wind River Intelligent Network Platform (INP) to develop high-performance networking applications on Intel® Xeon® processors. Wind River Systems is an Associate member of the Intel® Intelligent Systems Alliance. The 250-plus members of the Alliance collaborate closely with Intel to create hardware, software, tools, and services to help speed intelligent systems to market.


Developing Network Infrastructure

Network traffic is rapidly growing on both fixed line and wireless networks. Internet access, cloud services, video and TV on demand and voice over IP (VoIP) are all driving network bandwidth. LTE/4G wireless networks and the enhanced packet core (EPC) are providing dramatically higher capacity to mobile devices with at least 10Mbps already available in many areas and some operators planning support for 1Gbps using LTE-Advanced.


To profitably deliver these services carriers are investing in intelligent networks that can effectively control network traffic based on application, available bandwidth and subscriber status. These new networks are entirely packet-based and handle a mix of data, video and voice. The intelligent network systems at the core of these networks must efficiently forward packets to the destination, analyze flows to identify applications and subscribers, and inspect packet content to filter out malware and other unsuitable content.


Wind River Intelligent Network Platform (INP)

The Wind River Intelligent Network Platform (INP) is an integrated and optimized software system designed to enable equipment providers to deliver high-performance network systems that accelerate, analyze and secure network traffic and applications. The INP includes both critical run-time components and tools for developing network systems using the latest processors including Intel Xeon processors based on the Sandy Bridge microarchitecture.


As shown in Figure 1, the INP integrates runtime solutions for both the Linux operating system and high-performance data plane engines running on multicore processors. The platform can be used with native Linux applications and a growing range of data plane applications. The INP has been optimized for Wind River Carrier Grade Linux, a commercial-grade open source runtime solution that complies with the Carrier Grade Linux 5 specification. Developers can also use their own Linux distribution.


Wind River INP.png

Figure 1. Wind River Intelligent Network Platform (INP)


The Wind River Application Acceleration Engine (1) is an optimized network stack supporting layer 3 and layer 4 network protocols. The Content Inspection Engine (2) is a high-speed pattern matching solution for Deep Packet Inspection (DPI). The Flow Analysis Engine (3) enables protocol and application identification, a critical component of intelligent network systems.

DPDK + INP.jpg

Figure 2. Intel® Data Plane Development Kit and INP


The Application Acceleration Engine works in conjunction with Linux in the data plane and leverages the Intel® Data Plane Development Kit (Intel® DPDK). As shown in Figure 2, the INP is closely integrated with the Intel DPDK, leveraging the performance of its libraries to accelerate networking applications and protocols running on Intel processors. The Wind River Application Acceleration Engine supports 1, 10, and 40 Gbps interfaces, and Wind River claims performance gains of up to 1,100% for IP forwarding and up to 650% for TCP.


Flow Analysis and Content Inspection

The Wind River Flow Analysis Engine is shown in Figure 3. Packets are extracted from the incoming packet stream and categorized into different traffic flows using data from the flow class libraries. The engine can also identify the communications protocols and applications associated with individual packets and flows. This flow information is forwarded to other network elements, either within the INP or external to it, and can be used to prioritize traffic associated with high-value applications or subscribers.

WR Flow Analysis.jpg

Figure 3. Wind River Flow Analysis Engine


The Wind River Content Inspection Engine is a high-speed pattern matching solution. It matches groups of regular expressions against blocks or streams of data extracted from the incoming packet stream, checking the data against a compiled pattern database at run time. Matching is supported across an arbitrary number of block writes and conforms to a large subset of the industry-standard Perl Compatible Regular Expressions (PCRE) syntax.


Media Gateway Example

The Wind River Intelligent Network Platform can be used in many different network systems. A simple example is a media gateway delivering video streams to subscribers. Each video stream is split into packets that are forwarded by the INP Application Acceleration Engine. Individual video streams are identified by the INP Flow Analysis Engine, and the content of each stream is checked for malware by the INP Content Inspection Engine. The individual elements of the INP work together to ensure the media gateway delivers a high-quality video stream.


Putting It All Together

The Wind River Intelligent Network Platform, with the Intel DPDK and Intel Xeon processors, provides a high-performance solution that accelerates packet forwarding, flow analysis, and content inspection. By using standard processors and off-the-shelf software, system providers can efficiently develop intelligent network systems that maximize network throughput and service provider return on investment.


Learn More




Wind River Systems is an Associate member of the Intel® Intelligent Systems Alliance.

     Contact Wind River>>


Simon Stanley

Roving Reporter (Intel® Contractor), Intel® Intelligent Systems Alliance

Principal Consultant, Earlswood Marketing

Follow me on Twitter: @simon_stanley