
The elevator pitch for Network Functions Virtualization (NFV) is simple: why make massive investments in custom equipment when standard servers can do the job – and with far better flexibility? The reality, of course, is much more complicated. To deploy an NFV solution, developers must answer questions including:


  • Which servers should I use?
  • What is the right operating system (OS) and virtualization solution?
  • How do I virtualize switching and packet processing?
  • How can I ensure sufficient performance, availability, reliability, and cost-efficiency?


Two new NFV platforms from Red Hat* and Wind River* address these questions with highly integrated OpenStack* packages that are optimized for high performance and rapid deployment. Each platform combines a variety of software and hardware from the Intel® Internet of Things Solutions Alliance. Let's start by taking a look at the NFV platform from Red Hat, a General member of the Alliance.


Red Hat*/6WIND* NFV Foundation

As shown in Figure 1, Red Hat’s offering is built on Red Hat Enterprise Linux* and Red Hat Enterprise Virtualization. These components provide the fundamental reliability and efficiency needed to decouple network functions from the underlying hardware. Red Hat Enterprise Linux OpenStack* Platform controls the overall solution and provides an interface for a wide variety of network orchestration software tools.



Figure 1. The Red Hat*/6WIND* NFV platform is optimized for reliability and performance.


To ensure optimal performance, the solution incorporates the 6WINDGate* networking stack from Affiliate member 6WIND. Compared to standard Linux* distributions, this software delivers a 10x performance boost to both data plane applications and the virtual switch. For the data plane, 6WINDGate uses the Data Plane Development Kit (DPDK) library to process network packets outside the Linux kernel. This fast-path processing can deliver over 10 million packets per second per core on the latest Intel® Xeon® processors. (See the Red Hat/6WIND brief for more details.)
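To put these packet-rate numbers in perspective, here is a quick back-of-envelope calculation (my own arithmetic, not from the Red Hat/6WIND materials). On the wire, every Ethernet frame also costs 20 bytes of preamble and inter-frame gap, which caps the packet rate for a given link speed:

```python
# Back-of-envelope packet-rate math (illustrative only).
# On the wire, each frame costs its own length plus 20 bytes of
# preamble + inter-frame gap; the minimum Ethernet frame is 64 bytes.
WIRE_OVERHEAD = 20  # bytes: 8 preamble + 12 inter-frame gap

def packets_per_second(link_gbps, frame_bytes=64):
    """Maximum packet rate for a link, in packets per second."""
    bits_per_packet = (frame_bytes + WIRE_OVERHEAD) * 8
    return link_gbps * 1e9 / bits_per_packet

print(round(packets_per_second(10) / 1e6, 2))  # → 14.88
```

At the 64-byte minimum frame size, a 10 Gbps port tops out near 14.88 million packets per second, so a single core sustaining over 10 Mpps is handling most of a full 10 GbE port of small packets.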


For switching, the 6WINDGate Open vSwitch Acceleration module runs through 6WIND's fast path architecture. It delivers 200 Gbps of throughput transparently, with no need to recompile the standard Open vSwitch or change anything in the OpenStack platform, the Kernel-based Virtual Machine (KVM)/QEMU hypervisor, or the OpenFlow controller. (The 200 Gbps performance was recently verified on an HP ProLiant server – see 6WIND's product brief for details. HP is an Associate member of the Alliance.)


To demonstrate their platform's value, Red Hat and 6WIND recently created a vEPC orchestration proof-of-concept (PoC), illustrated in Figure 2. This PoC runs on PowerEdge* C6220 servers from Premier member Dell. Other collaborators on the PoC include Intel, Cyan, and Connectem. It is worth noting that the Dell PowerEdge C6220 offers a number of carrier-grade features, such as certification to NEBS Level 3 and ETSI standards (including extended operating temperature range and shock resistance) and DC power supplies. Thus, the PoC comprises deployment-ready components from the hardware all the way up to the service orchestration layer.



Figure 2. Key elements of the vEPC PoC Stack.


This PoC nicely demonstrates how developers can deal with the complexities of NFV. I strongly recommend reading the related white paper on the Intel® Network Builders site – it describes the challenges and solutions of vEPC in considerable detail and is well worth your time.


Wind River* Carrier Grade Communications Server

Wind River has also developed its own fully integrated NFV solution, the Wind River* Carrier Grade Communications Server. As illustrated in Figure 3, Wind River has built this platform up over time, offering progressively higher levels of functionality over the last couple of years. The company's latest offering integrates Wind River Open Virtualization with Wind River Linux, an optimized real-time Kernel-based Virtual Machine (KVM), carrier grade enhancements to OpenStack, and DPDK-accelerated vSwitch technologies. The solution is designed specifically for NFV deployment and to integrate seamlessly with existing networks.



Figure 3. Wind River’s NFV platform builds on previous offerings.


The most obvious benefit of the platform is its high level of integration. But what really makes the platform noteworthy is its emphasis on carrier grade reliability. According to Wind River, the software supports “six 9s” reliability, translating to downtime as low as 30 seconds a year. It is hard to overstate the importance of this claim – an NFV design is only useful if it delivers carrier-grade reliability, and working out reliability issues can be a huge development hurdle. Thus, the availability of six 9s reliability in a pre-integrated NFV software platform is a major asset.
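The "six 9s" figure is easy to sanity-check with a little arithmetic (mine, not Wind River's): 99.9999% availability leaves a downtime budget of roughly half a minute per year.

```python
# Downtime budget implied by an availability level of N nines.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def annual_downtime_seconds(nines):
    """Allowed downtime per year for 'nines' nines of availability."""
    unavailability = 10 ** -nines
    return SECONDS_PER_YEAR * unavailability

# Five 9s allows about five minutes a year; six 9s, about half a minute.
print(round(annual_downtime_seconds(5)))  # → 316
print(round(annual_downtime_seconds(6)))  # → 32
```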


The Wind River Carrier Grade Communications Server is optimized for and runs on HP ProLiant servers. Given the platform's emphasis on carrier-grade reliability, it is worth noting that the HP ProLiant line itself offers carrier grade features, including NEBS and ETSI certifications and extended lifecycles. It is also worth noting that 6WIND used this same HP server line to demonstrate its Open vSwitch performance. This illustrates the fundamental value of NFV, namely the ability to use standard servers and software with a high level of interoperability.


Returning to Wind River’s NFV platform, the reliability of the software is complemented by its high performance. Particularly notable is the accelerated virtual switch software, which delivers 20x the performance of the standard Open vSwitch with no hardware acceleration required. This switch can deliver 12 million packets per second to guest virtual machines (VMs) using only two processor cores on an industry-standard server platform, in a real-world use case involving bidirectional traffic. What’s more, this cutting-edge performance is achieved using up to 33% fewer CPU resources than alternative solutions – see Wind River’s press release for details.


A Foundation for the Future

The Red Hat/6WIND and Wind River NFV platforms address some of the most pressing issues facing telecom developers today. By providing highly integrated foundations for virtualized equipment, these platforms promise to dramatically lower time-to-market while delivering industry-leading performance, efficiency and reliability. And it’s clear that these companies are fully committed to NFV, meaning that we can expect even greater integration and higher levels of performance in the future. I can’t wait to see what’s next!




Dell OEM is a Premier member of the Intel® Internet of Things Solutions Alliance. Hewlett-Packard and Wind River Systems are Associate members of the Alliance. 6WIND SA is an Affiliate member and Red Hat is a General member.


Kenton Williston

Roving Reporter (Intel Contractor), Intel® Internet of Things Solutions Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

As the march towards Network Functions Virtualization (NFV) continues, many telecom equipment manufacturers (TEMs) are revisiting the role of Deep Packet Inspection (DPI). On one hand, NFV presents new opportunities to optimize performance and revenue with DPI. On the other hand, the complexity of NFV can make these benefits harder to achieve.


In this blog, I will highlight some of the challenges of DPI in an NFV environment. I will also examine a reference architecture that uses NFV solutions from the Intel® Internet of Things Solutions Alliance. I will show how this architecture overcomes key design challenges and realizes the benefits of NFV.


What Makes Virtualized DPI Different?

As I discussed in my last blog, NFV has the potential to improve network flexibility and cost structures, allowing operators to innovate more quickly and efficiently. However, the transition from fixed-purpose hardware to virtualized applications requires careful attention to availability, performance, and scalability.


The scope and magnitude of these challenges are motivating TEMs to seek third-party software for their virtualized designs. Heavy Reading recently issued a detailed study on virtualized DPI which found that:


  • More than 90 percent of respondents said ETSI NFV would affect next-generation product design, and more than half said the availability of standardized virtualized network function components (VNFCs) would likely lead them to source more third-party components, including the proposed ETSI DPI VNFC.
  • Two thirds of vendors now believe DPI is a must-have technology.
  • The proportion of vendors choosing to source DPI from a third party is gradually rising, and a majority of those doing so prefer to use a pure-play supplier of DPI components.


In response to these trends, Associate Alliance member Wind River and Affiliate members Tieto and Qosmos recently teamed up with Intel to build an NFV DPI reference architecture. This design demonstrates how to overcome the common challenges associated with virtualized DPI (vDPI). The team has written up a detailed white paper explaining their work. I will touch on just a few points here; I recommend reading the paper for more details. And if you are new to the topic, check out the Qosmos/Intel NFV backgrounder for a good intro.



Figure 1. The reference architecture shows how to implement virtualized DPI.


Carrier-Grade Availability

Availability is the cornerstone of any telecom solution. Thus, the reference architecture starts with high-reliability components like carrier-grade Wind River* Linux*. These carrier-grade components are coupled with high-availability design techniques such as redundant virtual machines (VMs) with failover capabilities.
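To see why redundant VMs help, consider a simplified model (my own illustration, not the reference architecture's actual figures) in which independent instances can each serve traffic and failover is instant; the system is unavailable only when every instance fails at once:

```python
# Availability of n redundant instances versus a single instance,
# assuming independent failures and instantaneous failover.
def redundant_availability(single, n):
    """System availability when any one of n instances suffices."""
    return 1 - (1 - single) ** n

single = 0.999  # one "three 9s" VM
pair = redundant_availability(single, 2)
print(round(pair, 6))  # → 0.999999
```

Two "three 9s" instances yield six 9s in this idealized model; in practice, failover time and correlated failures reduce the gain, which is why pre-integrated carrier-grade failover machinery matters so much.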


Of course, reliability and redundancy are hardly new concepts for telecom design. What may be less obvious is the way performance can impact availability. Virtualized designs must contend with overhead, such as context switching, that fixed-function hardware avoids, and a design that falls behind traffic will drop packets and slow recovery. Thus, NFV designs must maximize performance not only to achieve sufficient throughput but also to ensure availability.


Optimized Performance

The reference design tackles performance issues with a variety of design techniques. These start at the hardware level, where the Intel® Xeon® processors, Ethernet controllers, and Ethernet switch are all optimized for virtualization – and in the case of the switch, specifically optimized for virtualized networking. The key to this optimization is minimizing hypervisor overhead. Context switches are computationally expensive and introduce unwelcome timing variability. Thus, the Intel hardware is equipped with a variety of virtualization accelerators that offload hypervisor tasks, leading to considerably improved performance and determinism.


Building on this hardware foundation, the reference architecture uses the Data Plane Development Kit (DPDK) to accelerate packet processing across the virtual switch and within various applications. DPDK uses the Data Direct I/O (DDIO) capability of Intel Xeon processors to place data packets directly in the CPU cache for processing, avoiding time-consuming memory read operations. I haven't seen performance specs for this reference architecture, but similar designs can achieve 200 Gbps through the Open vSwitch – an impressive number, to say the least.


Moving up the software stack, the Tieto IP (TIP) Stack is also optimized for virtualized environments. For example, the TIP Stack implements Single Root I/O Virtualization (SR-IOV), which eliminates hypervisor involvement in data movement by giving each VM independent memory, interrupts, and DMA streams – all of which can be directly accessed by the Intel® Ethernet controllers.


Zooming a bit further out, there is the larger question of how a system can minimize VM-to-VM communications through the hypervisor. And here at last we hit upon the importance of DPI in NFV. By deploying DPI in front of incoming traffic, flows can be tagged to convey the protocol and application ID to other nodes (Figure 2). By identifying the needed processing up front, this approach eliminates the need to pass flows between VMs. In addition, it allows nodes like the Traffic Detection Function (TDF) to be stateless, simplifying their design.
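The classify-once-at-ingress idea can be sketched in a few lines of Python. Everything here is invented for illustration – the signature markers, tag format, and function names bear no relation to the actual Qosmos API:

```python
# Illustrative sketch of DPI-based flow tagging (all names invented).
# A DPI function classifies each flow once at ingress; downstream
# nodes read the tag instead of re-inspecting packets, so flows need
# not bounce between VMs and nodes like the TDF can stay stateless.

# Hypothetical signature database: payload marker -> (protocol, app id)
SIGNATURES = {
    b"GET ": ("HTTP", "web-browsing"),
    b"\x16\x03": ("TLS", "encrypted"),
}

def classify(flow_id, first_payload, flow_table):
    """Tag a flow on its first packet; later packets reuse the tag."""
    if flow_id in flow_table:
        return flow_table[flow_id]          # already classified
    for marker, tag in SIGNATURES.items():
        if first_payload.startswith(marker):
            flow_table[flow_id] = tag
            return tag
    tag = ("UNKNOWN", "unclassified")
    flow_table[flow_id] = tag
    return tag

flows = {}
print(classify(("10.0.0.1", 80), b"GET /index.html", flows))
# → ('HTTP', 'web-browsing')
```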



Figure 2. DPI can be used to tag incoming flows.


The reference architecture implements this flow classification with the Qosmos DPI VNFC. Based on the popular Qosmos ixEngine*, this VNFC enables many functions, including quality of service (QoS) management, traffic shaping, throttling, and gating. In addition to supporting high availability and performance, the DPI VNFC is critical to enabling scalability, as we will see in our final section.


Flexibility and Scalability

One of the main selling points for NFV is the ability to rapidly scale and adapt to new needs. DPI is key here. Information extracted by a DPI VNFC can be passed to the orchestrator/analytics system, which in turn can trigger VMs to scale capabilities up or down as needed. Of course, this requires that the DPI engine itself be scalable. In particular, it must be possible to add new instances of the VNFC without dropping packets or disrupting existing DPI instances.


One way the reference architecture addresses these issues is by separating DPI from the Packet Data Network Gateway (PDN-GW). This simplifies DPI scaling and enables increases in capacity without impacting the PDN-GW.


It is also worth noting that each DPI VM can be independently configured to meet specific needs. For example, a particular DPI instance could be configured to classify 100 selected apps at 40 Gbps with a particular latency window. To support such a configuration, the orchestrator can provide an appropriate number of CPU cores and memory. A different VM with different needs might receive greater or lesser resources.
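A toy model of that per-instance sizing decision might look like the following. All of the constants are invented for illustration; a real orchestrator would size instances from templates and live telemetry:

```python
# Toy sizing model for a DPI VM (all figures invented for illustration).
# Assume one core classifies ~10 Gbps, tracked applications cost
# ~0.05 GB of signature/state memory each, plus a fixed base footprint.
GBPS_PER_CORE = 10
GB_PER_APP = 0.05
BASE_MEM_GB = 2

def size_dpi_vm(target_gbps, num_apps):
    """Return (cores, memory_gb) for a DPI instance."""
    cores = max(1, -(-target_gbps // GBPS_PER_CORE))  # ceiling division
    memory = BASE_MEM_GB + num_apps * GB_PER_APP
    return cores, memory

# The 40 Gbps / 100-app instance from the text would get 4 cores
# and a few GB of memory under this model.
print(size_dpi_vm(40, 100))  # → (4, 7.0)
```

A different instance with a lighter load would receive correspondingly fewer resources, which is exactly the per-VM flexibility the text describes.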


Powering NFV with DPI

Overall, the reference architecture demonstrates three key points:


  • DPI has the potential to significantly extend the benefits of NFV.
  • The challenges of implementing DPI in an NFV context can be mitigated with third-party software.
  • Collaboration between third-party NFV software suppliers is essential for addressing the many complexities of NFV.


I expect to see much more progress on NFV software in the coming months. I look forward to bringing you the latest coverage as events unfold!





Wind River Systems is an Associate member of the Intel® Internet of Things Solutions Alliance. Tieto and Qosmos are Affiliate members of the Alliance.

