As the march towards Network Functions Virtualization (NFV) continues, many telecom equipment manufacturers (TEMs) are revisiting the role of Deep Packet Inspection (DPI). On one hand, NFV presents new opportunities to optimize performance and revenue with DPI. On the other hand, the complexity of NFV can make these benefits harder to achieve.
In this blog, I will highlight some of the challenges of DPI in an NFV environment. I will also examine a reference architecture that uses NFV solutions from the Intel® Internet of Things Solutions Alliance, and show how this architecture overcomes key design challenges and realizes the benefits of NFV.
What Makes Virtualized DPI Different?
As I discussed in my last blog, NFV has the potential to improve network flexibility and cost structures, allowing operators to innovate more quickly and efficiently. However, the transition from fixed-function hardware to virtualized applications requires careful attention to availability, performance, and scalability.
The scope and magnitude of these challenges is motivating TEMs to seek third-party software for their virtualized designs. Heavy Reading recently issued a detailed study on virtualized DPI which found that:
- More than 90 percent of respondents said ETSI NFV would affect next-generation product design, and more than half said the availability of standardized virtualized network function components (VNFCs) would likely lead them to source more third-party components, including the proposed ETSI DPI VNFC.
- Two thirds of vendors now believe DPI is a must-have technology.
- The proportion of vendors choosing to source DPI from a third party is gradually rising, and a majority of those doing so prefer to use a pure-play supplier of DPI components.
In response to these trends, Associate Alliance member Wind River and Affiliate members Tieto and Qosmos recently teamed up with Intel to build an NFV DPI reference architecture. This design demonstrates how to overcome the common challenges associated with virtualized DPI (vDPI). The team has written up a detailed white paper explaining their work. I will touch on just a few points here; I recommend reading the paper for more details. And if you are new to the topic, check out the Qosmos/Intel NFV backgrounder for a good intro.
Figure 1. The reference architecture shows how to implement virtualized DPI.
Availability is the cornerstone of any telecom solution. Thus, the reference architecture starts with high-reliability components like carrier-grade Wind River* Linux*. These carrier-grade components are coupled with high-availability design techniques such as redundant virtual machines (VMs) with failover capabilities.
Of course, reliability and redundancy are hardly new concepts for telecom design. What may be less obvious is the way performance can impact availability. Virtualized designs must contend with issues like context switching that can degrade throughput and timing predictability. Thus, NFV designs must maximize performance not only to achieve sufficient throughput but also to ensure availability.
The reference design tackles performance issues with a variety of design techniques. These start at the hardware level, where the Intel® Xeon® processors, Ethernet controllers, and Ethernet switch are all optimized for virtualization – and in the case of the switch, specifically optimized for virtualized networking. (More background on the switch here.) The key to this optimization is minimizing hypervisor overhead. Context switches are computationally expensive, and introduce unwelcome timing variability. Thus, the Intel hardware has been equipped with a variety of virtualization accelerators that offload hypervisor tasks, leading to considerably improved performance and determinism.
Building on this hardware foundation, the reference architecture uses the Data Plane Development Kit (DPDK) to accelerate packet processing across the virtual switch and within various applications. DPDK uses the Data Direct I/O (DDIO) capability of Intel Xeon processors to make data packets directly available in the CPU cache for processing, which avoids time-consuming memory reads. I haven't seen performance specs for this reference architecture, but similar designs can achieve 200 Gbps through the Open vSwitch – an impressive number, to say the least.
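To picture why DPDK is fast, it helps to see the poll-mode pattern it uses: the application pulls packets in fixed-size bursts rather than taking a per-packet interrupt. Here is a simplified toy model of that pattern in Python – the names `rx_burst` and `PollModePort` are illustrative stand-ins, not actual DPDK APIs:

```python
from collections import deque

BURST_SIZE = 32  # poll-mode drivers typically pull fixed-size bursts


class PollModePort:
    """Toy model of a poll-mode port: the application pulls packets
    in batches, with no per-packet interrupt or context switch."""

    def __init__(self, packets):
        self.ring = deque(packets)  # stand-in for a NIC RX ring

    def rx_burst(self, max_pkts=BURST_SIZE):
        burst = []
        while self.ring and len(burst) < max_pkts:
            burst.append(self.ring.popleft())
        return burst


def run_poll_loop(port, handler):
    """Drain the port in bursts, handing each packet to the handler."""
    processed = 0
    while True:
        burst = port.rx_burst()
        if not burst:
            break  # a real poll loop would keep spinning; we stop when drained
        for pkt in burst:
            handler(pkt)
        processed += len(burst)
    return processed
```

The batch structure is the point: amortizing per-call overhead across 32 packets at a time is a large part of what makes the DPDK data path fast.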
Moving up the software stack, the Tieto IP (TIP) Stack is also optimized for virtualized environments. For example, the TIP Stack implements Single Root I/O Virtualization (SR-IOV), which eliminates hypervisor involvement in data movement by giving each VM independent memory, interrupts, and DMA streams – all of which can be directly accessed by the Intel® Ethernet controllers.
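On Linux, SR-IOV virtual functions are typically created through the NIC's sysfs interface and then handed to VMs as passthrough devices. A minimal sketch follows – the interface name `ens1f0` is a placeholder for whatever name your Intel Ethernet port has on your system:

```shell
# Check how many virtual functions (VFs) the NIC supports
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Create 4 VFs; each appears as its own PCI device that a VM
# can be given directly, bypassing the hypervisor's soft switch
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs

# Confirm the new VFs exist
lspci | grep -i "virtual function"
```

Each VF gets its own queues, interrupts, and DMA context, which is what lets the TIP Stack move data without involving the hypervisor.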
Zooming a bit further out, there is the larger question of how a system can minimize VM-to-VM communications through the hypervisor. And here at last we hit upon the importance of DPI in NFV. By deploying DPI in front of incoming traffic, flows can be tagged to convey the protocol and application ID to other nodes (Figure 2). By identifying the needed processing up front, this approach eliminates the need to pass flows between VMs. In addition, it allows nodes like the Traffic Detection Function (TDF) to be stateless, simplifying their design.
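To make the tagging idea concrete, here is a minimal Python sketch of the pattern: a classifier inspects a flow once at the edge, attaches a protocol/application ID, and downstream nodes act on the tag alone – no re-inspection and no per-flow state. Every name and signature below is invented for illustration; the Qosmos ixEngine exposes its own API and a far richer signature set:

```python
# Hypothetical signature table standing in for a real DPI engine
SIGNATURES = {
    b"\x16\x03": ("TLS", "https"),
    b"GET ": ("HTTP", "web"),
    b"\x13BitTorrent": ("BitTorrent", "p2p"),
}


def classify(payload: bytes):
    """One-time inspection at the edge: return (protocol, app_id)."""
    for magic, tag in SIGNATURES.items():
        if payload.startswith(magic):
            return tag
    return ("unknown", "unknown")


def tag_flow(flow_tuple, payload):
    """Attach the DPI verdict to metadata that travels with the flow,
    so downstream VNFs never need to re-classify it."""
    proto, app = classify(payload)
    return {"flow": flow_tuple, "protocol": proto, "app_id": app}


def stateless_tdf(tagged_packet, policy):
    """A TDF node can stay stateless: its decision depends only on the
    tag carried with the packet, not on any stored flow state."""
    return policy.get(tagged_packet["app_id"], "forward")
```

Because `stateless_tdf` consults only the tag and a static policy table, instances of it can be added or removed freely – which is exactly the property the reference architecture exploits.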
Figure 2. DPI can be used to tag incoming flows.
The reference architecture implements this flow classification with Qosmos DPI VNFC. Based on the popular Qosmos ixEngine*, this VNFC enables many functions, including quality of service (QoS) management, traffic shaping, throttling, and gating. In addition to supporting high availability and performance, the DPI VNFC is critical to enabling scalability, as we will see in our final section.
Flexibility and Scalability
One of the main selling points for NFV is the ability to rapidly scale and adapt to new needs. DPI is key here. Information extracted by a DPI VNFC can be passed to the orchestrator/analytics system, which in turn can trigger VMs to scale capabilities up or down as needed. Of course, this requires that the DPI engine itself be scalable. In particular, it must be possible to add new instances of the VNFC without dropping packets or disrupting existing DPI instances.
One way the reference architecture addresses these issues is by separating DPI from the Packet Data Network Gateway (PDN-GW). This simplifies DPI scaling and enables increases in capacity without impacting the PDN-GW.
It is also worth noting that each DPI VM can be independently configured to meet specific needs. For example, a particular DPI instance could be configured to classify 100 selected apps at 40 Gbps with a particular latency window. To support such a configuration, the orchestrator can provide an appropriate number of CPU cores and memory. A different VM with different needs might receive greater or lesser resources.
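A rough sketch of how an orchestrator might size a DPI instance (like the 40 Gbps, 100-app example above) and decide when to scale the pool. The sizing ratios and thresholds here are invented purely for illustration; real orchestration, e.g. under ETSI MANO, involves many more inputs:

```python
from dataclasses import dataclass


@dataclass
class DpiInstanceSpec:
    cores: int
    memory_gb: int
    target_gbps: int


def size_instance(target_gbps: int, apps_to_classify: int) -> DpiInstanceSpec:
    """Illustrative sizing rule: throughput costs cores, signature
    tables cost memory. The ratios are made up for this sketch."""
    cores = max(2, target_gbps // 10)            # assume ~10 Gbps per core
    memory_gb = max(4, apps_to_classify // 25)   # assume 25 apps per GB
    return DpiInstanceSpec(cores, memory_gb, target_gbps)


def scale_decision(current_instances: int, observed_gbps: float,
                   capacity_per_instance: float) -> int:
    """Return the desired instance count: scale out before any instance
    saturates (so no packets drop), scale in only when load is well
    below capacity (so existing instances are not disrupted)."""
    total_capacity = current_instances * capacity_per_instance
    if observed_gbps > 0.8 * total_capacity:
        return current_instances + 1
    if current_instances > 1 and observed_gbps < 0.3 * total_capacity:
        return current_instances - 1
    return current_instances
```

The hysteresis gap between the scale-out (80%) and scale-in (30%) thresholds is what keeps the pool from flapping as traffic fluctuates.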
Powering NFV with DPI
Overall, the reference architecture demonstrates three key points:
- DPI has the potential to significantly extend the benefits of NFV.
- The challenges of implementing DPI in an NFV context can be mitigated with third-party software.
- Collaboration between third-party NFV software suppliers is essential for addressing the many complexities of NFV.
I expect to see much more progress on NFV software in the coming months. I look forward to bringing you the latest coverage as events unfold!
Roving Reporter (Intel Contractor), Intel® Internet of Things Solutions Alliance
Editor-In-Chief, Embedded Innovator magazine
Follow me on Twitter: @kentonwilliston