
38 Posts authored by: charlieashton

First, the obligatory conference summary. NFV celebrated its first birthday in Frankfurt last week, at the SDN & OpenFlow World Congress which occurred exactly a year after a group of telecom service providers announced the formation of the ETSI NFV ISG. Both from the wide range of conference presentations and from the solutions showcased by exhibitors, it was clear that tremendous progress has been made in addressing the complex issues that must be solved in order for deployments to be viable and cost-effective. In our booth, we were pleased to see strong interest in our accelerated Open vSwitch solution that’s optimized for SDN and NFV applications. We enjoyed Tom Nolle’s explanation of the benefits this provides and we also learned that we’d won a 2013 Excellence in SDN Award from TMC.

End of trip report. The conference has been summarized excellently in a variety of blogs, written by attendees who have done a great job of highlighting the most interesting points from such an extensive event.

For me, though, the presentation that I will remember longest is the one given by Tetsuya Nakamura, Senior Research Engineer at NTT DOCOMO and Assistant Manager with the ETSI NFV ISG.

In his presentation “Carriers' expectations and challenges for Network Virtualisation”, Nakamura-san outlined the impact on communications infrastructure caused by the 2011 earthquake in Eastern Japan. He explained how NTT DOCOMO has applied SDN technology to ensure that the most critical communications will suffer less impact in any future disasters: these communications are voice calls from people desperate to confirm the safety of family members.

Nakamura-san showed graphs illustrating how the demand for voice services skyrocketed in the minutes and hours immediately following the earthquake and tsunami.

NTT DOCOMO saw a 60x increase in attempted voice calls during this time. Obviously, when the priority is to confirm that a family member is OK, no one cares about email or rich media; they want to talk.

NTT DOCOMO was able to connect only around 5% of the call attempts. Even as far away as Tokyo, the network was so congested that the vast majority of local calls failed.

The net result: many millions of concerned people were unable to confirm the safety of their family members. At the same time, ironically, people were able to stream videos and access rich media pretty much as normal.

The problem, of course, was the static configuration of the network. Network capacity (bandwidth and equipment) was provisioned to handle normal volumes of voice, email and rich media traffic. In normal times, the goal for the network is to provide high-quality service at a reasonable price and resources are optimized for cost-efficiency. In a disaster scenario, however, the requirement is to support the most necessary (voice) communications, by applying a huge amount of resources not used for voice in normal periods.

As Nakamura-san stated in his slides, “Disaster-resilient mobile networks must support contradicting demand in normal periods and in the disaster period. To develop flexible mobile networks, the key issue is how to improve flexibility in control systems to allow reallocation of functions depending on the changing demands.”

Enter SDN. Since the disaster, NTT DOCOMO has implemented SDN to achieve dynamic resource allocation, ensuring the viability of their networks even in an emergency. This brings a massive improvement in the acceptance rate of voice calls immediately following a disaster.

A combination of the Service Resource Control Framework and Call/Service Control provides flexible control of system resources.

In a normal period, resources are mostly allocated to EPC functions, supporting email and rich media traffic. In response to a disaster, however, virtualization-aware control and management services are used to dynamically reallocate resources, scaling up the capacity for IMS-based voice traffic.

As illustrated in the diagram below, NTT DOCOMO expects these innovations to boost the call acceptance rate by 5x during a scenario such as the 2011 disaster. This is achieved not by increasing the network capacity (shown flat in the analysis), but by the SDN-based reallocation of resources.

NTT DOCOMO has built data centers in Tohoku and Yokosuka to evaluate and stress-test this architecture. They can emulate congestion in an area supporting 500,000 consumers, representing more than 700,000 calls per hour (196 calls per second). Before reallocation, the network can accept 125,000 calls per hour and this rises to 700,000 after reallocation, with the transition being accomplished in less than 30 minutes. Eventually, the trials will encompass additional sites and then the architecture will be deployed in the live network.
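As a sanity check, the arithmetic behind those figures works out as follows (taking the published numbers at face value; the slide's 196 calls per second implies demand slightly above 700,000 per hour):

```python
# Figures from NTT DOCOMO's emulated disaster scenario, as reported above.
offered_per_hour = 700_000      # disaster-level call demand, calls/hour
accepted_before = 125_000       # calls/hour accepted before reallocation
accepted_after = 700_000        # calls/hour accepted after reallocation

offered_per_second = offered_per_hour / 3600             # ~194 calls/s
acceptance_before = accepted_before / offered_per_hour   # ~17.9%
improvement = accepted_after / accepted_before           # 5.6x

print(f"{offered_per_second:.0f} calls/s, "
      f"{acceptance_before:.1%} accepted before reallocation, "
      f"{improvement:.1f}x improvement")
```

The 5.6x gain is consistent with the roughly 5x boost in call acceptance rate that NTT DOCOMO projects.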

We all read and listen to a lot of SDN- and NFV-related stories where the outcome is measured in service velocity, CAPEX, OPEX and ARPU. Nakamura-san’s presentation was different. He talked about the anguish felt by real people, trying to contact family members in the aftermath of the kind of disaster that most of us can only imagine. In this case, SDN technology will ensure that many more of those calls go through. Isn’t that the ultimate example of “end-user value” for the people of Japan?

In the category of “really useful facts I learned last week (and can still remember after all this time)”, this one is certainly near the top of the list….

Skype now has over 500 million users and has seen 70 million conversations taking place simultaneously.

That means that if Skype was a country, it would be the third most populous nation in the world, below China and India but above the United States and Indonesia (thanks Wikipedia for the population data).

And presumably the 70 million Skype conversations that have happened simultaneously far outweigh the number of people in any of these countries actually talking to each other face-to-face, because those who aren’t on Skype are busy on Facebook, Renren, Sina Weibo or WeChat describing what they just had for lunch.

These numbers came from a keynote address by Lori Lee, Senior Executive Vice President of Home Solutions at AT&T, during the TIA conference in Washington DC. As an interesting comparison, Ms. Lee pointed out that the 70 million simultaneous Skype conversations significantly exceed the total number of landlines provided by AT&T and Verizon combined.

This is a perfect illustration of the strategic threat faced by telecom service providers worldwide: the Over-the-Top (OTT) players continue to siphon revenue away from traditional voice and data services, while riding for free on the high-bandwidth networks that exist only as a result of the massive capital investments made by the service providers themselves. While SDN and NFV promise to enable the service providers to efficiently deliver new, high-value services to both enterprises and consumers, most of the discussion still seems to be about cost savings rather than increased ARPU (Average Revenue per User).

At TIA, I enjoyed being part of a panel discussion that also involved executives from Ericsson (Don McCullough), Orange (Christos Kolias) and Tellabs (Stuart Benington).

During this session, moderated by Marc Cohn from Ciena, we talked about opportunities for Software Defined Networking (SDN) in mobile networks. So mostly we talked about Network Functions Virtualization (NFV), which celebrates its first birthday this week, having been formally announced during last year’s SDN & OpenFlow World Congress. It was encouraging that the audience questions were all about tactical topics relating to the deployment of NFV (network performance, end-to-end reliability, open-source software, Proof-of-Concepts). No-one seems to doubt that the overall NFV strategy makes sense; the issues are all about execution, timing and risk.

Our panel was part of a one-day SDN workshop held before the start of the TIA itself. The workshop included sessions that explored topics such as how carriers will gain a competitive advantage with SDN, the opportunities enabled by virtualization, what the big equipment suppliers are doing in SDN and where the investment dollars are flowing.

Throughout all these discussions (and there was certainly strong audience participation) there were a number of very consistent threads:

  • Service providers are very strongly predisposed to buy the technology that they need for successful SDN/NFV deployments, rather than develop it themselves. They will be heavily involved in PoCs and technology evaluations, but in the end they have a proven supply chain and there’s lots of benefit in preserving it.
  • Virtualization (network, server and storage) and cloud are key technologies that will enable the widespread usage of SDN and NFV.
  • OpenStack and Open Daylight are vastly preferred over proprietary solutions for orchestrators and SDN controllers.
  • The most important factor about “open” APIs is that they foster the development of a robust ISV community. That’s much more valuable than full compliance with complex standards that result from lengthy work by committees. De-facto standards are fine as long as ISVs can readily support them (of course the PC is the classic example of this). And the long-term goal is always “siliconization”, which leads to the ultimate in cost reduction.
  • Major vendors are starting to talk about real SDN/NFV products and there will be a large number of PoCs in 2014.
  • In 2012, “SDN” was widely considered to be synonymous with “OpenFlow”. Now, everyone understands that SDN is much more than a protocol and that OpenFlow is just one element within SDN.

 

I expect that many of these discussions will continue during SDN & OpenFlow World Congress in Frankfurt this week.

 

If you’re there, please do stop by 6WIND’s booth (number 16), where we’ll be demonstrating our accelerated Open vSwitch solution that’s optimized for SDN and NFV applications.

Hope to see you this week for NFV’s first birthday celebrations. Now please excuse me, my Skype is ringing…..

In the midst of a wave of announcements during Intel Developer Forum (IDF) last week, I have to confess that, for me, the most thought-provoking press release was the one that hit my Twitter feed on Thursday: NASA’s confirmation that the Voyager 1 spacecraft has left the solar system.

Let’s think about this for a moment: Voyager 1 was designed in the early 1970s, using 1960s technology, and it was launched in 1977, before many of the readers of this post were born. In 36 years, it has travelled 12 billion miles and is now 120 times as far from the Sun as the Earth is. Enough of its instrumentation is still working that it’s able to measure magnetic fields, plasma waves and other characteristics of empty deep space.

Equally amazing to me is the fact that we are able to receive and decode this information, transmitted back to Earth at a bit rate of 160 bps using a 23W radio transmitter. As NASA loves to explain, that’s the same power as a refrigerator bulb, and the signal takes 17 hours to reach us.

I’m sure someone reading this can calculate just how attenuated that signal is when it gets here. Just imagine the sophistication of the signal processing within the Deep Space Network that is required to extract meaningful data from a 23W signal that’s travelled 12 billion miles.
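For the curious, here is a rough link-budget sketch. The distance and transmit power come from the post; the X-band downlink frequency (~8.4 GHz) and the antenna gains (~48 dBi for Voyager’s dish, ~74 dBi for a 70 m DSN antenna) are assumed round numbers, so treat the result as an order-of-magnitude estimate only:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

distance_m = 12e9 * 1609.344           # 12 billion miles in meters
loss_db = fspl_db(distance_m, 8.4e9)   # ~317 dB of path loss

tx_dbw = 10 * math.log10(23)           # 23 W transmitter ≈ 13.6 dBW
rx_dbw = tx_dbw + 48 + 74 - loss_db    # add the assumed antenna gains
rx_watts = 10 ** (rx_dbw / 10)         # on the order of 1e-18 W

one_way_hours = distance_m / C / 3600  # ~17.9 hours, matching the post
```

An attowatt or so of received power, which is why the DSN’s signal processing is such an achievement.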

And let’s not forget that, early in its lifetime, Voyager 1 performed stunning flybys of Jupiter and Saturn. Many of us have those photos on our walls or as screen savers. And its last service call was on the launch pad. No way to send a technician to swap out a failing component or reseat a connector. Remote firmware upgrades, however, have been performed occasionally and very carefully.

I’m old enough to remember when the two Voyagers were launched. I had lots of hair and a deep fascination with the space program (at least one of those characteristics has survived). For me, it was unfathomable that a machine could travel as far as Jupiter, let alone out of the solar system almost 40 years later.

Surely, this represents the ultimate in terms of rigorous system design, total reliability and a communications system with unsurpassed sensitivity. Everyone with an understanding of the complexity of these systems and the challenges that have been overcome in this program should have the utmost respect for the engineers at JPL and elsewhere who made it happen. After all, here’s an example of the most common design tool:

SlideRule.jpg

OK, time to put my slide rule away and share some thoughts about IDF…

6WIND’s booth at IDF was in the “Software-Defined Infrastructure” area, so our neighbors were hardware, software and systems companies focused, like us, on applications such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

Following our announcement on September 9th that our accelerated Open vSwitch (OVS) solution is fully compatible with the Intel® Open Network Platform (Intel® ONP), we demonstrated that solution in our booth.

Attendees seemed to be impressed to see the 6WINDGate™ software accelerating the performance of the standard OVS by more than 10x (68Mpps vs. 6Mpps on ten cores, details here), with no changes required to the OVS code itself.

We talked to people working on enterprise, cloud and telecom (NFV) data centers. They confirmed that improving the number of VMs per server (VM density) by 3x to 9x, which is typically what our software enables, creates a massive advantage in terms of CAPEX and OPEX. After all, their subscribers are essentially paying for cores running VMs, not cores running OVS.
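The economics are easy to sketch. In the toy model below every number is assumed purely for illustration (it is not a benchmark): if a standard vSwitch needs 12 of a server’s 16 cores to sustain the required packet rate, and an accelerated switch reaches the same rate on 2, the freed cores go straight to revenue-generating VMs.

```python
def vm_capacity(total_cores, vswitch_cores, cores_per_vm=1):
    """Cores left over for guest VMs once the vSwitch takes its share."""
    return max(total_cores - vswitch_cores, 0) // cores_per_vm

# Illustrative numbers only: a 16-core server, with the vSwitch needing
# 12 cores (standard) vs. 2 cores (accelerated) for the same packet rate.
baseline = vm_capacity(16, 12)         # 4 VMs
accelerated = vm_capacity(16, 2)       # 14 VMs
density_gain = accelerated / baseline  # 3.5x density from switching alone
```

Even this crude model lands inside the 3x-9x density range cited above; workloads that need sustained high-bandwidth traffic push the ratio higher.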

It’s hard to believe that NFV is less than one year old (that would be less than a thirty-sixth of a Voyager), with the ETSI initiative basically being started at SDN & OpenFlow World Congress in October 2012. Despite this short period, however, it was very clear from conversations at IDF that engineers and architects are firmly focused on practical questions relating to Proof-of-Concepts (PoCs) and real-world testing.

Many of the questions that we received about 6WINDGate were not about the performance acceleration that it provides (attendees could see that from the demo and anyway it’s widely deployed in telecom equipment today) but about how to install it in real networks and start running tests. We heard several requirements relating to NFV PoCs that will be happening by the end of the year, with clear expectations of performance results being delivered quickly.

From that perspective, there was significant interest in CloudNFV. Knowing that 6WIND is a founding member, several people stopped by our booth for more details. People already knowledgeable about the initiative appeared to look on it very favorably, as a quick approach to a true end-to-end, multi-vendor implementation of the ETSI requirements.

Since our focus for NFV is on data plane acceleration, we had a number of interesting discussions about the impact of virtualization on the system-level performance of network functions. The general consensus seems to be that Virtual Network Functions (VNFs) need to deliver overall cost-performance that’s comparable with traditional physical equipment, otherwise the ROI for NFV just isn’t viable. Of course, meeting this requirement is not easy given the performance limitations of standard virtualized environments. This is a problem that the 6WINDGate solution solves (and a topic for another post).

Interoperability is a major topic for service providers and telecom equipment manufacturers working on NFV. A number of architects confirmed to us that traditional, physical networking equipment will coexist with virtualized solutions for a long time. From a software perspective, solutions are required that provide optimal performance in both environments, for maximum operational efficiency as well as optimized CAPEX/OPEX.

Several attendees also gave us their thoughts on phased approaches to NFV hardware. The low-risk strategy seems to be to first deploy virtualized applications on existing hardware platforms, achieving initial CAPEX/OPEX reductions through improved resource utilization. Next, the virtualized applications can be migrated to customized COTS or server platforms, optimized for NFV workloads. For the ultimate in CAPEX/OPEX improvements, the complete system will finally be transitioned into data centers and into the cloud.

Inevitably, network reliability was a hot topic when we talked to NFV architects. Telecom service providers have set the bar high in this area and, as subscribers, we all have demanding expectations when it comes to service availability and network uptime. For traditional physical infrastructure, the mechanisms of High Availability systems are pretty well understood and service providers know what it takes to achieve “Carrier-Grade Reliability”, whether that means zero-downtime, five-nines or another SLA metric.

Once the network is virtualized and running in the cloud, a whole new set of reliability problems and questions arise that are far beyond the scope of this post. Thinking of a (then) state-of-the-art system that’s still functioning after 36 years, 12 billion miles, extreme temperature fluctuations, killer radiation and absolutely no service calls, maybe we need a concept of “Voyager-Grade Reliability” to keep us communicating seamlessly as all our telecom traffic moves to the cloud?

On August 26, I participated in an SDN/NFV panel discussion during the SDN Precon event that is part of ITEXPO. While the panel session itself covered a range of topics, one of the more interesting aspects of the whole SDN Precon event was that there were a number of presentations that highlighted new use cases that can be enabled by NFV.

 

In his presentation, Mike Sapien, Principal Analyst at Ovum, reviewed the likely impact of SDN on telecom and enterprise applications. He talked about various current SDN deployments, such as those at NTT, Google and Equinix (this last example comprising both connectivity between data centers as well as communication between the data center and a variety of ecosystem partners). He discussed the opportunities for appliance-based enterprise services and explained how SDN results in the simplification of services together with centralization.

Ronald Gruia, Principal Analyst at Frost & Sullivan, explained that VCs are expected to invest over $500M in SDN and NFV companies during 2013. He highlighted a number of NFV use cases, including data center cloud infrastructure, home network management, and mobile operator applications such as video optimization and network virtualization. He also stressed some key “caveats” or challenges that must be addressed in order for NFV to achieve widespread deployments, including scalability, interoperability, security, V&V (validation and verification) and Carrier Grade reliability.

Mark Durrett, Director of Marketing at Overture Networks, introduced the CloudNFV initiative (Overture is one of the six founding companies, along with 6WIND). He explained how managed CPE appliances (e.g. firewalls and routers) represent a compelling use case, with significant cost savings achieved through the elimination of physical CPE appliances and the introduction of pay-as-you-go services.

From Huawei, Mike McBride, Director of Technology and Strategy, presented a detailed approach to the virtualization of the home gateway appliance, explaining that current solutions suffer from a high volume of service calls, increasing equipment costs and decreasing service margins. He described how, as a first phase, the home gateway (HGW) function in the ONT equipment at the customer site would be replaced by a virtual home gateway (vHGW) instantiated within the OLT in the carrier network, reducing cost and complexity. Subsequently, a soft BNG (Broadband Network Gateway) controller would implement service chaining within the edge network, eventually co-located with IT functions in the core. As a next step, all these functions would be migrated into the telco cloud for ultimate cost savings.

Of course, all these use cases are examples of cost reductions compared to traditional physical implementations of network functions and network orchestration. In a post published a few weeks ago, I observed that most of the NFV discussions at conferences appear to be around cost reductions rather than revenue growth. Not much seems to have changed since then.

Clearly, significant CAPEX and OPEX savings are expected through the virtualization of functions that have traditionally been implemented as stand-alone, dedicated, fixed-function equipment. From the perspective of service providers, though, top-line P&L growth ultimately comes from making money, and specifically from increasing their Average Revenue per User (ARPU). That means new services that bring real value to subscribers, both enterprises and also consumers like you and me.

As the NFV network-level architecture details and deployment strategies firm up, so that service providers have confidence about the new capabilities of the infrastructure, I would hope that we’ll see a lot more discussions about interesting new services that will raise our monthly bills as subscribers and boost the service providers’ top line revenue.

What are your thoughts on this? What new services do you see being introduced thanks to NFV that will bring real value to subscribers, whether enterprises or consumers?

Proud to be on the CloudNFV™ team

Now that the details of the CloudNFV initiative have been announced, we’re proud to confirm that 6WIND is one of the founding members. When Tom Nolle, President of CIMI Corporation, first invited us to participate, we immediately saw the potential for this group of vendors to rapidly introduce a demonstration platform for Network Functions Virtualization (NFV) and thereby accelerate the deployment of NFV by service providers. We appreciate the opportunity to work with our CloudNFV partners to demonstrate how our technology solves critical data plane performance challenges in NFV.

 

In a quote that he provided for our press release, Tom stated that “The success of NFV or the virtualization of switches or routers using SDN is totally dependent on the data-path performance of the VMs used to host the functions. The performance of the data path can make or break the business case for any network-virtual applications. 6WIND is the industry leader in data path acceleration, and also the first vendor to step forward with an NFV mission statement. There's nobody with better credentials to optimize density and performance for CloudNFV.”

 

In this post, I’ll explain the key benefits that the 6WINDGate™ networking software, running on Intel® Architecture platforms, provides for the CloudNFV initiative.

Leveraging technology proven in physical infrastructure

6WIND is uniquely positioned to provide data plane solutions for CloudNFV because 6WINDGate is already widely used in physical networking equipment deployed in telecom infrastructure worldwide. Through high-performance packet processing, 6WINDGate enables service providers to maximize the number of subscribers supported per blade, in applications such as LTE Evolved Packet Core (EPC) equipment.

Over the past couple of years, we’ve extended our technology to incorporate solutions for the networking performance bottlenecks associated with virtualization elements such as hypervisors and virtual switches, running on Intel® Architecture platforms. As a result, TEMs and service providers can now adopt our software, already proven in physical networking implementations, to maximize the performance of their Virtual Network Functions (VNFs) in NFV deployments.

6WINDGate fits in two places within the CloudNFV ecosystem: first, it maximizes the switching performance of the virtual switch that provides high-bandwidth network traffic to the VNFs running in Virtual Machines (VMs), thereby increasing the aggregated bandwidth delivered to the VMs and, second, it accelerates the data plane performance of each VNF.

Data plane performance challenges in CloudNFV

At this point in the evolution of NFV, most vendors are working on aspects relating to network management and orchestration. This is also a key focus for CloudNFV and the approach is explained well in the white paper available on the website. This emphasis makes perfect sense given the extreme complexity of legacy telecom networks and the challenges of migrating these control-related systems to a completely new software-based architecture. And, of course, the concept of “Carrier Grade reliability” reflects the expectations that we all have for the constant availability of network services.

Additionally, though, there are fundamental data plane issues that must be addressed in order for any CloudNFV implementation to be cost-effective.

Post image 2.png

The first bottleneck is the software virtual switch (vSwitch) running on the server platform. This vSwitch must provide sustained, aggregated high-bandwidth network traffic to the Virtual Network Functions (VNFs). At the same time, the performance of (secure) VM-to-VM communications must be maximized.

Both these requirements are necessary to ensure that CloudNFV deployments are cost-effective when compared with traditional network infrastructure based on physical switches.

Unfortunately, standard virtual switches such as the open-source Open vSwitch (OVS) do not deliver adequate performance or scalability to address these needs.

The second bottleneck is the performance of the VNFs themselves. Service providers will need their VNFs to deliver cost-performance that is comparable to that achieved by equivalent physical implementations. Otherwise, their NFV deployments won’t be cost-effective and there will be no ROI justification for a transition to NFV.

VNF performance, though, is constrained by two factors. One is the poor performance and limited scalability of standard Operating System networking stacks. The other is the limitation on bandwidth for communication outside the VM that is imposed by standard hypervisors.

6WIND’s solutions

Within CloudNFV deployments, the 6WINDGate networking software addresses the two performance bottlenecks described above.

Post image 3.png

 

First, 6WINDGate accelerates the virtual switch that switches network traffic to the VMs in which the VNFs are instantiated.

When used to accelerate the standard Open vSwitch (OVS), 6WINDGate delivers a 10x improvement in switching performance. This typically results in at least a 3x improvement in the number of VMs that can be instantiated per blade (the VM density), with even greater improvements achieved when the VNFs require sustained high-bandwidth traffic.

As part of improving OVS performance, 6WINDGate also accelerates the tunneling and encapsulation protocols such as IPsec, GRE, NVGRE, VLAN and VxLAN, which OVS requires in order to support high-bandwidth, secure VM-to-VM traffic.

Second, 6WINDGate accelerates the performance of VNFs. Thanks to its fast path data plane architecture, 6WINDGate typically delivers 10x the performance of the standard Linux networking stack, with no changes required to the Linux kernel. This performance scales linearly with the number of cores configured to run the fast path. 6WINDGate includes a comprehensive set of networking protocols, for example PPP (used in a virtual Broadband Access Server or “vBRAS”), firewall and IKE (used in security gateways) and TCP termination (used in WAN Optimization appliances).

As a result of optimizations for virtualized environments, 6WINDGate delivers comparable performance running under a hypervisor to that achieved when running in a physical implementation. This enables service providers to obtain best-in-class cost-performance from their VNFs, such as firewalls and security gateways.

In both of these CloudNFV use cases, 6WINDGate runs within the KVM hypervisor and Linux, under the control of OpenStack, on the Intel® Architecture-based Dell server that comprises the overall hardware deployment platform.

Open standards and compatibility with legacy software

When 6WINDGate is used to accelerate OVS (the first bottleneck described above), no changes are required to the standard OVS code itself. 6WINDGate intercepts packets that would normally be processed in the (slow) OVS data plane, processing them in the 6WINDGate data plane instead.

In the case of VNF acceleration (bottleneck #2), 6WINDGate is fully compatible with standard Linux networking APIs (Netfilter, Netlink etc.). This means that no modifications are required to the VNF applications themselves in order to take advantage of the performance improvement provided by 6WINDGate.

Finally, 6WINDGate is fully compatible with the OpenFlow protocol used in many NFV architectures.

Next steps

As explained in the CloudNFV announcement, demonstrations will start with a series of public and open webinars beginning in late September and include live, in-person demos conducted at various industry events through the end of the year. We look forward to participating in these events. Hope to see you there!

It’s all about the money

Operators of telecom infrastructure, enterprise networks and data centers are increasingly exploring the deployment of network security functions as virtual appliances instead of physical appliances.

There are two obvious business drivers for this trend. The first is reduced CAPEX, resulting from the use of high-volume, standard hardware platforms rather than low-volume proprietary hardware. The second is reduced OPEX, thanks to the improved resource utilization achieved through virtualization. The major focus on Network Functions Virtualization (NFV) throughout the telecom industry is a strong indicator of the major cost savings that operators believe to be possible through this strategy of network virtualization.

Additions not replacements

At 6WIND, most of our customers are network equipment manufacturers and many are working on virtual security appliances, implementing functions like UTMs, IPSs, firewalls etc. But they typically view the virtual appliances as extensions to their current portfolio of physical appliances, rather than immediate replacements.

 

This makes perfect sense given that physical appliances are not going to disappear at any time in the near future. Realistically, we can expect to see the gradual introduction of virtual appliances in scenarios with compelling Return-on-Investment (like NFV), accompanied by exhaustive evaluations to ensure that the cost-performance and especially reliability meet the standards set by physical appliances.

Given our customers’ strategy of introducing virtual security appliances while maintaining their current portfolios of physical appliances, we place great importance on providing a comprehensive set of network protocols that can be deployed in both solutions, with consistent APIs, features and performance.

IPsec everywhere

IPsec is a good example of a protocol that is critical for network security, and 6WIND provides an IPsec solution for both physical and virtual appliances. IPsec is widely used for VPNs in telecom infrastructure (LTE femto gateways, security gateways, GGSNs etc.) as well as in enterprise networking (UTMs, IPSs, firewalls etc.). There are also emerging use cases in multi-tenant data centers, where secure tunnels are increasingly adopted for VM-to-VM communication.

IPsec provides authentication and data confidentiality (encryption/decryption) at Layer 3. IPsec is actually a suite of protocols, including IKE (Internet Key Exchange) for key and session management, AH (Authentication Header) and ESP (Encapsulating Security Payload) amongst others, with the IPsec standard defining how these protocols work together.

Offloads maximize CPU utilization

Within IPsec, encryption algorithms such as 3DES and AES, and authentication hashes such as SHA-1 and MD5, are resource-hungry and increasingly offloaded to specialized hardware-based engines. This offloading maximizes the availability of CPU resources for running applications, rather than security processing. Given the large variety of available processor platforms, supporting the appropriate crypto engine is not a simple task and requires a processor-agnostic approach for optimum efficiency. The 6WINDGate™ networking software integrates support for several widely-used crypto accelerators, including:

  • Intel® Communications Chipset 89xx series “Cave Creek” (hardware acceleration)
  • Intel® Multi Buffer Crypto for IPsec Library (software acceleration).


Advanced features, open configuration

6WINDGate provides a high performance IKE daemon, thereby increasing the tunnel establishment rate. The Security Association (SA) look-up mechanism is based on a 16-bit hash table. The Security Policy (SP) mechanism supports either a linear lookup, or a hash table lookup based on SP selector address prefixes, or a trie-based lookup, which can be selected and combined based on configurable thresholds.
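
To illustrate the look-up scheme described above, here is a toy SA database (hypothetical code, not 6WINDGate itself) that folds a hash of the (SPI, destination, protocol) triple into a 16-bit bucket index:

```python
import struct
import zlib

class SADatabase:
    """Toy Security Association database: SAs are located by hashing
    the (SPI, destination address, protocol) triple into one of 2**16
    buckets, mirroring the 16-bit hash table approach described in
    the post. Collisions fall back to a short in-bucket scan."""
    BUCKETS = 1 << 16

    def __init__(self):
        self.table = [[] for _ in range(self.BUCKETS)]

    def _bucket(self, spi: int, dst: bytes, proto: int) -> int:
        key = struct.pack("!I4sB", spi, dst, proto)
        return zlib.crc32(key) & 0xFFFF      # fold hash to 16 bits

    def add(self, spi, dst, proto, sa):
        self.table[self._bucket(spi, dst, proto)].append(((spi, dst, proto), sa))

    def lookup(self, spi, dst, proto):
        for key, sa in self.table[self._bucket(spi, dst, proto)]:
            if key == (spi, dst, proto):
                return sa
        return None
```

With a well-distributed hash, each bucket holds at most a handful of SAs, so look-up cost stays near constant even with tens of thousands of tunnels.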

6WINDGate also supports open, standard Linux-based APIs, enabling the SA and SP databases (SAD and SPD) to be configured by third-party IKE control plane modules. In order to minimize the latency of the system, both SPD and SAD are located in shared memory. Through its support of IKEv1, IKEv2 and Anti Replay features for security re-enforcement, as well as NAT traversal for enabling IPsec in complex networks, the 6WIND IPsec module addresses a wide range of common usage scenarios.

Industry-leading IPsec performance for physical appliances

Thanks to its optimized architecture and advanced features, 6WINDGate delivers industry-leading IPsec performance for physical appliances.

[Graph: 6WINDGate IPsec throughput scaling with the number of fast path cores]

For example, on a 2.7GHz dual-socket Intel “Sandy Bridge” platform (two 8-core processors) with 16GB RAM, 6WINDGate achieves IPsec performance of 5.4 Gbps per core (1,420-byte packets), for a total platform performance of 73 Gbps using 14 cores.

(Why 14 cores? Because the reference platform has seven NICs, and a configuration with seven fast path cores per processor, 14 in total, represents the best way to balance the traffic.)

Besides the raw performance, the graph illustrates that the IPsec performance scales linearly with the number of cores configured to run the 6WINDGate fast path. This scalability is a key benefit of the 6WINDGate architecture, explained in more detail here.
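
The arithmetic behind these figures is straightforward; the small gap between the ideal linear number and the measured 73 Gbps reflects normal platform overheads:

```python
# Figures quoted in the post (1,420-byte packets assumed throughout).
PER_CORE_GBPS = 5.4      # measured IPsec throughput per fast path core
FAST_PATH_CORES = 14     # cores dedicated to the 6WINDGate fast path
MEASURED_GBPS = 73.0     # reported total platform throughput

ideal = PER_CORE_GBPS * FAST_PATH_CORES   # 75.6 Gbps if scaling were perfectly linear
efficiency = MEASURED_GBPS / ideal        # roughly 0.97, i.e. close to linear
print(f"ideal {ideal:.1f} Gbps, scaling efficiency {efficiency:.0%}")
```

A scaling efficiency in the high nineties is what makes "add cores to add bandwidth" a credible dimensioning rule for this architecture.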

What about IPsec for virtual appliances?

Stay tuned for the next post in this series, where we’ll explain how 6WINDGate addresses the networking performance constraints imposed by standard hypervisors and discuss the IPsec performance that 6WINDGate delivers for virtual security appliances.

By now, it’s probably a safe assumption that regular readers of this blog are more than familiar with the many benefits that Network Functions Virtualization (NFV) promises to deliver for telecom service providers (and if you’d like an introduction, this page would be a good place to start).

Good attendance at the April ETSI NFV meeting in Santa Clara, as well as on-going participation in the various working groups that are hammering out technical details, certainly seems to confirm strong interest not only from the service providers who started the initiative in late 2012, but also from other ecosystem stakeholders such as network equipment manufacturers, system integrators, software companies and processor suppliers.

In terms of the business benefits of NFV, most of the discussion to date appears to be around saving money. Significant CAPEX and OPEX savings are expected through the virtualization of functions that have traditionally been implemented as stand-alone, dedicated, fixed-function equipment.

One use case that’s frequently discussed is a “virtual CPE”, with an architecture that relies on a very simple CPE (modem, switch and antenna) with all services relocated to virtual network appliances on a central server. Another example is the cloud RAN (Radio Access Network) concept, in which the eNodeB at the antenna is replaced by a low-cost Remote Radio Head, with digitized RF data backhauled over fiber to a centralized Baseband Unit pool in which the baseband processing is performed. Finally, Evolved Packet Core (EPC) functions for mobile networks can be run as Virtual Network Functions (VNFs) in Virtual Machines (VMs) on generic server platforms.

From the perspective of the service providers, though, top-line P&L growth ultimately comes from making money, and specifically from increasing their Average Revenue per User (ARPU).

Increasing ARPU means extracting more money from subscribers, both enterprises and also consumers like you and me. In fact, a recent study by Infonetics, “SDNs, 40G/100G, and MPLS Control Plane Strategies: Global Service Provider Strategy, July 2013”, indicated that 48% of carriers want SDN/NFV in order to “create network services not possible with existing technologies/protocols”.

So I think it’s interesting to speculate on what kinds of innovative and presumably valuable new services we can expect thanks to the massive investments that service providers will presumably be making in NFV. This subscriber-side view doesn’t seem to be a big topic (yet) at NFV events, but there were some relevant discussions at the recent SDN & OpenFlow Congress in Singapore.

Potential new services mentioned at that event included:

  • Bandwidth-on-demand: Want to watch a really important sporting event on your laptop in HD for the next couple of hours? Just go online to your service provider, select “HD quality”, specify how long you want it and the network instantly provisions the necessary bandwidth and service chaining.
  • Services-on-demand: Need Japanese-to-English translation on your tablet? Go online and request that service as soon as you hit the ground in Tokyo and it’ll be up and running automatically before you’re off the plane.
  • Tiered security: How about if the network automatically configured its service chaining to route your traffic through security appliances appropriate to the content that you’re accessing? Maybe you need different security features in the chain for web browsing as opposed to a Skype session.
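
A toy model of the service chaining idea behind these examples (all service and function names here are hypothetical): each subscriber request selects an ordered list of virtual network functions that the infrastructure applies to the traffic:

```python
# Each VNF is modeled as a function that stamps the packet and passes it on.
def firewall(pkt):
    pkt["trace"].append("firewall")
    return pkt

def dpi(pkt):
    pkt["trace"].append("dpi")
    return pkt

def translator(pkt):
    pkt["trace"].append("translate")
    return pkt

# Hypothetical per-service chains: tiered security means different
# traffic types traverse different sets of (virtual) appliances.
CHAINS = {
    "web-browsing": [firewall],
    "voice-call":   [firewall, dpi],
    "translation":  [firewall, translator],
}

def apply_chain(service: str, pkt: dict) -> dict:
    """Run a packet through the ordered VNF chain for a service."""
    for vnf in CHAINS[service]:
        pkt = vnf(pkt)
    return pkt
```

Because the chains are just data, an orchestrator can rewrite them on demand, which is exactly what makes services-on-demand plausible once the functions are virtualized.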

 

As the NFV network-level architecture details and deployment strategies firm up, so that the service providers have confidence about the new capabilities of the infrastructure, I’m sure we’ll see a lot more discussions about interesting new services that will raise our monthly bills as subscribers and boost the service providers’ top line revenue.

What are your thoughts on this? What new services do you see being introduced thanks to NFV that will bring real value to subscribers, whether enterprises or consumers?

At 6WIND we’re doing our part to boost the cost-effectiveness of the NFV network infrastructure and accelerate deployment of NFV solutions. Please join us and Red Hat for a webinar on June 25th “The first open software platform for network functions virtualization” to learn all about our initiative in this area. Click here to register.

Between two and three hundred people attended the ETSI Network Functions Virtualization (NFV) meeting in Santa Clara last month. As expected, there was a strong turnout from the telecom service providers who started the NFV initiative in late 2012, while many other companies were also represented such as network equipment manufacturers, system integrators, software companies and processor suppliers.

The overall impression was that key service providers are strongly committed to NFV, with the potential of interesting new business opportunities for a range of other players in an ecosystem that will have to include providers of hardware, software and services.

In terms of the business benefits of NFV, most of the discussion to date appears to be around the CAPEX and OPEX savings that can be achieved through the virtualization of functions that have traditionally been implemented as stand-alone, dedicated, fixed-function equipment.

One use case that’s frequently discussed is a “virtual CPE”, with an architecture that relies on a very simple CPE (modem, switch and antenna) with all services relocated to virtual network appliances on a central server. Another example is the cloud RAN (Radio Access Network) concept, in which the eNodeB at the antenna is replaced by a low-cost Remote Radio Head, with digitized RF data backhauled over fiber to a centralized Baseband Unit pool in which the baseband processing is performed. Finally, Evolved Packet Core (EPC) functions for mobile networks can be run in Virtual Machines (VMs) on generic server platforms.

Each of these scenarios promises to reduce both CAPEX and OPEX for service providers, though there’s still healthy debate as to whether a simple substitution of virtualized functions for dedicated equipment really reduces costs enough, once new operational expenses are factored in.

I was surprised not to hear more at the ETSI meeting about how NFV can enable new services, especially cloud-based applications, which would deliver real value to subscribers. These would seem to be the key to increasing Average Revenue per User (ARPU) and driving top-line growth in the P&L, rather than just reducing expenses through virtualization. On reflection, though, carriers probably view new services and applications as their proprietary secret sauce and not the kind of topic that they want to talk about in a room full of their competitors.

At this point in the evolution of NFV, the working groups and “expert groups” are mainly focused on network management and orchestration topics. This makes perfect sense given the extreme complexity of legacy telecom networks and the challenges of migrating these control-related systems to a completely new software-based architecture. And, of course, the concept of “Carrier Grade reliability” reflects expectations that we all have, as subscribers with the freedom to switch providers as soon as our contracts are up.

Before too long, though, we’ll need to talk about a fundamental data plane issue that must be addressed in order for any NFV implementation to be cost-effective.

Packet processing is a key function that dominates the processing workload for many telecom network subsystems. These include the CRAN and CPE solutions mentioned above, as well as several key EPC functions such as the Mobility Management Entity (MME), Packet Gateway, Security Gateway and Serving Gateway. In the case of traditional non-virtualized equipment, standard Operating System networking stacks provide poor performance for packet processing because of overheads and latencies within the kernel, so many equipment suppliers have adopted solutions such as 6WIND’s software, which solves that problem through a fast path architecture.

Virtualized implementations present additional challenges for packet processing. Virtualizing hundreds of CRAN, CPE or complex EPC functions on a single server requires high performance network bandwidth to be distributed to the virtualized functions. Standard virtualized architectures that work well for computing applications are unable to deliver the required performance for these demanding network applications. Bottlenecks in the hypervisor, the virtual switch and the VMs themselves can degrade overall networking performance by an order of magnitude for virtualized functions when compared to physical implementations. This is a potential showstopper in terms of the cost-effectiveness of NFV architectures, for which cost-per-subscriber will be a critical metric.

Fortunately, this problem can be solved. Through the use of techniques such as IO Virtualization (IOV), virtual NIC (vNIC) drivers and direct VM-to-VM (VM2VM) communication, as well as a fast path data plane implementation, it’s possible to bypass the bottlenecks. Advanced software solutions can deliver the high networking performance that’s required in order to achieve the cost reductions that are the basis of the NFV concept.

As an example of this, at Open Networking Summit during the week before the recent NFV meeting, 6WIND demonstrated a solution that delivers a 10x improvement in the performance of the standard Open Virtual Switch (OVS), which performs the critical switching function required to deliver high-bandwidth network traffic to VMs.

So while most of the NFV attention and conversations are currently centered on management and orchestration issues, let’s not forget that sooner or later we’ll need to make sure the data plane performance challenges are addressed too. This will be a critical factor in ensuring that the NFV architectures do indeed deliver the CAPEX and OPEX reductions that are promised. And then we’ll have low-cost, high-performance platforms for the delivery of innovative new services for subscribers, which can be the basis of increased service provider revenues.

 

In a recent post, we summarized a number of announcements that 6WIND recently made involving key industry partners. We also announced that 6WIND will lead other major industry initiatives to bring compelling solutions for the industry to accelerate networking and telecom transformation.

 

6WIND is proud to announce the establishment of the dpdk.org open-source project to accelerate development of Intel® Data Plane Development Kit (Intel® DPDK) architecture-based networking and telecoms products.

 

In telecom, enterprise and cloud applications, the Intel® DPDK, which is a set of optimized libraries and drivers provided to OEMs under a BSD licensing model, is an increasingly viable foundation for packet processing. However, the development of a complete networking and telecom application requires strong expertise.

 


As a major contributor to the Intel® DPDK, 6WIND has developed valuable enhancements to the software including crypto support via AES-NI, virtualization support, Mellanox 40GbE ConnectX®-3 EN series and Cavium Nitrox® support.

 

6WIND also developed strong expertise in leveraging the Intel® DPDK as a result of optimizing its 6WINDGate™ data plane solution (comprising virtual switch acceleration, IP forwarding, virtual routing, IPsec, firewall, Large Scale NAT, TCP termination etc.) to run at maximum performance on the Intel® DPDK platform. 6WIND also provides professional services and technical support to developers. As a result, 6WIND enables use of Intel® DPDK in Virtual Network Appliances, Software-Defined Networks and Network Function Virtualization.

 

At dpdk.org, engineers can find libraries, drivers, documentation, a development mailing list, a git repository and support relevant to high-performance packet processing using the Intel® DPDK software.

 

The dpdk.org open-source project is active now and openly available to any developers using the Intel® DPDK. It will provide an openly-accessible repository for released versions, while encouraging innovative collaboration and enabling participating companies to offer specialized support.

This week, 6WIND announced extensions to the 6WINDGate™ networking software that deliver a 10x performance improvement for virtual switches. These new capabilities address three critical scalability challenges faced by service providers operating public/private cloud data centers as well as telecom infrastructure architected around Network Functions Virtualization (NFV) concepts.

The first scalability challenge relates to the number of Virtual Machines (VMs) per server blade. This VM density is increasing rapidly, leveraging on-going improvements in the performance of the x86 processors used on those blades. Today, a typical server blade in a service provider data center hosts at least 50 VMs, with that number expected to grow to hundreds within a few years.

Because of this growth in the number of VMs running on each server blade, the data center network needs to expand beyond its current limit at the Top-of-Rack (ToR), to a model where a virtual switch on each server blade is used to distribute the increasing volume of network traffic to virtualized applications. This function is typically implemented using the open-source Open vSwitch (OVS) or an equivalent proprietary virtual switch.
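
The core behavior of such a virtual switch can be sketched as a Layer 2 learning switch (a deliberately minimal model, far simpler than OVS): learn source MACs per port, forward known unicast traffic to the learned port, and flood unknown destinations:

```python
class LearningSwitch:
    """Minimal Layer 2 learning switch of the kind a virtual switch
    implements on each server blade: learn which port each source MAC
    appeared on, forward known destinations to their port, flood the
    rest to all other ports."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}          # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of output ports for one incoming frame."""
        self.mac_table[src_mac] = in_port            # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]         # known unicast
        return sorted(self.ports - {in_port})        # flood (not back out)
```

Here "ports" are virtual ports attached to VMs, so as VM density grows, so do the table size and the per-frame lookup load, which is exactly the scalability pressure described above.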

The second scalability challenge is the network bandwidth required by VMs. With the constant growth in rich media applications, individual VMs can require sustained network bandwidth of 1Gbps or more. As VM density increases, this bandwidth requirement can quickly outstrip the capacity of a standard virtual switch, constraining either the number of VMs that can be instantiated on a blade or the performance seen by the users of those VMs.

The final major challenge is a result of evolving traffic patterns within data centers. In traditional client-server data centers, network traffic was predominantly “North-South”: from the Internet, through the core switch and aggregation layer, to the ToR switch and then to the server blades. Within multi-tenant Web 2.0 data centers, however, VM mobility and VM sprawl cause the majority of traffic to be “East-West”: between VMs located on different physical server blades.

This growth in East-West traffic means that high-bandwidth VM-to-VM communication (VM2VM) is mandatory. To isolate and secure VM2VM, however, requires secure tunneling services, running on the virtual switch, that extend beyond the basic Layer 2 features provided by a typical virtual switch. Within a cloud, these ensure that only specific VMs can access the applications and data owned by other VMs. Between clouds, policies are enforced to ensure that data and traffic for one cloud is not visible to another.

Adding to the overall security- and networking-related workload that must now be supported on server blades is the increasing trend towards the use of overlay network technologies, which avoid the 4,094-tunnel limitation of traditional VLANs. Data centers are now adopting the VXLAN and NVGRE protocols, whose 24-bit IDs allow for 16 million tunnels.
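
For reference, a VXLAN header (RFC 7348) carries its network identifier in a 24-bit field, which is where the 16 million figure comes from. A minimal encoder/decoder sketch:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Pack an 8-byte VXLAN header (RFC 7348): a flags byte with the
    'valid VNI' bit (0x08) set, reserved bits, then the 24-bit VXLAN
    Network Identifier shifted into the upper bits of the second word."""
    assert 0 <= vni < (1 << 24), "VNI is a 24-bit field"
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, word = struct.unpack("!II", header)
    return word >> 8
```

Compare 2**24 = 16,777,216 possible VNIs against the 4,094 usable IDs of a 12-bit VLAN tag: the overlay removes tenant-count limits, but every one of those tunnels is encapsulated and decapsulated on the server blade.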

 

The new extensions to the 6WINDGate networking software provide solutions to these three challenges, delivering a data plane solution that achieves 5x - 10x acceleration for the baseline Layer 2 switching function.

 

Thanks to this increase in Layer 2 switching performance, data center operators can achieve the increases in VM density that are enabled by on-going improvements in the performance of server processors. They can also deliver high network bandwidth to individual VMs, addressing the performance needs of users running streaming media applications or other bandwidth-hungry workloads.

At the same time, 6WINDGate delivers high performance on the necessary secure tunneling protocols such as IPsec, GRE, NVGRE, VLAN and VXLAN.

Within the virtual switch, 6WINDGate monitors configuration messages from the virtual switch control plane to its data plane, intercepts the appropriate packets and processes them within the 6WINDGate fast path, thereby achieving the 5x – 10x performance acceleration.

These high-performance switching, tunneling and security functions are achieved with no changes required to the virtual switch software itself.
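
The acceleration pattern described above can be sketched generically (hypothetical code, not the 6WINDGate fast path): the first packet of each flow is resolved by the slow virtual-switch path, the resulting decision is cached, and subsequent packets of that flow are handled entirely from the cache:

```python
class AcceleratedSwitch:
    """Sketch of fast path acceleration via a flow cache: the slow
    path (the unmodified virtual switch logic) is consulted once per
    flow, and its forwarding decision is replayed for later packets
    without touching the slow path again."""
    def __init__(self, slow_path_lookup):
        self.slow_path_lookup = slow_path_lookup
        self.flow_cache = {}
        self.slow_hits = 0          # how often the slow path ran

    def forward(self, flow_key):
        if flow_key in self.flow_cache:
            return self.flow_cache[flow_key]        # fast path hit
        self.slow_hits += 1
        action = self.slow_path_lookup(flow_key)    # slow path, once
        self.flow_cache[flow_key] = action
        return action
```

Because the cache sits in front of the existing switch logic rather than inside it, the virtual switch software itself needs no changes, which is the property the post emphasizes.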

6WINDGate is also fully-compatible with OpenFlow and completely transparent to the applications running on the platform, which do not need to be recompiled or re-verified in order to work with this high-performance solution.

We will be demonstrating the use of 6WINDGate to accelerate the standard Open vSwitch, in partnership with Big Switch Networks and Ixia, in our booth number 401 at the Open Networking Summit in Santa Clara, CA from April 15th through 17th. We hope to see you there!

6WIND recently made a number of announcements involving key industry partners, including:

 

The transformation of telecom and networking enabled by SDN (Software-Defined Networking) and NFV (Network Function Virtualization) is often compared to the server virtualization and cloud revolution that happened a few years ago.

 

There is, however, a major difference. Standard applications “only” need processing capabilities, memory and storage to run efficiently in a virtualized environment. Designing cloud-based network architectures and virtualized networking functions also requires high-speed, secure I/O.

 

I believe the combination of high-performance generic hardware platforms based on the latest generation of multicore processors, leading open-source software platforms, 40GbE Ethernet solutions and the 6WINDGate data plane software is a great foundation for addressing the transformation of telecom and networking.

 

6WIND is committed to being a key player in the ecosystem that brings compelling solutions for the industry to accelerate network virtualization. We will lead other major industry initiatives and announce breakthrough products soon.

 

Stay tuned…

NFV (Network Function Virtualization) was obviously one of the hottest topics at MWC 2013. Several use cases were demoed by tier-1 platform vendors, telecom equipment manufacturers, service providers and operators to show the benefits of the NFV concept.

 

These use cases can be classified into two categories. The first is the migration of functions traditionally located at the customer premises into the operator’s infrastructure.

 

One example is the concept of a “virtual CPE”, which can be applied to either consumer or enterprise markets. The new architecture relies on a very simple CPE (modem, switch and antenna), with all services running on a central server in a virtual network appliance. CAPEX is considerably reduced. Software updates, service configuration, the introduction of new services and maintenance are also greatly simplified, so service providers and operators can significantly reduce their OPEX as well.

 

The cloud RAN (Radio Access Network) concept similarly reduces both CAPEX and OPEX for mobile networks. In the conventional network architecture, each antenna connects to a co-located eNodeB from which traffic is backhauled to the Evolved Packet Core (EPC) over an IP link. With the Cloud RAN approach, however, the eNodeB at the antenna is replaced by a low-cost Remote Radio Head. The digitized RF data is then connected over a fiber link to a centralized Baseband Unit pool in which the baseband processing is performed.

 

A second category of use cases addresses core network functions. In contrast to the first one, the equipment is already located in the core infrastructure. For instance, virtualizing EPC functions for mobile networks in virtual machines on generic platforms instead of dedicated equipment reduces the CAPEX. Dynamic allocation and management of network resources bring more flexibility and better usage of network resources, thereby reducing OPEX and increasing ARPU.

 

Will NFV be the first real transformation of the network and telecom business? These use cases seem to be very attractive for operators, enabling them to increase their margins and deploy differentiating services faster. To make this happen, end users will also need to see real reductions in their Internet expenses.

 

However, all these concepts have to come back to reality. Virtualizing hundreds of CPE or complex EPC functions on a single server requires high performance network bandwidth to be distributed to the virtualized functions. Today, standard virtualized architectures that work perfectly well for computing won’t be able to deliver the required network performance without optimized solutions like our 6WINDGate software. Just consider how many software layers a network packet has to traverse before reaching an application running in a virtual machine.

In the ruthless quest to squash the use of buzzword-based marketing speak in business communications, phrases such as world-class, best practices and paradigm shift are rarely seen any more – which is actually a step in the right direction given how abused these phrases were just a few years ago. However, there are times when these phrases capture exactly the right idea and should be used to describe a new trend or idea.

 

This is one of those times. Within the context of current technology trends and transitions occurring within the telecommunications and enterprise networking worlds today, I want to bring paradigm shift out of the buzzword dog house and use it to frame today’s ideas on Network Convergence.

 

In basic terms, a paradigm shift is a change from one way of thinking to another.  To really abuse the marketing-speak buzzwords, in high-tech a paradigm shift is a technology-driven shift from one set of best practices to another set to achieve world-class performance. (I apologize!). The reason I like the phrase is that it implies that this shift is revolutionary, not evolutionary.  When there is a paradigm shift in high-tech, new essential technologies emerge, business models are changed, old skills have to be updated, familiar players have to re-invent themselves and new players emerge.  We’re talking big-time change in a 5 to 10 year period of time.

 

So what’s happening to the Network Convergence idea?  It’s been around for decades.

 

Back in the 90’s, Network Convergence in the telecom world referred to the consolidation of the separate voice, video and data communications networks into a single network used to deliver all forms of communications services. Today, we simply refer to this era as the ‘transition to IP-based communications’ or something like that. Instead of working really hard to build bridges between the different networks, we finally decided we had to replace them all with a single network.

 

At the same time, in the enterprise networking world, Network Convergence had a similar impetus. To simplify their networks in order to lower costs and improve manageability, companies started to replace the myriad of vendor-specific networking technologies, such as IBM’s SNA, Novell’s IPX/SPX, Apple’s AppleTalk, Digital’s DECnet and Microsoft’s NetBEUI to name a few, with TCP/IP, the Unix-born protocol suite that is the transport mechanism of the Internet.

 

Today, while the internet is ubiquitous, the networking world we live and work in is characterized by two ‘best-practices’ that helped us through this 1st era of Network Convergence.  First, the carrier-grade telecom networking world is still vastly different from the enterprise or IT networking worlds and far too frequently one camp wants nothing to do with the other. And second, the networking infrastructure of both worlds is still hardware-driven based on proprietary technology.

 

These last two holdovers from the past are starting to outlive their usefulness.  Simply put, the CAPEX and OPEX costs associated with building and maintaining today’s highly secure mobile and cloud computing network infrastructure are staggering.

 

Today, the paradigm shift that is Network Convergence (2nd Era) has 3 key pieces:

  1. To lower CAPEX and OPEX costs, the transition from proprietary hardware to commodity multi-core platforms is essential.  In fact, this transition is already highly visible with much of the LTE infrastructure in the mobile world and cloud computing infrastructure being based on standard server platforms. The new Intel® Platform for Communications Infrastructure is a great example of how the latest multicore processor technology is being optimized for both application and communication processing.
  2. The ‘hardware-driven’ network will transition to a software-based network. Still in the early stages of study and pilot implementations, the ideas, protocols and products behind Software Defined Networking are just starting to emerge. Increased flexibility and improvement in resource utilization are the drivers here. And with a significant group of network operators, equipment vendors and network technology providers behind the effort, SDN will only gain momentum.
  3. The telecom network world and the enterprise (or Data Center) network world will become one-and-the-same.  Why? Again, very simply stated, they are both moving to highly virtualized, software-based networks built on the standard server model.  The differences between the implementation approaches are vanishing.  One interesting question is ‘will the organizational (some would say political) differences vanish too?’

 

Network Convergence paradigm shift – absolutely, a revised definition of the concept. Revolutionary – absolutely, big-time changes that will be endorsed (and resisted) on many fronts. Essential to continue to create increasingly capable (and essential), cost-effective networks – absolutely.

 

Eric Carmès, CEO of 6WIND has shared some of his thoughts on the network convergence paradigm shift in an interview for SDNCentral. Check it out.

Within both public and private cloud data centers, the number of Virtual Machines per server blade is increasing rapidly, leveraging on-going improvements in the performance of the x86 processors used on those blades. Today, a typical server blade in a service provider data center hosts at least 50 VMs, with that number expected to grow to hundreds within a few years.

 

Because of this growth in the number of VMs running on each server blade, the data center network needs to expand beyond its current limit at the Top-of-Rack, to a model where a virtual switch on each server blade is used to distribute the increasing volume of network traffic to virtualized applications. This function is typically implemented using the open-source Open vSwitch (OVS) or an equivalent proprietary virtual switch.

 

For multi-tenant data centers, high-bandwidth VM-to-VM communication (VM2VM) is mandatory. To isolate and secure VM2VM, however, requires extensive routing, firewalling and load balancing services that extend beyond the basic Layer 2 features provided by a typical virtual switch. Also, multi-tenant architectures require traffic engineering (ACL, tunneling, QoS etc.) to be performed at the server edge in order to provide users with individual, differentiated services.

 

Advanced security policies are necessary to secure both physical and virtual traffic. Within a cloud, these ensure that only certain VMs can access the applications and data owned by other VMs. Between clouds, policies are enforced to ensure that data and traffic for one cloud is not visible to another.

 

Adding to the overall security- and networking-related workload that must now be supported on server blades is the increasing trend towards the use of overlay network technologies, which avoid the 4,094-tunnel limitation of traditional VLANs. Data centers are now adopting the VXLAN and NVGRE protocols, whose 24-bit IDs allow for 16 million tunnels.

 

From the point of view of the data center operator, it is critical that the solutions they select to achieve the virtual switch enhancements described above are fully-compatible with emerging options for data center orchestration such as OpenStack and for Layer-2/-3 management such as OpenFlow. In addition, Carrier Grade reliability is mandatory for enterprise-class data centers.

 

At 6WIND, we recognized these security-driven virtual switch challenges and enhanced the 6WINDGate™ software, already widely deployed in network appliances, to deliver performance improvements and advanced networking features for virtual switches (details here). We’ll be discussing our solutions in our booth number 854 at RSA Conference, hope to see you there!

A few years ago, Intel released its Data Plane Development Kit (Intel® DPDK) software. The DPDK library is provided to OEMs under a BSD licensing model, enabling the design of high-performance networking software based on either pipelined or run-to-completion architectures. It runs in Linux userspace and provides optimizations for Intel® architecture platforms: data plane libraries; optimized NIC drivers (queue and buffer management, packet flow classification and poll-mode operation) with a simple API and a standard tool chain; and a low-overhead, run-to-completion run-time environment optimized for the fastest possible data plane performance.
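
The run-to-completion model mentioned above can be sketched in a few lines (a conceptual model only; real DPDK applications are C programs built on APIs such as rte_eth_rx_burst): a dedicated core polls the receive queue in bursts, with no interrupts, and processes each packet fully before polling again:

```python
from collections import deque

def run_to_completion(rx_queue: deque, handle, burst: int = 32):
    """Conceptual sketch of DPDK's run-to-completion model: a core
    repeatedly pulls a burst of packets from the receive queue and
    processes each one to completion (classify, modify, transmit)
    before polling for the next burst. No interrupts are involved."""
    processed = 0
    while rx_queue:
        for _ in range(min(burst, len(rx_queue))):
            pkt = rx_queue.popleft()
            handle(pkt)            # full per-packet processing here
            processed += 1
    return processed
```

Avoiding interrupts and context switches in this way is what lets a polling core sustain millions of packets per second, at the cost of dedicating that core entirely to the data plane.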

 

DPDK is a key environment for enabling the use of Intel architecture platforms in data plane applications. DPDK, however, is a complex technology and in-depth networking software expertise is required to use it efficiently. Since 2007, 6WIND has been working with Intel to optimize its 6WINDGate™ networking software suite for Intel embedded processors and to provide the most advanced and comprehensive suite of pre-integrated data plane and control plane protocols on top of DPDK, including high availability capabilities.

 

As a member of the Intel Intelligent Systems Alliance, 6WIND provides support either for the stand-alone Intel DPDK or for DPDK integrated with 6WINDGate. Therefore, customers can benefit not only from 6WIND’s full support of DPDK but, more importantly, from the in-depth system expertise 6WIND developed through designing high performance networking software.

 

You can download the Intel – 6WIND white paper “Optimized Data Plane Processing Solutions using the Intel® DPDK” from our Web site for more information about 6WINDGate for Intel architectures.

 

In addition to functions and NIC drivers provided with the standard DPDK distribution, 6WIND also provides valuable add-ons to DPDK for increased system functionality and performance. These include:

  • Support for non-Intel NICs such as the Mellanox ConnectX®-3 EN series,
  • Crypto support via both Intel® Multi-Buffer Crypto software and "Cave Creek" acceleration hardware,
  • DPDK drivers for external crypto engines such as Cavium's Nitrox®,
  • Drivers that enable DPDK in virtualized environments, such as VMware VMXNET3 and virtio for KVM and Xen.

 

Providing these add-ons matters because networking software developers are accustomed to having all the required drivers available in a Linux environment, and they expect the same in the DPDK environment. Missing drivers are one of the most significant limitations I hear about from engineering teams using DPDK.

 

6WIND provides these add-ons as part of a private DPDK branch that 6WIND maintains and keeps fully synchronized with Intel's ongoing releases of the baseline library.

 

We encourage you to check our roadmap for future add-ons and we would be very pleased to learn about your DPDK requirements so that we can better adapt our plans to meet them.
