In the midst of a wave of announcements during Intel Developer Forum (IDF) last week, I have to confess that, for me, the most thought-provoking press release was the one that hit my Twitter feed on Thursday: NASA’s confirmation that the Voyager 1 spacecraft has left the solar system.

Let’s think about this for a moment: Voyager 1 was designed in the early 1970s, using 1960s technology, and it was launched in 1977, before many of the readers of this post were born. In 36 years, it has travelled 12 billion miles and is now 120 times as far from the Sun as the Earth is. Enough of its instrumentation is still working that it’s able to measure magnetic fields, plasma waves and other characteristics of empty deep space.

Equally amazing to me is the fact that we are able to receive and decode this information, transmitted back to Earth at a bit rate of 160bps using a 23W radio transmitter. As NASA loves to explain, that’s the same power as a refrigerator bulb and the signal takes 17 hours to reach us.

I’m sure someone reading this can calculate just how attenuated that signal is when it gets here. Just imagine the sophistication of the signal processing within the Deep Space Network that is required to extract meaningful data from a 23W signal that’s travelled 12 billion miles.
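In case nobody takes up the challenge, here’s a quick back-of-the-envelope sketch in Python. The X-band downlink frequency (~8.4 GHz) and the antenna gains are my own ballpark assumptions, not figures from NASA’s release, so treat the result as an order-of-magnitude estimate only:

```python
import math

# Back-of-the-envelope link budget for Voyager 1's downlink.
# The frequency and antenna gains below are illustrative assumptions,
# not numbers from the NASA announcement.
C = 299_792_458.0                     # speed of light, m/s

distance_m = 12e9 * 1609.34           # 12 billion miles in metres
freq_hz = 8.4e9                       # assumed X-band downlink frequency
tx_power_dbw = 10 * math.log10(23)    # 23 W transmitter

# Free-space path loss: FSPL(dB) = 20 * log10(4 * pi * d / wavelength)
wavelength = C / freq_hz
fspl_db = 20 * math.log10(4 * math.pi * distance_m / wavelength)

tx_gain_dbi = 48.0   # assumed gain of the spacecraft's high-gain dish
rx_gain_dbi = 74.0   # assumed gain of a 70 m Deep Space Network dish

rx_power_dbw = tx_power_dbw + tx_gain_dbi + rx_gain_dbi - fspl_db
rx_power_w = 10 ** (rx_power_dbw / 10)

print(f"Path loss: {fspl_db:.0f} dB")
print(f"Received power: {rx_power_dbw:.0f} dBW (~{rx_power_w:.1e} W)")
```

With those assumptions, the path loss works out to roughly 317 dB and the power arriving at the dish is on the order of 10⁻¹⁸ W, which goes a long way toward explaining the 70-meter antennas and the error-correcting codes.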

And let’s not forget that, early in its lifetime, Voyager 1 performed stunning flybys of Jupiter and Saturn. Many of us have those photos on our walls or as screen savers. And its last service call was on the launch pad. No way to send a technician to swap out a failing component or reseat a connector. Remote firmware upgrades, however, have been performed occasionally and very carefully.

I’m old enough to remember when the two Voyagers were launched. I had lots of hair and a deep fascination with the space program (at least one of those characteristics has survived). For me, it was unfathomable that a machine could travel as far as Jupiter, let alone out of the solar system almost 40 years later.

Surely, this represents the ultimate in rigorous system design, total reliability and a communications system of unsurpassed sensitivity. Everyone who understands the complexity of these systems and the challenges that have been overcome in this program should have the utmost respect for the engineers at JPL and elsewhere who made it happen. After all, their most common design tool was the slide rule.


OK, time to put my slide rule away and share some thoughts about IDF…

6WIND’s booth at IDF was in the “Software-Defined Infrastructure” area, so our neighbors were hardware, software and systems companies focused, like us, on applications such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

Following our announcement on September 9th that our accelerated Open vSwitch (OVS) solution is fully compatible with the Intel® Open Network Platform (Intel® ONP), we demonstrated that solution in our booth.

Attendees seemed to be impressed to see the 6WINDGate™ software accelerating the performance of the standard OVS by more than 10x (68Mpps vs. 6Mpps on ten cores, details here), with no changes required to the OVS code itself.

We talked to people working on enterprise, cloud and telecom (NFV) data centers. They confirmed that improving the number of VMs per server (VM density) by 3x to 9x, which is typically what our software enables, creates a massive advantage in terms of CAPEX and OPEX. After all, their subscribers are essentially paying for cores running VMs, not cores running OVS.
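To make that density arithmetic concrete, here’s a toy core budget sketched in Python. The per-core switching rates are derived from the demo figures quoted above; the 20-core server and the 10 Mpps switching target are my own illustrative assumptions, not 6WIND numbers:

```python
import math

# Toy core-budget arithmetic behind the VM-density claim.
# Per-core rates derived from the demo figures (6 Mpps vs. 68 Mpps on ten cores);
# the server size and switching target are illustrative assumptions.
SERVER_CORES = 20
TARGET_MPPS = 10.0

std_ovs_mpps_per_core = 6.0 / 10     # standard OVS
accel_ovs_mpps_per_core = 68.0 / 10  # accelerated OVS

def vms_left(per_core_rate, cores_per_vm=1):
    """Cores left for VMs after reserving enough cores for switching."""
    switching_cores = math.ceil(TARGET_MPPS / per_core_rate)
    return (SERVER_CORES - switching_cores) // cores_per_vm

std_vms = vms_left(std_ovs_mpps_per_core)
accel_vms = vms_left(accel_ovs_mpps_per_core)

print(f"VMs per server, standard OVS:    {std_vms}")
print(f"VMs per server, accelerated OVS: {accel_vms}")
print(f"Density improvement: {accel_vms / std_vms:.1f}x")
```

With these particular assumptions the improvement lands at 6x, comfortably inside the 3x to 9x range quoted above; the real number obviously depends on the workload and on how many cores each VM needs.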

It’s hard to believe that NFV is less than one year old (that would be less than a thirty-sixth of a Voyager), with the ETSI initiative essentially launched at the SDN & OpenFlow World Congress in October 2012. Despite that short history, it was very clear from conversations at IDF that engineers and architects are firmly focused on practical questions relating to Proofs of Concept (PoCs) and real-world testing.

Many of the questions we received about 6WINDGate were not about the performance acceleration it provides (attendees could see that from the demo, and it’s widely deployed in telecom equipment today) but about how to install it in real networks and start running tests. We heard several requirements relating to NFV PoCs that will be happening by the end of the year, with clear expectations that performance results will be delivered quickly.

From that perspective, there was significant interest in CloudNFV. Knowing that 6WIND is a founder member, several people stopped by our booth for more details. People already knowledgeable about the initiative appeared to look on it very favorably, as a quick approach to a true end-to-end, multi-vendor implementation of the ETSI requirements.

Since our focus for NFV is on data plane acceleration, we had a number of interesting discussions about the impact of virtualization on the system-level performance of network functions. The general consensus seems to be that Virtual Network Functions (VNFs) need to deliver overall cost-performance that’s comparable with traditional physical equipment, otherwise the ROI for NFV just isn’t viable. Of course, meeting this requirement is not easy given the performance limitations of standard virtualized environments. This is a problem that the 6WINDGate solution solves (and a topic for another post).

Interoperability is a major topic for service providers and telecom equipment manufacturers working on NFV. A number of architects confirmed to us that traditional, physical networking equipment will coexist with virtualized solutions for a long time. From a software perspective, solutions are required that provide optimal performance in both environments, for maximum operational efficiency as well as optimized CAPEX/OPEX.

Several attendees also gave us their thoughts on phased approaches to NFV hardware. The low-risk strategy seems to be to first deploy virtualized applications on existing hardware platforms, achieving initial CAPEX/OPEX reductions through improved resource utilization. Next, the virtualized applications can be migrated to customized COTS or server platforms, optimized for NFV workloads. For the ultimate in CAPEX/OPEX improvements, the complete system will finally be transitioned into data centers and into the cloud.

Inevitably, network reliability was a hot topic when we talked to NFV architects. Telecom service providers have set the bar high in this area and, as subscribers, we all have demanding expectations when it comes to service availability and network uptime. For traditional physical infrastructure, the mechanisms of High Availability systems are pretty well understood and service providers know what it takes to achieve “Carrier-Grade Reliability”, whether that means zero-downtime, five-nines or another SLA metric.
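To put numbers on those targets, here’s a quick bit of arithmetic; the nines shorthand is the usual industry convention, and actual SLA definitions and measurement windows vary by provider:

```python
# Downtime budgets implied by common availability targets.
# Simple arithmetic for reference; real SLA definitions vary by provider.
MINUTES_PER_YEAR = 365.25 * 24 * 60

targets = [(0.999, "three nines"), (0.9999, "four nines"), (0.99999, "five nines")]

for availability, label in targets:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability * 100:g}% uptime): "
          f"~{downtime_min:.1f} minutes of downtime per year")
```

Five-nines leaves a little over five minutes of downtime budget per year, which is not a lot of room for a virtualized infrastructure to misbehave.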

Once the network is virtualized and running in the cloud, a whole new set of reliability problems and questions arise that are far beyond the scope of this post. Thinking of a (then) state-of-the-art system that’s still functioning after 36 years, 12 billion miles, extreme temperature fluctuations, killer radiation and absolutely no service calls, maybe we need a concept of “Voyager-Grade Reliability” to keep us communicating seamlessly as all our telecom traffic moves to the cloud?