
Embedded Community

6 Posts authored by: charlieashton

I spent a couple of days last week at the Open Networking Summit in Santa Clara. The conference covered a wide range of business and technical subjects relating to Software Defined Networks, with the topic of OpenFlow receiving particular attention.

 

In a keynote address on the first day of the conference, Urs Hoelzle, Senior Vice President at Google, described how Google used OpenFlow as part of a major overhaul of its internal networking infrastructure to reduce costs and increase efficiency. This internal “G-scale” network connecting Google’s vast data centers worldwide actually carries more traffic than its external “I-scale” customer-facing network.

 

Hoelzle presented a fascinating description of how Google approached the massive task of swapping their network hardware while, of course, ensuring no disruption to on-going traffic. Having concluded when they started the project that there was no off-the-shelf networking equipment “even remotely suitable for this task”, Google developed their own 128x10G chassis using merchant silicon, ready for deployment starting in early 2010. They then installed this in each data center, using a method whereby they first pre-deployed the equipment at a site, taking down half the site’s networking equipment and connecting it to the new system. After testing to verify that the upgrade worked, the engineers would then repeat the process for the remaining 50 percent of the site’s equipment. By early this year, Google’s entire internal network was running the new, OpenFlow-based equipment.

 

Google now has a central traffic engineering controller that optimizes routing and provides more predictable performance. OpenFlow allows forwarding decisions to be made by software running on separate servers, rather than by the control software embedded in each switch. By separating packet switching from network control, Google gains simpler and more direct control over its network.
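
To make that separation concrete, here is a minimal sketch of an OpenFlow controller application. It uses the open-source Ryu framework purely for illustration (Google’s traffic engineering controller is proprietary and was not described at this level of detail); the handler installs a single flow rule when a switch connects, steering traffic for an example prefix out a chosen port:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TrafficSteeringApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Match IPv4 traffic destined for 10.1.0.0/16 and send it out port 2.
        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_dst=('10.1.0.0', '255.255.0.0'))
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # The flow rule lives in the switch; the decision logic lives here.
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=100,
                                            match=match, instructions=inst))

Run with ryu-manager against any OpenFlow 1.3 switch. The point is simply that the forwarding decision is expressed in ordinary application code on a server, not in each switch’s embedded control plane.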

 

Many speakers pointed to Google’s announcement as a key proof point for OpenFlow, demonstrating that the technology is usable in the world’s largest networks and delivers quantifiable business benefits. This view was certainly reinforced by other case studies presented at the conference, by organizations as diverse as Kindred Healthcare, Indiana University and NTT Communications.

 

One of the questions debated at the conference was how quickly OpenFlow will gain traction in broader enterprise and data center applications. A couple of speakers pointed out that Google, of course, has access to resources far beyond what’s available to most IT organizations, while enterprises and service providers are naturally hesitant to introduce new technology into networks as complex as theirs.

 

Martin Casado, Nicira’s CTO, talked about the value of OpenFlow within virtual switches (announcing that Open vSwitch is now included in the Linux kernel), but recommended using Layer 3 to manage the external fabric interconnect. Speakers from Dell, HP and other companies mentioned that merchant switch silicon is not yet fully optimized for OpenFlow.

 

A common thread within the presentations was the likely coexistence between existing networks and SDN. Speakers from Cisco and Juniper both talked about this, explaining that they expect to leverage traditional routing protocols with SDNs for the foreseeable future. Cisco described how SDNs should be linked to analytics and policies, using visibility into the control, data, management and transport planes to drive more intelligent decisions.

 

Finally, there were some interesting discussions about what “killer apps” might emerge as the driving force behind Software Defined Networking and OpenFlow. Network virtualization was mentioned, as was fine-grained policy for Bring Your Own Device scenarios. A couple of people talked about a future with “app stores” providing off-the-shelf networking applications that would (presumably) run on standard, secure, risk-free networking APIs. Personally, I would settle for not having to reboot my home router every time there’s a full moon, or a decent GUI for updating its firmware.

 

What are your thoughts on this topic? What do you expect to happen in terms of OpenFlow’s adoption in enterprise networks and data centers? What are the key issues (both business and technical) that the industry needs to resolve to accelerate this adoption? Is there a “killer app” for this technology that will trigger a sudden increase in deployments?

In last week’s blog I mentioned a few of the interesting new products featured at RSA Conference and discussed one of the common threads from conversations with attendees, which was the need for increased performance and scalability in the low-level packet processing functions that are a critical part of many of these products.

 

From a system architecture perspective, we also heard consistent comments about the need for efficient implementations of hybrid systems based on multicore architectures.

 

In this context, the term “hybrid system” refers to one comprising multiple (typically two) processor architectures. One architecture (usually x86) is used for the control plane and another for the data plane. The data plane architecture can be either a standard multicore platform such as Cavium or NetLogic/Broadcom, or in some cases a specialized network processor from a company such as Netronome.

 

It’s interesting to talk to system designers about the tradeoffs that they consider when choosing a hybrid system architecture as opposed to a unified approach (where both the control plane and data plane are implemented on the same architecture and, often, on the same processor).

 

On the one hand, a unified system brings compelling advantages in terms of a single software development environment, a more straightforward hardware design and the ability to work with a single processor supplier.

 

On the other hand, a hybrid system approach creates the challenges of dealing with two (generally very different) software development environments, two operating systems, software integration issues and a more complex hardware design. Also, the overall project schedule is now subject to the delivery dates for two multicore processors rather than one.

 

The message that we received from attendees at RSA Conference is that, in specific cases, the increased complexity and risk of a hybrid design is still outweighed by the overall system-level advantages that can be achieved. Generally, these advantages are reduced system cost, higher overall system throughput and increased performance on specific security-related functions which typically are performed in dedicated offload engines in a hybrid system.

 

Within 6WIND’s software, we have recognized this need and provide highly-efficient support for hybrid system architectures. All the 6WINDGate control plane code and most of our data plane code is architecture-independent. Architecture-specific abstractions and optimizations are performed within a data plane module called the “Fast Path Networking SDK” or “FPN-SDK”.
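
As a purely hypothetical sketch of that layering (the actual fast path and FPN-SDK are written in C, and none of the names below come from the real API), the idea is that the protocol code calls a small architecture-neutral interface, and only a thin backend changes per processor:

class FPNBackend:
    """Architecture-specific primitives the portable fast path depends on."""
    def rx_burst(self):
        raise NotImplementedError
    def tx_burst(self, packets):
        raise NotImplementedError

class SimulatedX86Backend(FPNBackend):
    """Stand-in for an x86 poll-mode-driver backend, simulated with lists."""
    def __init__(self, inbound):
        self.inbound, self.outbound = list(inbound), []
    def rx_burst(self):
        burst, self.inbound = self.inbound[:32], self.inbound[32:]
        return burst
    def tx_burst(self, packets):
        self.outbound.extend(packets)

def forward(packet):
    """Architecture-independent protocol logic (trivial passthrough here)."""
    return packet

def fast_path_iteration(backend):
    """Portable data-plane loop body, identical regardless of the backend."""
    backend.tx_burst(forward(p) for p in backend.rx_burst())

backend = SimulatedX86Backend(inbound=[b"pkt%d" % i for i in range(4)])
fast_path_iteration(backend)
print(backend.outbound)   # [b'pkt0', b'pkt1', b'pkt2', b'pkt3']

Supporting a new processor then means writing another backend, not touching the protocol code, which is the property that lets protocol work and architecture work proceed in parallel.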

 

From our customers’ point of view, they license the 6WINDGate stack from us, including the protocols that they need, along with the appropriate FPN-SDK modules for the processors that they select. This architecture also enables us to work on new protocols and new architecture support as parallel activities, which brings a high degree of efficiency to our product development process.

 

What do you see as the trends and tradeoffs in terms of hybrid vs. unified multicore systems in networking and security applications? Will there always be a market segment where hybrid systems are the optimum solution?

I spent most of my time at RSA Conference last week working our booth, with limited time to walk the show floor and explore the wide range of new products on display. In conversations with attendees and exhibitors, however, I was struck by the breadth of new solutions being discussed at the event, as well as the diversity of challenges being addressed across the network security category.

 

Among the interesting new products featured on the show floor, Cisco demonstrated a new application-aware firewall product, Cisco ASA CX, as part of its SecureX strategy. Cisco ASA CX is an extension to its Adaptive Security Appliance that offers control over 1,000 different applications, including Facebook, Google+, LinkedIn, Twitter and iTunes. The application-aware firewall product breaks those applications down into more than 75,000 micro-applications, or application components, reporting on how much network bandwidth is consumed by specific applications and application features.

 

Cyberoam introduced a Web Application Firewall (WAF) feature for its UTM appliances that adds a layer of protection to corporate websites and web-based applications through behavior detection. The WAF implements a website flow detector that can apparently “self-learn” the legitimate behavior and responses of web applications, repelling customized and automated attacks.

 

Lumeta showed a feature called Enhanced Perimeter Discovery (EPD) within their IPsonar network discovery software, which identifies the network perimeter and demarcates between known and unknown devices in a network. EPD uses a targeted approach to identify devices on the network that have the ability to pass traffic into unauthorized networks. When systems forward traffic to unknown, unauthorized or untrusted networks, EPD detects and sends an alert on the complete context of the network conversation, including details on the device or host and the addresses and networks connected to, potentially identifying holes in a network’s perimeter.

 

McAfee announced a new mobile security suite with privacy protection for devices, data and applications. McAfee Enterprise Mobility Management (EMM) 10.0 includes security updates for enterprise customers to enable “bring-your-own-device” (BYOD) practices in the enterprise. EMM software allows employees to choose their devices, while offering access to mobile corporate applications. New features and functionality include email sandboxing, the ability to block iCloud backup for iOS, extended security policies, increased scalability and application blacklisting for Android and iOS.

 

WatchGuard showed two new UTM appliances, the XTM 25 and XTM 26. These are designed for small businesses, wireless hotspots and branch offices. They provide HTTPS inspection and Voice over IP (VoIP) support, as well as options for application control and all other WatchGuard security services (WebBlocker, spamBlocker, Gateway Antivirus, Intrusion Prevention Service, Reputation Enabled Defense and LiveSecurity Service). Identity-based firewalling is used with integrated VPNs to create tunnels for Apple iPad, iPhone and other iOS-based devices, or to enable remote connectivity for mobile employees.

 

That’s just a small snapshot of a large number of new products announced at the show. We’ll discuss others in a follow-on blog next week.

 

When talking to product managers at the companies listed above, as well as at many others, a common thread was the need for increased performance and scalability in the low-level packet processing functions that are a critical part of many of these products. As the scope of threats and hacks grows, so does the functionality (and complexity) of the security products themselves, so it becomes critical to maximize the amount of CPU bandwidth available for application-level stacks and software. In turn, this drives a need to perform the low-level packet processing and security functions as efficiently as possible, minimizing the CPU resources required for Layer 2 through Layer 4 functions while maximizing their overall performance. Since this is one of the major benefits of the 6WINDGate software, we had many interesting discussions about these challenges with attendees who visited our booth.

 

DPI was another major thread at the conference. OEMs are implementing a wide variety of DPI-based functions in their network security products. If you’re interested in understanding how to incorporate DPI technology in your product as quickly and painlessly as possible, check out this Thursday’s webinar, “Leverage DPI in your next networking product - we won't tell your boss about the time you saved” (click here to register).

 

Were you at RSA Conference last week? What were the main trends that you saw at the event?

Are you going to RSA Conference this week? If so, please stop by and visit with 6WIND and BroadWeb. We’ll be discussing the Integrated Software Solution for Intrusion Prevention that we announced on February 24.

 

As mentioned in that announcement, 6WIND and BroadWeb have worked together to develop a pre-integrated software solution for IPS systems, which will accelerate OEMs’ time-to-market while delivering industry-leading performance.

 

As readers of this blog will know, the proliferation of network applications such as gaming and peer-to-peer file sharing has created business-critical challenges for network administrators and service providers. Networks must be constantly monitored for intellectual property leakage and other types of abuse.

 

While Deep Packet Inspection (DPI) technology provides network administrators with better visibility and control, it’s important for the DPI subsystem to perform at line rate while incorporating signatures that precisely describe the behavior of attacks and applications. That’s the key issue that 6WIND and BroadWeb have addressed through an integration of their respective technologies.
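
For readers unfamiliar with what a DPI signature does, here is a deliberately naive illustration (this is not BroadWeb’s engine, and the signatures are just examples; real products compile signatures into pattern-matching automata and protocol decoders precisely because per-packet regex scans like this cannot keep up at line rate):

import re

# Example application signatures: a well-known BitTorrent handshake prefix
# and a generic HTTP request line.
SIGNATURES = {
    "BitTorrent handshake": re.compile(rb"\x13BitTorrent protocol"),
    "HTTP request":         re.compile(rb"^(GET|POST|HEAD) [^ ]+ HTTP/1\.[01]"),
}

def classify(payload: bytes):
    """Return the names of all signatures that match a packet payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))
# ['HTTP request']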

 

By integrating the 6WINDGate™ packet processing software with BroadWeb’s DPI engine, the two companies have provided a best-in-class solution for network security appliances. Performing single-pass scanning, the combined solution delivers a throughput of 20Gbps per core on a dual Intel® Xeon® processor platform with a clock speed of 2.67GHz. The overall system-level performance of the combined solution scales linearly with the number of processor cores configured to run 6WINDGate, enabling OEMs to deliver network security products that simultaneously perform both IPS and Application Identification functions at line rate.

 

With DPI technology being widely integrated into network appliances, this partnership overcomes the performance bottleneck of DPI technology and offers OEMs improved time-to-market along with high efficiency. The combined solution can easily be integrated into a variety of network devices targeted at both telecommunication and enterprise markets.

 

We’ll be featuring this solution, as well as the rest of our product lines, in our booths at RSA Conference this week (6WIND: booth 242, BroadWeb: booth 2133). Hope to see you there!

Last week, FierceBroadbandWireless published an interesting study from Maravedis, outlining their projections for the growth in LTE adoption over the next twelve months.

 

Amongst other predictions, the study forecasts that the worldwide LTE subscriber base will reach 54 million by the end of 2012, 46% of whom will be in North America and 36% in the Asia-Pacific region. LTE networks are expected to be up and running in more than ten countries, including China, India, Malaysia, South Korea, Taiwan and the United States. China Mobile expects to launch the first truly large-scale deployment this year which will drive major economies of scale in handsets.

 

Maravedis notes a significant slowdown in mobile WiMAX subscribers. With 28 million users at the end of 2012, the mobile WiMAX subscriber base is expected to decline as major mobile WiMAX operators migrate their networks to LTE. Operators forecasted to make WiMAX-to-LTE transitions include Clearwire (US), Yota (Russia) and P1 (Malaysia).

 

The study indicates that there will be 160 commercial LTE deployments in service by the end of 2012, an increase of more than 100% from the 61 in service at the end of 2011.

 

Maravedis anticipates that Ericsson, Huawei and Nokia-Siemens Networks will continue to be the leading infrastructure vendors chosen by LTE operators during 2012, with Nokia-Siemens Networks receiving the largest share of contracts awarded.

 

Cloud RAN is identified as one of the key technologies for 2012 (see an earlier Forum post on this topic “Cloud RAN Outlook: Fair or Cloudy?”). Vendors such as Ericsson have begun to publicly announce their success in deploying cloud RAN equipment, while processor suppliers such as TI have announced specialized SoCs. At the same time, Alcatel-Lucent has demonstrated their small remote radio head solutions at several trade shows.

 

The contents of this report mirror the activity and growth that we at 6WIND are seeing within our customer base. Our high-performance packet processing software is already deployed in a large number of LTE networks, some of which are in trials while others are in full commercial operation. We see strong interest from our OEM customers in the challenges of increasing network capacity, designing for maximum scalability, reducing cost and accelerating time-to-market.

 

What key trends are you seeing in the LTE market? What do you see as the major challenges for 2012?

This post on the Multicore Packet Processing Forum http://www.multicorepacketprocessing.com/%e2%80%9ccloud-ran%e2%80%9d-outlook-fair-or-cloudy/ outlines some conflicting viewpoints on the "Cloud RAN" concept for LTE networks.
