
Over the past couple of months, we’ve been involved in a series of conversations about the challenges of using OpenStack in telecom infrastructure. Since OpenStack was designed for enterprise-class IT applications, these challenges are formidable, but we’ve demonstrated that they can be solved.

 

On June 15th we participated in a webcast with Peter Willis from BT. Peter described in depth the issues he has observed through a series of detailed evaluations of vanilla (non-hardened) OpenStack distributions, and we talked about the solutions that Wind River has incorporated into the Titanium Server virtualization platform.

 

We discussed this topic in more detail in a white paper that you can download, and we highlighted some additional, equally critical OpenStack enhancements in a post published around the same time, “Don’t be scared of OpenStack for telecom”.

 

Now we’re delighted to bring you a more detailed, technical deep-dive into this topic. On July 20th, SDxCentral will be hosting a webinar, “Using OpenStack to solve real-world NFV problems today and tomorrow”, presented by Ian Jolliffe, Chief Architect of Titanium Server, who has been working on these problems for longer than pretty much anyone else in our industry.

 

In this post, we’ll give a preview of some of the key topics that Ian will address in this webinar. To get all the details, make sure you register here and block the time on your calendar to enjoy Ian’s in-depth explanation of what’s required to implement a telco-grade version of OpenStack.


Wind River: leaders in the community

 

If you review the list of companies contributing to the various OpenStack projects, you’ll see that the vast majority of them are in the enterprise market, working on patches and enhancements designed for enterprise needs. Wind River is one of very few companies focused on solving important OpenStack problems that are critical for telecom applications.

 

Our strategy is straightforward: once we develop a telecom-oriented patch or enhancement, we verify with our customers that it does indeed solve an important problem and then upstream it back to the community for inclusion in a future OpenStack release. This process ensures that our technology has indeed solved real-world problems before it’s upstreamed.

 

Since most of the maintainers are focused on enterprise topics, it sometimes takes longer than we would like for our contributions to be accepted, but we work with the maintainers to streamline the process as much as possible.

 

Our experts focus on ten core OpenStack projects, with the majority of our work being on the Nova Compute project, which happens to be the largest project in the community. A recent glance at Stackalytics data showed us ranked in the top 10% of contributors, right next to AT&T in the overall table.

 

System upgrades: the #1 issue for Operations teams

 

When we talk to the Operations teams at service providers about their strategies for network virtualization, their top concern is inevitably how to handle system upgrades. Even with traditional physical infrastructure, this complex activity can cause costly, unplanned service outages if performed incorrectly. It’s critical for service providers to deliver rolling upgrades in real time, with operating systems, applications, databases and protocols smoothly updated in a way that avoids any disruptions to customers’ services.

 

OpenStack hasn’t addressed this scenario because the current community focus is on API compatibility and configuration changes at the project level. This implies significant, error-prone manual intervention to implement upgrades. In an enterprise environment, downtime can be planned and systems can be shut down for hours while new software is installed, deployed, tested and if necessary rolled back.

 

As Ian will discuss in the upcoming webinar, support for seamless updates has been a major focus for the developers of Titanium Server. They’ve implemented a wide range of OpenStack enhancements that resulted in a telco-grade platform that enables hitless updates between releases. Live migration of Virtual Machines (VMs) ensures that there’s no service downtime during system updates and the installation of new releases is completely transparent to end users. None of this is possible with a vanilla enterprise-class OpenStack distribution.
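As an illustration of the kind of workflow this enables, here is a minimal sketch (assuming a recent openstacksdk Python library; the cloud profile name and host name are hypothetical placeholders) of draining a compute node by live-migrating its VMs ahead of a maintenance window:

```python
# Hedged sketch only: drain a compute node by live-migrating its VMs before
# maintenance, using the standard openstacksdk. The cloud profile name
# ("mycloud") and host name ("compute-0") are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# List the VMs currently scheduled on the host (requires admin credentials).
for server in conn.compute.servers(all_projects=True, host="compute-0"):
    print(f"Live-migrating {server.name} ({server.id}) off compute-0")
    # Let the scheduler choose a destination host; "auto" lets Nova decide
    # between shared-storage and block migration.
    conn.compute.live_migrate_server(server, block_migration="auto")
    # Block until the VM reports ACTIVE again on its new host.
    conn.compute.wait_for_server(server, status="ACTIVE", wait=600)
```

The migration call itself is standard OpenStack; the Titanium Server work described above is about making the end-to-end update process around it automated and hitless.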


Critical challenges for OpenStack in telecom

 

In order for OpenStack to be usable in telco applications, our engineers had to add a number of critical new features.

 

Besides the hitless update technology mentioned above, these features include:

  • the ability to perform live migration of VMs, including VMs based on Intel® DPDK
  • SR-IOV support for direct VM access to Network Interface Cards (NICs), implemented in a way that is deployable and manageable
  • scheduler enhancements, including NUMA awareness
  • fast fault detection with automatic recovery
  • Enhanced Platform Awareness (EPA)
  • guaranteed recovery from startup storms and stampedes
  • telco-grade AAA security (authentication, authorization and accounting)
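To make the scheduler-related items concrete, here is a hedged sketch of how NUMA and CPU-pinning hints are commonly expressed as standard upstream Nova flavor extra specs (assuming a recent openstacksdk; the flavor name and sizes are hypothetical, and this shows generic OpenStack usage rather than Titanium Server’s specific extensions):

```python
# Hedged sketch only: a Nova flavor carrying standard EPA/NUMA hints.
# Flavor name and sizes are placeholders; the extra-spec keys are generic
# upstream Nova keys, not Wind River-specific extensions.
import openstack

conn = openstack.connect(cloud="mycloud")

flavor = conn.compute.create_flavor(
    name="vnf.numa.2c4g", vcpus=2, ram=4096, disk=20)

conn.compute.create_flavor_extra_specs(flavor, {
    "hw:numa_nodes": "1",          # keep the VM within a single NUMA node
    "hw:cpu_policy": "dedicated",  # pin vCPUs to dedicated host cores
    "hw:mem_page_size": "1GB",     # back guest memory with 1 GiB huge pages
})
```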

 

All these features and more will be covered in detail in the webinar.

 

Open solutions bring interoperability and portability

 

“Open solutions” remain a hot topic throughout the telecom industry. Service providers all say they want them and vendors all say they provide them, but what does the term really mean? A recent survey by TelecomTV produced some interesting answers. First, there was strong consensus amongst the respondents that open solutions ensure interoperability between vendor products while mitigating the risk of vendor lock-in. Second, there was clear acknowledgement that, while open-source projects and open standards provide a solid foundation, they need vendor support and enhancements in order to be commercially viable.

 

This topic will be a major focus of Ian’s presentation. He’ll explain how we work with ecosystem partners through the Titanium Cloud ecosystem to validate the correct operation of their hardware and software products when used with Titanium Server. He’ll relate this to our strategy of developing telecom-oriented patches and enhancements, verifying with our customers that they solve important problems and then upstreaming them to the community.

 

Ian will talk about the comprehensive processes we’ve established to validate and guarantee the compatibility and interoperability that our customers expect, so that they can implement multi-vendor, end-to-end use cases while selecting from pre-integrated elements. This enables our customers to accelerate their deployment cycles while minimizing their schedule risk, representing major business advantages as the industry transitions to virtualized solutions.

 

Be sure to register for the webinar now: you can look forward to an in-depth discussion of all the topics discussed here as well as many other innovations that Wind River has implemented in order to make sure that OpenStack is finally usable in demanding telco applications.


Converging in the Fog

Posted by ArtemisDaphnis Jun 22, 2016

By Mychal McCabe, Vice President, Corporate Marketing at Wind River

 

A reasonable person could be forgiven for thinking that the distinction between centralized and distributed computing models is beginning to disappear, with Fog architectures bringing computation, communication, control and storage tantalizingly close to edge devices.

 

Hype around a handful of technologies is fueling the fire, with each viewed as a phenomenon in a distinct market or market segment rather than as part of the set of innovations that will accelerate convergence of the network, data, and control into a pervasive Internet of Things.

 

IoT is the most visible thanks to popular consumer devices, broad applicability of the concepts across market segments, and the sheer opportunity for everyone along the value chain. SDE, or Software Defined Everything, is a catchall for the variety of technology domains where software is being decoupled from specialized hardware: Software Defined Networking (SDN), Software Defined Data Center (SDDC), and Software Defined Storage (SDS). Finally, Network Functions Virtualization (NFV) virtualizes network services that are carried out today by proprietary, dedicated hardware.

 

From an IT point of view, Software Defined Everything addresses everything so long as it happens to be in a datacenter or the enterprise. From an OT point of view, there are specific applications where the business benefits of software defined architectures are compelling enough to prompt something that in the early days of our enthusiasm for the Internet of Things seemed unlikely: a massive rip and replace of brownfield infrastructure resulting in a connected and more intelligent edge.

 

The big cloud players see this increase in intelligence and connectivity as an opportunity for their analytics capabilities to migrate further toward the edge (simple analytics on a single device, more complex multi-device analytics, and machine learning) while at the same time opening up the possibility of specific workloads and some classes of control logic migrating from the edge to the cloud. Not surprisingly, incumbents in the OT domain are leveraging their expertise and footprint at the edge to move toward the cloud.

 

Fittingly, the current proving ground for the application of these technologies, which relate to one another in ways that are only beginning to become clear, is what our customers and our partners are increasingly referring to as The Fog.

 

The fixed-function nature of embedded systems –software “cut to fit” for specialized hardware, built from the ground up to do one thing– results in exceptional performance and reliability that cannot be easily replicated when migrating control logic to the cloud. The trade-off comes in the form of added complexity, as addressed in the OpenFog Consortium FAQ:

 

“While adding fog computing to an IoT network would seem to add complexity, that complexity is sometimes necessary. In certain use cases, fog computing solves the inadequacies of cloud-only models, which have serious latency, network bandwidth, geographic focus, plus reliability and security challenges.”

 

This is what the early days of the Internet of Things look like: aggressive application of proven, disruptive IT technologies and lifecycle approaches –virtualization, cloud, analytics, DevOps, containers, etc.– to OT: edge computing applications that have historically been the domain of embedded systems.

 

For additional information from Wind River, visit us on LinkedIn.

At Wind River, we find that our Titanium Server network virtualization platform is regularly used for NFV evaluations performed by service providers. As the only commercially-available NFV cloud designed specifically to meet the needs of telecom networks, it’s the obvious choice for running edge, access and core applications.

 

In many cases, we don’t get a lot of feedback from the service provider about the specific technical or business challenges that they’re looking to address. So we were delighted when China Telecom opened up with some interesting information about a recent evaluation of Titanium Server. We had a fascinating discussion with Ou Liang, Senior Engineer, Head of NFV Infrastructure Technology and Solutions at the Guangzhou Research Institute of China Telecom.

 

In this post we’ll summarize Mr. Ou Liang’s findings from the evaluation. We’ll explain what his team sees as the early use cases for NFV, the critical challenges that must be addressed in NFV infrastructure, why they picked Titanium Server as the platform for their evaluations and the results that they achieved.

 

Focus on time-to-market for telecom NFV applications

 

China Telecom sees network virtualization as a means to accelerate the deployment of fixed-line and mobile broadband functions such as vEPC, vIMS, VoLTE, video quality assurance and IoT. With over 200 million subscribers, they anticipate that faster time-to-market will quickly translate into significant improvements in Average Revenue per User (ARPU) while network virtualization helps to reduce their operational expenses.

 

Critical challenges for NFV infrastructure

 

China Telecom needs to maintain Carrier Grade reliability for the services that it delivers to its vast number of subscribers. While the term “Carrier Grade” implies a wide range of capabilities and features, the key considerations are reliability, robustness and real-time packet forwarding performance, especially in three-layer decoupling scenarios.

 

An “open” solution is essential. China Telecom has one of the most succinct definitions of “open” that we’ve heard recently: to them it simply means that the solution must be fully compatible with all the applicable standards and interfaces. At the same time, the vendor must allow for on-going product enhancements and compatible differentiation, so that services can be customized to meet China Telecom’s business needs while remaining compliant with the baseline specifications.

 

It’s important to China Telecom that they collaborate with a vendor that can assist them with the complex work of developing and integrating the infrastructure elements that they need.

 

Finally, with an aggressive schedule for deploying NFV, China Telecom doesn’t have time to wait for a vendor to work out the bugs in its infrastructure solution and delay them through multiple product iterations. They need a solution that’s ready for deployment now and already proven in the market.

 

Why Titanium Server?

 

Mr. Ou Liang was very clear about his reasons for selecting Titanium Server for this NFVi evaluation.

 

“Our testing requirements demanded ultra-reliability, robust and real-time forwarding performance in which high availability is a chief consideration concerning NFVI. Titanium Server helped us explore whether the hierarchical decoupling of the NFV infrastructure could be feasible. It also delivered on the key need for a commercial ready NFVI platform to be the foundation for our demanding test scenarios.”

 

In addition to the technical benefits of Titanium Server, China Telecom was encouraged by the wide range of products from industry-leading partners that have been validated through the Titanium Cloud ecosystem. They noted that while service providers’ internal tests are vital, they welcome mutual certification among ecosystem vendors to optimize go-to-market time, improve efficiency and reduce risk. They believe ecosystem partners play an important role in helping vendors optimize their products for local markets, and they were pleased with the number of Asian companies participating in Titanium Cloud.

 

The end result: quantifiable benefits

 

Within this evaluation, China Telecom used Titanium Server as the commercially ready infrastructure platform and compared it with the community OpenStack platform to determine the key differences between the two platforms in terms of robustness, high availability and performance. This NFV infrastructure-focused evaluation also contributed a number of end-to-end business validations across several scenarios and helped establish China Telecom’s internal NFVi criteria for further NFV production deployment.

 

Our work with China Telecom has already produced some exciting results and we’re very grateful for the opportunity to collaborate with Mr. Liang and his team. We’re eagerly looking forward to the next steps.

It’s been an interesting few days for OpenStack conversations. Two very different threads helped to confirm what our customers have known for a while: if you’re contemplating using OpenStack in a telecom deployment you can stop listening to the nay-sayers and go right ahead. Just make sure you use an OpenStack platform that’s been properly hardened to meet real telco needs.

 

A chat with BT

 

The first conversation was a webcast with Peter Willis from BT. If you weren’t able to see it live, it’s worth watching the archive, because Peter described in depth the issues he has observed through a series of detailed evaluations of vanilla (non-hardened) OpenStack distributions. This isn’t the first time that he’s explained these findings: at SDN and OpenFlow World Congress last year, he gave a presentation titled “How NFV is different from Cloud: Using OpenStack for Distributed NFV”, in which he identified six significant limitations of vanilla OpenStack distributions that could potentially constrain their use for applications like virtual CPE.

 

During our webcast, we chatted about the solutions that Wind River has incorporated into the Titanium Server virtualization platform.

 

I had the opportunity to outline how our enhancements to OpenStack enable Titanium Server to address and resolve the six major challenges that Peter discussed.

 

Peter concluded with the comment that our solution represents “good progress and valuable learnings”, which is very much in line with the feedback that we’ve received from many other service providers. If you’d like more details on how Titanium Server solves critical problems, you can go here to download a white paper with full details.

 

A perspective from AT&T

 

A few days before the webcast, we had the opportunity to discuss some recent comments by AT&T. In an article published on May 9th titled “AT&T Rallies Carriers Around OpenStack”, Light Reading’s Carol Wilson outlined AT&T’s plans to advance OpenStack as a carrier-class cloud platform.

 

 

The article contained an interesting passage on Sorabh Saxena, AT&T's senior vice president of software development and engineering: “Saxena does see some things that need to be addressed with OpenStack, starting with addressing carrier-grade needs for ‘supporting performance, latency, throughput, jitter -- all the needs that a carrier has,’ he says.”

 

The solution is here

 

There’s good news for anyone in the telco world who interpreted this comment to mean “stay away from OpenStack for the foreseeable future”. Wind River’s experts started working on these problems several years ago as part of our development of Titanium Server. We launched the product more than two years ago, customers started adopting it shortly afterwards and it’s available now to accelerate your move into carrier-class network virtualization.

 

OpenStack is an integral component of the Titanium Server platform. The enhancements that we’ve made to OpenStack enable Titanium Server to meet the most demanding requirements that service providers have for service assurance, Virtual Machine (VM) lifecycle management, platform maintenance, telco-grade security and performance. Wind River is committed to contributing all our enhancements to the core OpenStack projects, focusing first on areas where our telecom expertise provides the most value.

 

These enhancements include:

  • Titanium Server is the only carrier-grade network virtualization platform with deployable SR-IOV support. This means greatly improved manageability: no more troublesome hand-crafted configuration files, easy configuration and booting from the GUI, full support for cold migration, resize and pause/resume, improved resource management such as Virtual Function tracking, improved documentation and much better test coverage (see the SR-IOV port sketch after this list).
  • Full platform support for hitless upgrades and patching. Upgrades represent one of the biggest, if not the biggest, barriers to OpenStack deployments. Titanium Server provides full backward compatibility between APIs and data schema compatibility for all platform components. This highly automated process ensures simplicity and predictability during one of the most crucial periods in managing a network. There’s no need to send a complement of engineers to each site to complete an upgrade, which is another great example of how Titanium Server significantly reduces OPEX.
  • Complete, proven solutions for performance and latency problems in network virtualization, already demonstrated through multiple CRAN PoCs with tier-one Telecom Equipment Manufacturers (TEMs) and service providers.
  • Fast fault detection and automatic recovery, which are critical to achieving six-nines availability at the infrastructure level. Compare this with enterprise-class OpenStack distributions that deliver no more than three- or four-nines uptime, way below standard telco requirements.
  • We’ve extended Enhanced Platform Awareness (EPA) in Nova scheduling, adding critical elements to optimize scheduling decisions specifically for network virtualization applications. We’ve extended NUMA zone support to ensure VM, NIC and vSwitch can be scheduled on the same socket, either strictly or as a preference. We have also added strict virtual memory management and have proposed a Newton blueprint for enhanced scheduler error reporting (improving troubleshooting effectiveness).
  • Live VM migration, including the live migration of DPDK-based VMs. Even the latest OpenStack release, Mitaka, only has preliminary support for live migration, which is definitely not telco-grade.
  • And in the context of Mitaka, we’re among the first commercially-available and supported telco virtualization platforms based on this release.
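As a companion to the SR-IOV item above, here is a hedged sketch of how a VM is typically attached to an SR-IOV virtual function through the standard Neutron and Nova APIs (assuming a recent openstacksdk; the cloud, network, image and flavor names are hypothetical placeholders):

```python
# Hedged sketch only: boot a VM on an SR-IOV virtual function using the
# standard OpenStack APIs (vnic_type=direct). All names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

sriov_net = conn.network.find_network("provider-sriov")

# A "direct" port asks Neutron to bind the VM to an SR-IOV VF on the NIC.
port = conn.network.create_port(
    network_id=sriov_net.id,
    binding_vnic_type="direct",
)

server = conn.compute.create_server(
    name="vnf-sriov-demo",
    flavor_id=conn.compute.find_flavor("vnf.numa.2c4g").id,
    image_id=conn.image.find_image("vnf-image").id,
    networks=[{"port": port.id}],
)
conn.compute.wait_for_server(server)
```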

 

Of course we don’t just consume code from the OpenStack community; we participate actively. We are committed to contributing our enhancements to all of the core projects. We are currently focused on Nova, which is the largest and arguably the most important OpenStack project, solving real-world problems that are unique to telecom. This is where our carrier grade expertise adds the most value. In Nova, Wind River ranks in the top 10% of contributor companies, right next to AT&T and ahead of many big-name TEMs and service providers.

 

Ahead of the game

 

So don’t believe all the bad press about OpenStack in telecom. If you pick the right platform you can move ahead now and deploy OpenStack-based infrastructure that delivers the service uptime, performance and maintainability that you need. And most importantly, by leveraging a pre-integrated, proven solution you will be two to four years ahead of your competitors who chose to roll their own platforms.

 

Money, money, money

 

Just think what a two- to four-year advantage in deployment means: not only will you jump-start major reductions in operational costs for your network but you’ll be able to accelerate the agile deployment of new services that generate increased revenue.

Cost reductions plus top-line revenue growth. Why wait?

In a recent post we outlined some of the approaches we’re taking to solve OpenStack-related problems and thereby make OpenStack a viable solution for VM orchestration in telco applications such as virtual CPE. This topic is also covered in a detailed white paper that you can find here.

 

As we talk with service providers and enterprises, the conversations regularly turn to another OpenStack concern which can impose significant cost penalties in applications such as network virtualization and data centers: the limited performance of the virtual routing feature in vanilla OpenStack distributions.

 

In this post we’ll briefly outline this performance problem and explain one solution that has been proven to address it.

 

Neutron Router Challenges

 

All OpenStack distributions include a virtual router (vRouter), typically implemented as a kernel function within Neutron. This vRouter enables traffic to be routed either “East-West” between Virtual Machines (VMs) within the cloud (on the same or different compute nodes) or “North-South” between VMs and external networks.
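For context, here is a hedged sketch of how the Neutron router described above is typically created through the standard APIs (assuming a recent openstacksdk; the cloud, network and subnet names are hypothetical placeholders); the point of the discussion below is that the same interface can be backed by a much faster implementation:

```python
# Hedged sketch only: create a standard Neutron router, set an external
# gateway (North-South traffic) and attach a tenant subnet (East-West
# traffic). Network and subnet names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

ext_net = conn.network.find_network("public")            # external network
tenant_subnet = conn.network.find_subnet("tenant-subnet-1")

router = conn.network.create_router(
    name="demo-router",
    external_gateway_info={"network_id": ext_net.id},
)

# Attaching the tenant subnet lets its VMs reach other subnets and the
# external network through this router.
conn.network.add_interface_to_router(router, subnet_id=tenant_subnet.id)
```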


For many “simple” applications the Neutron vRouter can be used instead of either a vRouter hosted in a VM or a physical router installed as additional hardware in the rack. On the surface this looks like a good deal with apparent cost savings.

 

It’s worth noting at this point that the Neutron vRouter doesn’t provide all the capabilities of a typical physical appliance or its virtualized equivalent. For example, it’s a shared service with static gateway and static routing functions. There’s no support for more complex protocols like BGP or OSPF, nor for ownership by a single tenant. There will always be a need for VNF-based routers in many network virtualization use cases. 

 

Within the limited scope of features that it does provide, the Neutron vRouter has a couple of major constraints.

 

First, the bandwidth of the Neutron vRouter is extremely limited: typically no more than 1% of 10G line rate.

 

Second, although the Neutron vRouter supports Network Address Translation (NAT) and Source NAT (SNAPT), albeit with a slow kernel-based implementation, it does not support Destination NAT (DNAPT) for externally-originated traffic.

 

These limitations mean that the Neutron vRouter is inadequate for many applications, forcing users either to instantiate a VNF-based router or to install a physical router. Either approach imposes a significant cost and power overhead.

 

Titanium Server

 

One solution to these Neutron vRouter limitations is offered by Wind River’s Titanium Server network virtualization platform.

 

Within Titanium Server, the Neutron vRouter is replaced by an Accelerated Virtual Router (AVR) which is implemented as part of the Accelerated Virtual Switch (you can read about the OPEX benefits delivered by the Accelerated Virtual Switch in this post).


Titanium Server’s AVR is fully compatible with all the relevant Neutron APIs, so any software developed to leverage these APIs will run correctly on Titanium Server with no changes needed. It will just run a lot faster. And there’s no risk of vendor lock-in.

 

250x Performance Improvement

 

Testing across multiple use cases has shown that the AVR delivers a massive 250x improvement in throughput and 9x improvement in latency, compared to the Neutron vRouter when running on identical hardware platforms. For example, on 256-byte packets AVR achieves an average throughput of 50% of 10G line rate and an average latency of 51 µs, compared to less than 1% of line rate and 571 µs for the Neutron vRouter.
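To translate those percentages into packet rates, here is a quick back-of-the-envelope calculation; it assumes the standard 20 bytes per frame of Ethernet preamble and inter-frame gap overhead:

```python
# Back-of-the-envelope packet rates for the figures quoted above, assuming
# 20 bytes/frame of Ethernet preamble + inter-frame gap overhead.
LINE_RATE_BPS = 10e9      # 10G line rate
FRAME_BYTES = 256         # packet size used in the tests above
OVERHEAD_BYTES = 20       # preamble (8) + inter-frame gap (12)

line_rate_pps = LINE_RATE_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
print(f"10G line rate @ 256B:     {line_rate_pps / 1e6:.2f} Mpps")        # ~4.53
print(f"AVR at ~50% of line rate: {0.50 * line_rate_pps / 1e6:.2f} Mpps") # ~2.26
print(f"Neutron vRouter at ~1%:   {0.01 * line_rate_pps / 1e6:.2f} Mpps") # ~0.05
```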

 

In terms of features, AVR provides full accelerated support for NAT and SNAPT/DNAPT, removing one of the major obstacles to the use of OpenStack-based routing.

 

Just like the Neutron vRouter, AVR has full support for distributed routing, enabling the vRouter to run on the same compute node as the VMs and removing the need for separate network nodes dedicated to the routing function.

 

Our customers have confirmed that for many telecom-oriented applications, the performance and features provided by AVR obviate the need for either a VNF-based router or a physical router, resulting in significant savings of both cost and power.

 

Finally, AVR is highly efficient in terms of resource utilization. As part of the Titanium Server Accelerated vSwitch, it can be configured to run on as few as one processor core (depending on the bandwidth required by the VMs), maximizing the number of cores available for running revenue-generating VMs. This post provides insights into the OPEX advantages achieved by increased VM density.

 

Cost-Effective Virtual Routing

 

In conclusion, Titanium Server’s Accelerated Virtual Router enables OpenStack-based routing to be cost-effective for a wide range of applications, while transparently retaining full compatibility with software written to leverage the vanilla Neutron vRouter in traditional OpenStack distributions and ensuring there’s no risk of vendor lock-in.

By Mychal McCabe, Vice President, Corporate Marketing at Wind River

 

Change in the technology industry is a cliché; we take for granted the forward tension that comes with our future orientation and a culture obsessed with innovation. So it’s tempting to dismiss the current moment as routine rather than extraordinary. After years of hype surrounding the Internet of Things, Network Functions Virtualization, and Software Defined Networks, the impact that they are beginning to have on mature markets and established value chains seems slow rather than revolutionary.

 

Still there’s every indication that we’re at risk of outpacing ourselves. Not two days after a video captured a car’s autopilot kicking in to avoid a potentially lethal accident, Fortune published an article highlighting glitches in the software of a car produced by the same manufacturer. In the Fortune piece, one owner described having to literally hold his door shut during operation, i.e. while driving down the road. Yet car companies, tech titans, and start-ups are laser-focused on delivering a particular experience: transforming transportation into a service, with autonomous vehicles available on demand.

 

The rapid progress in autonomy is generating headlines and social media buzz that technology enthusiasts follow with a fervor typically reserved for Apple product launches; and yet the relationship between autonomous vehicles of all kinds and our existing, aging infrastructure is poorly understood.

 

If infrastructure comes up at all, it is described as something we’ll need to address eventually, along with things like policy and ethics. The broad outline of a world where we have dedicated lanes for autonomous vehicles is easily drawn, the devilish details less so. What sort of network availability and performance will be required? How will active or passive enforcement be managed? How will all elements of the eventual system be certified, maintained, and updated? Made safe, made secure.

 

Autonomy is not alone. The relationship between infrastructure and more visible technologies shaping how the world will live and work over the rest of this decade is of interest primarily to companies that will need to ensure our next generation infrastructure is at least as dependable as current infrastructure even as the world around us accelerates toward the future.

 

Dedicated, highly-specialized hardware in the networking and storage domains is gradually giving way to Software Defined approaches that will move some of the capabilities of such hardware into applications running on generic hardware. A similar transition is underway in the Industrial market, especially in the oil and gas segment, where control functions that have historically been managed by highly-specialized and redundant hardware are beginning to give way to Software Defined Industrial Systems.

 

Delivering equal or superior performance, reliability, safety, and security as capabilities migrate out of hardware and into applications, as logic migrates from controllers and actuators to fog and cloud, as the number of connected devices skyrockets and the network itself becomes increasingly virtual is non-trivial. It requires deep expertise in the OT, networking, and device domains; and given that these areas of expertise are all Wind River sweet spots, we’re excited to be a part of this Software Defined technology wave.

 

For additional information from Wind River, visit us on LinkedIn.

OPNFV recently held their inaugural Plugfest, hosted by CableLabs at their facility in Louisville, Colorado. These events are critical to OPNFV’s mission of encouraging and facilitating multi-vendor solutions for Network Functions Virtualization (NFV) based around open standards and the open-source community.

  
There are many smiling faces in the photo below and several belong to the Wind River team. Joining 41 participants from 19 companies, our experts deployed Titanium Server as an NFV infrastructure platform running complete implementations of multiple virtualized functions, demonstrating interoperability between several NFV solution vendors.

 

[Photo: Plugfest participants, including the Wind River team]

Tetsuya Nakamura, Principal Systems Architect at CableLabs and a member of the OPNFV Board of Directors, described the accomplishments of the Plugfest in this recent post.

 

As Tetsuya explains, the Plugfest focused on the installation of NFV infrastructure (NFVI), as well as various virtual network functions (VNFs) implementing key telecom functions such as virtual EPC and virtual IMS. These multi-vendor configurations ran on multiple hardware platforms and used multiple installers.

 

The Titanium Server NFVI platform hosted both nanocell and EPC demonstrations. Together with our colleagues from the VNF vendors, we demonstrated full interoperability in a use case comprising a complete virtualized nanocell and virtualized EPC, connecting with the phone over the air and streaming data to the phone.

 

Titanium Server was also used to demonstrate a couple of OPNFV projects, Functest (base system functionality testing) and Yardstick (infrastructure verification).

 

As we talk to our customers about “compatibility”, we find the challenge is increasingly about demonstrating interoperability with other companies in the NFV ecosystem. From the service providers' point of view, open standards avoid the risk of vendor lock-in by encouraging the development of compatible and interoperable solutions by multiple companies. But service providers typically incorporate products from more than one vendor in the complete solution that they deploy, so they need proof that products that should work together seamlessly actually do so. As OPNFV continues to make progress, we expect it to become a de facto standard against which all NFV and NFVI vendors will have to test their solutions.

 

Plugfests like the one hosted by CableLabs are vital as the telecom industry transitions from traditional, fixed-function equipment to disaggregated virtualized solutions that leverage best-in-class products from multiple vendors. Events like this demonstrate that an industry-wide ecosystem can successfully form around open standards and deliver interoperable, compatible products.

 

At Wind River, we were proud to contribute to this inaugural OPNFV Plugfest and we look forward to even more exciting and advanced accomplishments at the next one.
