
Over the past couple of years, “CRAN” has become a term with multiple meanings. Depending on the author and the context, the “C” might stand for “Cloud”, “Centralized”, “Cooperative” or even “Clean”. In fact, we posted a blog last year asking whether it was time to refine these acronyms to get some clarity.


Regardless of which “C” you choose, all the approaches to CRAN are intended to reduce CAPEX and OPEX for service providers. In this post, we’ll outline a tweak to the standard CRAN architecture, which you can see demonstrated at Mobile World Congress later this month, and explain how it might accelerate the adoption of this technology.


By centralizing, and potentially virtualizing, the processing functions that were traditionally performed by dedicated networking equipment located at the antenna sites (cell sites), CRAN enables the cell site to be simplified to just the radio unit (power amplifiers, filters and so on) and the antenna. The cost of the baseband and call processing functions drops dramatically thanks to COTS hardware and virtualization.


Simultaneously, the risk of network downtime is reduced and the overall experience for subscribers is improved.

If we consider the installed base of CRAN equipment, it’s clear that this technology already represents a significant share of overall telecom spending. An Infonetics report estimates that revenue from CRAN architecture equipment was $4.9B in 2014, up from $4.1B the previous year, and projects it to reach $10B by 2018. So far, the majority of deployments have been in Asia, with rollouts starting this year in Europe, Latin America and North America.


The business benefits are compelling: SNS Research reports that China Mobile has shown a 30% reduction in CAPEX and a 53% reduction in OPEX in its CRAN trials.


The flavor of CRAN most widely deployed today is the traditional Centralized RAN architecture. In this approach, the basestation (BTS) is decomposed by decoupling its remote radio head (RRH) at the cell site from its baseband unit (BBU). Multiple RRHs are connected to a single, shared BBU over dark fiber, via a “fronthaul” interface that is typically Common Public Radio Interface (CPRI), though Open Base Station Architecture Initiative (OBSAI) is also used in some scenarios. Separating the RRHs from the BBUs reduces the cost (both CAPEX and OPEX) of the equipment at the cell site. Depending on the level of aggregation, it also brings some economies of scale to the BBU, whose cost is further reduced when it can be located indoors.


Despite these benefits, the fronthaul interface presents some limitations around performance, scalability and resource utilization.


First, CPRI interfaces require both high bandwidth and low latency. While these requirements can be addressed through the use of active or passive WDM technology, this necessitates additional transmission equipment, which to some extent negates the potential CAPEX and OPEX benefits.
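
To get a feel for the scale of those bandwidth demands, here is a back-of-envelope estimate of the CPRI line rate for a single LTE carrier. The formula and constants are typical published values (15-bit I/Q samples, 16/15 control-word overhead, 8b/10b line coding); this is a rough sketch, not a precise link budget:

```python
# Back-of-envelope CPRI line-rate estimate. The constants are typical
# published values; adjust them for other configurations.

def cpri_line_rate_bps(sample_rate_hz: float, antennas: int,
                       iq_bit_width: int = 15,
                       control_overhead: float = 16 / 15,
                       line_coding: float = 10 / 8) -> float:
    """I/Q samples for every antenna are streamed continuously."""
    per_antenna = sample_rate_hz * 2 * iq_bit_width   # 2 = I and Q
    return antennas * per_antenna * control_overhead * line_coding

# One 20 MHz LTE carrier (30.72 Msps) with 2 antennas:
print(f"{cpri_line_rate_bps(30.72e6, antennas=2) / 1e9:.2f} Gbps")  # ~2.46
```

Note that this rate is fixed by the radio sampling, not by subscriber traffic; we return to that point below.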


Second, the resource utilization challenges are a result of the dynamic nature of mobile communications. As the diagram indicates, areas such as office buildings, campuses and residential zones exhibit a “tidal” effect. In commercial locations, for example, most networks experience peak traffic loads at the start and end of the business day, with moderate loading throughout the rest of the day and negligible traffic at night.


Fronthaul interfaces based on CPRI or OBSAI, however, transmit continuous data streams even in the absence of user traffic, resulting in inefficient data transmission and over-provisioned equipment.
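
To illustrate the cost of that always-on behavior, here is a toy comparison using a hypothetical 24-hour load profile for a commercial district (the numbers are invented for illustration, not measured data):

```python
# Toy comparison of fronthaul utilization under a "tidal" traffic pattern.
# The hourly load profile below is hypothetical, not measured data.

hourly_load = ([0.05] * 7 +                       # overnight (00:00-07:00)
               [0.9, 0.6, 0.5, 0.5, 0.6,          # morning peak, then midday
                0.5, 0.5, 0.5, 0.6, 0.8, 0.9] +   # building to the evening peak
               [0.4, 0.3, 0.2, 0.1, 0.05, 0.05])  # winding down
assert len(hourly_load) == 24

# CPRI/OBSAI stream I/Q samples continuously, so the link runs flat out
# around the clock regardless of how much user traffic is present.
cpri_avg = 1.0

# An idealized packet-based fronthaul could scale with the actual load.
packet_avg = sum(hourly_load) / len(hourly_load)

print(f"CPRI average utilization:     {cpri_avg:.0%}")    # 100%
print(f"Idealized packetized average: {packet_avg:.0%}")  # ~35%
```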


Finally, as service providers look ahead to 5G, they recognize that ultra-dense, flexible, small-cell deployments will be needed to meet their capacity and density requirements. Problems introduced by frequent handoffs between cells and by signal interference will need to be resolved through a centralized control plane, as well as through closer air-interface coordination between macro cells and small cells. It’s become clear that fronthaul interfaces will need to provide low-latency, high-bandwidth transmission services in order to meet the demands of 5G.

 

Recognizing these current and future challenges, China Mobile has proposed an evolved CRAN architecture called the “Next Generation Fronthaul Interface” (NGFI), with its first instantiation taking the form of a mini CRAN, explained in detail in a comprehensive white paper available here.


As illustrated in the diagram below, the key principle of NGFI is that some of the functions of baseband units (BBUs) and remote radio units (RRUs) are redefined.

 

First, some baseband processing is shifted to the RRU, which changes the functions of both the BBU and the RRU. The BBU is renamed the “Radio Cloud Center” (RCC) and the RRU becomes the “Radio Remote System” (RRS). The RRS implements functions that include the antennas and the radio aggregation unit (RAU, formerly part of the BBU’s baseband processing). Its coverage will be equivalent to that of a macro cell, or a macro cell plus a micro cell (i.e. a HetNet). As a centralized baseband processing unit, the RCC incorporates the remaining functions of the former BBU as well as higher-layer control functions, handling multiple cells and serving as a function pool.
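
One rough way to visualize the re-partitioning is as a mapping of functions to locations. The groupings below are our own illustrative summary, not an exact NGFI specification:

```python
# Illustrative summary of the functional re-partitioning (our own
# grouping of functions, not an exact NGFI specification).

classic_cran = {
    "RRH (cell site)":   ["antennas", "RF front end: PAs, filters"],
    "BBU (centralized)": ["all baseband processing", "higher-layer control"],
}

ngfi_cran = {
    "RRS (cell site)":   ["antennas", "RF front end: PAs, filters",
                          "RAU: the slice of baseband processing moved out"],
    "RCC (centralized)": ["remaining baseband processing",
                          "higher-layer control",
                          "function pool serving multiple cells"],
}
```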


The NGFI links the RRS and RCC functions, providing the interface between the restructured baseband processing and remote radio functions. Its design must consider the new demands that this revised structure places on bandwidth, delay and synchronization. It is implemented with three logical layers: the data layer, data adaptation layer, and physical carrier layer.

The NGFI data layer contains the user data, control data and synchronization data associated with all radio technologies. The adaptation layer adapts the radio data to the transport network, ensuring that the transmission features of this data remain well-matched to the features of the lower-layer transmission network. The physical carrier layer can be any suitable access network, typically Packet Transport Network (PTN), Passive Optical Network (PON) or Wavelength Division Multiplexing (WDM).
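
As a minimal sketch of how those three layers nest, the following uses Python dataclasses purely as a modeling device; the field names are our own assumptions, not definitions from the NGFI white paper:

```python
# Minimal structural sketch of the three NGFI logical layers. The field
# names are illustrative assumptions, not NGFI-specified fields.
from dataclasses import dataclass
from typing import Literal

@dataclass
class DataLayer:
    """Radio-side payload: user, control or synchronization data."""
    kind: Literal["user", "control", "synchronization"]
    radio_tech: str          # e.g. "LTE"; NGFI is radio-technology-agnostic
    payload: bytes

@dataclass
class AdaptationLayer:
    """Matches the radio data's transmission needs to the transport network."""
    qos_class: str           # e.g. latency/jitter class required by `inner`
    inner: DataLayer

@dataclass
class PhysicalCarrier:
    """Any suitable access network carries the adapted frames."""
    network: Literal["PTN", "PON", "WDM"]
    frame: AdaptationLayer

# A synchronization message riding over a PON-based carrier:
msg = PhysicalCarrier("PON", AdaptationLayer("low-latency",
                      DataLayer("synchronization", "LTE", b"\x00")))
```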


As one of China Mobile’s partners in the NGFI project, Wind River is proud to be collaborating in a demonstration of the technology at Mobile World Congress (Hall 3 Stand 3D30).

In the demo, the Titanium Server NFV Infrastructure software runs on a MaxCore™ Platform from Artesyn, one of our Titanium Cloud ecosystem partners. Titanium Server ensures full Carrier Grade reliability for the RAN along with maximum compute performance, while the MaxCore Platform provides ideal hardware infrastructure thanks to its compact form factor. This enables us to support up to 96 sectors on one system while maintaining low power consumption in a small space at an aggressive system-level cost.


If you’re at Mobile World Congress, please do stop by to see this demo.


NGFI has been architected to address key challenges in today’s CRAN implementations, including fronthaul performance, scalability and dynamic resource allocation. At the same time, its designers have anticipated the future needs of 5G which will bring a wealth of new user applications requiring large-scale RAN sharing functions and ultra-high RAN bandwidth.


We look forward to seeing you at Mobile World Congress and to showing you this new approach to CRAN, which has the potential to deliver significant business benefits no matter which definition of “C” is in vogue today.

Over the past year or so, a strong consensus seems to have emerged within the telecom industry that the #1 use case for early deployments of Network Functions Virtualization (NFV) will be Virtualized Customer Premises Equipment (vCPE). In this post we’ll briefly summarize why vCPE represents such an interesting business proposition for service providers and introduce Wind River’s new solution for this application, Titanium Server CPE.


Applicable to both enterprise and residential scenarios, vCPE replaces the physical hardware appliances, traditionally located at the customer premises, that provide connectivity, security and other functions. By deploying general-purpose compute platforms based on industry-standard servers, either locally at the customer premises or in a centralized data center, service providers can remotely instantiate, configure and manage functions that were once deployed on dedicated hardware platforms. This virtualization of CPE functions reduces OPEX through more efficient utilization of compute resources, through increased agility in the deployment of services and through the elimination of the “truck rolls” required to update and maintain equipment at remote locations.
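
As a concrete illustration of that operational model, the sketch below shows a service provider instantiating and reconfiguring a CPE function remotely. `NfviClient`, its methods and the endpoint are invented for illustration; this is not a real Wind River or standard API:

```python
# Hypothetical sketch of remote vCPE lifecycle management. The class,
# method names and endpoint are invented for illustration only.

class NfviClient:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint   # NFVI host: data center, PoP or on-prem box

    def instantiate(self, vnf_image: str, flavor: str) -> str:
        """Deploy a VNF image remotely; returns an instance id."""
        print(f"[{self.endpoint}] deploying {vnf_image} ({flavor})")
        return f"{vnf_image}-instance-1"

    def configure(self, instance_id: str, settings: dict) -> None:
        """Push configuration without a truck roll to the site."""
        print(f"[{self.endpoint}] configuring {instance_id}: {settings}")

client = NfviClient("pop3.example.net")
vnf = client.instantiate("vfirewall-2.1", flavor="2vcpu-4gb")
client.configure(vnf, {"policy": "branch-office-default"})
```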



No wonder CFOs are excited


Industry analysts have studied the economics of vCPE in detail. A fascinating recent report from Analysys Mason, “vCPE services business case: potentially billions of dollars payback for fixed CSPs”, provides a wealth of information on the ROI for both enterprise and residential vCPE.


Among their most interesting findings, Analysys Mason calculates that enterprise vCPE (also known as virtual Business CPE, or “vBCPE”) can generate new revenue of $1.4B in North America and Western Europe over a 5-year period for service providers who are early adopters of this technology. vBCPE offers them a Return on Investment (ROI) of 156%, along with 47% annual OPEX savings per site and a payback period of just 3 years. Those compelling numbers result from a combination of standard hardware, operations support system (OSS) automation and customer self-provisioning.
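
The underlying payback arithmetic is simple, even though the Analysys Mason model behind these figures is far more detailed. A toy calculation with hypothetical per-site numbers:

```python
# Toy payback arithmetic with hypothetical per-site numbers; the
# Analysys Mason model behind the figures above is far more detailed.

initial_investment = 100.0   # hypothetical upfront cost per site
annual_benefit = 35.0        # hypothetical yearly OPEX savings + new revenue

payback_years = initial_investment / annual_benefit
roi_5yr = (annual_benefit * 5 - initial_investment) / initial_investment

print(f"Payback: {payback_years:.1f} years")   # -> 2.9 years
print(f"5-year ROI: {roi_5yr:.0%}")            # -> 75%
```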


For residential applications, the primary benefit of vCPE is cost reduction. Service providers can increase margins while also enabling customers to self-provision upsell services and add video with cloud-based digital video recording (cloud DVR) and virtual set-top boxes (vSTB). For first-movers, Analysys Mason estimates $1.3B in net cost savings over a 5-year period, with an ROI of 49%, a 5-year payback period and a massive 82% cost saving per household.


Numbers like these indicate a business value proposition for vCPE that is much more clear-cut than for many other NFV functions and services. As a use case, vCPE also has less dependency on the complexities of Management and Orchestration (MANO), where the standards are still in flux and there’s a lack of consensus on which approach will gain the most traction.


So it’s easy to see why Finance executives at service providers are excited about vCPE. From the Operations side, however, important questions have been raised about achieving service reliability while maximizing cost reductions. Wind River’s recently-announced Titanium Server CPE platform addresses this challenge.

 

Multiple hosting scenarios


Service providers typically need to support three primary hosting options for the Virtualized Network Functions (VNFs) that comprise a vCPE:

  • The service provider’s data center, supporting many thousands of customers over a wide geographical area.
  • The service provider’s Point-of-Presence (PoP) or Central Office (CO) locations, serving multiple local customers.
  • The customer premises, where the traditional physical appliances are located.


In the first two scenarios, service providers need to run the vCPE remotely on an NFV Infrastructure (NFVI) platform that guarantees Carrier Grade uptime for the services in order to minimize revenue impacts from downtime, while at the same time optimizing server utilization to maximize OPEX savings compared to physical infrastructure. Wind River’s Titanium Server NFVI platform was developed to address exactly these requirements and has achieved great industry traction since its launch a couple of years ago. In a service provider data center, PoP or CO, a typical installation will involve tens, hundreds or thousands of servers. Titanium Server allows a large number of compute node servers to be controlled by as few as two control node servers, which is key to maximizing the overall utilization of the infrastructure.
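
The impact of that control-to-compute ratio on utilization is easy to quantify. A quick estimate with hypothetical node counts, assuming a fixed pair of control nodes:

```python
# Why the control-to-compute ratio matters: the fraction of servers that
# actually run revenue-generating workloads (node counts hypothetical).

def usable_fraction(compute_nodes: int, control_nodes: int = 2) -> float:
    return compute_nodes / (compute_nodes + control_nodes)

for compute in (8, 50, 200):
    print(f"{compute:>4} compute + 2 control -> "
          f"{usable_fraction(compute):.1%} of servers host workloads")
# ->   8 compute + 2 control -> 80.0% of servers host workloads
# ->  50 compute + 2 control -> 96.2% of servers host workloads
# -> 200 compute + 2 control -> 99.0% of servers host workloads
```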


The customer premises scenario presents different challenges, particularly for enterprise (vBCPE) deployments.



Business CPE needs high reliability and low cost



For many vBCPE deployments at enterprise customers, low cost is critical to justify the replacement of traditional physical network appliances. The vBCPE performance requirements can in many cases be met by a single server, especially with the processing bandwidth now available in the latest Intel® Architecture platforms. At the same time, enterprises require high reliability for networking services, which are viewed as business-critical functions.


For enterprise deployments that are both low cost and highly reliable, the ideal solution is to deploy the vBCPE on a two-server configuration. To minimize cost, each server runs all the necessary compute, control and storage functions.


The dual-server approach allows for an active-standby approach to reliability, guaranteeing service continuity in the event of a failure of either node.
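
A minimal sketch of the dual-server active/standby idea follows, assuming simple heartbeat-based failure detection; a production implementation would also need state replication, fencing and split-brain protection, which this toy omits:

```python
# Toy active/standby failover based on heartbeats. A real deployment
# also needs state replication, fencing and split-brain protection.
import time

class Node:
    def __init__(self, name: str):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def alive(self, timeout: float = 1.0) -> bool:
        return time.monotonic() - self.last_heartbeat < timeout

def select_active(active: "Node", standby: "Node") -> "Node":
    """Keep the active node unless it has missed heartbeats."""
    if active.alive():
        return active
    print(f"{active.name} failed; promoting {standby.name}")
    return standby

active, standby = Node("server-a"), Node("server-b")
active.last_heartbeat -= 5.0                # simulate missed heartbeats
current = select_active(active, standby)    # -> promotes server-b
```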


Wind River’s recently-announced Titanium Server CPE platform is the industry’s first NFVI solution that allows service providers to ensure this balance of low cost and high reliability.



Introducing Titanium Server CPE

 

The new Titanium Server CPE platform builds on the proven, well-established Titanium Server NFVI platform.


By leveraging the Titanium Server CPE platform, service providers can deploy cost-sensitive NFV applications such as a virtual Business CPE on only two servers. Each server runs all the compute, control and storage functions, while delivering best-in-class VNF performance to maximize the number of users supported per server and thereby also maximize OPEX savings.


At the same time, Titanium Server CPE allows service providers to maintain full Carrier Grade uptime for the vBCPE services. As we’ve discussed in previous posts, this enables them to minimize any revenue impact resulting from Service Level Agreement (SLA) penalties triggered by service downtime.


Titanium Server CPE is available now, and it’s the obvious NFVI platform for service providers to choose when deploying vBCPE functions hosted on premises at their enterprise customers.



Industry-leading ecosystem brings virtual Business CPE reference design


In December, Wind River introduced a reference design for a virtual Business CPE, following extensive collaboration with four Network Functions Virtualization (NFV) software partners. Partnering with Brocade, Check Point, InfoVista and Riverbed, we have demonstrated a complete, multifunction vBCPE running on Titanium Server CPE.

 

The vBCPE reference design includes virtual network functions (VNFs) from these four partner companies. All these VNFs have been on-boarded to run on Titanium Server CPE through these companies’ collaboration in the Wind River Titanium Cloud ecosystem.

 

 

Other goodies in the latest Titanium Server release


Titanium Server CPE was launched at the same time as the latest release of Titanium Server. That release includes a wide range of new features and functions, which are too numerous to cover here but were outlined in a recent post.

 

 

Please feel free to contact us to talk about any of these topics or to suggest other areas that we should investigate. The industry is moving quickly to focus on early use cases like virtual CPE that will deliver a strong Return on Investment, and Wind River will continue to deliver the NFVI platform features that are required for these applications.
