Over the past couple of years, “CRAN” has become a term with multiple meanings. Depending on the author and the context, the “C” might stand for “Cloud”, “Centralized”, “Cooperative” or even “Clean”. In fact, we posted a blog last year asking whether it was time to refine these acronyms and get some clarity.
Regardless of which “C” you choose, all the approaches to CRAN are intended to reduce CAPEX and OPEX for service providers. In this post, we’ll outline a tweak to the standard CRAN architecture, which you can see demonstrated at Mobile World Congress later this month, and explain how it might accelerate the adoption of this technology.
By centralizing, and potentially virtualizing, the processing functions traditionally performed by dedicated networking equipment at the antenna (cell) sites, CRAN simplifies the cell site to little more than the radio unit (power amplifiers, filters and so on) and the antenna. The cost of the baseband and call processing functions drops dramatically thanks to COTS hardware and virtualization.
Simultaneously, the risk of network downtime is reduced and the overall experience for subscribers is improved.
The installed base of CRAN equipment shows that this technology already represents a significant share of overall telecom spending. An Infonetics report estimates that revenue from CRAN architecture equipment was $4.9B in 2014, up from $4.1B the previous year, and projects $10B by 2018. So far, the majority of deployments are in Asia, with rollouts starting in Europe, Latin America and North America this year.
The business benefits are compelling: SNS Research reports that China Mobile has shown a 30% reduction in CAPEX and a 53% reduction in OPEX in its CRAN trials.
The flavor of CRAN most widely deployed today is the traditional Centralized RAN architecture. In this approach, the basestation (BTS) is decomposed by decoupling its remote radio head (RRH) at the cell site from its baseband unit (BBU). Multiple RRHs are connected to a single, shared BBU over dark fiber, via a “fronthaul” interface that is typically Common Public Radio Interface (CPRI), though Open Basestation Architecture Initiative (OBSAI) is also used in some scenarios. Separating the RRHs from the BBUs reduces the cost (both CAPEX and OPEX) of the equipment at the cell site. Depending on the level of aggregation, it also brings economies of scale to the BBU, whose cost drops further when it can be housed indoors.
Despite these benefits, the fronthaul interface presents some limitations around performance, scalability and resource utilization.
First, CPRI interfaces require both high bandwidth and low latency. While these requirements can be addressed through the use of active or passive WDM technology, this necessitates additional transmission equipment which to some extent negates the potential CAPEX and OPEX benefits.
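To make the bandwidth demand concrete, the standard CPRI arithmetic can be sketched as below. The figures follow the widely quoted rule of thumb for a 20 MHz LTE carrier; exact line rates depend on the CPRI option and vendor configuration, so treat this as an illustration rather than a link budget.

```python
# Illustrative CPRI line-rate estimate for one LTE antenna-carrier.
def cpri_rate_bps(sample_rate_hz, bits_per_sample=15, iq=2,
                  control_overhead=16 / 15, line_coding=10 / 8):
    """Bit rate for one antenna-carrier over a CPRI link.

    sample_rate_hz   -- baseband I/Q sample rate (30.72 Msps for 20 MHz LTE)
    bits_per_sample  -- width of each I or Q sample
    control_overhead -- one control word per 15 data words (16/15)
    line_coding      -- 8b/10b encoding overhead (10/8)
    """
    return sample_rate_hz * iq * bits_per_sample * control_overhead * line_coding

# A 20 MHz LTE carrier with 2x2 MIMO (two antenna-carriers):
per_antenna = cpri_rate_bps(30.72e6)   # ~1.2288 Gbps
total = 2 * per_antenna                # ~2.4576 Gbps
print(f"{per_antenna / 1e9:.4f} Gbps per antenna, {total / 1e9:.4f} Gbps total")
```

Note that roughly 2.5 Gbps of fronthaul is needed to carry what is, at the air interface, a peak of around 150 Mbps of user traffic, which is why dark fiber or WDM is effectively mandatory.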
Second, the resource utilization challenges are a result of the dynamic nature of mobile communications. As the diagram indicates, areas such as office buildings, campuses and residential zones exhibit a “tidal” effect. For commercial locations, most networks experience peak traffic loads at the start and end of the business day, with moderate loading throughout the rest of the day and negligible traffic at night.
Fronthaul interfaces based on CPRI or OBSAI, however, transmit at a constant rate even in the absence of user traffic, resulting in inefficient data transmission and over-provisioned equipment.
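A toy calculation shows the scale of the waste. The hourly load factors below are hypothetical numbers chosen to mimic the tidal profile of a commercial district; the point is simply that a constant-rate link carries full line rate around the clock while the cell is busy only part of the day.

```python
# Hypothetical hourly load factors for a commercial district: peaks at the
# start and end of the business day, near-idle at night.
hourly_load = [0.05] * 7 + [0.9, 0.6, 0.5, 0.5, 0.5, 0.5, 0.5,
                            0.5, 0.5, 0.9, 0.8, 0.4, 0.3, 0.2,
                            0.1, 0.05, 0.05]
assert len(hourly_load) == 24

cpri_utilization = 1.0  # CPRI transmits at full line rate regardless of load
avg_load = sum(hourly_load) / len(hourly_load)

# A packetized, load-proportional fronthaul would only need to carry the
# actual traffic, so the wasted capacity is the gap between the two.
print(f"average cell load: {avg_load:.0%}")
print(f"capacity idle under constant-rate transmission: {cpri_utilization - avg_load:.0%}")
```

Under this (made-up) profile the link sits roughly two-thirds idle on average, which is the head-room a statistically multiplexed fronthaul could reclaim.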
Finally, as service providers look ahead to 5G, they recognize that ultra-dense, flexible, small-cell deployments will be needed to meet their capacity and density requirements. Problems introduced by frequent handoffs between cells and signal interference will need to be resolved through a centralized control plane, as well as closer air interface coordination between macro cells and small cells. It’s become clear that fronthaul interfaces will need to provide low-latency, high-bandwidth transmission services in order to meet the demands of 5G.
Recognizing these current and future challenges, China Mobile has proposed an evolved CRAN architecture called “Next Generation Fronthaul Interface” (NGFI), with first instantiations as mini CRAN, explained in detail in a comprehensive white paper available here.
As illustrated in the diagram below, the key principle of NGFI is that some of the functions of baseband units (BBUs) and remote radio units (RRUs) are redefined.
First, some baseband processing is shifted to the RRU, which changes the functions of both units. The BBU is renamed the “Radio Cloud Center” (RCC) and the RRU becomes the “Radio Remote System” (RRS). The RRS implements functions that include the antennas and the radio aggregation unit (RAU, formerly part of the BBU’s baseband processing); its coverage is equivalent to that of a macro cell, or a macro cell plus a micro cell (i.e. a HetNet). As a centralized baseband processing unit, the RCC incorporates the remaining functions of the former BBU as well as higher-layer control functions, handling multiple cells and serving as a function pool.
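The functional split described above can be sketched as a simple mapping. The exact placement of each function is defined in China Mobile’s white paper; the assignments below are an illustrative reading of it, not a normative list.

```python
# Conceptual sketch of the NGFI functional split (illustrative, not normative).
NGFI_SPLIT = {
    "RRS": [                       # Radio Remote System, at the cell site
        "antennas",
        "RF front end (power amplifiers, filters)",
        "RAU: lower-layer baseband processing (moved out of the former BBU)",
    ],
    "RCC": [                       # Radio Cloud Center, centralized pool
        "remaining baseband processing",
        "higher-layer control functions",
        "multi-cell coordination / function pool",
    ],
}

def located_at(function):
    """Return which unit hosts a given function under this sketch."""
    for unit, funcs in NGFI_SPLIT.items():
        if any(function in f for f in funcs):
            return unit
    return None

print(located_at("antennas"))                       # RRS
print(located_at("higher-layer control functions")) # RCC
```

The design choice to pull some baseband work into the RRS is what relaxes the fronthaul bandwidth requirement: the link no longer has to carry raw I/Q samples for every antenna.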
The NGFI links the RRS and RCC functions, providing the interface between the restructured baseband processing and remote radio functions. Its design must consider the new demands that this revised structure places on bandwidth, delay and synchronization. It is implemented with three logical layers: the data layer, data adaptation layer, and physical carrier layer.
The NGFI data layer contains the user data, control data and synchronization data associated with all radio technologies. The adaptation layer adapts the radio data to the transport network, ensuring that the transmission features of this data remain well-matched to the features of the lower-layer transmission network. The physical carrier layer can be any suitable access network, typically Packet Transport Network (PTN), Passive Optical Network (PON) or Wavelength Division Multiplexing (WDM).
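A minimal sketch of how these three layers nest is below. The field names are hypothetical (the white paper defines the actual formats); the sketch only shows the layering: radio data is wrapped by an adaptation layer that matches it to one of the supported physical carriers.

```python
from dataclasses import dataclass

@dataclass
class DataLayerPDU:
    """User, control and synchronization data for a radio technology."""
    radio_tech: str       # e.g. "LTE"
    payload: bytes

@dataclass
class AdaptationPDU:
    """Adapts radio data to the characteristics of the transport network."""
    transport: str        # e.g. "PTN", "PON", "WDM"
    inner: DataLayerPDU

def encapsulate(pdu: DataLayerPDU, carrier: str) -> AdaptationPDU:
    """Wrap a data-layer PDU for carriage over a given physical network."""
    if carrier not in ("PTN", "PON", "WDM"):
        raise ValueError(f"unsupported physical carrier: {carrier}")
    return AdaptationPDU(transport=carrier, inner=pdu)

frame = encapsulate(DataLayerPDU("LTE", b"\x00\x01"), "PTN")
print(frame.transport)    # PTN
```

Keeping the adaptation layer separate from both the radio data and the carrier is what lets NGFI run over whichever access network (PTN, PON or WDM) the operator already has.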
As one of China Mobile’s partners in the NGFI project, Wind River is proud to be collaborating in a demonstration of the technology at Mobile World Congress (Hall 3 Stand 3D30).
In the demo, the Titanium Server NFV Infrastructure software runs on a MaxCore™ Platform from Artesyn, one of our Titanium Cloud ecosystem partners. Titanium Server ensures full Carrier Grade reliability for the RAN along with maximum compute performance, while the MaxCore Platform provides ideal hardware infrastructure thanks to its compact form factor. This enables us to support up to 96 sectors on one system while maintaining low power consumption in a small space at a competitive system-level cost.
If you’re at Mobile World Congress, please do stop by to see this demo.
NGFI has been architected to address key challenges in today’s CRAN implementations, including fronthaul performance, scalability and dynamic resource allocation. At the same time, its designers have anticipated the future needs of 5G, which will bring a wealth of new user applications requiring large-scale RAN sharing functions and ultra-high RAN bandwidth.
We look forward to seeing you at Mobile World Congress and to showing you this new approach to CRAN, which has the potential to deliver significant business benefits no matter which definition of “C” is in vogue today.