
Just in time to assist industrial automation companies who are reviewing and refining their strategies at the start of 2018, ARC Advisory Group has released a fascinating paper titled “Virtualization and Industrial Control”. If you’re in the industrial control business, this paper should be high on your New Year reading list.

 

Written by Harry Forbes, Research Director at ARC, the paper explains why the industrial control systems (ICS) that automate production equipment in today’s factories and plants can also represent a formidable barrier to modernization and flexibility. Automation systems evolve at the rate of Operational Technology (OT) rather than Information Technology (IT), creating challenges around maintenance, upgrades, support and safety.

 

Forbes contrasts the slow evolution of industrial automation systems with the rapid progress in IT systems, which are driven by technologies such as system virtualization and cloud computing, as well as the integration of software development and deployment (DevOps). He highlights the work of the Open Process Automation Forum, an effort initiated by ExxonMobil in 2016, and discusses industrial control products such as the “Industrial Internet Control System” (IICS) solution introduced by GE in 2017.

 

Describing the business benefits of virtualization for ICS, the report states “The benefits of greater ICS virtualization for end users are clear. Lower TCO, longer ICS life, fewer disruptive changes, improved ability to manage change and implement continuous improvement. The result (and the largest payoff) comes in the form of improved operations”.

 

In a summary of potential disruptors in the industrial automation market, Forbes highlights Wind River’s Titanium Control platform, explaining that “An excellent example is Wind River’s Titanium Control platform, which combines Wind River’s long experience in real-time operating systems with extensive low latency virtualization and on-premise cloud computing technology that has been proven in mission critical applications." This is why we’ve seen tremendous customer interest in the Titanium Control platform since launching the product early in 2017.

 

Please download the ARC Advisory paper here.

 

If you’d like to know more about how Titanium Control streamlines the deployment and operation of virtualized industrial control applications, please check out the information online or contact us to arrange a face-to-face discussion.

Half listening to a phone call over a lousy cellphone connection, while munching on a snack and trying to finish an overdue presentation, it sounded like a trip that was too good to miss. Something about “Euphrates in December” seemed like the ideal business trip even if it promised a few visa challenges.

 

But then I started paying attention and realized the call was actually all about spending a week in Portland, Oregon in December, surrounded by test equipment, Ethernet cables, laptops, stale pizza and probably rain. So I’m pleased to report that Wind River was proud to have several of our expert engineers participating in last week’s OPNFV plugfest, and equally glad that they didn’t seem to need any assistance from a marketing guy like me, even though I’m a big fan of the Columbia River and microbreweries.

 

“Euphrates”, of course, is the name of the most recent OPNFV release, announced in October and the focus of attention at last week’s Plugfest hosted by Intel. (OPNFV is an open source project that facilitates the development and evolution of Network Functions Virtualization (NFV) components across various open source ecosystems.)

 

In due course, OPNFV will publish the final report from the plugfest. This will include a detailed report of the interoperability tests performed by experts from a wide range of hardware, software, systems and semiconductor companies. Based on the results of previous events, we can expect to see significant improvements in functionality and interoperability from this latest release.

 

As we talk to our customers about “compatibility”, we find the challenge is increasingly about demonstrating interoperability with other companies in the NFV ecosystem. From the service providers' point of view, open standards avoid the risk of vendor lock-in by encouraging the development of compatible and interoperable solutions by multiple companies. But service providers typically incorporate products from more than one vendor in the complete solution that they deploy, so they need proof that products that should work together seamlessly actually do so. As OPNFV continues to make progress, we expect it to become a de facto standard against which all NFV and NFVI vendors will have to test their solutions.

 

Plugfests like the one hosted by Intel last week are vital as the telecom industry transitions from traditional, fixed-function equipment to dis-aggregated virtualized solutions that leverage best-in-class products from multiple vendors. Events like this demonstrate that an industry-wide ecosystem can successfully form around open standards and deliver interoperable, compatible products.

 

At Wind River, we’re proud to have contributed to all four OPNFV Plugfests held so far and we look forward to even more exciting and advanced accomplishments at the next one. And I need to quit multitasking and start paying better attention to phone calls.

Google “OPA” and one of the first results you’ll see will be all about Oatmeal Pale Ale, from Hood River, Oregon. Apparently, “a refreshing beer, high on hop aromatics yet well balanced with modest bitterness…. OPA pours a beautiful golden hue with a dense rocky head.” Definitely now on my list for the next time I visit the beautiful Columbia River Gorge.

 

But scroll further down the page and you’ll see that “OPA” also stands for the Open Process Automation Forum. I wonder what they drink at their meetings?

 

It’s been almost a year since the launch of the OPA Forum, a working group within The Open Group. OPA is a vendor- and technology-neutral industry consortium. The goal of the new OPA Forum is to develop a standards-based, secure, interoperable process control architecture that can be leveraged across multiple industries, including food and beverage, mining and metals, oil and gas, petrochemical, pharmaceutical, pulp and paper, and utilities.

 

ExxonMobil was the driving force behind the establishment of the OPA Forum. Back in 2010, they started an intensive R&D program based on a vision of a reference architecture that would replace legacy physical control systems with software-based solutions. Their thinking was heavily influenced by the use of open systems in avionics and the Future Airborne Capability Environment (FACE) consortium. They also became aware of the adoption of Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) in the telecom industry, a strategy to minimize lifecycle costs through automation, agility and optimized resource utilization.

 

During 2014, ExxonMobil developed the functional characteristics and started engaging key industry-leading companies. In 2015, they contracted Lockheed Martin to define detailed requirements and develop a prototype implementation. Then last year, 2016, they approached the Open Group to form the OPA Forum with the goal of developing open standards that would spur widespread adoption of their architectural vision, leading to multiple sources of compatible and interoperable products.

 

Fifty-seven people attended the first OPA Forum meeting. They represented thirty different organizations, most of which subsequently joined the Forum; the full list of current members is here. At Wind River, we are pleased to contribute as part of the team from Intel, our parent company and a silver member.

 

A significant element in the OPA Forum vision is the transition from discrete, physical control systems to software-based implementations that are typically combined with edge analytics software to drive real-time decisions. For critical infrastructure such as manufacturing and process control, the most fundamental requirement for this software is that it must run reliably, securely and safely, continuously gathering industrial data from an array of sensors and actuating responses in real time, while consolidating both operations and control functions.

 

Wind River’s Titanium Control software platform for critical infrastructure addresses these needs, enabling industrial control companies to leverage the concepts of the OPA Forum vision while ensuring the performance, reliability and security that they need.

 

Based on open industry standards, the Titanium Control platform enables virtualized software applications to run on standard, cost-effective IT-class servers while ensuring six nines (99.9999%) uptime and optimum asset utilization. It provides best-in-class security and threat mitigation, along with ultra-low latency system-level performance, leveraging technology originally launched in 2014 and proven since then in telecom industry deployments.

 

Titanium Control enables industrial control companies to improve their top-line revenue generation while simultaneously lowering their operating costs, maximizing their overall return on investment as they move to software-based architectures.

 

For top-line revenue growth, Titanium Control allows operators to efficiently scale control systems to add capacity, while accelerating the introduction of new functions and services to optimize control processes. As a state-of-the-art software platform, it enables manufacturing companies to leverage the expertise of millennial programmers and innovative third-party software vendors.

 

To drive reductions in operational costs, Titanium Control eliminates the cost and risk of replacing physical control devices as well as reducing capital costs for new deployments and capacity expansions. Critically, it ensures regularly-updated end-to-end security for business operations and control functions.

 

This post has only mentioned a few of the important features and capabilities of Titanium Control, all designed to streamline the deployment and operation of virtualized industrial control applications. If you’d like to know more about how Titanium Control supports the vision of the OPA Forum, please check out the information online or contact us to arrange a face-to-face discussion.

 

And if you’re a craft beer aficionado, please check back with me in a few months if you want my opinion on Oatmeal Pale Ale: I have a trip to Oregon coming up which just might allow time for a thorough evaluation or two.

If you’re heading to Sydney for next week’s OpenStack Summit, you should mark your calendar for an important session at 10.50am on Tuesday November 7th: “Tuning packet and interrupt latency for 5G applications”.

 

Two of Wind River’s foremost OpenStack experts, Chris Friesen and Ian Jolliffe, will present this session and explain how to use OpenStack successfully in challenging 5G applications.

 

Edge applications, 5G in particular, have some very tight packet and interrupt latency requirements. Packet latency can be impacted by the networking technology selected, such as a virtual switch, PCI pass-through or SR-IOV. Interrupt latency can impact guest performance as well.

 

Addressing these challenges requires careful tuning of OpenStack. This involves multiple aspects of OpenStack, including host kernel configuration, tuning and isolation techniques, Neutron and Nova. Other factors such as how the guest is configured can also impact interrupt latency and timer accuracy, with critical implications for the design and configuration of the VM.
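As a small illustration of the host-side half of that tuning (a generic Linux check, not drawn from the session itself or specific to any product), the sketch below reads a few well-known kernel interfaces to confirm that CPU isolation and hugepages are actually in place before latency-sensitive guests are scheduled onto a host:

```python
# Generic Linux sanity check (not specific to any product): confirm that CPU
# isolation and hugepages are configured on the host before scheduling
# latency-sensitive guests onto it.
def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return ""

isolated = read("/sys/devices/system/cpu/isolated")
cmdline = read("/proc/cmdline")
hugepages = read("/proc/sys/vm/nr_hugepages")

print("Isolated CPUs:", isolated or "(none)")
print("Relevant boot parameters:",
      " ".join(t for t in cmdline.split()
               if t.startswith(("isolcpus", "nohz_full", "rcu_nocbs", "hugepages"))))
print("nr_hugepages:", hugepages or "(unknown)")
```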

 

Chris and Ian will discuss how to select the appropriate networking technology while sharing benchmark results that include packet latencies. They will also explain how to configure both the host and guest for optimal interrupt latency and discuss real world trade-offs that must be considered for a viable OpenStack deployment.

 

So why is this so important? Why should you listen to Chris and Ian instead of heading for the Sydney Cricket Ground?

 

Industry analysts project that the majority of new revenue opportunities for 5G will be based around new edge use cases, as opposed to enhanced mobile broadband. Applications such as autonomous driving, industrial IoT, augmented reality, smart stadiums, immersive retail, tactile internet and smart cities are all edge computing use cases that require ultra-low latency along with telco-grade uptime. OpenStack will only be usable in these applications if the kind of techniques explained in this session are implemented and adopted.

 

So we hope that you’ll join Chris and Ian at 10.50am on Tuesday November 7th (Level 2, Parkside 2 Foyer). The topics that they will cover represent key enablers for the new revenue opportunities that the whole industry is expecting from 5G.

 

And then you have a couple of weeks to relax before the main event of the Australian summer which starts on November 23rd, The Ashes. Somehow that will have to be the topic for another post.

Radio Access Network (RAN) terminology is nothing if not confusing. Traditional RAN configurations are now termed “Distributed RAN”. The first phase of cost optimization through consolidation is typically called “Centralized RAN” or “CRAN”. Capacity improvements achieved through baseband pooling are often known as “Cloud RAN”, representing another, different use of the term “CRAN”. Currently, the ultimate approach to enhancements is “Virtual RAN” or “vRAN”. And we haven’t even mentioned “mini-CRAN”, proposed by China Mobile, let alone “Coordinated RAN”, “Collaborative RAN”, “Clean RAN” or “Advanced CRAN”, all of which have shown up in conference presentations over the past three years.

 

Regardless of all these flavors of RAN improvements, there’s consensus within the industry that a virtual RAN (vRAN) architecture enables service providers to achieve the best overall potential of cost savings, dynamic capacity scaling, better Quality of Experience (QoE) and rapid instantiation of new services.

 

In a vRAN architecture, the Baseband Units (BBUs) are virtualized, rather than being located at the cell site as physical equipment. The virtual BBUs are deployed on NFV infrastructure software platforms like Wind River’s Titanium Cloud, running on industry-standard x86 servers and consolidated in centralized data centers, while the Remote Radio Units (RRUs) remain at the cell sites at the edge of the network. vRAN leverages standard server hardware that cost-effectively scales processor, memory and I/O resources up or down based on dynamic changes in demand, infusing the RAN with capacity for application intelligence, which significantly improves service quality and reliability. In many configurations, the architecture also allows for Ethernet and IP fronthaul transport, which gives service providers more cost-effective options for RRU connections.

 

To accelerate the introduction of cost-effective vRANs, Wind River has collaborated with Altiostar, Amdocs and Dell EMC to develop a pre-integrated, validated end-to-end solution, ready for deployment by service providers.

Altiostar contributed their vRAN solution, which comprises a software-intensive LTE eNodeB combined with Ethernet fronthaul. Amdocs provided their Network Cloud Service Orchestrator (NCSO), an open, catalog-driven service orchestration solution, while their professional services expertise is also available for designing, deploying, operating and optimizing mobile networks. The solution leverages the Dell EMC PowerEdge R630, an ultra-dense, two-socket 1U rack server based on the latest Intel® Xeon processor E5-2600 v4 product family. Finally, Wind River contributed Titanium Cloud, the industry’s only fully integrated, ultra-reliable, deployment-ready family of virtualization platforms, which enables service providers to deploy virtualized services faster, at lower cost and with guaranteed uptime.

 

You can read all about the benefits of this integrated vRAN solution in our recently-published white paper “vRAN: the Next Step in Network Transformation”.

 

You’ll also want to check out the recording of our recent webinar hosted by SDxCentral, where representatives from all four companies jointly presented the solution and explained the business benefits that it delivers.

 

By extending the benefits of network virtualization from the core to the RAN, vRAN is the optimal solution for cost-efficiently increasing capacity, reducing costs and creating new services. The solution described in the white paper and the webinar delivers carrier grade reliability, predictable performance, low latency, unrivaled manageability, massive scalability, optimized resource utilization and flexible deployment options. These best-in-class technical features provide service providers with clear, quantifiable business benefits in terms of cost reductions, capacity improvements and accelerated deployment times.

 

If you’d like to know more about any of the Titanium Cloud products or solutions, please check out the information online or contact us to arrange a face-to-face discussion.

Want to improve your OpenStack-based cloud performance by 15x? Here’s how….

 

As communications service providers (CSPs) explore architectural choices for virtualizing their network infrastructure, OpenStack has emerged as the leading option for the cloud management software that manages the lifecycle of Virtual Machines (VMs). There are sound business reasons for this selection: OpenStack is an established open-source community project with tens of thousands of contributors worldwide, even though most of them are working on enterprise rather than telecom components, while the availability of multiple OpenStack-based solutions mitigates the risk of vendor lock-in.

 

During their evaluations of OpenStack-based platforms for telco clouds, CSPs typically focus heavily on four aspects of a platform:

  • First, they explore whether the platform effectively streamlines the installation, commissioning, maintenance and upgrades of the cloud, since these are all factors that drive the Total Cost of Ownership for their network infrastructure.
  • Second, they analyze the platform’s capability to ensure the level of service uptime that is necessary both to meet Service Level Agreements (SLAs) with their enterprise customers and also to comply with regulations pertaining to critical network services. Failing to meet either of these uptime metrics can result in significant impacts to their top-line revenue.
  • Third, they investigate the end-to-end security implemented in the platform, looking to ensure the availability of comprehensive security features required to address today’s complex threat landscape as well as support for timely, dynamic updates as new threats emerge.
  • Finally, they measure the performance of the platform running a range of applications and services, because this directly influences the number of servers required for their data center and hence their anticipated CAPEX and OPEX spend.

 

By providing best-in-class functionality in all these areas, Wind River’s Titanium Cloud portfolio of OpenStack-based virtualization platforms has emerged as the industry’s leading solution for telco network virtualization.

 

While we’ve blogged extensively about all aspects of Titanium Cloud since its launch in 2014, a recently-published white paper written by GENBAND and Wind River contributes additional interesting, real-world information to the topic of system-level performance.

 

To evaluate the performance difference between “vanilla” (i.e. unmodified) OpenStack and an optimized open source solution, two different OpenStack Neutron networking implementations were tested with GENBAND’s Advanced Media Software (AMS) Virtual Network Function (VNF) software. The paper compares the performance of the GENBAND AMS VNF with vanilla OpenStack in one environment and Titanium Cloud in another environment.

 

The tests analyzed call load, packet loss, CPU utilization and the number of CPU cycles required for each stage of processing. Metrics were collected in real time by the iMedia component of the GENBAND AMS VNF and displayed in dashboards leveraging the Representational State Transfer (REST) application programming interfaces (APIs) within the VNF Manager.

 

Summarizing some of the key results, the vanilla OpenStack environment processed 200 calls while the Titanium Cloud environment handled 3,000 calls, an improvement of 15x achieved with no difference in hardware cost.

 

The difference in CPU utilization was also significant between the two environments: 12% utilization for the vanilla environment while 60,000 packets were being processed, compared to only 3% utilization for Titanium Cloud while processing 400,000 packets. This frees up significant CPU resources for other, revenue-generating applications and services.

 

The white paper provides details of several other metrics that were evaluated and analyzed, including the call processing rate, the jitter associated with CPU utilization and the packet loss experienced by the system.

 

As you’ll see from reading the complete paper, the bottom line is that Titanium Cloud achieved a 15x performance improvement, processing 3,000 calls compared to 200 by vanilla OpenStack, while also delivering better call quality.

 

As the paper explains, these numbers are important not only because of the pure technical advantages that they enumerate but also because of the resulting business benefits delivered to CSPs. By selecting Titanium Cloud as their network virtualization platform, they can minimize both their CAPEX and their OPEX, while at the same time maximizing their top-line revenue by guaranteeing the level of service uptime that their customers expect. And all while mitigating the risk of vendor lock-in because Titanium Cloud is a 100% open solution compatible with all the relevant open standards, validated through its extensive partner ecosystem.

 

We encourage you to take a look at this new white paper. Then when you’re ready to know more, please contact us to arrange a face-to-face discussion.

When you ask executives in the manufacturing or industrial control sectors about “Industry 4.0”, their responses typically fall into one of two categories. Those who haven’t yet spent much time exploring the concept are likely to dismiss the term as marketing hype, or a gross over-simplification like “Web 2.0”. But companies who have investigated the principles behind a smart factory see this trend as an opportunity to significantly increase their revenues while at the same time slashing their operational costs. What’s not to like?

 

In this post we’ll outline some of the key business challenges presented by Industry 4.0 and explain how they can be addressed by selecting the appropriate platform for secure, on-premise critical infrastructure.

 

Technically, the term should be “Industrie 4.0” because the initiative started as a vision sponsored by the German government, which actively promotes the computerization of manufacturing. The Industry 4.0 working group presented a set of recommendations to the German government in October 2012 and subsequently presented its final report at the Hannover Fair in April 2013.

 

The “4.0” term comes from the generally-accepted progression of major phases in manufacturing. Phase 1 comprised steam power and the first machines that mechanized some of the work our ancestors did manually. Phase 2 brought electricity, the assembly line and the start of mass production. Phase 3 included the advent of computers and the beginnings of automation, when robots and machines began to replace human workers on assembly lines.

 

As the next significant industry disruption, Industry 4.0 introduces what is commonly known as the smart factory, in which cyber-physical systems monitor the physical processes of the factory and make decentralized decisions. Physical systems are implemented as the Industrial Internet of Things (IIoT), communicating and cooperating in real time both with each other and with humans.

 

There are compelling business motivations for companies to adopt Industry 4.0. In a comprehensive report, PwC Global points out that companies who successfully implement Industry 4.0 no longer need to choose between focusing either on a better top line or a better bottom line, because they can improve both at the same time. Over the next five years, the companies that PwC Global surveyed expect to reduce costs by an average of 3.6% per year, while increasing annual revenues by an average of 2.9%. More than half the respondents expect their Industry 4.0 investments to yield a return within two years or less, given investments of around 5% of their annual revenue. If even half of these expectations are met, Industry 4.0 has the potential to reshape the competitive landscape and bring fundamental change to established manufacturing industries.

 

There are some significant challenges though, several of which relate to the compute platforms that run software-based control systems as well as analytics software that drives real-time decisions. For critical infrastructure such as manufacturing and process control, the most fundamental requirement for this software is that it must run reliably, securely and safely, continuously gathering industrial data from an array of sensors and actuating responses in real time, while consolidating both operations and control functions.

 

The first key requirement for a critical infrastructure platform is low-latency virtualization. Virtualization enables a single compute platform to run the diverse set of functions and applications found in smart factories. The platform must provide ultra-low latency in order to deliver near real-time performance for critical applications. At the same time, it must optimize resource utilization system-wide in order to minimize both CAPEX and OPEX.

 

High availability is vital. In the event of hardware or software failure, the platform must be able to perform an automatic failover quickly enough to ensure the integrity of the control system. This implies a failover speed orders of magnitude faster than standard cloud platforms developed for IT applications and on a par with the speed required for virtualized telecom infrastructure. The virtualization platform must facilitate failover in a number of ways, such as restarting a clean backup software image without a reboot or turning control over to a full redundant server to overcome catastrophic hardware or software failures.

 

Finally, security. While traditional, physical control systems provide “box-level” security features with no provision for end-to-end threat protection or dynamic updates, the requirements for Industry 4.0 are significantly more demanding. In smart factories, it’s critical to mitigate security threats that target devices and subsystems across the facility, while recognizing that new, ever more sophisticated threats are emerging constantly. Critical infrastructure systems require a secure chain of trust that extends all the way from the physical hardware and sensors into the Virtual Machines (VMs) that run the software-based control functions and analytics. Data must be encrypted and the network itself must ensure authentication, authorization and accounting with secure identities.

 

Fortunately, there’s a solution to these challenges. Wind River’s recently-introduced Titanium Control software platform for critical infrastructure addresses this need, enabling manufacturing companies to leverage Industry 4.0 concepts while ensuring the performance, reliability and security that they need.

 

Based on open industry standards, the Titanium Control platform enables virtualized software applications to run on standard, cost-effective IT-class servers while ensuring six nines (99.9999%) uptime and optimum asset utilization. It provides best-in-class security and threat mitigation, along with ultra-low latency system level performance, leveraging technology originally launched in 2014 and proven since then in telecom industry deployments.

 

As mentioned earlier, a key premise of Industry 4.0 is that manufacturing companies can improve their top-line revenue generation while simultaneously lowering their operating costs. Titanium Control facilitates both these objectives, maximizing the overall return on investment for a smart factory.

 

For top-line revenue growth, Titanium Control allows operators to efficiently scale control systems to add capacity, while accelerating the introduction of new functions and services to optimize control processes. As a state-of-the-art software platform, it enables manufacturing companies to leverage the expertise of millennial programmers and innovative third-party software vendors.

 

To drive reductions in operational costs, Titanium Control eliminates the cost and risk of replacing physical control devices as well as reducing capital costs for new deployments and capacity expansions. Critically, it ensures regularly-updated end-to-end security for business operations and control functions.

 

This post has only scratched the surface of Titanium Control’s features, all designed to streamline the deployment and operation of virtualized industrial control applications. If you’d like to know more about Titanium Control, please check out the information online or contact us to arrange a face-to-face discussion.

As long as you select the right software platform, virtualization will bring a wide range of benefits to your industrial control application without compromising on the level of security delivered by traditional physical infrastructure. Moreover, virtualization will enhance your system-level security by enabling dynamic updates in response to emerging threats.

 

In industries such as energy, manufacturing, healthcare and smart buildings, there are countless examples of companies relying on industrial control systems installed twenty or thirty years ago. Increasingly, though, these legacy deployments present major business challenges such as: increasing operational expenses due to high maintenance and replacement costs; a dwindling pool of skilled technicians; limited flexibility resulting from sole-sourced solutions with proprietary programming models; slow product lifecycles out of step with fast-moving IT and mobile technologies; outdated box-level security features with no provision for end-to-end threat protection or dynamic updates.

 

To address these challenges, innovative companies are leveraging virtualization in order to slash their operational costs by deploying secure, robust, flexible software-based solutions as alternatives to legacy, fixed-function hardware.

With virtualization, Level 1 through Level 3 control functions are consolidated onto standard IT-class servers for significant CAPEX and OPEX savings, resulting in software-based digital controllers, PLCs, DCSs, SCADA software, HMIs, historians and applications. Updating software is significantly lower cost than replacing physical equipment. At the same time, companies can differentiate through software solutions while avoiding the cost, risk and downtime of removing and replacing proprietary hardware and software.

 

As companies evaluate the benefits of introducing virtualization into their systems, security is a critical factor and there are typically two aspects to consider. On one hand, software-based solutions offer end-to-end security both for the network and for the control functions, permitting dynamic updates in response to emerging threats while leveraging software firewalls, VPNs and Intrusion Prevention Systems from leading IT software vendors. On the other hand, hardware-based security functions are well understood and have a proven track record within critical infrastructure applications, so it’s important to ensure that the adoption of virtualization does not in any way compromise the level of security compared to what’s available in state-of-the-art hardware solutions.

 

Wind River’s Titanium Control platform addresses this need, ensuring that virtualized control functions achieve a level of security at least as high as that provided by the latest hardware technologies.

 

This post focuses on support for the Trusted Platform Module (TPM) function within Titanium Control. Subsequent posts will address additional security features in the platform, also summarized in the product overview.

 

Summarizing the background: TPM is an international standard for a secure cryptoprocessor, originally conceived as a dedicated microcontroller designed to secure hardware by integrating cryptographic keys into devices, and it underpins platform security technologies such as Intel® Trusted Execution Technology (Intel® TXT). The TPM specification was written by the Trusted Computing Group (TCG) consortium and standardized in 2009, with the latest version being TPM 2.0.

 

TPM provides measurements of software and platform components so that system software, as well as local and remote management applications, can use those measurements to make trust decisions. Its support for remote attestation creates a nearly unforgeable hash summary of the hardware-plus-software configuration. TPM also offers a random number generator and facilities for the secure generation of cryptographic keys, including binding (encrypting data with a TPM key) and sealing (encrypting data in a similar manner, but additionally specifying the state the TPM must be in before the data can be decrypted, or unsealed).
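As a purely conceptual sketch of the measure-and-seal idea described above (an illustration only, not Titanium Control code or a real TPM interface), the short Python example below mimics how a PCR is extended with successive measurements and how data sealed against a PCR value is only released when the platform measures back to the same state:

```python
# Conceptual sketch of TPM-style "extend" and "seal/unseal" semantics.
# This illustrates the idea only: a genuine TPM performs these steps in
# hardware with protected keys and real encryption.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Extend a PCR: new_pcr = SHA-256(old_pcr || SHA-256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components):
    """Accumulate measurements of boot components into a single PCR value."""
    pcr = b"\x00" * 32
    for component in components:
        pcr = extend(pcr, component)
    return pcr

# "Seal" data by recording the PCR state it is bound to.
trusted_pcr = measure_boot([b"firmware-v1", b"bootloader-v1", b"kernel-v1"])
sealed = {"pcr_policy": trusted_pcr, "secret": b"disk-encryption-key"}

def unseal(sealed_blob, current_pcr):
    """Release the secret only if the platform measures to the sealed state."""
    if current_pcr != sealed_blob["pcr_policy"]:
        raise PermissionError("platform state does not match sealing policy")
    return sealed_blob["secret"]

# A platform booting the same software unseals successfully...
print(unseal(sealed, measure_boot([b"firmware-v1", b"bootloader-v1", b"kernel-v1"])))
# ...while a modified boot chain does not.
try:
    unseal(sealed, measure_boot([b"firmware-v1", b"rootkit", b"kernel-v1"]))
except PermissionError as err:
    print("unseal refused:", err)
```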

 

Software uses TPM to authenticate hardware devices. Since each TPM chip has a unique and secret RSA key burned in during manufacturing, it is capable of performing platform authentication.

 

One of the key benefits of virtualization in industrial control applications is that it enables the software-based control functions to run on generic server platforms located on-premise, either in a stand-alone configuration or a mini data center. In either case, customers expect to be able to deploy industry-standard servers that may not include hardware TPM. Virtualization also allows Virtual Machines (VMs) to be dynamically migrated from one physical server to another, whether to eliminate downtime during software updates, to recover automatically from platform failures or to adjust resource utilization in response to changing workloads. Binding a VM to a physical TPM severely limits the scenarios under which dynamic VM migration is possible.

To address these challenges, the Titanium Control platform has introduced an industry-leading virtual TPM (vTPM) feature. The vTPM comprises a complete software-based implementation of TPM 2.0. The vTPM device is instantiated on the trusted, secure Titanium Control host with its data encrypted, enabling Titanium Control to manage the vTPM device and its non-volatile data securely throughout the VM’s lifecycle, including migrating it along with the relevant VM. The vTPM device is configured exactly like a physical TPM and no changes are required to the application itself, which behaves exactly as if it had booted from a physical TPM.

For industrial control applications, it is critical to ensure an end-to-end boot process that is fully secure. With Titanium Control, vTPM is a crucial element of that boot process, which runs in the Unified Extensible Firmware Interface (UEFI) mode.

 

The vTPM feature in Titanium Control enables Wind River’s customers to achieve the full range of benefits from virtualized industrial control functions without compromising on the security that is an absolute requirement for critical infrastructure.

 

This post has only scratched the surface of the features in Titanium Control, all designed to streamline the deployment and operation of virtualized industrial control applications. If you’d like to know more about Titanium Control, please check out the information online or contact us to arrange a face-to-face discussion.

Unless you’ve been vacationing far off the grid, you’ll have read about Intel’s announcement of the Intel® Xeon® Scalable platform, the highest-performance, most versatile data center platform ever. The new platform brings massive advantages to customers such as cloud and communications service providers, enterprises, high-performance computing providers and artificial intelligence companies.

 

With up to 28 of the highest-performance cores, the all-new Intel Xeon Scalable platform can support up to 4.2x more Virtual Machines (VMs) and up to 5x more transactions per second than four-year-old systems. Configured for storage, it delivers up to 5x more IOPS at up to 70 percent lower latency to quickly commit data to storage or bring it forward for analysis. For communications, the Intel Xeon Scalable platform can power emerging 5G networks, offering up to 2.5x better performance for key networking applications.

 

As part of Intel, the Wind River software team has been working closely with our colleagues on the processor team to ensure that the Titanium Cloud portfolio of virtualization software products enables critical infrastructure applications to leverage the innovations in the new platform. The most recent version of Titanium Cloud, released just a few days ago, incorporates an extensive set of optimizations to achieve this. Fully validated with the Intel Xeon Scalable platform, the Titanium Cloud portfolio is the optimum infrastructure software solution for telecom and industrial control applications with stringent requirements for uptime, performance and security.

 

As one example of these optimizations, Titanium Cloud enables guest VMs to leverage the full AVX-512 instruction set, making the Titanium Cloud portfolio an ideal choice for deploying real-time critical, computationally intensive workloads such as those in 5G networks. Further, for 5G and other applications like Cloud RAN (CRAN) that require extremely low latency, the Titanium Cloud low latency profile can be leveraged to provide best-in-class latency to the guest.
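Incidentally, a quick way to confirm from inside a Linux guest that AVX-512 has actually been exposed to the instance is to look at the CPU flags; the snippet below is a generic check, not something specific to Titanium Cloud:

```python
# Check from inside a Linux guest whether AVX-512 foundation instructions
# are exposed to the VM (generic check, not specific to any platform).
def guest_has_avx512(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "avx512f" in line.split()
    return False

if __name__ == "__main__":
    print("AVX-512F exposed to this guest:", guest_has_avx512())
```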

 

All four products within the Titanium Cloud portfolio leverage the innovations in the new Intel Xeon Scalable platform:

  • Titanium Control for secure, on-premise industrial control infrastructure;
  • Titanium Core for service provider data centers, Central Offices (COs) and Points-of-Presence (PoPs);
  • Titanium Edge for ultra-reliable telco edge applications like Multi-access Edge Computing (MEC);
  • Titanium Edge SX for small-footprint telco edge applications like Universal Customer Premise Equipment (uCPE).

 

If you’d like to know more about the Titanium Cloud portfolio of products that enables the Intel Xeon Scalable platform for critical infrastructure applications, please check out the information online or contact us to arrange a face-to-face discussion.

“Springtime in Paris”: what images does that phrase conjure up for you?

 

Well, if you’re a hotshot engineer working on the OPNFV project, it’s a vision of long days and late nights huddled over racks of hot, noisy servers while debugging complex interoperability problems. At least, that’s what we heard from our representatives to the recent OPNFV Plugfest….

 

OPNFV recently held their third Plugfest, hosted by Orange at their beautiful “Orange Gardens” facility in Paris from April 24th through April 28th, 2017. The plugfest was well attended, with 87 participants from 29 organizations that included six service providers.

 

The focus of this Plugfest was the new OPNFV Danube release and the final report was published a few days ago. It’s a fascinating summary of all the recent OPNFV achievements. As the report explains, “the Plugfest gave end-users and vendors the ability to try new hardware, installers, scenarios, test cases and tools while coming together as a community and solving problems collaboratively”.

 

Wind River’s Titanium Cloud NFV infrastructure software platform featured prominently in the week’s testing activities. Three different hardware platforms, from Advantech, Huawei and Lenovo, used Titanium Cloud for their NFVI software.

 

One application of the Titanium Cloud platform was successfully hosting the Dovetail test suite that forms the basis of the upcoming Compliance Verification Program (CVP). Another was in multi-site testing: a Titanium Cloud region was connected to a region deployed using OPNFV Danube, with the two regions sharing common OpenStack services that included Keystone, Cinder and Glance. Using this environment, Dovetail, Yardstick, Functest and Storperf tests were all successfully executed, one region at a time.

 

As we talk to our customers about “compatibility”, we find the challenge is increasingly about demonstrating interoperability with other companies in the NFV ecosystem. From the service providers' point of view, open standards avoid the risk of vendor lock-in by encouraging the development of compatible and interoperable solutions by multiple companies. But service providers typically incorporate products from more than one vendor in the complete solution that they deploy, so they need proof that products that should work together seamlessly actually do so. As OPNFV continues to make progress, we expect it to become a de facto standard against which all NFV and NFVI vendors will have to test their solutions.

 

Plugfests like the one hosted by Orange are vital as the telecom industry transitions from traditional, fixed-function equipment to dis-aggregated virtualized solutions that leverage best-in-class products from multiple vendors. Events like this demonstrate that an industry-wide ecosystem can successfully form around open standards and deliver interoperable, compatible products.

 

At Wind River, we’re proud to have contributed to all three OPNFV Plugfests held so far and we look forward to even more exciting and advanced accomplishments at the next one.

Telco edge applications bring new revenue opportunities

 

Telecom service providers worldwide are increasingly viewing applications hosted at the network edge as compelling business opportunities. While revenues from traditional broadband and voice services are essentially flat, edge applications present opportunities to sell new kinds of services to new kinds of customers. This provides the potential for increased market penetration as well as improved Average Revenue per User (ARPU) for those service providers who can be early to market with attractive offerings.

 

Some examples of edge-hosted applications that are generating wide interest are Multi-access Edge Computing (MEC), Universal Customer Premise Equipment (uCPE) and Software-Defined Wide Area Network (SD-WAN).

 

By bringing content and applications to data centers in the Radio Access Network (RAN), MEC allows service providers to introduce new types of services that are unachievable with cloud-hosted architectures because of latency or bandwidth constraints. Typical applications of MEC include smart stadiums, Augmented Reality (AR), Virtual Reality (VR), Vehicle-to-Everything (V2X) communication, mobile HD video and premium TV. Similarly, a combination of uCPE and SD-WAN enables the delivery of managed services to enterprise customers at lower operational cost and with greater agility than via traditional physical appliances, opening up new revenue potential.

 

Virtualization challenges

 

All these edge applications rely on virtualization to ensure maximum agility in their orchestration and management as well as optimized, dynamic resource allocation. At the same time, they typically need to be deployed on low-cost, low-power server platforms that are priced appropriately for small branch offices and other remote locations. To avoid costly truck rolls and service calls, the servers need to support automatic installation, provisioning and maintenance, while communicating with a centralized orchestrator.

 

These requirements place severe constraints on the virtualization software platform installed on these servers to run the virtualized applications. Despite running on low-cost hardware with limited CPU, memory and storage resources, the platform needs to support “hands off” deployment, remote management, high performance (many edge applications require real-time response) and of course robust, end-to-end security.

 

Introducing Titanium Edge SX

 

Enabling service providers to efficiently address a wide range of edge-hosted applications, Wind River has introduced the Titanium Edge SX edge cloud platform. As the latest product within the Titanium Cloud portfolio, Titanium Edge SX consolidates control, compute and storage nodes into an integrated software package that can be deployed in a “simplex” configuration on a single low-cost physical server, meeting the requirements described above.

 

“Interest is gaining among telecom service providers for edge applications such as managed services, served from CO or uCPE devices, and Multi-access Edge Computing (MEC) as a way to target new sources of revenue beyond their traditional customer base,” stated Michael Howard, Senior Research Director and Advisor, IHS Markit. “These use cases require a virtualization compute platform that delivers low latency, high throughput and end-to-end security with options to run on low-cost, small-footprint servers and devices. Service providers will want to examine Wind River’s Titanium Edge SX software, based on open standards and coupled with application software from their ecosystem partners.”

 

Based on standard open-source software such as Linux, OpenStack, KVM, DPDK and Ceph, Titanium Edge SX incorporates a wide range of enhancements to achieve the levels of performance, platform management, security and Virtual Machine (VM) lifecycle management that are required for service provider infrastructure. As an active contributor to all these open-source projects, Wind River upstreams patches back to the community, collaborating with other companies to solve important industry problems and deliver features critical to the telecom market.

 

High performance with operational simplicity

 

To support interactive, real-time MEC applications like Augmented Reality, Virtual Reality, connected vehicles and tele-surgery, Titanium Edge SX delivers guest VMs an ultra-low average interrupt latency of 3µs, leveraging its Low Latency compute profile and powered by a comprehensive set of enhancements to the integrated KVM hypervisor. At the same time, its accelerated virtual switch (vSwitch) delivers up to 40x the switching performance of kernel-based vSwitches, thereby minimizing the number of processor cores required for switching and maximizing the number available for running VMs. This in turn maximizes the number of users supported on a single server, which is key to reducing the overall cost-per-subscriber.

 

Titanium Edge SX helps service providers to minimize their network operations costs, typically saving millions of dollars in installation, commissioning and maintenance when compared to Roll-Your-Own solutions. The platform is delivered as a single, pre-integrated image installed with no manual intervention, while its intelligent orchestrated patching engine allows up to hundreds of nodes to be upgraded quickly and minimizes the length of maintenance windows.

 

Comprehensive security

 

Edge computing applications are often installed by the end users themselves, in open environments away from service provider data centers, Central Offices (COs) or Points-of-Presence (PoPs). These factors present unique security risks, which are addressed in Titanium Edge SX by a comprehensive set of end-to-end security features. These include: Unified Extensible Firmware Interface (UEFI) secure boot; cryptographically signed images for host protection; virtual Trusted Platform Module (TPM) device for highest-security VM deployments; Transport Layer Security (TLS) with certificate storage in TPM hardware (where available); secure keyring database for storage of encrypted passwords.

 

Collectively, these features and others ensure that edge applications running on Titanium Edge SX are protected against threats wherever they originate.

 

End-to-end solutions

 

In order to accelerate the deployment of complete end-to-end solutions based around Titanium Edge SX, Wind River collaborates with industry-leading companies through the Titanium Cloud ecosystem. These collaborations include in-depth technical validations of hardware and software products, verifying that those products work correctly with Titanium Edge SX and thereby accelerating service providers’ deployment cycles for multi-vendor use cases.

 

As one example of a validated edge solution, Wind River has worked with DellEMC and Saguna on an integrated MEC platform, explained in this webinar. Similarly, collaboration with DellEMC and Versa Networks resulted in an optimized uCPE/vCPE solution described in this solution brief.

 

All in the family

 

Titanium Edge SX is the latest product in the Titanium Cloud portfolio of virtualization platforms. Under the overall Titanium Cloud architecture, the following products are now available:

  • Titanium Core for deployment in data centers, COs and PoPs with up to several hundred servers;
  • Titanium Edge for dual-server configurations at the network edge or customer premise;
  • Titanium Edge SX for single-server configurations at the network edge or customer premise;
  • Titanium Control for industrial control applications in areas such as energy, smart buildings, manufacturing and healthcare.

 

For more information

 

This post has only scratched the surface of the features in Titanium Edge SX, all designed to streamline the deployment and operation of telco edge-hosted applications. If you’d like to know more about any of the Titanium Cloud products, please check out the information online or contact us to arrange a face-to-face discussion.

May and June are intense trade show months for the telecom industry. Even skipping two or three that would have been relevant if we had unlimited bandwidth, over the past few weeks our Wind River team has been spread between NFV World Congress, OpenStack Summit, DellEMC World, Big Communications Event, Network Virtualization and SDN Europe, Canadian Telecom Summit, Interop Tokyo and OPNFV Summit. That’s a lot of events in a short amount of time, giving us some powerful insights into where service providers worldwide are focused right now in terms of growth opportunities. We’ve found that a hot topic in conversations and presentations at all these events has been Multi-access Edge Computing, or MEC.

 

This time last year, “MEC” stood for Mobile Edge Computing. Then, after a MEC group was formed by ETSI to work on Application Programming Interfaces (APIs), the leaders realized that the benefits of the technology were not limited to mobile access but were equally applicable to other access networks, such as Wi-Fi and fixed access. Hence the new term Multi-access Edge Computing, and no need to replace all the T-shirts that already say MEC.

 

Service providers worldwide are aggressively pursuing MEC projects because this application promises not only operational cost reductions but also significant opportunities to generate new revenue from new customers. By bringing content and applications to data centers in the Radio Access Network, MEC allows service providers to introduce new types of services that are unachievable with cloud-hosted architectures because of latency or bandwidth constraints.

 

As we talk to service providers about the specific new business opportunities that they see around MEC, we consistently hear four applications mentioned:

  • Small-cell services for stadiums and other high-density locations, where by deploying applications hosted at the network edge (i.e. in the stadium itself), stadium owners and service providers can offer a wealth of integrated services that include real-time delivery of personalized content to fans’ devices. In this case MEC also minimizes backhaul loading because the new traffic is both generated and delivered locally.
  • Augmented Reality (AR), Virtual Reality (VR) and tactile Internet applications, which are just not viable without super-fast response times, local image analytics and deterministic, low latency communications. Use cases like remote medical diagnostics and tele-surgery will demand millisecond response times, far quicker than those achievable by round-trip communications with a remote cloud data center.
  • Vehicle-to-Everything (V2X) communication with high bandwidth, low latency, guaranteed availability and robust security. Whether the use case is Vehicle-to-Vehicle or Vehicle-to-Infrastructure, it’s impossible to meet these performance requirements with a centralized, cloud-hosted compute model.
  • Mobile HD video and Premium TV with end-to-end Quality of Experience (QoE), enabling a superior user experience as well as smarter utilization of network resources. Content providers have learned the hard way that video quality is critical for subscriber retention: studies show that a one-second rebuffering event during a 10-minute premium service clip causes a 43% drop in user engagement. MEC allows service providers to optimize video content, ensuring a fast start and smooth delivery.

 

No single company has expertise in all the product areas required for a complete end-to-end MEC solution. It’s a classic opportunity for effective ecosystem-based collaboration, which is how we’ve approached the market at Wind River. Through our Titanium Cloud ecosystem, we partner both with vendors who provide hardware ideally suited for running the Titanium Core and Titanium Edge virtualization platforms as well as with software companies who specialize in MEC applications.

 

For MEC, we’ve cooperated with two of our partners, DellEMC and Saguna, to deliver a validated, pre-integrated end-to-end solution that can be deployed immediately. We’ve demonstrated this solution at most of the events mentioned above, with massive interest from attendees.

 

This demonstration will be part of a webinar on Tuesday June 20th hosted by SDxCentral, titled “Enabling MEC as a New Telco Business Opportunity”. 

 

During the webinar, DellEMC, Saguna and Wind River will explain how the combination of DellEMC’s servers, Saguna’s MEC middleware and Wind River’s virtualization platform enables service providers to accelerate their deployment of MEC-enabled applications, benefitting from the business opportunity to sell new kinds of services to new kinds of customers.

 

Delivering MEC services that constitute a great user experience while minimizing operational costs is not easy. It requires a combination of application software, platform software and hardware that have been optimized for this application and integrated to work together seamlessly.

 

We suggest you register for the webinar now to learn more about this joint solution. Then we look forward to more conversations about how we can help in deployments of this technology that promises to address service providers’ twin goals of achieving top-line revenue growth at the same time as operational cost reductions.

It’s taken a few years, but the telecom industry seems to have finally accepted that OpenStack is viable for deployments in networks that require high levels of service availability, performance and security. The key is selecting the right OpenStack platform: you need one that’s been properly hardened, packaged and stress-tested to ensure it meets the stringent requirements of service providers.

 

Telco clouds based on Wind River’s Titanium Cloud architecture have been proven to address the critical challenges presented by deployments in a wide range of core, edge and access use cases.

 

If you’d like to learn more about how the Titanium Core and Titanium Edge virtualization platforms solve some very key problems for virtual CPE, the white paper “Overcoming OpenStack Obstacles in vCPE” provides an in-depth explanation.

 

For more on the vCPE topic, you might also be interested to watch the recording of our webcast with Peter Willis from BT, in which Peter discusses specific OpenStack problems identified by BT and we review how Wind River’s products have solved them.

 

Now it’s time to take a look at another important OpenStack topic: how to guarantee predictable performance for applications like telecom networks, whose performance requirements are much more challenging than those found in the kind of enterprise applications for which OpenStack was originally designed.

 

At OpenStack Summit in Boston this month, two of Wind River’s leading OpenStack experts will be presenting a paper titled “Behind the Curtains: Predictable Performance with Nova”. Ian Jolliffe and Chris Friesen will provide detailed insights not only into the techniques that enable predictable performance but also into how to troubleshoot problems when something goes wrong. If you’re going to be at the OpenStack Summit, you’ll want to check out this session; if not, you can always watch the recording later.

 

As Ian and Chris will explain in their talk, there are five fundamental problem areas that must be addressed in order to build an OpenStack implementation that delivers truly predictable performance: CPU, memory, PCI bus, networking and storage. (At this Summit they’ll cover only the first four; you’ll have to join us at a subsequent event to learn about storage.)

 

The primary CPU-related challenges are all about contention. Multiple guests contending for the same resources can result in unpredictable performance for all, so various techniques must be employed to prevent this situation. These include reducing or disabling CPU overcommit, setting the correct CPU thread policy and using dedicated CPUs for each guest. CPU cache contention can be avoided by the use of an appropriate CPU thread policy and also by leveraging Intel’s Cache Allocation Technology. CPU contention between host and guest(s) should be eliminated by avoiding System Management Interrupts (SMIs) as well as by controlling CPU “stealing” by host processes and kernel threads.
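To make these techniques concrete, here’s a minimal sketch of how a flavor requesting dedicated CPUs and an isolated thread policy could be created through the Nova API using python-novaclient. The endpoint, credentials and flavor name are placeholders, and the compute hosts must also be configured for CPU pinning; treat this as an illustration rather than a recipe.

# Sketch: create a Nova flavor whose guests get dedicated host CPUs and an
# isolated thread policy (placeholder endpoint, credentials and names).
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client

auth = v3.Password(auth_url="http://controller:5000/v3",   # placeholder endpoint
                   username="admin", password="secret",    # placeholder credentials
                   project_name="admin",
                   user_domain_id="default",
                   project_domain_id="default")
nova = client.Client("2", session=session.Session(auth=auth))

flavor = nova.flavors.create(name="pinned.medium", ram=4096, vcpus=4, disk=40)
flavor.set_keys({
    "hw:cpu_policy": "dedicated",       # pin each vCPU to its own host pCPU, no CPU overcommit
    "hw:cpu_thread_policy": "isolate",  # keep hyperthread siblings free of other guests
})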

 

There are three memory-related areas that need to be addressed to avoid performance problems. Memory contention can be avoided by reducing overcommit, by turning it off altogether or by enabling hugepages. Memory bandwidth contention can be more complicated: it’s important to ensure that each guest NUMA node is mapped to a unique host NUMA node, while also distributing Virtual Machines (VMs) across NUMA nodes to spread memory accesses. Finally, contention for the host TLB (the cache of virtual-to-physical page mappings) can be minimized through the use of hugepages as well as dedicated CPUs.
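Continuing the hypothetical flavor sketch above, the same extra-specs mechanism can request hugepage-backed memory confined to a single NUMA node (the hosts must have 1 GB hugepages reserved at boot for this to work):

# Sketch, continuing the flavor created earlier: confine the guest to one
# host NUMA node and back its RAM with 1 GB hugepages to limit memory
# bandwidth and TLB contention.
flavor.set_keys({
    "hw:numa_nodes": "1",        # map the whole guest onto a single host NUMA node
    "hw:mem_page_size": "1GB",   # back guest memory with 1 GB hugepages
})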

 

PCI bus contention is eliminated by ensuring that each PCI bus is connected to a single host NUMA node, so that instances using PCI devices will be affined to the same NUMA node as the device itself.
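As an illustration, a flavor can request a passthrough device through a Nova PCI alias; the alias name below is hypothetical and must match the operator’s definition in nova.conf, and the scheduler then takes the device’s NUMA locality into account when placing the instance:

# Sketch, continuing the flavor created earlier: request one device matching
# the hypothetical "fast_nic" PCI alias defined by the operator in nova.conf.
flavor.set_keys({"pci_passthrough:alias": "fast_nic:1"})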

 

Networking must also be considered. It’s important to avoid cross-NUMA traffic, so VMs should be located on the same NUMA node as the physical switch they’re connected to, while virtual switch (vSwitch) processes should be configured with NUMA awareness of their physical NIC PCI termination and NUMA instance placement. PCI PassThrough and SR-IOV will also be impacted by crossing NUMA nodes. To avoid network bandwidth contention, all instances connecting to the same host network should use the same host NIC, which ideally should be a 10G NIC or better.

 

There are several approaches that should be considered in order to address limitations in host-guest network bandwidth. Emulated NICs are slow and para-virtualized NICs are faster, as long as the guest supports them. PCI PassThrough and SR-IOV are faster still because they ensure that the PCI device is “passed through” into the guest, though they require the presence of a suitable device driver in the guest, still have the overhead of virtual interrupts and can be challenging to configure initially. The fastest guest will be one that’s based on DPDK and leverages a Poll Mode Driver (PMD), though this does consume more power because the guest is running a tight loop.
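For example, an SR-IOV virtual function can be requested at port-creation time by setting the port’s VNIC type to “direct”. The sketch below uses python-neutronclient with a placeholder network UUID and reuses the authentication object from the earlier flavor example; the host must of course expose SR-IOV virtual functions on the corresponding physical network.

# Sketch: create a Neutron port backed by an SR-IOV virtual function rather
# than an emulated or para-virtualized NIC (placeholder network UUID; "auth"
# comes from the earlier flavor example).
from keystoneauth1 import session
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(session=session.Session(auth=auth))
port = neutron.create_port({"port": {
    "network_id": "NETWORK_UUID",     # placeholder
    "name": "sriov-port",
    "binding:vnic_type": "direct",    # request an SR-IOV VF for the guest
}})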

 

Lastly, Enhanced Platform Awareness (EPA) is critical in ensuring predictable performance for the applications themselves. EPA enables optimized VM placement through awareness of NUMA topology, memory requirements, NIC capabilities, acceleration engines and hyperthreading.

 

By implementing all these techniques and more, the Titanium Cloud architecture has been proven to deliver the level of predictable performance required for telco networks, which is one of the reasons why it’s being widely adopted as service providers transition to network virtualization.

 

As a leading contributor to the OpenStack community and with a major focus on solving telco-related problems in Nova, Wind River is pleased to have the opportunity to share more details of these performance techniques with the rest of the community. We hope you can join Ian and Chris for their upcoming OpenStack Summit session; if you can’t make it, we encourage you to watch the recording afterwards.

This month, we’ve seen a couple of announcements from the Linux Foundation that are highly relevant to the networking and telecom markets. As a strong contributor to open-source projects for these markets, Wind River is glad to be deeply involved in both initiatives.

 

First, on April 3rd it was announced that the DPDK (Data Plane Development Kit) Project has moved to The Linux Foundation.

 

DPDK consists of libraries that accelerate packet-processing workloads running on a wide variety of CPU architectures. By enabling very fast packet processing, DPDK is making it possible for the telecommunications industry to move performance-sensitive applications, such as the mobile network backbone and voice services, to the cloud.

 

DPDK was created in 2010 by Intel and made available under a permissive open source license.

 

As a long-time contributor to DPDK, Wind River is a silver member of the project. DPDK support is integrated into products such as Wind River Linux and the Titanium Cloud portfolio of carrier grade virtualization platforms. Our customers and partners leverage the optimized packet processing enabled by DPDK to maximize the system-level performance of their software and to maximize the VM density achieved in virtualized environments.

 

A couple of days after the DPDK announcement came the news that another Linux Foundation project, the Open Network Automation Platform (ONAP) Project, had released its code and documentation to the community for increased collaboration. ONAP also welcomed additional members to the project, including Wind River.

 

The ONAP project was formed in February as the merger of the open source ECOMP and Open Orchestrator Project (OPEN-O) projects. The objective of ONAP is to allow end users to automate, design, orchestrate, and manage services and virtual functions. AT&T, China Mobile and many other leading service providers are driving ONAP, together with a diverse group of member companies.

 

Wind River was deeply involved in the OPEN-O project and validated the operation of OPEN-O with the Titanium Cloud virtualization platform. As leader of the multi-VIM driver project, we partnered with China Mobile, Huawei, Intel, VMware and ZTE to develop this code for inclusion in Release 2. At Mobile World Congress in February, we participated with China Mobile and Intel in a demonstration that featured OPEN-O as the service orchestrator managing Virtual Network Functions (VNFs) to deploy an Augmented Reality service running on a virtual eNodeB at a Cloud RAN (C-RAN) node.

 

All these functions in the demo were running on the Titanium Cloud platform, selected by China Mobile to guarantee the level of uptime that service providers require while also delivering the ultra-low-latency network performance that is critical for edge applications like C-RAN, Mobile Edge Computing (MEC) and virtual eNodeB. We’re well positioned to leverage this OPEN-O experience to help accelerate the success of ONAP.

 

As a major contributor to open-source projects covering multiple markets, we’re delighted to have the opportunity to collaborate with our colleagues in both these new Linux Foundation communities. Both DPDK and ONAP are key elements of our end users’ strategies and we look forward to helping them progress as quickly as possible.

A few weeks ago, we published a post, “Tapas and Testing: Two Weeks in Madrid”, summarizing Wind River’s involvement in ETSI’s first “NFV Plugtest” event, which was hosted by the 5TONIC Laboratory in Leganes, near Madrid, from January 23rd through February 3rd. In that post, we summarized the success of Wind River’s Titanium Cloud network virtualization platform in demonstrating full interoperability with nine MANO solutions and fifteen VNFs, some of which were open-source projects while others were commercial offerings.

 

As promised, ETSI has now published its report on the Plugtest. Reviewing this 44-page document, it’s clear that the teams spent far more time performing interoperability tests than consuming tapas, because the results are extremely comprehensive and detailed.

 

This report is a great read for anyone interested in understanding the complexity of evaluating multi-vendor interoperability for end-to-end NFV solutions. Starting with a description of the technical and project management approach to the plugtest, it then summarizes the various functions that were tested, comprising VNFs, MANO solutions, VIMs and NFVI platforms. It provides a wealth of detail on the test infrastructure, the test procedures and the test plans themselves.

 

For many readers, the bottom line will be the comprehensive interoperability results presented in the report as well as the summary of the overall plugtest outcome.

 

All in all, this report is definitely recommended reading for anyone involved in NFV.

 

Plugtests like this one hosted by ETSI are vital as the telecom industry transitions from traditional, fixed-function equipment to disaggregated, virtualized solutions that leverage best-in-class products from multiple vendors. Events like this demonstrate that an industry-wide ecosystem can successfully form around open standards and deliver interoperable, compatible products. And of course, there was occasionally some downtime for sampling the excellent tapas in Madrid.

 

Wind River was proud to contribute to this inaugural ETSI Plugtest and we look forward to even more exciting and advanced accomplishments at the next one.
