
Back in 2012, telecom service providers worldwide launched a collaborative initiative to establish open standards for the virtualization of telecom networking functions, which came to be known as Network Functions Virtualization (NFV). Given the fierce competition in this market as well as anti-trust concerns, this was very much the equivalent of Donald Trump sitting down for a chat with Kim Jong Un, though the modest venue for the first NFV conference was Darmstadt, Germany rather than a slightly more luxurious island off Singapore.

 

With service providers facing fierce price pressure as well as exploding demand for network bandwidth, the initial goals for NFV were to leverage software, virtualization, commodity servers and open standards, in order to accelerate the deployment of new revenue-generating services while at the same time lowering network operational costs (OPEX).

 

While some of these business objectives have been achieved in early NFV deployments, over the past couple of years service providers have concluded that more drastic advances are needed to meet their long-term goals: delivering compelling end-user experiences from highly-efficient operations based on the modern technologies pioneered by the Web 2.0 giants. “Zero-Touch Automation” (ZTA) is now the focus, and the shift in the industry’s attention has been so pronounced that an event like the former “NFV World Congress” quickly morphed into the “NFV and Zero-Touch Automation World Congress”. Conference organizers are experts at micro-targeting.

 

This post outlines some of the key concepts behind ZTA and explains some examples of the benefits it will deliver to both customers and service providers.

 

“Zero-touch deployment” and “zero-touch provisioning” are familiar terms used to describe the automation of IT and data center infrastructure. In the telecom industry, ZTA extends this concept of automation beyond the initial installation phase to cover the entire lifecycle of network operations including planning, delivering, onboarding, monitoring, updating and, ultimately, decommissioning of services. ZTA will move telecom networks from today’s automatic functions to fully-autonomous operations that bring significant top-line revenue improvements as well as sustainable reductions in operational costs.

 

For most service providers, today’s service planning process is mostly manual and therefore both error-prone and time-consuming. An enterprise customer may deal with a network consultant who works on the design, a service manager who creates the actual service order and of course a salesperson who manages the commercial transaction. With ZTA, the customer can directly specify their service requirements via an online dashboard, the design automatically drives the order entry process and an API integrates that order directly into the supply chain. A process that used to take weeks now proceeds at the pace desired by the customer and can be completed in minutes with a dramatically reduced risk of errors.
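
As an illustration of how “an API integrates that order directly into the supply chain” might look in practice, here is a minimal sketch of a dashboard submitting a service order to an operator’s northbound API. The endpoint, fields and payload are hypothetical, invented purely for illustration.

```python
import requests

# Hypothetical self-service portal API; endpoint and payload are illustrative only.
ORDER_API = "https://portal.example-operator.com/api/v1/service-orders"

order = {
    "customer_id": "ENT-10042",
    "service_type": "sd-wan",
    "bandwidth_mbps": 500,
    "sites": ["london-hq", "frankfurt-dc"],
    "activation": "asap",
}

response = requests.post(ORDER_API, json=order, timeout=10)
response.raise_for_status()

# The returned order ID would then flow straight into the supply-chain and billing systems.
print("Order accepted:", response.json().get("order_id"))
```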

 

In many cases, new services ordered by the customer will require the delivery of new equipment. In a typical flow today, that equipment has to be procured from a warehouse and delivered to a technician, who is required to configure it manually (and hopefully correctly) based on the service order generated during the planning process. Then the equipment is shipped to the customer site for installation. Typically, that’s a weeks-long process. With ZTA, the equipment is configured automatically by a controller that retrieves the device information itself, minimizing the risk of configuration errors and shortening the shipping lead time from weeks to days.

 

For most service providers today, the service onboarding process is manual. Once an appointment is scheduled at the customer site, a “truck roll” is required for a technician to go on-site, set up the new service and test it. ZTA streamlines the process: once the new equipment arrives and the customer plugs it in, it “phones home” to the controller that automatically verifies service activation, integrates the service into the billing system and validates on-going usage. From the customer’s perspective, this is a self-on-boarding process that takes only a few minutes, with no need to schedule a visit from a technician.
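
A minimal sketch of what the device-side “phone home” step could look like, assuming a hypothetical controller API (the URLs, paths and payloads are invented for illustration and do not represent any specific product):

```python
import time
import requests

# Hypothetical zero-touch flow: on first boot the CPE registers itself with the
# controller using its serial number, then pulls its service configuration.
CONTROLLER = "https://ztp-controller.example-operator.com/api/v1"
SERIAL = "CPE-SN-0012345"

def phone_home(retries=10, delay=30):
    for attempt in range(retries):
        try:
            resp = requests.post(f"{CONTROLLER}/devices/register",
                                 json={"serial": SERIAL}, timeout=10)
            resp.raise_for_status()
            config = requests.get(f"{CONTROLLER}/devices/{SERIAL}/config",
                                  timeout=10).json()
            return config          # apply the returned service configuration locally
        except requests.RequestException:
            time.sleep(delay)      # network may not be fully up yet; retry
    raise RuntimeError("Unable to reach controller for zero-touch provisioning")
```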

 

Despite some level of automation in today’s networks, monitoring remains imperfect. In many cases, an irate phone call from the customer is the first notification the service provider receives of a problem with that customer’s service, and that call triggers an involved debugging process to isolate the root cause and fix the problem, which could lie at the customer site, in the WAN or in the service provider’s infrastructure. ZTA equips the network to collect and analyze telemetry data automatically, increasingly with the assistance of Artificial Intelligence (AI). This enables proactive monitoring of customers’ services and the automatic escalation of any failures that cannot be addressed automatically. Skilled technicians can focus on complex problems rather than routine “break-fix” tasks that are handled automatically.
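
As a toy illustration of the proactive-monitoring idea (not any specific product’s analytics), the sketch below keeps a sliding window of latency samples and flags outliers for automatic escalation:

```python
from collections import deque
from statistics import mean, stdev

# Keep a sliding window of recent latency samples and flag strong deviations.
# Real ZTA systems use far richer telemetry and ML models; this only sketches the idea.
WINDOW = 100
samples = deque(maxlen=WINDOW)

def check_latency(latency_ms, threshold=3.0):
    """Return True if the new sample looks anomalous versus the recent window."""
    anomalous = False
    if len(samples) >= 30:
        mu, sigma = mean(samples), stdev(samples)
        if sigma > 0 and abs(latency_ms - mu) > threshold * sigma:
            anomalous = True   # escalate to assurance instead of waiting for a complaint
    samples.append(latency_ms)
    return anomalous
```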

 

In many ways, the current update process for customers’ services resembles the initial planning process. It typically involves multiple service provider personnel, working on manual changes to the network design, the service order and the documentation, as well as an on-site technician who ultimately implements the update. With ZTA, service updates are an on-demand process initiated by the customer via their self-service dashboard. Implemented as virtualized software applications, services can be freely added, deleted, upgraded and restored remotely, with billing changes implemented automatically and accurately.

 

With today’s systems, decommissioning services for a customer is a manual process that involves multiple service provider staff who need to create a service order, execute the transaction, update the billing system, actually tear down the service and potentially ensure the return of the equipment. With ZTA, service deletion is an on-demand process initiated by the customer via their self-service dashboard, with billing changes implemented automatically.

 

ZTA will lead to massive improvements in customer experience while service providers will benefit from improvements in operational efficiency and sustainable reductions in OPEX. They will also be better positioned to benefit from the skills of software-oriented millennial engineers and innovative third-party software vendors.

 

At Wind River, we’re delighted to be contributing to the realization of the ZTA concept and the transformation of telecom networks from automatic to autonomous. The Wind River Titanium Cloud portfolio of software platforms delivers a range of features and capabilities that streamline many aspects of ZTA.

 

If you’d like to know more about Titanium Cloud, please browse the information available online or contact Wind River to arrange a face-to-face discussion.

Can you complete this list: Louisville, Durham, Paris, Portland, ….? If so, and if you correctly answered “Sophia Antipolis”, you’re either a geek who spends way too much time online reading network test reports or a battle-hardened engineer who’s lugged bulky test equipment through too many airports and is now returning home from two weeks in the South of France with a nice tan, an appreciation for French cuisine and stories of mangled IP addresses. Sophia Antipolis was the location of the just completed OPNFV plugtest.

 

Actually, this was the first plugfest to be co-located and run jointly by OPNFV and ETSI. If we add the two previous ETSI plugtests into the list, it now reads Louisville, Durham, Madrid, Paris, Portland, Sophia Antipolis and Sophia Antipolis again, which sort of spoils the guessing game.

 

As in the case of the previous six plugtests, the primary objective of this one was to determine interoperability of vendors’ solutions as well as compatibility with published specifications.

 

As we talk to service providers about “compatibility”, we find the challenge is increasingly about demonstrating interoperability with other companies in the NFV ecosystem. Open standards avoid the risk of vendor lock-in by encouraging the development of compatible and interoperable solutions by multiple companies. But service providers typically incorporate products from more than one vendor in the complete solution that they deploy, so they need proof that products that should work together seamlessly actually do so. At the same time, they expect to leverage innovative hardware and software products being developed to address this market.

 

Since Wind River’s Titanium Cloud NFV Infrastructure (NFVI) platform provides critical features and capabilities for NFV, we were delighted that three of our experts were able to participate in the plugtest and work with colleagues from many companies on wide-ranging interoperability testing.

 

In due course, ETSI and OPNFV will publish the final report from the plugfest. This will include a detailed report of the interoperability tests performed by experts from a wide range of hardware, software, systems and semiconductor companies. From our perspective, the results were impressive and confirm industry-wide progress towards compatibility with open standards and interoperability between vendors.

 

The Titanium Cloud platform was demonstrated to interoperate with 14 Virtual Network Functions (VNFs) and 10 orchestration (MANO) products. All these tests involved fully functional service chains comprising two or more VNFs. In many cases, this was the first time that we had tested these VNFs or MANOs with Titanium Cloud, indicating a high degree of maturity in the products out-of-the-box, with no tweaking required.

 

Titanium Cloud was also used as the NFVI platform in three demos. The first was a multi-VNF demo of network service orchestration that showcased the orchestration of firewall and load balancer VNFs. The second involved four blueprints for network, service and subscriber experience visibility in an NFV environment, including orchestration, a virtual tester and a virtual probe. The third was a 4G network steering application running on multiple NFVIs, one of which was Titanium Cloud. All these demos ran smoothly.

 

Plugfests like the one hosted by ETSI and OPNFV last week are vital as the telecom industry transitions from traditional, fixed-function equipment to disaggregated, virtualized and cloudified solutions that leverage best-in-class products from multiple vendors. Events like this demonstrate that an industry-wide ecosystem can successfully form around open standards and deliver interoperable, compatible products.

 

At Wind River, we’re proud to have contributed to all the ETSI and OPNFV plugtests held so far. We look forward to even more exciting and advanced accomplishments at the next one, regardless of which exotic location is selected then.

The business-level benefits of virtualization are becoming well understood in applications such as industrial control and telecom. Companies worldwide have analyzed real-world use cases and concluded that major savings in lifetime operational costs are achievable by deploying software-based, virtualized systems compatible with open standards, instead of traditional fixed-function physical equipment based on proprietary, vertically integrated architectures. There is unstoppable momentum behind industry initiatives like Open Process Automation Forum (OPA) for industrial control and Open Platform for NFV (OPNFV) for telecom.

 

This move to virtualized infrastructure, however, has brought justifiable concerns about security. Software-based systems are increasingly deployed in network edge locations that are unattended and/or where physical security cannot be guaranteed.

 

A control system for a wind turbine on a hillside and a basestation at a remote antenna site are both vulnerable to malicious attacks and attempts to infiltrate the software. If a hacker can enter the network via one of these edge locations, the potential effects throughout the wider network can be financially damaging to a telecom service provider and potentially catastrophic in the case of industrial control applications.

 

At the same time, the long lifetime of systems deployed in these industries means that they become vulnerable to new threats over time that were not anticipated and protected against when the software was originally installed.

 

For all kinds of critical infrastructure applications, companies need to be certain that the software-based systems they deploy are protected by a robust security architecture that prevents unauthorized software from being installed, whether maliciously or unintentionally, protects the entire system as soon as it is powered on and enables properly-authenticated remote software updates to address new threats as they emerge.

 

Recognizing these challenges, Wind River has incorporated a comprehensive set of security enhancements into the latest release of the Titanium Cloud portfolio of virtualization platforms for critical infrastructure. For industrial control applications, these enhancements are integrated into the Titanium Control product, while telecom companies will select either Titanium Core, Titanium Edge or Titanium Edge SX, depending on the deployment location and configuration.

 

During update or patching cycles, Titanium Cloud now uses cryptographic signatures to validate both the integrity and authenticity of files when patches and ISO images are imported. Only patches and ISOs that have been cryptographically signed by Wind River can be applied to a Titanium Cloud installation. When the Titanium Cloud patches are developed by Wind River, they are signed at build time with a private key stored on a secure signing server and subsequently validated with a public key that’s built into the deployed Titanium Cloud platform. The signature for the patch is included in the patch archive and validated when the patch is uploaded. A similar process applies to Titanium Cloud ISO images, with the signature being stored in a separate file in this case. This use of cryptographic signatures ensures comprehensive protection of the software platform during maintenance operations.
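
The general pattern here is standard detached-signature verification with a public key. Below is a minimal sketch using Python’s cryptography package, assuming RSA with PKCS#1 v1.5 padding and SHA-256; it is not Wind River’s actual implementation, and the file names and key format are illustrative.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_patch(patch_path, sig_path, pubkey_path):
    """Reject the patch unless its detached signature matches the built-in public key."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(patch_path, "rb") as f:
        patch_bytes = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, patch_bytes,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False   # do not apply the patch
```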

 

The Titanium Cloud Integrity Measurement Architecture (IMA) uses a secure boot process to ensure the integrity of the initial boot loaders and the kernel for the host platform, and guards the integrity of the user space environment by detecting and reporting executable files that have been maliciously or accidentally changed. IMA continuously monitors the integrity of a key subset of the host’s files, measuring and storing the hashed values of files as they are accessed in order to detect whether files have been altered, and logging any file integrity issues that may indicate that the host platform has been compromised.
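
Conceptually, the measurement step boils down to hashing files and comparing the results against a known-good baseline. The toy sketch below illustrates that idea in user space; the real IMA mechanism is implemented in the Linux kernel and measures files as they are accessed.

```python
import hashlib
import json

def measure(paths):
    """Return a dict of path -> SHA-256 hash for the given files."""
    hashes = {}
    for p in paths:
        with open(p, "rb") as f:
            hashes[p] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def save_baseline(paths, baseline_file="baseline.json"):
    with open(baseline_file, "w") as f:
        json.dump(measure(paths), f)

def detect_changes(paths, baseline_file="baseline.json"):
    with open(baseline_file) as f:
        baseline = json.load(f)
    current = measure(paths)
    # Any mismatch suggests the file was altered since the baseline was recorded.
    return [p for p in paths if current.get(p) != baseline.get(p)]
```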

 

The virtual Trusted Platform Module (vTPM) function within Titanium Cloud ensures that virtualized functions achieve a level of security at least as high as that provided by the latest hardware technologies. TPM is an international standard for a secure cryptoprocessor, defined by the Trusted Computing Group (TCG) and leveraged by technologies such as Intel® Trusted Execution Technology (Intel® TXT); it was originally conceived as a dedicated microcontroller designed to secure hardware by integrating cryptographic keys into devices. Software uses TPM to authenticate hardware devices: since each TPM chip has a unique and secret RSA key burned in during manufacturing, it is capable of performing platform authentication.

 

As companies adopt virtualization for their critical infrastructure, they expect to be able to deploy industry-standard servers that may not include hardware TPM. To address this challenge, the Titanium Cloud vTPM comprises a complete software-based implementation of TPM, which enables Titanium Cloud to manage the vTPM device and its non-volatile data securely through a Virtual Machine’s lifecycle, including migrating it along with the relevant VM. The vTPM device is configured exactly the same as a physical TPM and no changes are required to the application itself, which behaves exactly as if it had booted from a physical TPM.

 

For critical infrastructure applications, the Titanium Cloud vTPM function ensures an end-to-end boot process that is fully secure.

 

One of the benefits in moving from fixed-function physical equipment to software implementations based around open standards is that companies can leverage a wide range of security products available from innovative software companies. Through the Titanium Cloud partner ecosystem, Wind River collaborates with providers of functions like firewalls, security gateways, Deep Packet Inspection (DPI) and Intrusion Prevention Systems (IPS). By validating the correct operation of our partners’ products with Titanium Cloud, Wind River enables our customers to take advantage of security software from industry-leading companies with the confidence of pre-validated, joint solutions.

 

This post has only scratched the surface of the features in the latest version of Titanium Cloud, focusing on those that ensure the end-to-end security of software-based critical infrastructure systems. Other innovations address areas like simplified installation, cost reduction for edge deployments and distributed cloud architectures.

 

If you’d like to know more about Titanium Cloud, please browse the information available online or contact Wind River to arrange a face-to-face discussion.

In the telecom world, true “green field” opportunities are rare. When Elon Musk fires up a few Falcon Heavies and puts a thousand people on Mars who can’t live without Facebook, that will represent a green field (or orange field) opportunity for an enterprising service provider. Until then, most operators who roll out new services face the challenge of implementing them on top of existing networks.

 

This topic is especially important for service providers exploring the new business opportunities enabled by edge compute. Applications like smart stadiums, augmented reality, industrial IoT, connected vehicles and HD video all represent the opportunity to sell new kinds of services to new categories of customers, boosting top-line revenue and market share.

 

These innovative new applications can’t be hosted on standard centralized IT infrastructure. They need a purpose-built edge compute platform, specially designed to provide ultra-low latency performance, comprehensive device-to-cloud security, telco-grade uptime and support for small footprint configurations. Typically, the ideal edge compute platform will be very different from the enterprise-class core infrastructure that is running in the service provider’s cloud data center.

 

Cost-effective federation between distributed clouds, based on a cloud-agnostic abstraction, is key to instantiating edge and core clouds efficiently. At Mobile World Congress in Barcelona next week, VMware and Wind River will join forces to demonstrate the kind of federation that makes this possible. We will also demonstrate the use of ONAP to orchestrate multi-tenant services running across multiple clouds.

 

This MWC demo is the result of work done within the ONAP Multi-VIM / Cloud Project, for which both VMware and Wind River are leading contributors. The goal of this project is to enable ONAP to design and deploy cloud-agnostic infrastructure environments, for example OpenStack and its different distributions, public and private clouds, microservice containers, etc.

 

When you visit this demo at MWC, you’ll see ONAP orchestrating services that run in a federated configuration across two different clouds: Wind River Titanium Cloud at the edge and VMware vCloud NFV OpenStack Edition in the core. In one use case, the service is a virtual CPE and in a second example, it’s a virtual IMS. In both scenarios, the edge component of the service runs on Wind River Titanium Cloud while the core component runs on VMware VIO. ONAP orchestrates and automates the flow of data between the edge and core clouds, controlling overall service provisioning and lifecycle management.

 

This kind of federation between heterogeneous networks (VMware VIO and Wind River Titanium Cloud) is exactly what is required for service providers as they look to extend their existing networks to add new, high-value edge services. It enables them to quickly deploy new services that boost their top line, while leaving their legacy core network infrastructure unchanged.

 

At MWC, you can see this federation demo in the Intel booth, number 3E31. While you’re there, you’ll also find Wind River products in action as part of demos as diverse as an air quality monitor, a smart city incorporating the FlexRAN virtual RAN platform and an open Broadband Remote Access Server (BRAS).

 

After you enjoy all the technology in the Intel booth, you’ll also find solutions based on Titanium Cloud in many of our customers’ and partners’ booths, including Advantech (#5L16MR), Amdocs with Altiostar (#3G10), China Mobile (#1F70), Chunghwa (#5A61), ITRI (#5A61), Lanner (#7E2), Ribbon Communications (#2I31), RIFT.io (#5J22) and Saguna (#5E41). You can see VMware’s solutions at #3K10.

 

We look forward to seeing you at MWC, and please feel free to contact us ahead of time to arrange a meeting with one of our experts.

This week The Linux Foundation announced the OPNFV Verified Program (OVP). Designed to simplify adoption of commercial NFV products, OVP establishes an industry threshold based on OPNFV capabilities and test cases. Wind River is proud to be one of the first companies to receive the OVP logo, along with Huawei, Nokia and ZTE.

 

OPNFV members, including network operators, worked closely to establish a framework and reached consensus on an initial set of capabilities that help operators define entry criteria for their POCs and RFPs. OVP facilitates both vendor self-testing and third-party lab testing. The initial version will test and verify NFV infrastructure components and features, including NFVI, VIM, the underlying cloud infrastructure, basic packet forwarding, IPv6 and VPN.

 

Since the launch of the Titanium Cloud portfolio of virtualization software platforms, Wind River has focused heavily on ensuring interoperability with other leading vendors’ products, recognizing the importance of multi-vendor solutions as service providers deploy NFV in their networks.

 

Both through the Titanium Cloud partner ecosystem, with over 50 member companies, as well as plugfests sponsored by ETSI and OPNFV, we have validated and demonstrated that Titanium Cloud is 100% compliant with all the relevant open industry standards.

 

From the service providers' point of view, open standards avoid the risk of vendor lock-in by encouraging the development of compatible and interoperable solutions by multiple companies. But service providers typically incorporate products from more than one vendor in the complete solution that they deploy, so they need proof that products that should work together seamlessly actually do so.

 

As OPNFV continues to make progress, we expect it to become the de facto standard against which all NFV and NFVI vendors will have to test their solutions, so we’re delighted to be amongst the first wave of companies to associate our products with the OVP logo.

 

If you’d like to know more about how Titanium Cloud accelerates the deployment and operation of virtualized telecom infrastructure, applications and services, please check out the information online and then contact us to arrange a face-to-face conversation.

Just in time to assist industrial automation companies who are reviewing and refining their strategies at the start of 2018, ARC Advisory Group has released a fascinating paper titled “Virtualization and Industrial Control”. If you’re in the industrial control business, this paper should be high on your New Year reading list.

 

Written by Harry Forbes, Research Director at ARC, the paper explains why the industrial control systems (ICS) that automate production equipment in today’s factories and plants can also represent a formidable barrier to modernization and flexibility. Automation systems evolve at the rate of Operational Technology (OT) rather than Information Technology (IT), creating challenges around maintenance, upgrades, support and safety.

 

Forbes contrasts the slow evolution of industrial automation systems with the rapid progress in IT systems, which are driven by technologies such as system virtualization and cloud computing, as well as the integration of software development and deployment (DevOps). He highlights the work of the Open Process Automation Forum, an effort initiated by ExxonMobil in 2016, and discusses industrial control products such as the “Industrial Internet Control System” (IICS) solution introduced by GE in 2017.

 

Describing the business benefits of virtualization for ICS, the report states “The benefits of greater ICS virtualization for end users are clear. Lower TCO, longer ICS life, fewer disruptive changes, improved ability to manage change and implement continuous improvement. The result (and the largest payoff) comes in the form of improved operations”.

 

In a summary of potential disruptors in the industrial automation market, Forbes highlights Wind River’s Titanium Control platform, explaining that “An excellent example is Wind River’s Titanium Control platform, which combines Wind River’s long experience in real-time operating systems with extensive low latency virtualization and on-premise cloud computing technology that has been proven in mission critical applications." This is why we’ve seen tremendous customer interest in the Titanium Control platform since launching the product early in 2017.

 

Please download the ARC Advisory paper here.

 

If you’d like to know more about how Titanium Control streamlines the deployment and operation of virtualized industrial control applications, please check out the information online or contact us to arrange a face-to-face discussion.

Half listening to a phone call over a lousy cellphone connection, while munching on a snack and trying to finish an overdue presentation, it sounded like a trip that was too good to miss. Something about “Euphrates in December” seemed like the ideal business trip even if it promised a few visa challenges.

 

But then I started paying attention and realized the call was actually all about spending a week in Portland, Oregon in December, surrounded by test equipment, Ethernet cables, laptops, stale pizza and probably rain. So I’m pleased to report that Wind River was proud to have several of our expert engineers participating in last week’s OPNFV plugfest, and equally glad that they didn’t seem to need any assistance from a marketing guy like me, even though I’m a big fan of the Columbia River and microbreweries.

 

“Euphrates”, of course, is the name of the most recent OPNFV release, announced in October and the focus of attention at last week’s Plugfest hosted by Intel. (OPNFV is an open source project that facilitates the development and evolution of Network Functions Virtualization (NFV) components across various open source ecosystems.)

 

In due course, OPNFV will publish the final report from the plugfest. This will include a detailed report of the interoperability tests performed by experts from a wide range of hardware, software, systems and semiconductor companies. Based on the results of previous events, we can expect to see significant improvements in functionality and interoperability from this latest release.

 

As we talk to our customers about “compatibility”, we find the challenge is increasingly about demonstrating interoperability with other companies in the NFV ecosystem. From the service providers' point of view, open standards avoid the risk of vendor lock-in by encouraging the development of compatible and interoperable solutions by multiple companies. But service providers typically incorporate products from more than one vendor in the complete solution that they deploy, so they need proof that products that should work together seamlessly actually do so. As OPNFV continues to make progress, we expect it to become a de facto standard against which all NFV and NFVI vendors will have to test their solutions.

 

Plugfests like the one hosted by Intel last week are vital as the telecom industry transitions from traditional, fixed-function equipment to disaggregated, virtualized solutions that leverage best-in-class products from multiple vendors. Events like this demonstrate that an industry-wide ecosystem can successfully form around open standards and deliver interoperable, compatible products.

 

At Wind River, we’re proud to have contributed to all four OPNFV Plugfests held so far and we look forward to even more exciting and advanced accomplishments at the next one. And I need to quit multitasking and start paying better attention to phone calls.

Google “OPA” and one of the first results you’ll see will be all about Oatmeal Pale Ale, from Hood River, Oregon. Apparently, “a refreshing beer, high on hop aromatics yet well balanced with modest bitterness…. OPA pours a beautiful golden hue with a dense rocky head.” Definitely now on my list for the next time I visit the beautiful Columbia River Gorge.

 

But scroll further down the page and you’ll see that “OPA” also stands for the Open Process Automation Forum. I wonder what they drink at their meetings?

 

It’s been almost a year since the launch of the OPA Forum, a working group within The Open Group. OPA is a vendor- and technology-neutral industry consortium. The goal of the new OPA Forum is to develop a standards-based, secure, interoperable process control architecture that can be leveraged across multiple industries, including food and beverage, mining and metals, oil and gas, petrochemical, pharmaceutical, pulp and paper, and utilities.

 

ExxonMobil was the driving force behind the establishment of the OPA Forum. Back in 2010, they started an intensive R&D program based on a vision of a reference architecture that would replace legacy physical control systems with software-based solutions. Their thinking was heavily influenced by the use of open systems in avionics and the Future Airborne Capability Environment (FACE) consortium. They also became aware of the adoption of Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) in the telecom industry, a strategy to minimize lifecycle costs through automation, agility and optimized resource utilization.

 

During 2014, ExxonMobil developed the functional characteristics and started engaging key industry-leading companies. In 2015, they contracted Lockheed Martin to define detailed requirements and develop a prototype implementation. Then last year, 2016, they approached the Open Group to form the OPA Forum with the goal of developing open standards that would spur widespread adoption of their architectural vision, leading to multiple sources of compatible and interoperable products.

 

Fifty-seven people attended the first OPA Forum meeting. They represented thirty different organizations, most of which subsequently joined the Forum and the full list of current members is here. At Wind River, we are pleased to contribute as part of the team from Intel, our parent company and a silver member.

 

A significant element in the OPA Forum vision is the transition from discrete, physical control systems to software-based implementations that are typically combined with edge analytics software to drive real-time decisions. For critical infrastructure such as manufacturing and process control, the most fundamental requirement for this software is that it must run reliably, securely and safely, continuously gathering industrial data from an array of sensors and actuating responses in real time, while consolidating both operations and control functions.

 

Wind River’s Titanium Control software platform for critical infrastructure addresses these needs, enabling industrial control companies to leverage the concepts of the OPA Forum vision while ensuring the performance, reliability and security that they need.

 

Based on open industry standards, the Titanium Control platform enables virtualized software applications to run on standard, cost-effective IT-class servers while ensuring six nines (99.9999%) uptime and optimum asset utilization. It provides best-in-class security and threat mitigation, along with ultra-low latency system-level performance, leveraging technology originally launched in 2014 and proven since then in telecom industry deployments.

 

Titanium Control enables industrial control companies to improve their top-line revenue generation while simultaneously lowering their operating costs, maximizing their overall return on investment as they move to software-based architectures.

 

For top-line revenue growth, Titanium Control allows operators to efficiently scale control systems to add capacity, while accelerating the introduction of new functions and services to optimize control processes. As a state-of-the-art software platform, it enables manufacturing companies to leverage the expertise of millennial programmers and innovative third-party software vendors.

 

To drive reductions in operational costs, Titanium Control eliminates the cost and risk of replacing physical control devices as well as reducing capital costs for new deployments and capacity expansions. Critically, it ensures regularly-updated end-to-end security for business operations and control functions.

 

This post has only mentioned a few of the important features and capabilities of Titanium Control, all designed to streamline the deployment and operation of virtualized industrial control applications. If you’d like to know more about how Titanium Control supports the vision of the OPA Forum, please check out the information online or contact us to arrange a face-to-face discussion.

 

And if you’re a craft beer aficionado, please check back with me in a few months if you want my opinion on Oatmeal Pale Ale: I have a trip to Oregon coming up which just might allow time for a thorough evaluation or two.

If you’re heading to Sydney for next week’s OpenStack Summit, you should mark your calendar for an important session at 10.50am on Tuesday November 7th: “Tuning packet and interrupt latency for 5G applications”.

 

Two of Wind River’s foremost OpenStack experts, Chris Friesen and Ian Jolliffe, will present this session and explain how to use OpenStack successfully in challenging 5G applications.

 

Edge applications, 5G in particular, have very tight packet and interrupt latency requirements. Packet latency can be affected by the networking technology selected, such as a virtual switch, PCI pass-through or SR-IOV. Interrupt latency can impact guest performance as well.

 

Addressing these challenges requires careful tuning of OpenStack. This involves multiple aspects of OpenStack, including host kernel configuration, tuning and isolation techniques, Neutron and Nova. Other factors such as how the guest is configured can also impact interrupt latency and timer accuracy, with critical implications for the design and configuration of the VM.
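
To make this concrete, the snippet below shows the kind of Nova flavor extra specs commonly used for CPU pinning, emulator-thread isolation and huge pages. The keys are standard Nova scheduler hints, but the flavor itself and the chosen values are assumptions for illustration; the right settings depend on the workload and the host topology.

```python
# Illustrative extra specs for a low-latency VNF flavor (values are examples only).
low_latency_extra_specs = {
    "hw:cpu_policy": "dedicated",             # pin each vCPU to a dedicated host core
    "hw:cpu_thread_policy": "isolate",        # don't share SMT siblings with other guests
    "hw:emulator_threads_policy": "isolate",  # keep QEMU emulator threads off the vCPU cores
    "hw:mem_page_size": "large",              # back guest memory with huge pages
}

# These would typically be applied to a flavor, e.g.:
#   openstack flavor set vnf.lowlatency --property hw:cpu_policy=dedicated ...
```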

 

Chris and Ian will discuss how to select the appropriate networking technology while sharing benchmark results that include packet latencies. They will also explain how to configure both the host and guest for optimal interrupt latency and discuss real world trade-offs that must be considered for a viable OpenStack deployment.

 

So why is this so important? Why should you listen to Chris and Ian instead of heading for the Sydney Cricket Ground?

 

Industry analysts project that the majority of new revenue opportunities for 5G will be based around new edge use cases, as opposed to enhanced mobile broadband. Applications such as autonomous driving, industrial IoT, augmented reality, smart stadiums, immersive retail, tactile internet and smart cities are all edge computing use cases that require ultra-low latency along with telco-grade uptime. OpenStack will only be usable in these applications if the kind of techniques explained in this session are implemented and adopted.

 

So we hope that you’ll join Chris and Ian at 10.50am on Tuesday November 7th (Level 2, Parkside 2 Foyer). The topics that they will cover represent key enablers for the new revenue opportunities that the whole industry is expecting from 5G.

 

And then you have a couple of weeks to relax before the main event of the Australian summer which starts on November 23rd, The Ashes. Somehow that will have to be the topic for another post.

Radio Access Network (RAN) terminology is nothing if not confusing. Traditional RAN configurations are now termed “Distributed RAN”. The first phase of cost optimization through consolidation is typically called “Centralized RAN” or “CRAN”. Capacity improvements achieved through baseband pooling are often known as “Cloud RAN”, representing another, different use of the term “CRAN”. Currently, the ultimate approach to enhancements is “Virtual RAN” or “vRAN”. And we haven’t even mentioned “mini-CRAN”, proposed by China Mobile, let alone “Coordinated RAN”, “Collaborative RAN”, “Clean RAN” or “Advanced CRAN”, all of which have shown up in conference presentations over the past three years.

 

Regardless of all these flavors of RAN improvements, there’s consensus within the industry that a virtual RAN (vRAN) architecture enables service providers to achieve the best overall potential of cost savings, dynamic capacity scaling, better Quality of Experience (QoE) and rapid instantiation of new services.

 

In a vRAN architecture, the Baseband Units (BBUs) are virtualized, rather than being located at the cell site as physical equipment. The virtual BBUs are deployed on NFV infrastructure software platforms like Wind River’s Titanium Cloud, running on industry-standard x86 servers and consolidated in centralized data centers, while the Remote Radio Units (RRUs) remain at the cell sites at the edge of the network. vRAN leverages standard server hardware that cost-effectively scales processor, memory and I/O resources up or down based on dynamic changes in demand, infusing the RAN with capacity for application intelligence, which significantly improves service quality and reliability. In many configurations, the architecture also allows for Ethernet and IP fronthaul transport, which gives service providers more cost-effective options for RRU connections.

 

To accelerate the introduction of cost-effective vRANs, Wind River has collaborated with Altiostar, Amdocs and Dell EMC to develop a pre-integrated, validated end-to-end solution, ready for deployment by service providers.

Altiostar contributed their vRAN solution, which comprises a software-intensive LTE eNodeB combined with Ethernet fronthaul. Amdocs provided their Network Cloud Service Orchestrator (NCSO), an open, catalog-driven service orchestration solution, while their professional services expertise is also available for designing, deploying, operating and optimizing mobile networks. The solution leverages the Dell EMC PowerEdge R630, an ultra-dense, two-socket 1U rack server based on the latest Intel® Xeon processor E5-2600 v4 product family. Finally, Wind River’s Titanium Cloud platform is the industry’s only fully integrated, ultra-reliable, deployment-ready family of virtualization platforms that enables service providers to deploy virtualized services faster, at lower cost and with guaranteed uptime.

 

You can read all about the benefits of this integrated vRAN solution in our recently-published white paper “vRAN: the Next Step in Network Transformation”.

 

You’ll also want to check out the recording of our recent webinar hosted by SDxCentral, where representatives from all four companies jointly presented the solution and explained the business benefits that it delivers.

 

By extending the benefits of network virtualization from the core to the RAN, vRAN is the optimal solution for cost-efficiently increasing capacity, reducing costs and creating new services. The solution described in the white paper and the webinar delivers carrier grade reliability, predictable performance, low latency, unrivaled manageability, massive scalability, optimized resource utilization and flexible deployment options. These best-in-class technical features provide service providers with clear, quantifiable business benefits in terms of cost reductions, capacity improvements and accelerated deployment times.

 

If you’d like to know more about any of the Titanium Cloud products or solutions, please check out the information online or contact us to arrange a face-to-face discussion.

Want to improve your OpenStack-based cloud performance by 15x? Here’s how….

 

As communications service providers (CSPs) explore architectural choices for virtualizing their network infrastructure, OpenStack has emerged as the leading option for the cloud management software that manages the lifecycle of Virtual Machines (VMs). There are sound business reasons for this selection: OpenStack is an established open-source community project with tens of thousands of contributors worldwide, even though most of them are working on enterprise rather than telecom components, while the availability of multiple OpenStack-based solutions mitigates the risk of vendor lock-in.

 

During their evaluations of OpenStack-based platforms for telco clouds, CSPs typically focus heavily on four aspects of a platform:

  • First, they explore whether the platform effectively streamlines the installation, commissioning, maintenance and upgrades of the cloud, since these are all factors that drive the Total Cost of Ownership for their network infrastructure.
  • Second, they analyze the platform’s capability to ensure the level of service uptime necessary both to meet Service Level Agreements (SLAs) with their enterprise customers and to comply with regulations pertaining to critical network services. Failing to meet either of these uptime requirements can result in significant impacts to their top-line revenue.
  • Third, they investigate the end-to-end security implemented in the platform, looking to ensure the availability of comprehensive security features required to address today’s complex threat landscape as well as support for timely, dynamic updates as new threats emerge.
  • Finally, they measure the performance of the platform running a range of applications and services, because this directly influences the number of servers required for their data center and hence their anticipated CAPEX and OPEX spend.

 

By providing best-in-class functionality in all these areas, Wind River’s Titanium Cloud portfolio of OpenStack-based virtualization platforms has emerged as the industry’s leading solution for telco network virtualization.

 

While we’ve blogged extensively about all aspects of Titanium Cloud since its launch in 2014, a recently-published white paper written by GENBAND and Wind River contributes additional interesting, real-world information to the topic of system-level performance.

 

To evaluate the performance difference between “vanilla” (i.e. unmodified) OpenStack and an optimized open source solution, two different OpenStack Neutron networking implementations were tested with GENBAND’s Advanced Media Software (AMS) Virtual Network Function (VNF) software. The paper compares the performance of the GENBAND AMS VNF with vanilla OpenStack in one environment and Titanium Cloud in another environment.

 

The tests analyzed call load, packet loss, CPU utilization and the number of CPU cycles required for each stage of processing. Metrics were collected in real time by the iMedia component of the GENBAND AMS VNF and displayed in dashboards leveraging the Representational State Transfer (REST) application programming interfaces (APIs) within the VNF Manager.

 

Summarizing some of the key results, the vanilla OpenStack environment processed 200 calls while the Titanium Cloud environment handled 3,000 calls, an improvement of 15x achieved with no difference in hardware cost.

 

The difference in CPU utilization was also significant between the two environments: 12% utilization for the vanilla environment while 60,000 packets were being processed, compared to only 3% utilization for Titanium Cloud while processing 400,000 packets. This frees up significant CPU resources for other, revenue-generating applications and services.
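
Normalizing those figures per point of CPU utilization (using only the numbers quoted above) makes the efficiency gap even clearer:

```python
# Back-of-the-envelope normalization of the quoted figures.
vanilla_pkts, vanilla_util = 60_000, 12      # packets processed at 12% CPU
titanium_pkts, titanium_util = 400_000, 3    # packets processed at 3% CPU

vanilla_per_point = vanilla_pkts / vanilla_util       # 5,000 packets per % CPU
titanium_per_point = titanium_pkts / titanium_util    # ~133,333 packets per % CPU

print(f"Efficiency ratio: {titanium_per_point / vanilla_per_point:.1f}x")  # ~26.7x
```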

 

The white paper provides details of several other metrics that were evaluated and analyzed, including the call processing rate, the jitter associated with CPU utilization and the packet loss experienced by the system.

 

As you’ll see from reading the complete paper, the bottom line is that Titanium Cloud achieved a 15x performance improvement, processing 3,000 calls compared to 200 by vanilla OpenStack, while also delivering better call quality.

 

As the paper explains, these numbers are important not only because of the pure technical advantages that they enumerate but also because of the resulting business benefits delivered to CSPs. By selecting Titanium Cloud as their network virtualization platform, they can minimize both their CAPEX and their OPEX, while at the same time maximizing their top-line revenue by guaranteeing the level of service uptime that their customers expect. And all while mitigating the risk of vendor lock-in because Titanium Cloud is a 100% open solution compatible with all the relevant open standards, validated through its extensive partner ecosystem.

 

We encourage you to take a look at this new white paper. Then, when you’re ready to know more, please contact us to arrange a face-to-face discussion.

When you ask executives in the manufacturing or industrial control sectors about “Industry 4.0”, their responses typically fall into one of two categories. Those that haven’t yet spent much time exploring the concept are likely to dismiss the term as Marketing hype, or a gross over-simplification like “Web 2.0”. But companies who have investigated the principles behind a smart factory see this trend as an opportunity to significantly increase their revenues while at the same time slashing their operational costs. What’s not to like?

 

In this post we’ll outline some of the key business challenges presented by Industry 4.0 and explain how they can be addressed by selecting the appropriate platform for secure, on-premise critical infrastructure.

 

Technically, the term should be “Industrie 4.0” because the initiative started as a vision sponsored by the German government, which actively promotes the computerization of manufacturing. In October 2012, the Industry 4.0 working group presented a set of recommendations to the German government, and the group subsequently presented its final report at the Hannover Fair in April 2013.

 

The “4.0” term comes from the generally-accepted progression of major phases in manufacturing. Phase 1 comprised steam and the initial machines that mechanized some of the work that our ancestors did manually. Phase 2 brought electricity, the assembly line and the start of mass production. Phase 3 included the advent of computers and the beginnings of automation, when robots and machines began to replace human workers on assembly lines.

 

As the next significant industry disruption, Industry 4.0 introduces what is commonly known as the smart factory, in which cyber-physical systems monitor the physical processes of the factory and make decentralized decisions. Physical systems are implemented as the Industrial Internet of Things (IIoT), communicating and cooperating in real time both with each other and with humans.

 

There are compelling business motivations for companies to adopt Industry 4.0. In a comprehensive report, PWC Global points out that companies who successfully implement Industry 4.0 no longer need to choose between focusing either on a better top line or a better bottom line, because they can improve both at the same time. Over the next five years, the companies that PWC Global surveyed expect to reduce costs by an average of 3.6% per year, while increasing annual revenues by an average of 2.9%. More than half the respondents expect their Industry 4.0 investments to yield a return within two years or less, given investments of around 5% of their annual revenue. If even half of these expectations are met, Industry 4.0 has the potential to reshape the competitive landscape and bring fundamental change to established manufacturing industries.

 

There are some significant challenges though, several of which relate to the compute platforms that run software-based control systems as well as analytics software that drives real-time decisions. For critical infrastructure such as manufacturing and process control, the most fundamental requirement for this software is that it must run reliably, securely and safely, continuously gathering industrial data from an array of sensors and actuating responses in real time, while consolidating both operations and control functions.

 

The first critical requirement for a critical infrastructure platform is low-latency virtualization. Virtualization enables a single compute platform to run the diverse set of functions and applications found in smart factories. The platform must provide ultra-low latency in order to deliver near real-time performance for critical applications. At the same time, it must optimize resource utilization system-wide in order to minimize both CAPEX and OPEX.

 

High availability is vital. In the event of hardware or software failure, the platform must be able to perform an automatic failover quickly enough to ensure the integrity of the control system. This implies a failover speed orders of magnitude faster than standard cloud platforms developed for IT applications and on a par with the speed required for virtualized telecom infrastructure. The virtualization platform must facilitate failover in a number of ways, such as restarting a clean backup software image without a reboot or turning control over to a full redundant server to overcome catastrophic hardware or software failures.
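
As a rough illustration of the failover-speed requirement (purely conceptual, not how Titanium Control implements it), the toy monitor below detects a missed heartbeat within a few hundred milliseconds and triggers a failover action:

```python
import time

HEARTBEAT_TIMEOUT = 0.4   # seconds; far tighter than typical IT cloud health checks

def monitor(heartbeat_seen, trigger_failover):
    """Poll for heartbeats and fail over as soon as the timeout is exceeded."""
    last_beat = time.monotonic()
    while True:
        if heartbeat_seen():
            last_beat = time.monotonic()
        elif time.monotonic() - last_beat > HEARTBEAT_TIMEOUT:
            trigger_failover()   # e.g. restart the VM or promote the standby controller
            last_beat = time.monotonic()
        time.sleep(0.05)         # 50 ms polling interval
```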

 

Finally, security. While traditional, physical control systems provide “box-level” security features with no provision for end-to-end threat protection or dynamic updates, the requirements for Industry 4.0 are significantly more demanding. In smart factories, it’s critical to mitigate security threats that target devices and subsystems across the facility, while recognizing that new, ever more sophisticated threats are emerging constantly. Critical infrastructure systems require a secure chain of trust that extends all the way from the physical hardware and sensors into the Virtual Machines (VMs) that run the software-based control functions and analytics. Data must be encrypted and the network itself must ensure authentication, authorization and accounting with secure identities.

 

Fortunately, there’s a solution to these challenges. Wind River’s recently-introduced Titanium Control software platform for critical infrastructure addresses this need, enabling manufacturing companies to leverage Industry 4.0 concepts while ensuring the performance, reliability and security that they need.

 

Based on open industry standards, the Titanium Control platform enables virtualized software applications to run on standard, cost-effective IT-class servers while ensuring six nines (99.9999%) uptime and optimum asset utilization. It provides best-in-class security and threat mitigation, along with ultra-low latency system level performance, leveraging technology originally launched in 2014 and proven since then in telecom industry deployments.

 

As mentioned earlier, a key premise of Industry 4.0 is that manufacturing companies can improve their top-line revenue generation while simultaneously lowering their operating costs. Titanium Control facilitates both these objectives, maximizing the overall return on investment for a smart factory.

 

For top-line revenue growth, Titanium Control allows operators to efficiently scale control systems to add capacity, while accelerating the introduction of new functions and services to optimize control processes. As a state-of-the-art software platform, it enables manufacturing companies to leverage the expertise of millennial programmers and innovative third-party software vendors.

 

To drive reductions in operational costs, Titanium Control eliminates the cost and risk of replacing physical control devices as well as reducing capital costs for new deployments and capacity expansions. Critically, it ensures regularly-updated end-to-end security for business operations and control functions.

 

This post has only scratched the surface of the features in Titanium Control, all of which are designed to streamline the deployment and operation of virtualized industrial control applications. If you’d like to know more about Titanium Control, please check out the information online or contact us to arrange a face-to-face discussion.

As long as you select the right software platform, virtualization will bring a wide range of benefits to your industrial control application without compromising on the level of security delivered by traditional physical infrastructure. Moreover, virtualization will enhance your system-level security by enabling dynamic updates in response to emerging threats.

 

In industries such as energy, manufacturing, healthcare and smart buildings, there are countless examples of companies relying on industrial control systems installed twenty or thirty years ago. Increasingly, though, these legacy deployments present major business challenges such as: increasing operational expenses due to high maintenance and replacement costs; a dwindling pool of skilled technicians; limited flexibility resulting from sole-sourced solutions with proprietary programming models; slow product lifecycles out of step with fast-moving IT and mobile technologies; outdated box-level security features with no provision for end-to-end threat protection or dynamic updates.

 

To address these challenges, innovative companies are leveraging virtualization in order to slash their operational costs by deploying secure, robust, flexible software-based solutions as alternatives to legacy, fixed-function hardware.

With virtualization, Level 1 through Level 3 control functions are consolidated onto standard IT-class servers for significant CAPEX and OPEX savings, resulting in software-based digital controllers, PLCs, DCSs, SCADA software, HMIs, historians and applications. Updating software is significantly lower cost than replacing physical equipment. At the same time, companies can differentiate through software solutions while avoiding the cost, risk and downtime of removing and replacing proprietary hardware and software.

 

As companies evaluate the benefits of introducing virtualization into their systems, security is a critical factor and there are typically two aspects to consider. On one hand, software-based solutions offer end-to-end security both for the network and for the control functions, permitting dynamic updates in response to emerging threats while leveraging software firewalls, VPNs and Intrusion Prevention Systems from leading IT software vendors. On the other hand, hardware-based security functions are well understood and have a proven track record within critical infrastructure applications, so it’s important to ensure that the adoption of virtualization does not in any way compromise the level of security compared to what’s available in state-of-the-art hardware solutions.

 

Wind River’s Titanium Control platform addresses this need, ensuring that virtualized control functions achieve a level of security at least as high as that provided by the latest hardware technologies.

 

This post focuses on support for the Trusted Platform Module (TPM) function within Titanium Control. Subsequent posts will address additional security features in the platform, also summarized in the product overview.

 

Summarizing the background: TPM is an international standard for a secure cryptoprocessor, originally conceived as a dedicated microcontroller designed to secure hardware by integrating cryptographic keys into devices, and leveraged by technologies such as Intel® Trusted Execution Technology (Intel® TXT). The TPM specification was written by the Trusted Computing Group (TCG) consortium and standardized in 2009, with the latest version being TPM 2.0.

 

TPM offers facilities to measure software and platform components so that system software, as well as local and remote management applications, can use those measurements to make trust decisions. TPM’s support for remote attestation creates a nearly unforgeable hash summary of the hardware-plus-software configuration. TPM also provides a random number generator and facilities for the secure generation of cryptographic keys, along with binding (encrypting data with a TPM key) and sealing (encrypting data in a similar manner to binding, but additionally specifying the state the TPM must be in before the data can be decrypted, or unsealed).
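
To make a couple of these facilities concrete, the sketch below models a PCR-style measurement chain (each boot component extends a running hash) and sealing a secret to an expected measurement. It is a minimal conceptual illustration in plain Python, not a real TPM stack: the function names and the simplified sealing structure are ours, and a production system would rely on the TPM hardware or a TPM 2.0 software stack rather than code like this.

```python
import hashlib
import os

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """Model a TPM PCR extend: new_pcr = SHA-256(old_pcr || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot_chain(components: list) -> bytes:
    """Fold the hash of each boot component into a single PCR value."""
    pcr = bytes(32)  # PCRs start out as all zeros
    for blob in components:
        pcr = extend_pcr(pcr, hashlib.sha256(blob).digest())
    return pcr

def seal(secret: bytes, expected_pcr: bytes) -> dict:
    """Tie a secret to a platform state (simplified: record the expected PCR)."""
    return {"expected_pcr": expected_pcr, "secret": secret}

def unseal(blob: dict, current_pcr: bytes) -> bytes:
    """Release the secret only if the platform matches the sealed-to state."""
    if current_pcr != blob["expected_pcr"]:
        raise PermissionError("platform state does not match the sealing policy")
    return blob["secret"]

# Seal a key to the measurements of a known-good firmware/bootloader/OS chain.
pcr_good = measure_boot_chain([b"firmware-v1", b"bootloader-v1", b"os-image-v1"])
blob = seal(os.urandom(32), pcr_good)
assert unseal(blob, pcr_good) == blob["secret"]   # trusted state: secret released

pcr_bad = measure_boot_chain([b"firmware-v1", b"rootkit", b"os-image-v1"])
# unseal(blob, pcr_bad) would raise PermissionError: tampered state, secret withheld
```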

 

Software uses TPM to authenticate hardware devices. Since each TPM chip has a unique and secret RSA key burned in during manufacturing, it is capable of performing platform authentication.
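
The sketch below illustrates the signature step behind that kind of platform authentication, using the widely available Python cryptography package: a per-device RSA key signs the current measurement together with a verifier-supplied nonce, and the verifier checks the result against the device’s known public key. Real TPM attestation adds endorsement and attestation key hierarchies plus certificate chains; the names here are illustrative assumptions, not a TPM library API.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the device-unique RSA key provisioned at manufacturing time.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_pub = device_key.public_key()

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def quote(pcr_value: bytes, nonce: bytes) -> bytes:
    """The device signs its current measurement plus a fresh nonce."""
    return device_key.sign(pcr_value + nonce, PSS, hashes.SHA256())

def verify_quote(signature: bytes, pcr_value: bytes, nonce: bytes) -> bool:
    """A remote verifier checks the quote against the known public key."""
    try:
        device_pub.verify(signature, pcr_value + nonce, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

sig = quote(pcr_value=b"\x00" * 32, nonce=b"verifier-nonce-123")
assert verify_quote(sig, b"\x00" * 32, b"verifier-nonce-123")
```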

 

One of the key benefits of virtualization in industrial control applications is that it enables the software-based control functions to run on generic server platforms located on-premise, either in a stand-alone configuration or a mini data center. In either case, customers expect to be able to deploy industry-standard servers that may not include hardware TPM. Virtualization also allows Virtual Machines (VMs) to be dynamically migrated from one physical server to another, whether to eliminate downtime during software updates, to recover automatically from platform failures or to adjust resource utilization in response to changing workloads. Binding a VM to a physical TPM severely limits the scenarios under which dynamic VM migration is possible.

To address these challenges, the Titanium Control platform has introduced an industry-leading virtual TPM (vTPM) feature. The vTPM is a complete software-based implementation of TPM 2.0. The vTPM device is instantiated on the trusted Titanium Control host, with its non-volatile data stored in encrypted form. This enables Titanium Control to manage the vTPM device and its non-volatile data securely throughout the VM’s lifecycle, including migrating it along with the relevant VM. The vTPM device is configured exactly like a physical TPM and no changes are required to the application itself, which behaves exactly as if it had booted with a physical TPM.
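
As a rough illustration of why the encrypted non-volatile data matters for migration, the sketch below shows one way a host-level component could wrap vTPM state so that it only ever leaves the source host in encrypted form and is restored on the destination. This is a conceptual sketch built on the Python cryptography package; the class, key handling and data layout are assumptions for illustration, not Titanium Control’s actual mechanism.

```python
from cryptography.fernet import Fernet

class VTPMState:
    """Minimal model of vTPM non-volatile data that travels with its VM."""

    def __init__(self, host_key: bytes, nv_data: bytes):
        # Encrypt the NV data as soon as it is captured on the source host.
        self.encrypted_nv = Fernet(host_key).encrypt(nv_data)

    def export_for_migration(self) -> bytes:
        # Only the encrypted blob accompanies the migrating VM.
        return self.encrypted_nv

    @staticmethod
    def import_on_destination(host_key: bytes, blob: bytes) -> bytes:
        # A destination host holding the same platform key restores the state.
        return Fernet(host_key).decrypt(blob)

# Both trusted hosts are assumed to share a platform-managed key.
platform_key = Fernet.generate_key()
vtpm = VTPMState(platform_key, b"vTPM NV storage: keys, counters, policies")
blob = vtpm.export_for_migration()                       # moves with the VM
assert VTPMState.import_on_destination(platform_key, blob).startswith(b"vTPM")
```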

For industrial control applications, it is critical to ensure an end-to-end boot process that is fully secure. With Titanium Control, vTPM is a crucial element of that boot process, which runs in Unified Extensible Firmware Interface (UEFI) mode.

 

The vTPM feature in Titanium Control enables Wind River’s customers to achieve the full range of benefits from virtualized industrial control functions without compromising on the security that is an absolute requirement for critical infrastructure.

 

This post has only scratched the surface of the features in Titanium Control, all designed to streamline the deployment and operation of virtualized industrial control applications. If you’d like to know more about Titanium Control, please check out the information online or contact us to arrange a face-to-face discussion.

Unless you’ve been vacationing far off the grid, you’ll have read about Intel’s announcement of the Intel® Xeon® Scalable platform, which Intel describes as its highest-performance, most versatile data center platform to date. The new platform brings massive advantages to customers such as cloud and communications service providers, enterprises, high-performance computing providers and artificial intelligence companies.

 

With up to 28 of the highest-performance cores, the all-new Intel Xeon Scalable platform can support up to 4.2x more Virtual Machines (VMs) and up to 5x more transactions per second than four-year-old systems. Configured for storage, it delivers up to 5x more IOPS at up to 70 percent lower latency to quickly commit data to storage or bring it forward for analysis. For communications, the Intel Xeon Scalable platform can power emerging 5G networks, offering up to 2.5x better performance for key networking applications.

 

As part of Intel, the Wind River software team has been working closely with our colleagues on the processor team to ensure that the Titanium Cloud portfolio of virtualization software products enables critical infrastructure applications to leverage the innovations in the new platform. The most recent version of Titanium Cloud, released just a few days ago, incorporates an extensive set of optimizations to achieve this. Fully validated with the Intel Xeon Scalable platform, the Titanium Cloud portfolio is the optimum infrastructure software solution for telecom and industrial control applications with stringent requirements for uptime, performance and security.

 

As one example of these optimizations, Titanium Cloud enables guest VMs to leverage the full AVX-512 instruction set, making the Titanium Cloud portfolio an ideal choice for deploying real-time critical, computationally intensive workloads such as those in 5G networks. Further, for 5G and other applications like Cloud RAN (C-RAN) that require extremely low latency, the Titanium Cloud low-latency profile can be leveraged to provide best-in-class latency to the guest.
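
One quick sanity check that a guest really does see AVX-512 is to look at the CPU flags reported inside the guest; on Linux the foundation instructions show up as the avx512f flag in /proc/cpuinfo. The snippet below is a generic check of that flag, not a Titanium Cloud tool.

```python
def guest_sees_avx512(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the guest's virtual CPUs expose the AVX-512 foundation flag."""
    with open(cpuinfo_path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                return "avx512f" in line.split()
    return False

if __name__ == "__main__":
    print("AVX-512 visible in this guest:", guest_sees_avx512())
```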

 

All four products within the Titanium Cloud portfolio leverage the innovations in the new Intel Xeon Scalable platform:

  • Titanium Control for secure, on-premise industrial control infrastructure;
  • Titanium Core for service provider data centers, Central Offices (COs) and Points of Presence (PoPs);
  • Titanium Edge for ultra-reliable telco edge applications like Multi-access Edge Computing (MEC);
  • Titanium Edge SX for small-footprint telco edge applications like Universal Customer Premises Equipment (uCPE).

 

If you’d like to know more about the Titanium Cloud portfolio of products that enable the Intel Xeon Scalable platform for critical infrastructure applications, please check out the information online or contact us to arrange a face-to-face discussion.

“Springtime in Paris”: what images does that phrase conjure up for you?

 

Well, if you’re a hotshot engineer working on the OPNFV project, it’s a vision of long days and late nights huddled over racks of hot, noisy servers while debugging complex interoperability problems. At least, that’s what we heard from our representatives at the recent OPNFV Plugfest…

 

OPNFV recently held their third Plugfest, hosted by Orange at their beautiful “Orange Gardens” facility in Paris from April 24th through April 28th, 2017. The Plugfest was well attended, with 87 participants from 29 organizations, including six service providers.

 

The focus of this Plugfest was the new OPNFV Danube release and the final report was published a few days ago. It’s a fascinating summary of all the recent OPNFV achievements. As the report explains, “the Plugfest gave end-users and vendors the ability to try new hardware, installers, scenarios, test cases and tools while coming together as a community and solving problems collaboratively”.

 

Wind River’s Titanium Cloud NFV infrastructure software platform featured prominently in the week’s testing activities. Three different hardware platforms, from Advantech, Huawei and Lenovo, used Titanium Cloud for their NFVI software.

 

One application of the Titanium Cloud platform was successfully hosting the Dovetail test suite that forms the basis of the upcoming Compliance Verification Program (CVP). Another was in multi-site testing: a Titanium Cloud region was connected to a region deployed using OPNFV Danube, with the two regions sharing common OpenStack services that included Keystone, Cinder and Glance. Using this environment, Dovetail, Yardstick, Functest and Storperf tests were all successfully executed, one region at a time.
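
To give a sense of what sharing common OpenStack services across regions looks like from a client’s perspective, the snippet below uses the openstacksdk to authenticate once against the shared Keystone and then query each region; because Glance is shared, both regions should report the same image catalog. The cloud and region names are placeholders, not the exact Plugfest configuration.

```python
import openstack

# One clouds.yaml entry (credentials and the shared Keystone endpoint),
# with the region name selecting which deployment's services are used.
region_a = openstack.connect(cloud="plugfest", region_name="RegionOne")
region_b = openstack.connect(cloud="plugfest", region_name="RegionTwo")

# With a shared Glance, the image catalog should look identical in each region.
for conn in (region_a, region_b):
    images = sorted(image.name for image in conn.image.images())
    print(conn.config.region_name, images)
```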

 

As we talk to our customers about “compatibility”, we find the challenge is increasingly about demonstrating interoperability with other companies in the NFV ecosystem. From the service providers' point of view, open standards avoid the risk of vendor lock-in by encouraging the development of compatible and interoperable solutions by multiple companies. But service providers typically incorporate products from more than one vendor in the complete solution that they deploy, so they need proof that products that should work together seamlessly actually do so. As OPNFV continues to make progress, we expect it to become a de facto standard against which all NFV and NFVI vendors will have to test their solutions.

 

Plugfests like the one hosted by Orange are vital as the telecom industry transitions from traditional, fixed-function equipment to disaggregated virtualized solutions that leverage best-in-class products from multiple vendors. Events like this demonstrate that an industry-wide ecosystem can successfully form around open standards and deliver interoperable, compatible products.

 

At Wind River, we’re proud to have contributed to all three OPNFV Plugfests held so far and we look forward to even more exciting and advanced accomplishments at the next one.
