

Data analytics is crucial in the IoT realm. To address this need, Dell is launching IoT gateway solutions in collaboration with Intel. Using the headroom in analytics power offered by embedded Intel® processors, Dell is implementing a strategy of aggregating and analyzing data closer to the IoT edge. Dell's gateway utilizes hardware and software technologies from the Intel® IoT Gateway reference design to provide customers with the end-to-end connectivity, security, and manageability needed to help realize an IoT vision. The company's iSeries Wyse 3290 gateways are compatible with Wind River Linux*, Ubuntu, Microsoft Windows* 10 IoT, and other popular operating systems for maximum flexibility and adaptability to business needs.



Medical and healthcare professionals can benefit from the Internet of Things (IoT) due to the wealth of information it provides. In fact, greater data access promises to help improve patient outcomes and provider efficiency. For example, the POC-W212, developed by Advantech, is a high-performance point-of-care computer capable of displaying, rendering, and transmitting medical images and video for medical professionals. Designed from the ground up for enhanced safety and reliability, the fanless POC-W212 is UL60601-1/EN60601-1 certified and features an IP65-rated front panel and IP54-rated system enclosure.



One vertical industry that's prospering from the Internet of Things (IoT) is digital security surveillance (DSS). The growth stems from the power and availability of the network combined with the abundance of inexpensive digital cameras and dependable analytics software. Kontron offers a series of network video recorders (NVRs) that use an open system design based on the Intel® Atom™ processor E3845 and its quad-core, 1.91-GHz performance. The NVRs can run standard video management software (VMS).



The Smart Home market accounts for many of the early success stories in the build out of the Internet of Things (IoT). Features of the Smart Home include energy management, occupant comfort, asset management, and security access. Note that most of these same features will apply to the Smart Building.


By leveraging the Intel® IoT Gateway reference design, which includes technology from Intel® Internet of Things Solutions Alliance members Wind River and McAfee, technology partners can expand the connectivity, improve the interoperability, and shorten the time to market for developers looking to transform their IoT business and operations.


Elitegroup Computer Systems, generally known as ECS, has turned its own office into a smart building in an effort to replicate a "super" smart home (Figure 1). The Taipei, Taiwan–based maker of computing products, and a General member of the Alliance, has deployed 90 IoT gateways on its 25 floors across an area of 32,000 m2. ECS's office building receives readings from different types of sensors, including temperature sensors, motion detectors, and O2/CO2 sensors, which give the office administration the option to automatically adjust the environment and save energy in real time. ECS claims to have achieved an 8% power savings so far this year, and the company is now aiming for energy savings as high as 30% next year.
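As a concrete illustration of the kind of rule an IoT gateway might run against incoming sensor readings, the sketch below maps readings to environment adjustments. The sensor names, thresholds, and action strings are hypothetical, not ECS's actual software:

```python
# Hypothetical gateway-side rule: sensor names, thresholds, and the action
# strings are illustrative, not ECS's actual deployment.

def adjust_environment(readings):
    """Return a list of actions for the building controller.

    readings: dict mapping sensor name to its latest value.
    """
    actions = []
    # Dial HVAC back when CO2 is low and the space is unoccupied.
    if readings["co2_ppm"] < 600 and not readings["motion"]:
        actions.append("hvac:eco")
    # Bring fresh air in before CO2 reaches uncomfortable levels.
    elif readings["co2_ppm"] > 1000:
        actions.append("hvac:ventilate")
    # Cut lighting on unoccupied floors to save energy.
    if not readings["motion"]:
        actions.append("lights:off")
    return actions

print(adjust_environment({"co2_ppm": 450, "motion": False}))
# -> ['hvac:eco', 'lights:off']
```

In a real deployment the gateway would evaluate rules like this continuously against live sensor streams and push the resulting commands to the building automation system.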



Figure 1. Elitegroup Computer Systems (ECS) is using gateways based on the Intel® IoT Gateway design to monitor a wide range of systems in their own building.


Administrators can also remotely control lighting, HVAC, and meeting room equipment. Moreover, employees can access a floor only with their ID cards, which are supervised by a central security system for protection and access control. In the smart home, consumers could control lighting, HVAC, and door locks, among other things, as shown in the Smart Home video.


A key issue that must be dealt with in Smart Home and Smart Building environments is how to handle the plethora of wired and wireless protocols. ECS's GWS-QX gateway offers a full set of wired and wireless interfaces, minimizing infrastructure changes for already-installed wired devices while also connecting to a variety of legacy wireless devices during installation (Figure 2). For Smart Home use, the GWS-QX2 gateway is also equipped with configurable I/O ports (RS-232 and RS-485 interfaces), as well as LAN and GPIO ports, to support most Smart Home applications. The GWS-QX2, smaller than a television set-top box, integrates the Wind River* Intelligent Device Platform XT, a customizable middleware development environment that provides security, connectivity, rich networking options, and device management, simplifying the development, integration, and deployment of IoT gateways.
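Many of the legacy wired devices reached over RS-485 speak Modbus RTU (an assumption here; the article does not name a protocol). The sketch below builds a minimal Modbus RTU read request, including the CRC-16 that frames every message on the wire:

```python
import struct

def crc16_modbus(frame: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            lsb = crc & 1
            crc >>= 1
            if lsb:
                crc ^= 0xA001
    return crc

def build_request(unit: int, func: int, addr: int, count: int) -> bytes:
    """Modbus RTU request: unit ID, function code, register address, count."""
    body = struct.pack(">BBHH", unit, func, addr, count)
    return body + struct.pack("<H", crc16_modbus(body))  # CRC low byte first

req = build_request(unit=1, func=3, addr=0, count=1)  # read 1 holding register
# A receiver validates by recomputing the CRC over the whole frame,
# CRC field included: the result must be zero.
assert crc16_modbus(req) == 0
```

The zero-residue check at the end is the standard way a Modbus receiver verifies frame integrity before acting on a request.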



Figure 2. The blueprint of a Smart Building.


ECS works closely with Tatung Group, ECS's parent company and a system integrator. Tatung uses IoT gateways like the GWS-QX and GWS-QX2 with its cloud framework for monitoring sensor data such as power consumption levels.


Both the GWS-QX and GWS-QX2 gateways for Smart Home applications are powered by the Intel® Quark™ SoC X1021, which supports extended temperatures (up to 85°C) and secure boot. The Intel® Quark™ SoC embedded processors have been designed from the ground up for thermally constrained, fanless, and headless designs, which simplifies integration into smart homes. The Intel® Quark™ SoC offers rich I/O capabilities and an array of connectivity options for both wired and wireless protocols. Moreover, it seamlessly interfaces with sensors and various memory options through several expansion ports.


For developers looking to target similar technology for industrial applications, ECS will offer another IoT gateway early next year. The GWS-BTI is aimed at industrial applications like manufacturing automation that involve high voltage and heavy industrial equipment. Such industrial use models require constant surveillance with predictive maintenance to avoid shutdowns or component failures. The gateway provides the connectivity needed to capture edge data in real time. This data is stored in the cloud, where Tatung's Stream Analytics software helps the gateway identify equipment status and the power consumption of individual subcomponents to ensure reliable and robust operations.
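Tatung's actual analytics are not public, but the core idea behind this kind of predictive maintenance, flagging a subcomponent whose power draw drifts away from its recent baseline, can be sketched in a few lines. The window size and threshold below are illustrative assumptions:

```python
from collections import deque
from statistics import mean, pstdev

def make_monitor(window=20, n_sigmas=3.0):
    """Flag power readings that drift well outside the recent baseline."""
    history = deque(maxlen=window)

    def check(watts):
        alarm = False
        if len(history) == history.maxlen:
            mu, sigma = mean(history), pstdev(history)
            # Alarm when the new reading is far outside the recent spread.
            if sigma > 0 and abs(watts - mu) > n_sigmas * sigma:
                alarm = True
        history.append(watts)
        return alarm

    return check

check = make_monitor(window=5)
readings = [100, 101, 99, 100, 101, 100, 250]  # final reading is a spike
alarms = [check(w) for w in readings]
print(alarms)  # only the 250 W spike trips the alarm
```

A production system would of course use per-subcomponent baselines and richer models, but the rolling-statistics pattern is the same.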


The GWS-BTI's industrial design can deal with a challenging environment at temperatures from -40°C to 70°C. A full-featured I/O port allows the gateway to be connected to a variety of sensors, including those for energy management (Figure 3). Moreover, 3G cellular and Wi-Fi features enable real-time remote supervision and instant event reporting.



Figure 3. Energy management in a wide range of markets can be handled through IoT gateways based on the Intel® IoT Gateway design.


Note that the GWS-BTI is designed with a more powerful processor from the Intel® Atom™ processor E3800 product family, which is also specified for Intel® IoT Gateway designs. This Intel® Atom™ processor supports up to 8 GB of DDR3L memory.


ECS also employs Intel® Trend Analytics Software (TAS), which, when combined with ECS's home-grown Energy Management Sensor, enables IoT gateways like the GWS-BTI to perform predictive energy maintenance, energy monitoring, and vibration detection via a standard USB port.


See the Solutions Directory for more products from ECS.


Learn More


·       Contact featured member:

Solutions in this blog:

·       GWS-QX2 gateway

·       Wind River* Intelligent Device Platform XT

·       Intel® Quark™ SoC X1021

·       Intel® IoT Gateway

·       Intel® Trend Analytics Software

Related topics:

ECS is a General member of the Intel® Internet of Things Solutions Alliance


Richard Nass


OpenSystems Media by special arrangement with the Intel® Internet of Things Solutions Alliance.

The recent SDN & OpenFlow World Congress in Dusseldorf attracted a fascinating mix of attendees. On one side were long-time veterans of the telecom industry exploring the opportunities that virtualization is bringing to service provider networks. On the other side were IT and cloud experts working on the challenges of extending their infrastructure to support telecom services.

The topic bringing these two groups together, of course, is Network Functions Virtualization (NFV). The promise of NFV is that a combination of virtualization and “cloudification” will enable service providers both to reduce their OPEX through improved network efficiency and to improve their top-line revenue through the agile delivery of new, value-added services. In order to successfully achieve this goal, IT teams and networking teams are going to have to work together in unprecedented ways. Each group approaches the challenges from a different perspective and with a different set of experiences.

One area that causes a lot of confusion and misunderstanding for folks with a background in IT and cloud infrastructure is the whole topic of "Carrier Grade" reliability for telecom services. More and more vendors are starting to use Carrier Grade terminology in connection with their products, but the requirements and challenges of Carrier Grade reliability are very different from what many people have had to deal with before, while the telecom industry, of course, brings its own alphabet soup of confusing acronyms and terminology.

In this post, we’ll outline some of the myths about Carrier Grade that we often encounter when we’re demonstrating NFV solutions to conference attendees whose main focus until now has been on enterprise-type applications.


Myth #1: Carrier Grade reliability has no direct impact on service provider revenues

In 2014, Heavy Reading published a detailed analysis titled "Mobile Network Outages & Service Degradations" that discussed the business impact of network outages. The report calculated that during the twelve months ending October 2013 service providers worldwide lost approximately $15B in revenue through such outages, representing between 1% and 5% of their total revenues. All major service providers were affected.

There are several sources of this lost revenue. First, there’s the increased rate of subscriber churn (dissatisfied customers take their business elsewhere). Second, there are the operational expenses incurred to fix the problems. Third, service providers lose the ability to capture revenue from a billable service if it’s unavailable. Fourth, future revenues are impacted due to damage to brand reputation. Fifth, refunds must be paid to enterprise customers with Service Level Agreements (SLAs) that guarantee a certain level of uptime. And finally there are inevitably legal costs relating to SLA issues.

It’s important to note that this analysis relates to a 12-month period ending in 2013, when service providers’ infrastructure was completely based on physical equipment, typically with high reliability proven over many years’ deployments and before any adoption of network virtualization.

NFV has the potential to make this situation much worse: services and applications will now be virtualized; they will be new and unproven; VMs will be dynamically reallocated across servers, racks and even data centers; traffic flows will be more complex and hard to debug; solutions will inevitably be multi-vendor rather than from a single supplier.

As they progressively adopt NFV, it’s a business imperative for service providers to maintain Carrier Grade reliability for their critical services and high-value customers. Otherwise their overall uptime will decrease, further impacting their revenues and negating one of the key reasons (top-line growth) for moving to NFV in the first place.


Myth #2: Carrier Grade reliability is a stand-alone “feature” that you can add to your infrastructure

It’s extremely difficult to develop network infrastructure that delivers Carrier Grade reliability. Multiple, complex technologies are needed in order to guarantee six-nines (99.9999%) reliability at the infrastructure level so that services can achieve five-nines uptime.

Looking first at what it takes to guarantee network availability for virtualized applications, an optimized hypervisor is required that minimizes the duration of outages during the live migration of Virtual Machines (VMs). The standard implementation of KVM, for example, doesn't provide the response time that's required to minimize downtime during orchestration operations for power management, software upgrades, or reliability spare reconfiguration. In order to respond to failures of physical or virtual elements within the platform, the management software must be able to detect failed controllers, hosts, or VMs very quickly and launch self-healing actions, so that service impact is minimized or eliminated when failovers occur. The system must automatically act to recover failed components and to restore sparing capability if it has been degraded. To do this, the platform must provide a full range of Carrier Grade availability APIs (shutdown notification, VM monitoring, live migration deferral, etc.), compatible with the needs of the OSS, orchestrator, and VNFs. The software design must ensure there is no single point of failure that can bring down a network component, nor any "silent" VM failures that can go undetected.
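A minimal sketch of the fast failure detection described above, assuming a simple heartbeat scheme. The class name, interval, miss count, and callback are illustrative, not Wind River's actual APIs; real platforms tune these to detect failures in well under a second:

```python
import time

class HeartbeatMonitor:
    """Declare a component failed after `misses` consecutive missed beats."""

    def __init__(self, interval_s=0.1, misses=3, on_failure=None):
        self.interval_s = interval_s
        self.misses = misses
        self.on_failure = on_failure or (lambda name: None)
        self.last_seen = {}   # component name -> timestamp of last heartbeat
        self.failed = set()

    def beat(self, name, now=None):
        """Record a heartbeat; a beating component is no longer failed."""
        self.last_seen[name] = time.monotonic() if now is None else now
        self.failed.discard(name)

    def poll(self, now=None):
        """Check every component and trigger self-healing for silent ones."""
        now = time.monotonic() if now is None else now
        deadline = self.interval_s * self.misses
        for name, seen in self.last_seen.items():
            if name not in self.failed and now - seen > deadline:
                self.failed.add(name)
                self.on_failure(name)  # e.g. restart the VM, trigger failover

recovered = []
mon = HeartbeatMonitor(on_failure=recovered.append)
mon.beat("vm-1", now=0.0)
mon.beat("vm-2", now=0.25)
mon.poll(now=0.5)   # vm-1 silent for 0.5 s, past its 0.3 s budget
print(recovered)    # vm-1 is handed to the self-healing hook
```

The point of the sketch is the division of labor: the platform, not the application, owns detection and triggers recovery.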

Second, network security requirements present major challenges. Carrier Grade security can’t be implemented as a collection of bolt-on enhancements to enterprise-class software, rather it must be designed-in from the start as a set of coordinated, fully-embedded features. These features include: full protection for the program store and hypervisor; AAA (Authentication, Authorization and Accounting) security for the configuration and control point; rate limiting, overload and Denial-of-Service (DoS) protection to secure critical network and inter-VM connectivity; encryption and localization of tenant data; secure, isolated VM networks; secure password management and the prevention of OpenStack component spoofing.

Third, a Carrier Grade network has stringent performance requirements, in terms of both throughput and latency. The host virtual switch (vSwitch) must deliver high bandwidth to the guest VMs over secure tunnels. At the same time, the processor resources used by the vSwitch must be minimized, because service providers derive revenue from resources used to run services and applications, not those consumed by switching. The data plane processing functions running in the VMs must be accelerated to maximize the revenue-generating payload per Watt. In terms of latency constraints, the platform must ensure a deterministic interrupt latency of 10µs or less, in order for virtualization to be feasible for the most demanding CPE and access functions, such as C-RAN. Finally, live migration of VMs must occur with an outage time less than 200ms, using a "share nothing" model in which all a subscriber's data and state are transferred as part of the migration. The "share nothing" model, used in preference to the shared storage model in enterprise software, ensures that legacy applications are fully supported without needing to be rewritten for deployment in NFV.

Finally, key capabilities must be provided for network management. To eliminate the need for planned maintenance downtime windows, the system must support hitless software upgrades and hitless patches. The backup and recovery system must be fully integrated with the platform software. And support must be implemented for “Northbound” APIs that interface the infrastructure platform to the OSS/BSS and NFV orchestrator, including SNMP, Netconf, XML, REST APIs, OpenStack plug-ins and ACPI.

You can't meet these challenging requirements by starting from enterprise-class software that was originally developed for IT applications. Such software usually achieves three-nines (99.9%) reliability, equivalent to a downtime of almost nine hours per year.
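The arithmetic behind those "nines" is straightforward and worth keeping to hand:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_per_year(availability):
    """Expected downtime in minutes per year for a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

for label, a in [("three nines", 0.999),
                 ("five nines", 0.99999),
                 ("six nines", 0.999999)]:
    print(f"{label}: {downtime_per_year(a):8.2f} min/year")
```

Three nines works out to about 525.6 minutes (8.76 hours) of downtime per year, five nines to about 5.3 minutes, and six nines to roughly 31.5 seconds.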


Myth #3: Carrier Grade reliability can be implemented in the network applications themselves

There's been a lot of industry discussion recently about Application-Level High Availability (HA). This concept places the burden of ensuring service-level reliability on the applications themselves, which in an NFV implementation are the VNFs. If it's achievable, it's an attractive idea because it means that the underlying NFV Infrastructure (NFVI) could be based on a simple open-source or enterprise-grade platform.

Even though such platforms, designed for IT applications, typically only achieve three-nines reliability, that would be acceptable if the applications themselves could recover from any potential platform failures, power disruptions, network attacks, link failures etc. while also maintaining their operation during server maintenance events.

Unfortunately, Application-Level HA by itself doesn’t achieve these goals. No matter which of the standard HA configurations you choose (Active / Standby, Active / Active, N-Way Active with load balancing), it won’t be sufficient to ensure Carrier Grade reliability at the platform level.

In order to ensure five-nines availability for services delivered in an NFV implementation, you need a system that guarantees six-nines uptime at the platform level, so that the platform can detect and recover from failures quickly enough to maintain operation of the services. This implies that the platform needs to deal with a wide range of disruptive events which cannot be addressed by the applications because they don’t have the right level of system awareness or platform management capability.
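A back-of-envelope model shows why detection and failover speed, not just redundancy, set the availability ceiling. The failure rates and timing figures below are illustrative assumptions, not measured data:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def service_availability(failures_per_year, detect_s, failover_s):
    """Availability of a redundant pair when every failure costs the
    detection delay plus the failover time (a simplified model)."""
    outage_s = failures_per_year * (detect_s + failover_s)
    return 1 - outage_s / SECONDS_PER_YEAR

# Enterprise-style health checks: ~30 s to detect, ~30 s to fail over.
slow = service_availability(failures_per_year=10, detect_s=30, failover_s=30)
# Carrier Grade platform: sub-second detection, sub-200 ms migration.
fast = service_availability(failures_per_year=10, detect_s=0.5, failover_s=0.2)
print(f"slow detection: {slow:.6f}")   # falls short of five nines
print(f"fast detection: {fast:.6f}")   # clears six nines
```

With identical redundancy and the same failure rate, the slow-detection case accumulates 600 seconds of outage per year (below five nines), while fast platform-level detection keeps it to 7 seconds, which is why the platform, with its system-wide awareness, has to own this job.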

For anyone involved in architecting, developing or deploying any part of an end-to-end NFV solution, this new white paper “NFV: The Myth of Application Level HA” is required reading. It provides a detailed technical analysis of the tradeoffs between Application-Level HA and Carrier Grade platforms and gives a clear direction to follow.


Myth #4: Carrier Grade reliability is something you get from the OPNFV project

Formally launched in September 2014, the Open Platform for NFV (OPNFV) project is an open source reference platform intended to accelerate the introduction of NFV solutions and services. OPNFV operates under the Linux Foundation and the primary goal of the project is to implement the ETSI specification for NFV.

Several service providers have been quoted publicly as confirming that they see the OPNFV reference platform as a way to accelerate the transition from the standards established by ETSI to actual NFV deployments. Of course they recognize that OPNFV code can’t be directly deployed into live networks, anticipating that software companies will use OPNFV as the baseline for commercial solutions with full SLA support.

OPNFV's initial focus is NFV Infrastructure (NFVI) and Virtualized Infrastructure Management (VIM) software, implemented by integrating components from upstream projects such as OpenDaylight, OpenStack, Ceph Storage, KVM, Open vSwitch and Linux. Along with application programmable interfaces (APIs) to other NFV elements, these NFVI and VIM components form the basic infrastructure required for hosting VNFs and interfacing to Management and Network Orchestration (MANO).

The first OPNFV release “Arno” became available in June 2015. Arno is a developer-focused release that includes the NFVI and VIM components. The combination offers the ability to deploy and connect VNFs in a cloud architecture based on OpenStack and OpenDaylight. The next release “Brahmaputra” is planned as the first “lab-ready” release, incorporating numerous enhancements in areas such as installation, installable artifacts, continuous integration, improved documentation and sample test scenarios.

Neither Arno nor Brahmaputra, however, incorporates any features that contribute to delivering Carrier Grade reliability in the NFVI platform. This is an example of an area where companies with proven experience in delivering six-nines infrastructure will continue to add critical value.

Solutions such as Wind River’s Titanium Server build on community-driven reference code and enhance it with functionality that is an absolute requirement for platforms deployed in live service provider networks, while remaining fully compatible with all the applicable open standards.


At SDN & OpenFlow World Congress, we enjoyed exploring these topics with attendees who stopped by our booth to see a comprehensive demonstration of a proven Carrier Grade NFV cloud solution that's already been selected by multiple customers. The folks whose background was primarily in IT or cloud applications quickly developed a whole new appreciation for the complexities associated with guaranteeing the level of reliability that's an absolute requirement in the world of telecom.

If you missed us in Dusseldorf, or simply want to learn more about how we deliver NFVI with the performance and uptime that service providers require, visit our Titanium Server website.

The potential of the Internet of Things (IoT) is huge, but the challenges can be daunting. Take the Industrial IoT, or IIoT, which can be fragmented amid interoperability issues, making it difficult to demonstrate its true value. Connecting devices and systems that use potentially incompatible protocols and fieldbus technologies is a complex task. The next challenge is ensuring dependable transmission of collected data, and dependable operation, in harsh and demanding industrial environments. By leveraging Intel® IoT Gateway designs that include technology from McAfee and Wind River, technology partners like ADLINK Technology can address these challenges. For more information, please watch the video.



To be competitive, innovative solutions are needed at each step of the IoT value chain. Take industrial automation, for instance, where it's imperative to avoid production disruptions that can result in significant revenue losses. Avoiding unplanned downtime requires remote monitoring and management, system health diagnostics, and knowledge of each machine's operating parameters to enable failure prediction. One must also ensure that when failure is predicted, steps are taken so that system recovery, if required, occurs in a timely fashion. A combination that can help achieve this is Wind River's Helix Device Cloud solution and Predixion Insight™, Predixion's advanced analytics software.



It’s been a busy 18 months for Wind River’s Titanium Server NFV Infrastructure (NFVI) platform. We announced Titanium Server at Mobile World Congress back in February 2014, as the industry’s first commercial Carrier Grade solution for NFVI, and it’s still the only platform in this category. We delivered the General Availability product release in October 2014 and shortly after that revealed that HP had adopted the technology for their Carrier Grade Helion solution: just the first of several customer announcements. We launched the Titanium Cloud partner ecosystem in June 2014 and this has quickly expanded to a rich set of validated solutions from industry-leading companies.

In this post, we’ll discuss how our customers’ target use cases have evolved during this time and highlight some of the key enhancements that we’ve made to Titanium Server reflecting this shift in focus within the industry.

At the beginning of 2014, the industry was emphasizing applications such as virtual EPC and virtual IMS as initial applications for NFV. The expectation was that these network core functions would not only yield significant OPEX savings but would also be suitable for early deployment.

That position seems to have changed in the intervening months. Our customers are now telling us that they see greater near-term business potential in applications such as virtual business CPE (vBCPE), Mobile Edge Computing (MEC) and virtual RAN (vRAN) use cases.

We consistently hear that these edge and access use cases will significantly accelerate service providers’ ability to deploy new services in response to customer requests, while also providing OPEX savings that are both significant and quantifiable. At the same time, they can be implemented without the need for comprehensive Management and Orchestration (MANO) solutions, which is helpful since the relevant ETSI standards are still under discussion.

In response to these shifting industry priorities, we’ve moved aggressively to implement new features in Titanium Server so that we can continue to support our customers’ target applications with an NFVI cloud that provides the reliability, performance and cost structure that they need. Our most recent release, now in use by customers and partners worldwide, includes a wide range of enhancements to the original platform.

Low system cost is critical to the viability of many vBCPE, MEC and vRAN solutions. These applications are often hosted either in a customer premise or in a local service provider Point of Presence (PoP) where large server racks are not cost-effective. So we’ve added a small-footprint version of Titanium Server that can be deployed in only two servers.

In this configuration, each of two redundant servers is partitioned into Compute, Control and Storage functions. The control and storage functions can each run on as few as a single processor core, leaving the lion's share of cores available for the compute function which hosts revenue-generating services.

Unlike other enterprise-class platforms that require a third redundant control node (and therefore a third server) to arbitrate between the other two in the case of failures, Wind River has unique technology that avoids “split brain” conditions and enables Titanium Server to achieve full Carrier Grade reliability using just two servers, resulting in significant CAPEX and OPEX savings for our customers.

OPEX savings are critical for many of our customers, so processor resources need to be provisioned dynamically and optimally based on the actual network traffic at any given time, rather than over-allocated in anticipation of peak demand. To accomplish this, we've added a sophisticated CPU scale-up / scale-down capability to Titanium Server that provides full dynamic scaling without compromising Carrier Grade reliability. As traffic through a VM increases to the point where the VM is close to saturating the processor cores that it's running on, Titanium Server automatically allocates additional processor cores to the VM. Similarly, when the load on a VM drops so that it needs fewer resources, processor cores are automatically removed. All this happens without any need to restart or reboot the VM, ensuring that there's no risk of service downtime during the scale-up / scale-down process. Further, the triggers which initiate the scaling actions are flexible and policy-driven, enabling full control over the process.
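The scale-up / scale-down behavior described above can be sketched as a simple threshold policy. The thresholds, core limits, and function shape are illustrative assumptions, not the product's actual policy engine:

```python
def scaling_decision(cpu_util, cores, min_cores=1, max_cores=8,
                     high=0.80, low=0.30):
    """Policy-driven vCPU scaling: return the new core count for a VM.

    cpu_util is the VM's aggregate CPU utilization (0.0 to 1.0).
    """
    if cpu_util > high and cores < max_cores:
        return cores + 1          # scale up before the VM saturates
    if cpu_util < low and cores > min_cores:
        return cores - 1          # release cores when traffic drops
    return cores                  # within band: no change, no VM restart

assert scaling_decision(0.92, cores=2) == 3   # busy VM gains a core
assert scaling_decision(0.10, cores=3) == 2   # idle VM gives one back
assert scaling_decision(0.55, cores=4) == 4   # steady state: untouched
```

Real triggers would be evaluated continuously against hysteresis windows so a briefly spiky VM doesn't oscillate between core counts.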

A key component of a service provider's OPEX calculation is VM density, in other words the number of VMs that can be supported per server. In order to maximize the VM density in an NFV deployment, it's important for the NFVI platform to support the very latest in high-performance Network Interface Cards (NICs). With Titanium Server, we work closely with the industry's leading NIC providers and make sure that we implement optimized support for their high-performance solutions targeted at NFV applications. As an example, Titanium Server now supports the Intel® Ethernet Controller XL710 (formerly known as "Fortville") as well as the Mellanox CX3 10G/40G NICs. We'll continue to add support for additional new high-performance NICs as they become available from our partners.

As shown on the Titanium Cloud website, a large number of Virtual Network Function (VNF) suppliers are now supporting Titanium Server. It's important for most of these partners to optimize the performance of their VNFs and ensure they're fully leveraging the performance-oriented features of the platform, such as the Accelerated vSwitch.

We’ve added a vSwitch packet trace tool to Titanium Server, enabling these partners to efficiently tune their VNFs so that they can deliver the highest possible performance to our mutual customers.


Many of our VNF partners use the Intel® Data Plane Development Kit (Intel® DPDK) software library as a way to maximize the packet processing performance of their applications (and we also use DPDK within the Titanium Server Accelerated vSwitch). While some of those partners have migrated to the latest version of DPDK, revision 2.0, many are continuing to use earlier versions. With Titanium Server, we implemented support for VNFs based on DPDK 2.0 as soon as it became available and we also migrated the Accelerated vSwitch to DPDK 2.0. Uniquely, we continue to support VNFs based on older DPDK versions, enabling multiple simultaneous versions to be running in VNFs concurrently; there’s no requirement for our partners to move to DPDK 2.0 in the guest even though the host uses that version. This feature is key to OPEX savings, ensuring that service providers can choose when to upgrade their VNFs rather than being compelled to do so when an obscure platform limitation is exposed.

To simplify and accelerate the migration of network functions to Titanium Server, we’ve added support for standard “QinQ” tunneling. This ensures that complex applications implementing their own VLAN network segregation schemes don’t have to be rewritten when transitioning to the virtualized environment provided by Titanium Server. Applications can continue to employ their own VLAN tags while Titanium Server’s Accelerated vSwitch transparently tunnels traffic across and between nodes and networks, uniquely encapsulating and protecting each VNF’s traffic.
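At the frame level, QinQ (IEEE 802.1ad) simply nests the tenant's own 802.1Q tag inside an outer service tag. A byte-level sketch using only the standard library; the MAC addresses and VLAN IDs are arbitrary examples:

```python
import struct

ETH_P_8021AD = 0x88A8   # TPID of the outer (service) tag
ETH_P_8021Q  = 0x8100   # TPID of the inner (customer) tag
ETH_P_IPV4   = 0x0800

def vlan_tag(tpid, vlan_id, pcp=0):
    """4-byte 802.1Q/802.1ad tag: TPID followed by PCP/DEI/VLAN-ID."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)
    return struct.pack(">HH", tpid, tci)

def qinq_header(dst, src, s_vlan, c_vlan):
    """Ethernet header with an outer S-tag wrapping the tenant's C-tag."""
    return (dst + src
            + vlan_tag(ETH_P_8021AD, s_vlan)   # added by the vSwitch
            + vlan_tag(ETH_P_8021Q, c_vlan)    # the VNF's own tag, untouched
            + struct.pack(">H", ETH_P_IPV4))

hdr = qinq_header(dst=b"\xff" * 6, src=b"\x02\x00\x00\x00\x00\x01",
                  s_vlan=100, c_vlan=42)
print(len(hdr))  # 6 + 6 + 4 + 4 + 2 = 22 bytes
```

Because the outer tag is added and stripped at the edge, the VNF's inner VLAN scheme travels through the infrastructure unmodified, which is exactly why applications don't need to be rewritten.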

The final new feature that we'll highlight in this post is one that's critically important to service providers deploying an NFV cloud based on Titanium Server. For service providers, infrastructure deployment costs represent a significant portion of their overall OPEX, and many have expressed nervousness about the learning curve for their IT teams as they roll out new platforms. We've addressed this concern through a new bulk provisioning capability. This graphical, wizard-like tool greatly simplifies the automated deployment of large, distributed Titanium Server clusters and supports accelerated installation from a boot server. This is all part of our focus on ensuring ease of use for our customers while also maximizing service-level performance and reliability.

Besides the features that we’ve touched on above, the latest release of Titanium Server includes a wealth of other enhancements in areas such as: huge page support; enabling VMs to span NUMA nodes; accelerated and distributed virtual routing; enabling scheduler hyperthreading awareness; Link Aggregation Control Protocol (LACP) and more.

Please feel free to contact us to talk about any of these topics or to suggest other areas that we should investigate. The industry is moving quickly to focus on early use cases that will deliver strong Return on Investment and Wind River will continue to deliver the NFVI platform features that are required for these applications.
