
Market Applications



Until recently, the United States took a decidedly old-fashioned approach to running its rail systems. Most of the country’s signaling, switching, and train operation has been handled manually, leading to inefficiencies and unnecessary hazards. The Metrolink commuter train accident in 2008 highlighted just how dangerous manual controls could be, prompting Congress to pass the U.S. Rail Safety Improvement Act of 2008. The act mandated that approximately 73,000 miles of rail and transit infrastructure have positive train control (PTC) systems in place by 2015.


PTC is a computerized system for monitoring and controlling the movement of trains. The top priority is collision avoidance, but PTC systems can also automatically regulate the speed of trains in response to rail conditions, on-board equipment malfunctions, temporary speed restrictions, and other safety considerations. While the Federal Railroad Administration envisions a National Differential Global Positioning System (NDGPS) to enable seamless train tracking and control, to date the infrastructure is still a patchwork of legacy systems and disparate approaches.

 

SDR vs. the Tower of Babel

One company trying to address that shortcoming is Santa Clara-based Lilee Systems. Lilee’s unique software-defined radio (SDR) technology and Intel®-based hardware provide complete end-to-end wireless mobility management, enabling trains moving cross-country, for example, to interact intelligently with a wide range of legacy safety systems along the way.

 

According to Jon Adams, Lilee’s VP of Strategic Development, “There are many components to Positive Train Control. We do the onboard radios, the onboard networking processors, the wayside radios and messaging processors, and the back office mobile IP abstraction. It’s all standards-based and high security. Every train, every piece of equipment in the field—whether it’s fixed or moving—has a fixed IP address, so it becomes straightforward to manage your assets.”

 

Why resort to something as complex as SDR to handle what would seem to be a relatively straightforward problem? “The answer has less to do with technology than with FCC regulatory domains,” Adams explained. “If you look at the 217-222 MHz band (Figure 1), which is where much of the industry has decided it’s going to put their PTC systems, it’s under four different parts of the FCC [regulations]: it’s under Part 80, which is Maritime Mobile; it’s under Part 90, which is Business/Industrial; it’s under Part 95, which is Citizens Band; and it’s under Part 97, which is the Amateur Radio Service. You can’t operate under the Amateur Radio Service, but you can operate under the other three parts.”


Figure 1: The U.S. radio spectrum from 217-222 MHz is multi-layered.

 

“The challenge,” continued Adams, “is even if you build a radio that’s flexible in frequency, it still needs to meet the special requirements of whichever part in which it’s operating. But in those parts they don’t specify modulation type, data rates, coding, or other things. So having a fixed radio means you can only service one segment of a pretty small market. But a software defined radio enables you to throw a virtual switch and suddenly you’re completely compliant with Part 80 and are at 16 kbps; or you’re completely compliant with Part 90 and you’re at 9600 baud. That’s why we took the SDR approach.”
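The “virtual switch” Adams describes amounts to a lookup from FCC part to radio parameters. The sketch below is purely illustrative: the part numbers, services, and data rates come from the quotes above, while the profile structure and the `configure()` helper are hypothetical.

```python
# Illustrative sketch of an SDR "virtual switch" between FCC regulatory
# profiles. The parts and data rates come from the article; the profile
# fields and the configure() helper are hypothetical.

FCC_PROFILES = {
    "Part 80": {"service": "Maritime Mobile",     "data_rate_bps": 16000},
    "Part 90": {"service": "Business/Industrial", "data_rate_bps": 9600},
    "Part 95": {"service": "Citizens Band",       "data_rate_bps": None},  # rate not given
}

def configure(radio_state, part):
    """Reconfigure the software-defined radio for one regulatory domain."""
    if part not in FCC_PROFILES:
        raise ValueError(f"cannot operate under {part}")  # e.g. Part 97 (Amateur)
    radio_state.update(part=part, **FCC_PROFILES[part])
    return radio_state

radio = configure({}, "Part 90")
print(radio["data_rate_bps"])  # 9600
```

A fixed-function radio would hard-wire one row of that table; the SDR approach keeps all rows available at run time.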

 

Intel Inside (and Outside) the Train

How do Lilee’s solutions leverage Intel technology? “If you look inside our Lilee Mobility Controllers—that go in the back office—or our Wayside Messaging Servers you’ll find an x86 Intel processor that’s running the whole application space. It’s a very robust architecture, and we chose it because it’s so well supported by operating systems and by the customer base. You need to look at the cost of maintaining a platform, and we felt that the Intel architecture really does help to mitigate the unknowns.”


Figure 2: Lilee's LMC-5500 Mobility Controllers provide the backbone for an integrated PTC system.

 

Lilee’s LMC-5500 Series Mobility Controllers (Figure 2) provide radio device management with roaming control and enable a conduit between the remote network and the back office servers. LMC-series controllers are built around the Intel® Core™2 Quad Processor Q9400 (6M Cache, 2.66 GHz, 1333 MHz FSB) and the Intel® 3210 Chipset with 82801IB I/O Controller Hub (ICH9). SDR radios within the network establish tunnels with the LMC-5000 so that mobile radios can move across different segments of the network without being aware of the underlying network topology changes.

 

Lilee’s Intel® Atom™-based WMS-2000 Connectivity and Application Controllers enable back office visibility of wayside status and alarm messages, providing an interoperable gateway for PTC and legacy train control systems.

 

One rail system that has completely committed to Lilee’s approach to PTC is Southern California’s Metrolink. “Metrolink is the commuter heavy rail link in Southern California with 219 miles of right of way with over 200 wayside locations for signals and switches where they need to talk to a train,” concluded Adams. “Lilee’s WMS-2000 messaging server is in every one of those. These units manage all the communications from the back office network to the train. In the back office Lilee LMC-5000 mobility controllers extract the IP address so the back office can always send a message to any particular device throughout their entire system.”

 

While engineers will continue to drive Metrolink’s trains, PTC backup systems are now in place to ensure that temporary distractions never again lead to disastrous consequences.

 


Lilee Systems is a general member of the Intel® Intelligent Systems Alliance. Lilee Systems is dedicated to delivering the highest quality, most reliable products and solutions for mobile connectivity across multiple market areas including railway.


John Donovan
Roving Reporter (Intel® contractor), Intel® Intelligent Systems Alliance
Editor/Publisher,
Low-Power Design
Follow me on twitter: @jdonovan43

By Franz Walkembach, Senior Product Manager, Wind River

 

The GENIVI Alliance recently released its latest compliance specification, GENIVI 3.0, and Wind River Platform for Infotainment was among the first to achieve it. Why does this matter? It’s all about trust, scalability, and quality.

 

GENIVI®, AUTOSAR, the Car Connectivity Consortium…these are all groups created to set standards across the auto industry. A long-time expert in this domain, Wind River is a founding member of the GENIVI® Alliance, keeping pace with the latest IVI developments and even helping to drive some of the changes coming from the Alliance. Wind River participates on the GENIVI board, the marketing council, and several expert groups. Wind River’s engineers have led initiatives such as including Android inside a GENIVI design, and they currently lead the base integration team (BIT).

 

As the BIT lead, Wind River plans and manages the ongoing team releases (every 6 weeks!) and organizes and coordinates the BIT projects (e.g., baseline automated testing). In addition, as Meta-IVI maintainer, Wind River releases and maintains a GENIVI baseline built on the Yocto Project (also every 6 weeks). This baseline has to align with other GENIVI goals, such as the IP policy, and with the various contributors, including silicon vendors and users of the baselines.

 

GENIVI has been working toward a compliance program that further helps align the industry and simplify things such as automotive requirements. In October, the Alliance updated its compliance program, introducing the latest specifications for GENIVI Compliance 3.0. In the automotive world, where everybody knows everybody and quality is of critical importance, compliance is increasingly becoming an effective way to show that a vendor is:

A) Well connected within the ecosystem,

B) Trusted with a solid reputation,

C) Up-to-date and drives innovative changes,

D) Offering software that meets the highest quality and latest requirements and is scalable across different platforms.

 

There are 19 parties registered in the compliance program so far. While most have achieved compliance with earlier GENIVI specifications (versions 1.0 or 2.0), just a few have achieved 3.0 compliance. With every new compliance specification release, a variety of new requirements are introduced for the different stacks inside an IVI platform, such as CE connectivity, the HMI application framework, system infrastructure, networking, and so on.

 

Currently, Wind River Platform for Infotainment has achieved GENIVI 3.0 compliance. However, compliance is just one part of it. Our platform also comes with a wealth of other features, such as:

  • A Linux kernel (based on Yocto Meta-IVI)
  • Optional solution accelerators, such as an iPod software solution
  • Downstream support
  • An application developer kit (ADK) for user-space development that helps teams build applications and product demos in a couple of weeks
  • A platform developer kit (PDK) for kernel-space development and software platform configuration
  • Patches that improve the quality of the open source components

 

Additionally, compliance statements for other silicon vendors’ BSPs will follow soon.

Wind River Platform for Infotainment is a Linux-based, GENIVI-compliant run-time platform specifically tuned for the IVI market. It can be used as a fast track for demo development or as a solid core for platform development. The platform is designed, packaged, and tested to support a full set of automotive features.

 

The GENIVI compliance program was launched in August 2011, and it has been gaining serious momentum since then. And of course, Wind River will continue to track closely with the compliance program at every step!

 

For additional information from Wind River, visit us on Facebook.

As we all know, the Internet isn’t just for people anymore. But that doesn’t mean that bringing intelligent systems onto the Internet of Things and getting them to work together well is an easy task. Fortunately, a new article in Boards & Solutions magazine shows us it doesn’t have to be rocket science. You just have to look for solutions based on the new Intel® Intelligent Systems Framework.

 

The article’s author is Jim Robinson, General Manager, Marketing and Business Operations, Intel® Intelligent Systems Group. In the article, he explains how this new framework provides a blueprint for hardware, operating systems, and tools that simplifies the delivery of interoperable solutions. He talks about the players behind this new specification and what they’re doing to support it. He shows how the specification enables scalability and cutting-edge connectivity, manageability and security across applications. And he explains the framework’s role in helping support solutions designed to turn big data into business intelligence.

 

Download the article now to learn how you can use this framework to streamline the development of your designs for the Internet of Things.

NFV (Network Function Virtualization) was obviously one of the hottest topics at MWC 2013. Several use cases were demoed by tier-1 platform vendors, telecom equipment manufacturers, service providers and operators to show the benefits of the NFV concept.

 

These use cases can be classified into two categories. The first is the migration of functions traditionally located at the customer premises into the operator’s infrastructure.

 

One example is the concept of a “virtual CPE” that can be applied to either consumer or enterprise markets. The new architecture relies on a very simple CPE (modem, switch, and antenna), with all services running on a central server in a virtual network appliance. CAPEX is considerably reduced. Software updates, service configuration, introduction of new services, and maintenance are also greatly simplified, so service providers and operators can significantly reduce their OPEX as well.

 

The cloud RAN (Radio Access Network) concept similarly reduces both CAPEX and OPEX for mobile networks. In the conventional network architecture, each antenna connects to a co-located eNodeB from which traffic is backhauled to the Evolved Packet Core (EPC) over an IP link. With the Cloud RAN approach, however, the eNodeB at the antenna is replaced by a low-cost Remote Radio Head. The digitized RF data is then connected over a fiber link to a centralized Baseband Unit pool in which the baseband processing is performed.

 

A second category of use cases addresses core network functions. In contrast to the first one, the equipment is already located in the core infrastructure. For instance, virtualizing EPC functions for mobile networks in virtual machines on generic platforms instead of dedicated equipment reduces the CAPEX. Dynamic allocation and management of network resources bring more flexibility and better usage of network resources, thereby reducing OPEX and increasing ARPU.

 

Will NFV be the first real transformation of the network and telecom business? These use cases seem very attractive for operators, enabling them to increase their margins and deploy differentiating services faster. To make this happen, end users will also need to see real reductions in their Internet expenses.

 

However, all these concepts have to come back to reality. Virtualizing hundreds of CPEs or complex EPC functions on a single server requires high-performance network bandwidth to be distributed to the virtualized functions. Today, standard virtualized architectures that work perfectly well for computing won’t be able to deliver the required network performance without optimized solutions like our 6WINDGate software. Just consider how many software layers a network packet has to traverse before reaching an application running in a virtual machine.

With fully interconnected embedded devices now the norm in many factory settings, industrial design teams are implementing predictive maintenance strategies to reduce downtime, lower personnel costs, and increase production. There are basically three different approaches to equipment maintenance in any setting:

  1. Reactive: Run equipment until it fails, then repair it
  2. Preventative: Shut down periodically to test and replace worn parts
  3. Predictive: Monitor equipment continually and repair as needed

The goal of predictive maintenance is to pinpoint when a failure is going to occur so that repairs can be made at a convenient time before the breakdown actually happens. Successful predictive maintenance requires real-time monitoring and analysis of important equipment parameters via remote sensors, management tools, and diagnostics along with universal connectivity.
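As a minimal illustration of that monitoring-and-prediction idea, the sketch below fits a linear trend to recent sensor readings and estimates when the value will cross a failure threshold. The sensor, units, and threshold are invented for the example; real predictive maintenance uses far richer models.

```python
# Minimal predictive-maintenance sketch: fit a least-squares trend to
# recent vibration readings (invented data) and estimate the hours
# remaining before a failure threshold is crossed.

def hours_to_threshold(readings, threshold):
    """readings: list of (hour, value) samples. Returns the estimated
    hours until the trend line reaches the threshold, or None if the
    readings are not trending upward."""
    n = len(readings)
    xs = [t for t, _ in readings]
    ys = [v for _, v in readings]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in readings)
             / sum((x - mx) ** 2 for x in xs))
    if slope <= 0:
        return None  # no degradation trend detected
    return (threshold - ys[-1]) / slope

# Vibration RMS (mm/s) sampled hourly, drifting upward by 0.1/hour:
samples = [(0, 2.0), (1, 2.1), (2, 2.2), (3, 2.3), (4, 2.4)]
print(round(hours_to_threshold(samples, threshold=3.0), 2))  # 6.0
```

With an estimate like this in hand, the repair can be scheduled at a convenient time before the predicted failure, which is exactly the point of the predictive approach.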

 

To simplify this remote data collection process, Intel has developed a number of technologies that can be implemented in embedded systems supporting industrial automation applications. One of the most recent introductions is the Intel® Intelligent Systems Framework (Intel® ISF), which simplifies connecting, managing, and securing embedded devices. Intel® ISF combines processor architecture, operating system software, and other tools to create secure, interoperable platforms for intelligent systems. The framework is built around system processors with Intel® vPro™ technology, which provides built-in hardware support for the remote management, virtualization, and platform security functions that can be used to extend the uptime of industrial embedded systems.


Intel® Active Management Technology (Intel® AMT) is a key element of Intel® vPro that enables real-time data collection from production equipment sensors that can be used in predictive maintenance analysis. Intel® AMT delivers certificate-based security allowing remote access to the embedded system for management and security tasks even when the system is powered off. This technology gives device support personnel a low cost technique to monitor operation, perform diagnostics, deliver product training, and manage required software updates from a remote location. In the event of a software failure, Intel® AMT enables a remote boot from an external operating system image over the network even if the system kernel has been completely corrupted. Most of this reboot process can be done with automated scripting and little human interaction which can significantly lower system downtime.
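The automated recovery loop described above can be sketched as a simple pattern: poll the device, and when it stops responding, trigger an out-of-band reboot. This is a pattern sketch only, not the real Intel® AMT interface (which is WS-Management based); `remote_reboot` here is a hypothetical stand-in for the out-of-band boot action.

```python
# Pattern sketch only: poll a device and trigger an out-of-band recovery
# action once it stops responding. The real Intel AMT interface is
# WS-Management based; remote_reboot() is a hypothetical stand-in.

def monitor(poll_responses, max_misses=3, remote_reboot=lambda: "rebooted"):
    """poll_responses: iterable of booleans (True = device answered).
    Returns a log of recovery actions taken."""
    misses, log = 0, []
    for alive in poll_responses:
        misses = 0 if alive else misses + 1
        if misses >= max_misses:
            log.append(remote_reboot())  # boot from a known-good external image
            misses = 0
    return log

print(monitor([True, False, False, False, True]))  # ['rebooted']
```

The point of the hardware-level capability is that this loop keeps working even when the monitored system’s own kernel is corrupted, which pure software agents cannot guarantee.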

Several members of the Intel® Intelligent Systems Alliance offer off-the-shelf Intel® AMT compatible processor platforms that designers can incorporate into industrial systems to collect the necessary data. For example, the CEQM77 COM Express module family from Intel® Intelligent Systems Alliance member Radisys combines a 3rd generation Intel® Core™ i7 processor and the Intel® QM77 Express chipset in a 95 mm x 125 mm form factor for high-performance industrial applications (see Figure 1). The CEQM77 supports DirectX 11 and OpenGL graphics, up to 16 GB of error-correcting code (ECC) memory, plus one PCI Express 3.0 x16 PEG port and seven PCI Express 2.0 x1 ports. The module provides Trusted Platform Module (TPM) support as well as support for Intel® AMT, enabling remote access and diagnostics via the Radisys Embedded Software Platform (eSP).


Figure 1. Radisys CEQM77 COM Express module.


The analysis portion of a predictive maintenance strategy can range from simply verifying that remote measurements are within pre-established limits to much more involved prognostic algorithms. For example, the Center for Intelligent Maintenance Systems (IMS) at the University of Cincinnati has developed a Watchdog Agent Prognostics Toolkit that includes algorithms for neural networks, logistics regression, Gaussian mixture modeling, and statistical pattern recognition that can be used to detect and predict faults in everything from critical factory machinery to human organs (See figure 2). The toolkit adds signal processing and analysis functionality to the LabVIEW system design software suite from Alliance member National Instruments. There are five basic steps in the IMS approach to prognostics: data acquisition, feature calculation, principal component analysis, fault classification and health monitoring, and health prediction.


Figure 2. IMS Watchdog Agent Prognostics Toolkit in NI LabVIEW.
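The front end of that five-step pipeline (data acquisition and feature calculation, followed by a simple statistical health indicator) can be sketched as below. The features, baseline, and spread values are invented for illustration; the real toolkit uses far more sophisticated algorithms such as PCA and Gaussian mixture modeling.

```python
import math

# Sketch of the front end of a prognostics pipeline: compute simple
# features from a raw sensor window, then score health as the distance
# from a "healthy" baseline in feature space. The features, baseline,
# and spread values are invented for illustration.

def features(window):
    n = len(window)
    rms = math.sqrt(sum(x * x for x in window) / n)
    peak = max(abs(x) for x in window)
    return {"rms": rms, "crest": peak / rms}

def health_index(feats, baseline, spread):
    """Normalized distance from the healthy baseline; larger = less healthy."""
    return math.sqrt(sum(((feats[k] - baseline[k]) / spread[k]) ** 2
                         for k in baseline))

baseline = {"rms": 0.71, "crest": 1.41}   # invented "healthy" signature
spread   = {"rms": 0.05, "crest": 0.10}

healthy = [math.sin(0.1 * i) for i in range(200)]
faulty  = [x + (3.0 if i % 50 == 0 else 0.0) for i, x in enumerate(healthy)]

print(health_index(features(healthy), baseline, spread)
      < health_index(features(faulty), baseline, spread))  # True
```

The impulsive spikes in the faulty signal raise both the RMS and the crest factor, so its distance from the healthy baseline grows and the health index flags it for attention.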


With remote data collection enabled by Intel® AMT in operation, industrial designers can implement multiple condition-based maintenance strategies to foresee equipment failures and reduce downtime. If you are involved in a predictive maintenance project requiring remote equipment management and data analysis, please share your questions and comments with fellow followers of the Intel® Embedded Community. You can keep up with the latest technical articles and product announcements at the Embedded Computing Design archives on industrial systems maintenance.

 


Warren Webb
OpenSystems Media®, by special arrangement with the Intel® Intelligent Systems Alliance


Radisys and National Instruments are Associate members of the Intel® Intelligent Systems Alliance.

 

Many communication systems have the same requirements as military data processing and digital signal processing systems. Often commercial solutions can be used to solve challenging packet processing and digital signal processing (DSP) problems in military applications. Many sensors used for signal processing collect and process the signals before packetizing the data, similar to the way signals are processed and packetized in communication applications.

 

In the past, these processing functions have often required special-purpose hardware such as discrete network processing units (NPUs), digital signal processors (DSPs), co-processors, or field-programmable gate arrays (FPGAs). However, recent enhancements to Intel® architecture processors, together with advanced software, are giving developers a viable alternative: a single blade architecture that consolidates all their application, control, signal, and packet processing workloads on Intel® architecture. Intel processors such as the Intel® Xeon® E5-2400/2600 are ideal choices for data-intensive packet processing.

 

Improvements in multi-core architectures, combined with the latest DSP capability in Intel’s Advanced Vector Extensions (Intel® AVX) and the packet processing software enhancements provided by the Intel® Data Plane Development Kit (Intel® DPDK), make Intel® architecture increasingly attractive. Additionally, by consolidating DSP and packet processing with other workloads on an Intel® multi-core processor, it is possible to reduce hardware costs, simplify the application development environment, and reduce time to market—with all these factors combining to reduce overall total cost of ownership.

Emerson Network Power Embedded Computing, a Premier member of the Intel® Intelligent Systems Alliance with extensive experience in the telecommunications and defense industries, has seen the potential for using Intel processors combined with AVX and the DPDK for signal and data processing. At Embedded Tech Trends 2013, Emerson Network Power Embedded Computing proposed a model using an AdvancedTCA blade-based platform as a solution for challenging DSP and data processing military applications.

 

They list the following benefits of using AdvancedTCA for DSP and packet processing:

  • AdvancedTCA blades efficiently supply many processing cores and ample memory.
  • Multi-core Intel® Xeon® E5-2400/2600 processors are well suited to processing complex data.
  • 40G Ethernet connects the AdvancedTCA fabric directly to the processors.
  • The inherently rugged blade-in-chassis design suits relatively benign rugged environments such as shipboard, manned airborne, or transit-case applications.
  • AdvancedTCA is an open standard with many suppliers.

 

The Emerson Network Power Embedded Computing model combines the best of AdvancedTCA with the Advanced Vector Extensions for performing DSP algorithms and the DPDK for data packet processing, constructing a platform ideal for demanding signal processing.

 

Inbound packetized sensor data enters the AdvancedTCA switch over the 10G fabric (40G in the near future), where flow control software on the ATCA switch directs the data to the appropriate processor board. That board performs the DSP algorithms before the results are routed over the fabric to outbound destinations.

 

In a typical scenario, an application running in the control plane of the processor board, along with flow control software, ensures that the data is directed to the correct destination. The control plane application assigns and dispatches the processing algorithms to specific threads on specific cores within the processor. Packet processing extensions defined by the DPDK, with the assistance of AVX, let the DSP algorithms run to completion without interruption. The control plane application then steps in again to direct the results to the next destination: another thread running a different algorithm, a different processor board, or out of the system through the fabric switch.




The main goal of the Intel® DPDK is to provide a simple framework for fast packet processing in data plane applications. The DPDK is freely available from Intel. Developers may use the code to understand some of the techniques employed, build upon it for prototyping, or add their own protocol stacks. Alternative ecosystem options that use the Intel® DPDK are also available.

 

The Intel® DPDK is designed to operate with Intel® Xeon® class processors. It provides optimized data plane libraries and drivers that run in user space under Linux, separating high-level control functions from algorithms running as threads on dedicated processor cores and splitting the load to run efficiently in the multi-core environment. The DPDK’s low-overhead run-to-completion model is optimized for the fastest possible algorithm performance.


Additional libraries and drivers in the Intel® DPDK provide support for data packet processing:

  • Memory Manager (huge page tables to optimize performance)
  • Buffer Manager (optimized memory allocation tool that eliminates need to lock)
  • Queue Manager (manage incoming and outgoing data to the cores)
  • Flow Classification (IP flow management, optimized around Ethernet controller)
  • Poll Mode Drivers (user mode drivers eliminating interrupts for threads running algorithms)
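Conceptually, these pieces combine into a poll-mode, run-to-completion worker: poll the receive queue (no interrupts), classify each packet, and run its handler to completion on the same core. The Python sketch below models only that control flow; the actual Intel® DPDK APIs are C libraries, and the packet fields and classifiers here are invented.

```python
from collections import deque

# Structural sketch of a run-to-completion worker: poll a receive queue,
# classify each packet to select a handler, and process it to completion
# on the same core. This models the control flow only, not the DPDK C API.

def worker(rx_queue, classifiers):
    results = []
    while rx_queue:                          # poll mode: spin, no interrupts
        pkt = rx_queue.popleft()
        for match, handler in classifiers:
            if match(pkt):
                results.append(handler(pkt))  # run to completion, then next packet
                break
    return results

# Invented flow classification: UDP packets go to a DSP handler,
# everything else is simply forwarded.
classifiers = [
    (lambda p: p["proto"] == "udp", lambda p: ("dsp", p["id"])),
    (lambda p: True,                lambda p: ("forward", p["id"])),
]

rx = deque([{"proto": "udp", "id": 1}, {"proto": "tcp", "id": 2}])
print(worker(rx, classifiers))  # [('dsp', 1), ('forward', 2)]
```

Because the loop never blocks on an interrupt and each packet is handled start to finish on one core, cache state stays hot and per-packet overhead stays low, which is the essence of the run-to-completion model.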

 

While the DPDK handles data packet issues, Intel’s Advanced Vector Extensions (AVX) address the digital signal processing demands. AVX extends 128-bit SIMD instructions to 256 bits, potentially doubling floating-point performance when using single-precision floating-point numbers. Each processor core supports AVX instructions, so DSP algorithms can be assigned as necessary.
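The “potentially doubles” claim follows directly from the register width, since a 256-bit AVX register holds twice as many single-precision lanes as a 128-bit SSE register. A quick check of the arithmetic:

```python
# Vector-width arithmetic behind the "potentially doubles" claim.
FLOAT_BITS = 32            # single-precision float
SSE_BITS, AVX_BITS = 128, 256

sse_lanes = SSE_BITS // FLOAT_BITS   # 4 floats per 128-bit SSE register
avx_lanes = AVX_BITS // FLOAT_BITS   # 8 floats per 256-bit AVX register
print(sse_lanes, avx_lanes, avx_lanes / sse_lanes)  # 4 8 2.0
```

The factor of two is a theoretical peak, of course; real DSP kernels only approach it when data layout and memory bandwidth cooperate.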

 

Intel supplies optimized libraries, the Integrated Performance Primitives (IPP), for AVX, while optimized Vector Signal and Image Processing Library (VSIPL) implementations are available from Intel® Intelligent Systems Alliance members.

 

Ecosystem Options

The Wind River Systems portfolio of network acceleration solutions is optimized for Intel’s packet processing architecture. Wind River Linux* has built-in high performance optimizations for the Intel® Data Plane Development Kit (Intel® DPDK).


6WIND SA provides packet processing software to meet both the wire-speed performance and time-to-market requirements of mobile infrastructure, network security, high-frequency trading, and deep packet inspection applications. 6WINDGate* typically delivers up to ten times the performance of standard OS networking stacks while maintaining full compatibility with standard application APIs. 6WIND provides the Intel® DPDK libraries pre-integrated within the 6WINDGate software, further accelerating the development process for OEMs.

 

NASoftware Ltd, in collaboration with Intel, has developed conversion tools to help programmers move from PowerPC/AltiVec to processors with Intel® Streaming SIMD Extensions (Intel® SSE) and Intel® Advanced Vector Extensions (Intel® AVX).

 


Emerson Network Power Embedded Computing is a Premier member of the Intel® Intelligent Systems Alliance.

Wind River Systems is an Associate member of the Intel® Intelligent Systems Alliance.

6WIND SA is an Affiliate member of the Intel® Intelligent Systems Alliance.

NASoftware Ltd is an Affiliate member of the Intel® Intelligent Systems Alliance.

 

Jerry Gipper

OpenSystems Media®, by special arrangement with Intel® Intelligent Systems Alliance

Editorial Director, OpenSystems Media, VITA Technologies

Mobile data demand is growing more quickly than operators can deploy new network capacity, so mobile operators need policy control applications to manage data traffic and ensure subscribers receive an appropriate quality of experience (QoE). QoE is a more subjective measure than quality of service (QoS) and requires a more sophisticated approach to policy control, quality measurement, and analysis. To ensure adequate QoE for subscribers, operators need to do more than just implement basic policy enforcement and traffic management.

 

In this blog I review a proof of concept from Advantech and Qosmos that extends a deep packet inspection (DPI) platform, used to implement the Policy and Charging Enforcement Function (PCEF), to also analyze the quality of video passing through the network to subscribers. Advantech is a Premier member of the Intel® Intelligent Systems Alliance; Qosmos is an Affiliate member. The 200-plus members of the Alliance collaborate closely with Intel to create the hardware, software, tools, and services that help speed intelligent systems to market.

 

Mobile Traffic Growth and Quality of Experience

Mobile data traffic is growing rapidly, driven by new subscribers acquiring smartphones and existing smartphone users consuming more network bandwidth. Video is the key application driving much of this growth; mobile video traffic exceeded 50% of data traffic for the first time in 2012. The Cisco VNI Mobile Forecast from February 2013 shows mobile device Internet usage growing by an average of 66% per year from 2012 to 2017, reaching 11.2 exabytes per month.

 

To meet the growing demand for data traffic, mobile operators are deploying a mix of 3G and LTE services. The PCEF implements operator policies in the 3G and LTE packet core. As mobile bandwidth grows, the PCEF becomes a bottleneck, requiring high-performance deep packet inspection and support for 40 Gbit/s or greater packet processing. The Intel® Xeon® Processors E5-2600 and E5-2400 Series and the Intel® Data Plane Development Kit (Intel® DPDK) are widely used to implement the PCEF and other elements in the 3G and LTE packet core.

 

Video, voice, gaming, and browsing require different network characteristics. By implementing application-specific policies, mobile operators can increase quality of experience without necessarily using more network capacity. The quality of video, for example, depends on frame rate, delay, jitter, size, and codec. By extracting these parameters from the packet stream and analyzing video quality, mobile operators can implement policy that ensures an appropriate user quality of experience when playing video.
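Of the parameters listed, jitter is a good example of something computable directly from the packet stream. A common estimator is the RTP interarrival-jitter filter from RFC 3550: compare each packet's transit time with the previous packet's, and smooth the difference with a 1/16 gain. The sketch below uses invented timestamps.

```python
# RFC 3550-style interarrival jitter estimate from (send_ts, arrival_ts)
# pairs, both in the same time units. The sample timestamps are invented.

def interarrival_jitter(packets):
    jitter, prev_transit = 0.0, None
    for send, arrive in packets:
        transit = arrive - send
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0   # smoothing gain from RFC 3550
        prev_transit = transit
    return jitter

# A perfectly steady stream (constant 5 ms transit) shows zero jitter:
steady = [(i * 20, i * 20 + 5) for i in range(10)]
print(interarrival_jitter(steady))  # 0.0
```

A DPI engine sitting mid-network sees exactly the timestamps this needs, which is why per-flow jitter is cheap to report alongside protocol identification.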

Leveraging DPI for Quality of Experience

Figure 1 shows the Qosmos ixEngine®. The DPI engine is built on top of the Intel® DPDK and extracts metadata and content from the packets flowing through the network. The Qosmos ixEngine® can be used with any Intel multicore processor supported by the Intel® DPDK including the Intel® Xeon® Processors E5-2600 and E5-2400 Series.

 

The Qosmos ixEngine® supports real-time Layer 7 IP flow analysis and identifies protocols and applications through statistical analysis of the flows passing through it. It can extract over 6,000 types of information across up to 1,000 protocols; for video, the extracted parameters include frame rate, delay, jitter, size, and codec.


Figure 1. Qosmos ixEngine®: DPI and Metadata Engine.

 

The metadata extracted by the Qosmos ixEngine® can be used in a policy control platform to implement application specific policy. Additional content extracted from the packet stream can be used to further analyze application-specific performance. Qosmos has worked with Argon Design and Advantech to develop a proof of concept that uses data extracted by the Qosmos ixEngine® to analyze video quality in the network.

 

DPI and Video Analysis Applied to Mobile Video Streaming

Figure 2 shows a video analysis demonstration using the Advantech FWA-6510 network appliance and Qosmos ixEngine. The Advantech FWA-6500 acts as a packet generator streaming canned Internet traffic and running a video streaming server for live viewing. Users can browse the server and stream videos through a wireless network connected to an Advantech FWA-6510. The Advantech FWA-6510 is running the Qosmos ixEngine and Argon Design qualmon to capture and analyze packets in real time. The video quality results are viewed through a web browser.


Figure 2. Video Analysis Demonstration using Advantech FWA-6510  and Qosmos ixEngine.

 

The Argon Design qualmon recognizes the video streams, extracts details of each stream, and displays them. It also extracts a representative thumbnail from the video stream and analyzes the data to assess visual quality and predict when the user’s device will be starved of data and the user will experience a video freeze. This information can then be used to drive application- and user-specific policy enforcement.
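The freeze prediction can be illustrated with simple buffer arithmetic: if the player drains its buffer faster than the network refills it, playback stalls once the buffer empties. The sketch below is an invented simplification with made-up rates, not Argon Design's actual model.

```python
# Simple buffer-starvation model behind "predict when the user device
# will be starved of data": the player drains at the media bit rate
# while the network refills at the measured throughput. All numbers
# are invented for illustration.

def seconds_until_freeze(buffered_bits, fill_bps, drain_bps):
    """Time until the playback buffer empties, or None if it never does."""
    deficit = drain_bps - fill_bps
    if deficit <= 0:
        return None  # network keeps up; no freeze predicted
    return buffered_bits / deficit

# 4 Mb buffered, 1.5 Mbps throughput against a 2 Mbps stream:
print(seconds_until_freeze(4_000_000, 1_500_000, 2_000_000))  # 8.0
```

A policy engine that sees a freeze coming several seconds ahead has time to react, for example by prioritizing that flow or steering the client to a lower bit rate.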

 

Scalable Solution

The initial proof of concept has been developed using the Advantech FWA-6510 Network Appliance with dual Intel® Xeon® E5-2600 series processors and up to eight 10GbE interfaces. Larger systems can be built on an AdvancedTCA (ATCA) platform with up to 12 ATCA blades, each integrating dual Intel® Xeon® E5-2600 series processors. The Advantech FWA-6510 Network Appliance and Advantech ATCA blade are shown in Figure 3.

Advantech FWA-6510 and MIC-5332 ATCA Blade Cropped.png

Figure 3. Advantech FWA-6510 Network Appliance and ATCA Blade.

 

LEARN MORE >>

 

Solutions in this blog:

 

Related topics:

 

Advantech is a Premier member of the Intel® Intelligent Systems Alliance. Qosmos is an Affiliate member of the Alliance.

 

Simon Stanley

Roving Reporter (Intel® Contractor), Intel® Embedded Alliance

Principal Consultant, Earlswood Marketing

Follow me on Twitter: @simon_stanley

People who are under medical care are often at their most vulnerable. The equipment used to monitor, medicate, diagnose, and treat them can’t be.

 

In the past, medical device security focused on the endpoint—the device itself. But Tony Magallanez, senior systems engineer for McAfee’s embedded sales group, explains that the days of focusing solely on device-level security are over; today’s medical devices need to be at the center of a web of security with multiple layers. “We advocate that concept because it lets you understand what’s happening on the device, and also what’s going on around the device,” Magallanez says. “It’s important because as threats proliferate through the network that surrounds these systems, they become more vulnerable.”

 

These connected devices may include monitoring equipment within hospitals or in patients’ homes; bedside (wired) or implanted (wireless) infusion pumps that deliver medication; networked radiology and surgical equipment; nurses’ stations, charting devices, and administrative systems; and telemedicine equipment that brings medical care to remote areas of the world. Entire networks that manage vital data and instructions are associated with these devices.

 

McAfee looks at the vulnerability aspects of everything the network implies, including the device’s physical security, data protection, and encryption as well as the behavior of the people using it, to make sure that the device and the network that surrounds it are secure. This level of security requires a layered approach that blankets the entire network.

layers.jpg

Security in layers

While personal health information can be accessed through sophisticated malware, low-tech risks, such as employees who accidentally or deliberately provide access, are just as dangerous. Securing personal health information to meet HIPAA and other requirements demands access control in situations where the device can be vulnerable. That’s especially important with the proliferation of easily accessed (and misplaced) mobile devices, including laptops, tablets, and smartphones. Security also relates to monitoring network traffic, including the sites that employees access on the Internet. Even legitimate sites can be compromised, which can then compromise sensitive data within the healthcare network.

 

Both the network and individual devices need to be monitored, maintained, and controlled; ideally using automated, 24/7 processes that don’t require the cost and inefficiency of onsite human intervention. McAfee’s Magallanez says, “We’re finding in the hospital space that margins are thinner and thinner, and administrators are trying to be as efficient as possible. Operating costs can be overwhelming.” Even “green” initiatives that are designed to reduce carbon footprint and make operations more energy-efficient can have security implications. For instance, if a threat is identified on a number of devices on the network, but other devices are powered off, historically there wasn’t a way to identify whether the threat had spread without sending technicians to power up, analyze, and patch those devices onsite.

 

Now administrators can use McAfee’s ePolicy Orchestrator (ePO) Deep Command. The ePO centralized console shows the network administrator where a security threat manifested and the scope of the problem, and defines resources to mitigate the threat. Deep Command uses Intel® Active Management Technology (Intel® AMT), a component of Intel® vPro™ technology, to allow secure remote access even if the device isn’t powered on, letting the administrator remotely patch and reboot even large numbers of infected devices. Deep Command can remotely power systems on, apply security and other maintenance protocols, and power the systems back down to ensure safe operation when workers return. This eliminates the need to police employee compliance with security patch instructions, and works around the 24/7 schedule of healthcare environments.
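Conceptually, the sweep such a console automates looks like the sketch below, where `power_on`, `apply_patch`, and `power_off` are hypothetical stand-ins for the real AMT and ePO operations:

```python
def maintenance_sweep(devices, power_on, apply_patch, power_off):
    """Walk a fleet: wake powered-off devices, patch, restore state.

    Each callback is a hypothetical stand-in for the corresponding
    AMT/ePO action; this is an illustration of the workflow, not the
    Deep Command API. Returns the list of device IDs patched.
    """
    patched = []
    for dev in devices:
        was_off = not dev['powered']
        if was_off:
            power_on(dev)       # out-of-band wake of a powered-off device
        apply_patch(dev)        # push the security patch
        patched.append(dev['id'])
        if was_off:
            power_off(dev)      # restore the original power state
    return patched
```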

deep defender.jpg

 

Balance security and performance in medical devices

The ongoing compromise for device developers is how to balance security and performance requirements. McAfee has deployed new technologies to help developers mitigate risk while optimizing performance. McAfee Embedded Control provides application whitelisting that blocks unauthorized applications and changes on fixed-function devices with very little performance overhead. If an application is attacked or changed, the software locks down the system so the malware is intercepted and terminated before it can run. This provides a high level of security and peace of mind for both the hospital administrator and the device manufacturer. That protection matters because stringent safety certifications (such as FDA approvals) restrict changes to certified systems; an unauthorized change can require the equipment to be sent back to the manufacturer to be reimaged, resulting in service costs as well as lost revenue while the system is out of use.
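The principle behind whitelisting is simple enough to sketch: record a cryptographic hash of every approved executable and refuse anything whose hash does not match. (A conceptual illustration, not McAfee Embedded Control’s implementation.)

```python
import hashlib

def build_whitelist(approved_binaries):
    """Hash each approved binary image; only these may execute."""
    return {hashlib.sha256(b).hexdigest() for b in approved_binaries}

def may_execute(binary, whitelist):
    """An altered or unknown binary hashes differently and is blocked."""
    return hashlib.sha256(binary).hexdigest() in whitelist
```

Even a single-byte change to a certified binary produces a different hash, so the tampered image is blocked before it ever runs.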

 

Device developers can also take advantage of the Intel® AES New Instructions (Intel® AES-NI), an instruction set that accelerates data encryption in the Intel® Xeon® processor family and the 3rd generation Intel® Core™ processor family. Encryption has historically relied on the operating system to run encryption algorithms in software, which can slow performance. McAfee integrates with Intel AES-NI to offload encryption to the processor’s dedicated instructions, with no reduction in performance and with full FIPS 140-2 certification.

 

Medical Device Innovation, Safety and Security (MDISS) Consortium

Looking ahead, Intel and McAfee, along with leading service care providers, device manufacturers, IT providers, research organizations, and others, are active in working groups of the Medical Device Innovation, Safety and Security (MDISS) Consortium. MDISS is focused on optimizing the relationship between the quality of healthcare and the process of assessing and ensuring that devices and systems are secure and functioning safely and appropriately. While MDISS is not a standards organization, its goals include the development of security best practices for safe, secure medical devices and associated networks.

 

LEARN MORE>>

Solutions in this blog:

Related topics:

 

McAfee is an Associate member of the Intel® Intelligent Systems Alliance.

 

Cheryl Coupé

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Freelance technology writer and editor

Today’s cars are so complex electronically that they’re perhaps best thought of as mobile computer networks. The cars of tomorrow—which are already starting to appear today—will be increasingly connected—to the Internet, to each other, and to roadside wireless infrastructure.


The U.S. Department of Transportation (DOT) has designated IEEE 802.11p as the basis for Dedicated Short Range Communications (DSRC), by which a vehicle can communicate with other vehicles and roadside infrastructure. DSRC enables cooperative cruise control—cruising as part of a pack on the freeway—as well as collision avoidance, electronic road pricing and toll collection, electronic parking payment, and even braking for a red light that you may not have noticed. Beyond paying for tolls and parking, DSRC could turn your car into a four-wheeled wallet, enabling you to drive through your favorite fast-food or coffee outlet without having to dig out your credit card.

 

To provide your car with all the functionality of your smartphone—including navigation, communication, multimedia, gaming, and location-based services (“Where’s the nearest Italian restaurant?”)—the average new car may contain as much as a mile of wiring and over a hundred separate electronic control units (ECUs) that communicate over a variety of networks and buses. Add all the functionality that DSRC can enable and the system becomes exceedingly complex.

 

The very complexity of in-vehicle infotainment (IVI) systems raises serious security issues, since it connects systems with consumer-grade security to mission-critical systems that control the operation of the vehicle.

 

Getting on the bus
One weak point is the CAN bus (Figure 1), over which the various ECUs communicate. While devices on the bus may be secure, the bus is not—which means the system as a whole is not. CAN is a message-based protocol with no built-in security features.

figure-1.jpg

Figure 1: The CAN bus ties together most automotive electronic control units (ECUs).

 

A couple of years ago the Center for Automotive Embedded Systems Security (CAESS) demonstrated the fragility of the underlying system structure. They connected a packet sniffer to the On-Board Diagnostics II (OBD-II) port to analyze CAN bus traffic. Using a wireless link they were then able to use that information to start and stop the car, race the engine, lock individual brakes, unlock the doors, and pretty much control the entire car.
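To see why the bus itself is the weak point, consider a simplified CAN frame decoder. The byte layout below is illustrative rather than the exact on-wire bit format, but the key observation holds for the real protocol: nothing in the frame authenticates the sender, so any node on the bus can transmit any identifier.

```python
def parse_can_frame(frame):
    """Decode a simplified CAN 2.0A frame: 11-bit ID, length, payload.

    Illustrative layout: byte 0 holds the top 8 ID bits; byte 1 holds
    the low 3 ID bits and the 4-bit data length code (DLC).
    Note what is absent: no sender-authentication or integrity field
    beyond a CRC, so frames can be trivially spoofed or replayed.
    """
    can_id = (frame[0] << 3) | (frame[1] >> 5)   # 11-bit identifier
    dlc = frame[1] & 0x0F                        # data length code (0-8)
    data = frame[2:2 + dlc]
    return can_id, bytes(data)
```

A sniffer on the OBD-II port sees every such frame in the clear, which is exactly what made the CAESS replay attacks possible.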

 

Taking their hacking to the next level, the CAESS team was then able to take control of a vehicle remotely through its telematics system. They demonstrated that it’s possible to hack a car with malware inserted into an MP3 player or transmitted over a Wi-Fi connection. Devices relying on an 802.11p wireless connection may be particularly vulnerable.

 

Virtual IVI
While standards bodies are working on protocol vulnerability, auto makers are moving to reduce complexity by having a single ECU handle multiple functions. In these mixed-criticality systems real-time, safety-critical components must coexist with consumer infotainment applications. Developers can meet this goal with Intel® Atom™ processor-based platforms featuring Intel® Virtualization Technology (Intel® VT) and the INTEGRITY Multivisor from Green Hills Software.


“When you’re mixing consumer-grade applications and you want security, you’re always going to have maliciousness or just software that doesn’t work the way it’s supposed to,” explains Robert Redfield, Green Hills’ Director of Business Development. “That’s why you have to start at the very lowest level of software. If you’re going to have virtualization, it has to be at the microkernel level.”

figure-2.jpg

Figure 2: INTEGRITY Multivisor securely partitions off guest operating systems from mission-critical applications.

 

INTEGRITY Multivisor is both a secure Type-1 hypervisor and an RTOS. At the heart of INTEGRITY Multivisor is a certified microkernel that provides trusted partitioning of guest operating systems, applications, and peripheral driver software (Figure 2). Multivisor supplies only a minimal set of critical services, such as process management, exception handling, and interprocess communications. Multivisor is the only code that runs in supervisor mode, while the overlying operating systems and applications run in user mode, accessing only those resources deemed appropriate by the system engineer. For example, Multivisor will prevent a guest operating system from accessing physical memory beyond what was originally allocated to the guest’s virtual machine, containing the buffer and stack overflows that malware can use to take over control of a system.
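The partitioning check itself is conceptually simple. This toy model (not Green Hills code) illustrates the policy a hypervisor like Multivisor enforces in hardware through Intel VT page tables:

```python
class GuestVM:
    """Toy model of hypervisor-style physical memory partitioning."""
    def __init__(self, base, size):
        self.base, self.size = base, size   # region granted at creation

def guest_access(vm, phys_addr, length=1):
    """Allow the access only if it falls inside the guest's allocation.

    A real hypervisor enforces this via hardware page tables; this
    models only the policy decision, raising on any out-of-bounds
    access instead of letting it reach another partition's memory.
    """
    if vm.base <= phys_addr and phys_addr + length <= vm.base + vm.size:
        return True
    raise MemoryError("guest access outside its partition")
```

An access that strays even one byte past the allocation is refused, so a compromised guest stays contained.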

 

To address the security issues mentioned earlier, “You would put the drivers for the CAN bus and the Wi-Fi and the cellular radio in the mission-critical part of the operating system,” continued Redfield, “where they’re under the control of Multivisor. Multivisor is built on the most highly certified real-time operating system on the planet, that is INTEGRITY. So if you put one of those communication drivers in its own partition, if something goes wrong it’s contained.”

 

Complete IVI platform
Mission-critical applications need to operate in near real time, which is made possible by Intel® Atom™ processors. Intel® Atom™ N2000 and D2000 series processors (codenamed Cedar Trail) provide hardware-accelerated virtualization: Intel® Virtualization Technology (Intel® VT) speeds the transfer of control between the hypervisor and the guest operating systems, and assists in trapping and executing certain guest instructions, thereby accelerating performance. The processors’ on-chip GPU accelerates 3D graphics to one or more screens while making minimal demands on the CPU.

 

The combination of INTEGRITY Multivisor and an Intel Atom processor provides a secure IVI platform that can run multiple guest operating systems and protected real-time applications simultaneously, using secure partitions to ensure real-time responsiveness and fault tolerance (Figure 3).

figure-3.jpg

Figure 3: The combination of INTEGRITY Multivisor and an Intel Atom processor provides a secure IVI platform.

 

LEARN MORE

Solutions in this blog:

Related topics:

 

Green Hills Software is an Affiliate Member of the Intel® Intelligent Systems Alliance and plays a critical role in developing and delivering robust operating systems with virtualization and advanced development tools and embedded solutions for embedded markets such as automotive, industrial, medical, military/government, and telecommunications.


John Donovan
Roving Reporter (Intel® contractor), Intel® Intelligent Systems Alliance
Editor/Publisher, Low-Power Design
Follow me on twitter: @jdonovan43

Smartphone interfaces have changed people’s expectations for how they interact with computers and other devices, and those expectations have extended even to sophisticated medical equipment. According to Darshan Patel, director of Linux product management for Wind River, Android is starting to show up in almost any kind of medical device that includes a display, from bedside monitors to CT scanners to programming devices for pacemakers. Why? Essentially the same reasons that Android is now the dominant operating system on smartphones.

Wind River Android.jpg

Medical staff and patients are used to the look and feel of Android-based applications, and they want the same advantages in their professional devices as they have in their smartphones, tablets, and even in-vehicle infotainment systems.

 

Wind River’s Tim Skutt, senior staff technical marketing engineer, says: “The touch interface has become ubiquitous. If we’re going to push equipment out into the non-specialist realm, we need to put that equipment in a paradigm they’re used to.” Medical devices with Android-based user interfaces (UI) take advantage of people’s comfort with touching, swiping, and flipping through menus and options. This familiarity can ultimately make training and usability quicker, easier, and more efficient.

 

Even though legacy technologies still maintain full functionality, the ease-of-use factor is becoming more important to aid user adoption. For patients, an Android interface on a medical device looks familiar and current, in contrast to traditional interfaces that can feel outdated. This is especially important as medical equipment goes into patients’ homes or is used by other care providers such as physical therapists. With today’s pressures to reduce healthcare costs, patients may be referred to less-costly outpatient care, or may be sent home from the hospital earlier but with devices that allow their care provider to monitor them remotely. In these cases, the patient’s confidence and comfort level with the technology can be an important element in encouraging its regular use.

 

Even beyond the UI, Android offers advantages for developers, such as programming flexibility, robust computing, built-in sensor interfaces, and built-in communications support, such as Wi-Fi and Bluetooth. For developers who previously used Linux or a real-time operating system (RTOS) with a UI on top, Android offers a quick path to market with a fresh new look. Wind River’s director of engineering Rick Anderson explains that even if developers don’t need all of Android’s “bells and whistles” for their medical device today, the fact that those features are available lets manufacturers explore their options to address new market opportunities.

 

Android, security, and virtualization

Security and privacy are key issues for any kind of medical software. There’s a wide spectrum of medical device types, however, from those primarily designed to view information or handle records, to those that monitor or control medical processes, such as delivering medication or controlling a ventilator. For the first type of product, security is important for privacy reasons (and to meet patient-protection laws such as HIPAA); for the second, with human lives at stake, security is a critical element. Android can still provide an excellent UI for these safety-critical applications: virtualization offers a way to deliver a rich graphical UI while isolating it from the functions that control or monitor safety processes.

 

medical tablet.jpg

The typical approach to virtualization is to run multiple virtual machines concurrently, separated by a certified embedded hypervisor such as the Wind River Hypervisor, which provides rich safety-certification evidence. Another approach, for non-safety-critical applications, is lightweight partitioning, which lets developers isolate processes that don’t need to run simultaneously. For example, a doctor who uses a tablet for both personal and diagnostic purposes could use lightweight partitioning to isolate medical records from personal files and applications. Or a mobile device that moves with a caregiver from one patient to another could provide access only to the records for the patient in the current room, then revoke that access when the device is taken to another room with a different patient.
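As a sketch of that room-based policy (illustrative only, not a Wind River API), the partition boundary can be thought of as a filter on which records the UI partition may see:

```python
def accessible_records(device_room, records):
    """Lightweight-partitioning policy sketch: a roaming caregiver
    device exposes only the records for the patient in its current
    room; everything else stays locked in its own partition.
    """
    return [r for r in records if r['room'] == device_room]
```

When the device moves to a new room, re-evaluating the policy with the new room number automatically revokes access to the previous patient’s records.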

 

Android on medical devices also brings requirements beyond the smartphone

For all its benefits, developing medical devices with Android—as with any operating system—also brings challenges. Quality assurance, compliance to medical standards, and long-life version support are all critical elements for a medical device, and developers may need to add unique market-specific functionality beyond the operating system (OS).

Anderson explains the number one misconception developers have about Android and medical device development: many believe that because Android is open source, they can grab it and they’re almost done. “Like any other operating system project, Android is a starting point, that’s all,” Anderson says. “The version that Google releases is ever-changing, and it’s not hardened well enough for critical industries like medical.”

 

Wind River offers a range of products and services to make Android more appropriate for medical devices, and more efficient for developers. The Wind River Platform for Android is a “hardened” OS release that has been rigorously tested for specific hardware platforms, and that will be maintained for the long lifetime of certified medical devices. This is a commercially supported OS that incorporates the latest Android open source project software and optimizes it for performance so that developers can immediately innovate to meet their application-specific needs.

 

Wind River also offers a set of market-specific “Solution Accelerators,” including the Solution Accelerator for Android, Medical that helps speed time-to-market, supports product differentiation, and solves challenges that are unique to the medical market. These add-on software components follow the Continua Health Alliance guidelines to help device manufacturers build personalized, interoperable, and fully compliant products. For example, the Continua Health Alliance specifies the use of IEEE 11073 as the standard format for information exchange between personal health devices. The Wind River Solution Accelerator for Android, Medical supports the IEEE 11073 stack for sensor devices that include pulse oximeters, heart rate and blood pressure monitors, thermometers, scales, glucose meters, and a range of activity, fitness, and medication monitors. The Continua Health Alliance also establishes the Bluetooth Health Device Profile (HDP) as the wireless transmission protocol between devices, which is integrated into the Wind River Solution Accelerator for Android, Medical to provide interoperability between Android devices and medical sensors.

 

Finally, the Wind River Professional Services Medical Practice provides a full range of services for customers to help them develop compliant, safe, and secure connected medical devices and systems. Anderson explains that medical customers need their technical problems solved, just like customers in any other segment. But because medical software has to meet stringent quality and security requirements, Wind River can add value with its established processes and tools, and long years of experience working with certification bodies such as the FDA.

 

Android offers opportunities for innovation

These products and services address the second misconception developers have: that Android doesn’t have a place in medical devices. Even a year ago, that might have had some validity, but not anymore. Anderson says, “The reality is, if you do the right things with Android: raise the quality level, take out what’s not needed, and add security and isolation, you have a very robust platform for medical devices.” Anderson adds one final thought about Android misconceptions in medical: the belief that if a developer puts Android on a medical device, it has to look like a phone or tablet. The reality is that Android is already present in many devices, such as automotive dashboards and wristwatches, that don’t look like a tablet or smartphone. With Android’s customization capabilities, medical device developers can take their creativity and innovation to a whole new level.

 

LEARN MORE>>            

Solutions in this blog:

Related topics:

 

Wind River is an Associate member of the Intel® Intelligent Systems Alliance.

 

Cheryl Coupé

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Freelance technology writer and editor

In recent years the number of vehicles on roadways in major cities has grown rapidly. The results are longer commuter travel times, reduced traffic safety, and higher traffic-related pollutant emissions. With limited land space making new roads very costly, traffic management techniques are the only practical option for relieving this congestion.

 

Enabling communities to respond proactively to traffic-related events, Digital Security and Surveillance (DSS) intelligent traffic control (ITC) systems use Intel® Core™ and Intel® Atom™ processor-based devices to perform real-time analysis of traffic conditions, determine vehicle and driver behavior, and autonomously manage traffic flow.

 

What’s more, these systems can be used by municipalities worldwide not only to battle city congestion and pollution but also, by controlling evacuations in an orderly manner, as a powerful tool during disasters (floods, storms, wildfires, etc.). ITC can help authorities distribute law enforcement resources, preempt traffic lights’ normal operation, and allow safe passage of emergency vehicles at an intersection. During disasters, ITC systems can also temporarily permit road-shoulder use, regulate ramp metering, and provide motorists with timely warnings.

 

Legacy traffic control systems rely on inductive-loop traffic detectors, installed in the road surface to detect vehicles passing or arriving at a certain point, for instance an intersection or motorway entrance. When a vehicle passes over the loop or stops within it, the vehicle induces eddy currents in the wire loops, which decrease their inductance. The decreased inductance actuates an electronics unit output, which sends a pulse to the traffic signal controller signifying the passage or presence of a vehicle.
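The detection logic can be sketched in a few lines: treat any sustained dip below a threshold fraction of the loop’s baseline inductance as a vehicle, and emit one pulse per dip (the values here are illustrative, not calibrated to real loop hardware):

```python
def loop_pulses(inductance_samples, baseline, threshold=0.97):
    """Turn raw loop inductance readings into vehicle-detect pulses.

    A vehicle over the loop induces eddy currents that lower the
    measured inductance; emit one pulse per excursion below
    threshold * baseline.
    """
    pulses = 0
    below = False
    for reading in inductance_samples:
        if reading < threshold * baseline and not below:
            pulses += 1          # leading edge: vehicle arrives
            below = True
        elif reading >= threshold * baseline:
            below = False        # vehicle has cleared the loop
    return pulses
```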

 

One of the main shortcomings of these under-pavement loop detectors is that they are expensive to install and maintain. Moreover, they cannot be repaired or replaced without disturbing traffic.

 

For all of these reasons, video-based traffic flow analysis is getting more attention. Video sensor systems use sophisticated algorithms to detect traffic incidents, measure traffic flow, and track vehicles across multiple lanes. These traffic control strategies interconnect closed-circuit television, video analytics, video recorders that store content for later reference, and network data communicators that pass this information on to remote, mobile, or central traffic control management teams (see Figure 1 below).

 

Figure 1.png

 

Figure 1. A typical video-based DSS surveillance system.
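A minimal stand-in for the tracking stage of such a system is to count vehicles whose tracked centroid crosses a virtual counting line drawn across the lane (a conceptual sketch, not a production analytics pipeline):

```python
def count_crossings(tracks, line_y):
    """Count vehicles whose tracked centroid crosses a virtual line.

    tracks  -- per-vehicle lists of (x, y) centroids over time, as
               produced by an upstream detection/tracking stage
    line_y  -- y coordinate of the counting line in the image
    """
    count = 0
    for track in tracks:
        ys = [y for _, y in track]
        # a crossing means the centroid was on one side, then the other
        if any(a < line_y <= b or b < line_y <= a
               for a, b in zip(ys, ys[1:])):
            count += 1
    return count
```

Per-lane counts fall out of the same idea by drawing one virtual line per lane.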

 

Multi-functional devices such as the Axiomtek rBOX104 Embedded Field Controller provide the traffic control system with the required traffic data and at the same time transmit video images of the junction or motorway section under surveillance to a processing center. The DIN-rail, fanless embedded computer supports the low-power Intel® Atom™ processor Z510PT (1.1 GHz) or Z520PT (1.33 GHz). With an extended temperature range of -40°C to 70°C, it can be used in extremely hot or cold environments. Multiple built-in serial ports, high-speed LANs, and USB 2.0 ports enable fast and efficient data computation, communication, and acquisition. Support for Power-over-Ethernet on ports 1 to 4 makes it ideally suited for outdoor applications that employ PoE IP cameras.

 

A network video recorder (NVR) interfaces with these IP cameras; its primary focus is video decoding, display, and local video storage. An NVR will likely require more LAN ports with more robust throughput than a stand-alone digital video recorder (DVR) because of the incoming IP packets carrying encoded video from the connected IP cameras.
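A back-of-the-envelope sizing calculation makes the point. The per-camera bitrate here is an assumption for illustration, since real H.264 HD streams vary with scene complexity and encoder settings:

```python
import math

def nvr_lan_ports(cameras, per_camera_mbps, port_mbps=1000, max_util=0.7):
    """Estimate GbE ports needed to ingest a camera population.

    Keeps each port below max_util of line rate to leave headroom for
    bursts and management traffic. Figures are illustrative.
    """
    total_mbps = cameras * per_camera_mbps
    return math.ceil(total_mbps / (port_mbps * max_util))
```

At an assumed ~4 Mbps per HD camera, 16 cameras need roughly 64 Mbps of sustained ingest, comfortably within one GbE port; scale up the camera count or resolution and multiple ports quickly become necessary.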

 

With Nexcom’s NViS 2280, mobile megapixel surveillance becomes reality. The NVR (Figure 2, below) is powered by the dual-core Intel® Atom™ processor D2700 (2.13 GHz), which combines high performance with low power consumption. The unit can control up to 16 IP cameras with HD resolution and is further equipped with a dual-display function via VGA and DVI-D, two hot-swap HDDs for up to 1.5 TB, and an accelerometer. Moreover, an eSATA interface allows connection of an additional external hard disk. Unlike video capture card solutions, Nexcom’s NViS hybrid solutions bring video decoder functionality onboard and can render images without additional hardware.

 

The NViS is also suitable for deployment in vehicles because it is outfitted with comprehensive wireless communication features such as 3.5G and Wi-Fi as well as GPS modules. Using the GPS function, one can link video records with location coordinates. Moreover, the device is designed for power input between 9 and 36 VDC.

 

630d7731-ec52-432e-ab4e-e9c82b5de1f2.jpg

 

Figure 2. The Nexcom NViS NVR can be used from a mobile or a fixed base.

 

 

The Trenton Systems TVC4502 bundled video wall controller provides a hardware and software solution for advanced video display walls in a traffic control situation room. The core of the TVC4502 fanless video display controller is a Trenton motherboard with an Intel® Xeon® C5500 series processor and multiple I/O and network interfaces. This device comes ready to deploy right out of the box, featuring up to four 1080p high-definition inputs delivered across four 1080p high-definition outputs. Two front-access drive bays support up to two 2.5” removable, hot-swappable storage drives. The system’s front panel also includes a slim-line optical media bay, two USB ports, diagnostic LEDs, and power and system reset switches. The accompanying software bundle is built on the Microsoft® Windows 7 operating system and enables local or network video wall control, scheduling, video source switching and compositing, bezel management, and edge overlap with edge-blending projectors.

 

As demands on DSS traffic systems become more complex, equipment manufacturers and hardware integrators need to deliver increasingly sophisticated video solutions that handle traffic forecasting as well as real-time monitoring. For these systems, the workloads handled by the CPU and GPU are substantial: video encoding, video decoding, video scaling and display, hard disk video storage, and video packetization for streaming, all while running the video management GUI that serves as the interface for live video viewing. And all of this must be done without sacrificing the accuracy of extracted traffic flow data.

 

The 3rd generation Intel® Core™ processor family alleviates many of these issues with much more powerful integrated graphics; 3D graphics performance is up to 60% higher than that of the prior generation. What’s more, the 3rd generation Intel® Core™ processor is also up to 15% faster than its predecessor, enabling faster, more sophisticated image and video analysis. Many of the visual surveillance algorithms currently in use can benefit from Intel® Advanced Vector Extensions (Intel® AVX), which provide 256-bit floating-point processing. Video streaming and storage are enhanced by Intel® Quick Sync Video 2.0, which performs encoding and decoding in hardware for effortless integration of full 1080p HD video and high-resolution cameras. Among other benefits, Intel Quick Sync Video 2.0 performs video transcoding at twice the speed of the previous generation and greatly reduces processor loading during transcoding.

 

LEARN MORE

 

Solutions in this blog

 

Nexcom NViS 2280

 

Trenton Systems TVC4502

 

Axiomtek

 

Axiomtek, Nexcom, Trenton Systems, and Microsoft are Associate members of the Intel® Intelligent Systems Alliance.

 

Related topics:

• Digital Security & Surveillance - Top Picks (blogs, white papers, and more)

• Sensing and Analytics - Top Picks (blogs, white papers, and more)

 

Murray Slovick

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

In the ruthless quest to squash the use of buzzword-based marketing speak in business communications, phrases such as world-class, best practices and paradigm shift are rarely seen any more – which is actually a step in the right direction given how abused these phrases were just a few years ago. However, there are times when these phrases capture exactly the right idea and should be used to describe a new trend or idea.

 

This is one of those times. Within the context of current technology trends and transitions occurring within the telecommunications and enterprise networking worlds today, I want to bring paradigm shift out of the buzzword dog house and use it to frame today’s ideas on Network Convergence.

 

In basic terms, a paradigm shift is a change from one way of thinking to another. To really abuse the marketing-speak buzzwords, in high-tech a paradigm shift is a technology-driven shift from one set of best practices to another to achieve world-class performance. (I apologize!) The reason I like the phrase is that it implies the shift is revolutionary, not evolutionary. When there is a paradigm shift in high-tech, new essential technologies emerge, business models change, old skills have to be updated, familiar players have to re-invent themselves, and new players emerge. We’re talking big-time change over a 5-to-10-year period.

 

So what’s happening to the Network Convergence idea?  It’s been around for decades.

 

Back in the ’90s, Network Convergence in the telecom world referred to the consolidation of the separate voice, video, and data communications networks into a single network used to deliver all forms of communications services. Today, we simply refer to this era as the ‘transition to IP-based communications’ or something like that. Instead of working really hard to build bridges between the different networks, we finally decided we had to replace them all with a single network.

 

At the same time, in the enterprise networking world, Network Convergence had a similar impetus.  To simplify their networks in order to lower costs and improve manageability, companies started to replace the myriad vendor-specific networking technologies, such as IBM’s SNA, Novell’s IPX/SPX, Apple’s AppleTalk, Digital’s DECnet, and Microsoft’s NetBEUI to name a few, with the Unix-based TCP/IP protocol suite, the transport mechanism of the Internet.

 

Today, while the Internet is ubiquitous, the networking world we live and work in is characterized by two ‘best practices’ that helped us through this 1st era of Network Convergence.  First, the carrier-grade telecom networking world is still vastly different from the enterprise or IT networking world, and far too frequently one camp wants nothing to do with the other. Second, the networking infrastructure of both worlds is still hardware-driven and based on proprietary technology.

 

These last two holdovers from the past are starting to outlive their usefulness.  Simply put, the CAPEX and OPEX costs associated with building and maintaining today’s highly secure mobile and cloud computing network infrastructure are staggering.

 

Today, the paradigm shift that is Network Convergence (2nd Era) has 3 key pieces:

  1. To lower CAPEX and OPEX costs, the transition from proprietary hardware to commodity multi-core platforms is essential.  In fact, this transition is already highly visible with much of the LTE infrastructure in the mobile world and cloud computing infrastructure being based on standard server platforms. The new Intel® Platform for Communications Infrastructure is a great example of how the latest multicore processor technology is being optimized for both application and communication processing.
  2. The ‘hardware-driven’ network will transition to a software-based network. Still in the early stages of study and pilot implementations, the ideas, protocols and products behind Software Defined Networking are just starting to emerge. Increased flexibility and improvement in resource utilization are the drivers here. And with a significant group of network operators, equipment vendors and network technology providers behind the effort, SDN will only gain momentum.
  3. The telecom network world and the enterprise (or Data Center) network world will become one and the same.  Why? Again, very simply stated, they are both moving to highly virtualized, software-based networks built on a standard server model.  The differences between the implementation approaches are vanishing.  One interesting question is ‘will the organizational (some would say political) differences vanish too?’

 

Network Convergence paradigm shift – absolutely, a revised definition of the concept. Revolutionary – absolutely, big-time changes that will be endorsed (and resisted) on many fronts. Essential to continue to create increasingly capable (and essential), cost-effective networks – absolutely.

 

Eric Carmès, CEO of 6WIND has shared some of his thoughts on the network convergence paradigm shift in an interview for SDNCentral. Check it out.

With more than 50 billion connected devices predicted by 2020, an interconnected world of intelligent systems is becoming a reality. As these devices become more pervasive, they must be properly connected, managed, and secured. Putting together the right combination of software to make these devices function properly is a major undertaking.

 

Software development platforms are an excellent way to accelerate the software design process. A well-supported software development platform brings together all of the key elements the software team needs to design, develop, test, and deploy an intelligent device. Wind River® has been developing software platforms for specific applications for many years. Recently the company saw the need for a new platform that specifically addresses the needs of intelligent devices that are interconnected machine-to-machine (M2M).

 

Wind River Intelligent Device Platform is designed to enable M2M development by providing proven software and expertise to fuel rapid innovation and deployment of secure and reliable intelligent devices.  This latest offering adds the security, connectivity, and manageability required for machine-to-machine device development, and is compatible with the new Intel® Intelligent Systems Framework, a set of interoperable solutions designed to address connecting, managing, and securing devices.  It provides pre-integrated off-the-shelf components that can significantly reduce a manufacturer's development time. This new platform delivers on three essential concerns:


  • Connected: Simplifies device connectivity for wireless and wired networks, speeding time-to-market and reducing expense for device manufacturers.
  • Managed: Delivers pre-integrated and supported management software from best-in-class independent software vendors (ISVs), making it much easier to manage remote connected devices.
  • Secure: Provides powerful and customizable security capabilities for protecting devices and their data.

 

Wind River Intelligent Device Platform builds on the company’s existing M2M technologies included in its Embedded Development Kit for M2M Smart Services, a plug-and-play hardware and operating system/middleware platform that is truly "application ready." To complement its M2M capabilities, Wind River has a strategic set of M2M partners that includes independent software vendors, value-added resellers, board vendors, and distributors.


Wind River Intelligent Device Platform is a complete software development environment for M2M applications, based on Wind River's Linux 4.3, the company’s industry-leading commercial embedded Linux operating system, which is standards-compliant and fully tested. It contains security features designed for M2M development, smart and connected capabilities that enable rich network options, and validated and flexible device management software. The core of the platform is a layer that contains configuration and build information for integrating other M2M components to generate an optimized, functional run-time image.

 

 

Inside the Intelligent Device Platform

Wind River has brought together many key software elements and ecosystem members to form the building blocks of the Intelligent Device Platform. Wind River Intelligent Device Platform features the following:

  • Wind River Linux 4.3 operating system
    • Linux has become ubiquitous with its widespread use in devices of all types. This commercial embedded Linux operating system is standards-compliant and fully tested, and Linux developers are comfortable using it as the operating system in many types of intelligent devices.
  • Validated and supported middleware stack, including virtual machine, OMA DM, OSGi, TR-069, and home automation
    • OMA DM is a device management protocol specified by the Open Mobile Alliance. TR-069 is a specification for remote management of deployed devices, typically wired customer premises equipment (CPE).
    • The OSGi Alliance is a worldwide consortium of technology innovators that advances a proven and mature process to create open specifications that enable the modular assembly of software built with Java technology. Modularity reduces software complexity; OSGi is the best model to modularize Java.
    • The TR-069 agent included in Wind River Intelligent Device Platform is OneAgent from Works Systems. This agent runs on a deployed device and provides a management channel to a centralized management console.
  • Wide range of connectivity options, including 3G, Bluetooth, Ethernet, Wi-Fi, ZigBee, and Z-Wave across PAN, LAN and WAN networks
    • The wide range of connectivity options enables device developers to design devices that can connect to any of the most popular networks.
    • Exegin's native ZigBee stack is a complete, robust implementation that provides the flexibility to integrate with various radio options and meets all ZigBee 802.15.4 specifications.
  • Secure Remote Management, featuring customizable trusted boot, grsecurity for resource control, TPM secure key migration and integrity measurement
    • Security has become the top concern for developers as devices become connected. Protecting devices from external threats will be key to market acceptance of intelligent devices.
  • Wind River development tools including Wind River Workbench, based on the Eclipse framework, and Wind River build system for software integration
    • These proven tools are familiar to many embedded software developers and provide a world-class development environment.
  • Web-based configuration management for device provision, setup, and management
    • Since the devices are already connected, web-based configuration management opens up a whole new world of services that can be installed and managed after deployment of the device.
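Of the management protocols listed above, TR-069 is the easiest to picture concretely: the managed device initiates each session by sending an Inform RPC (a SOAP message over HTTP) to the management server. The sketch below assembles a heavily simplified Inform message; the element names are abbreviated from the CWMP schema and all values are placeholders, not from a real deployment:

```python
import xml.etree.ElementTree as ET

# Simplified TR-069 (CWMP) Inform body, as a device might report
# itself to the management server. Values are illustrative only.
CWMP = "urn:dslforum-org:cwmp-1-0"

inform = ET.Element(f"{{{CWMP}}}Inform")
device = ET.SubElement(inform, "DeviceId")
ET.SubElement(device, "Manufacturer").text = "ExampleCo"
ET.SubElement(device, "OUI").text = "001122"
ET.SubElement(device, "SerialNumber").text = "SN-0001"
event = ET.SubElement(ET.SubElement(inform, "Event"), "EventStruct")
ET.SubElement(event, "EventCode").text = "2 PERIODIC"  # scheduled check-in

xml_bytes = ET.tostring(inform)
print(xml_bytes.decode())
```

In a real deployment this body would be wrapped in a SOAP envelope and posted to the server, which replies with configuration or firmware commands over the same session.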

 

Markets & Devices

 

Smart Energy

Wind_River_Intelligent_Device_Platform.jpg

Throughout the energy value chain, smart energy technology provides centralized monitoring of millions of devices to track peak usage patterns and energy disruptions, improving efficiency, reliability, and quality across the network. For example, Wind River Intelligent Device Platform software can provide intelligence to smart meters so that real-time voltage monitoring and control is possible using existing telecom service providers' infrastructure. As the Smart Grid develops, there are many possibilities for intelligent devices throughout the grid.

 

Medical

Through emerging applications such as patient monitoring and in-home patient care for seniors and patients with chronic conditions, mobile health technology can enable physicians and family members to cost-effectively improve care, respond to emergencies, and more accurately monitor and treat illnesses. Interconnected mobile devices for health management and care are quickly becoming a key component of the medical industry.

 

Transportation

From railways to personal automobiles, intelligent systems technology is enhancing traditional ways to monitor cargo, increase fuel efficiency, route around traffic or construction, and improve safety. Digi International is using Intelligent Device Platform to build a new family of cloud-connected wireless M2M solutions, including a fleet management system. New initiatives within the railroad industry to improve safety and efficiency are going to be heavily dependent on interconnected and intelligent devices.

 

Building and Home Management

Management and automation for smart buildings are being enhanced by intelligent systems technology in multiple areas such as security, energy efficiency, predictive maintenance, and asset management. Wind River customer Sumitomo is using Intelligent Device Platform to deliver a smart-home automation gateway for leading global telecom service providers. The addition of Wi-Fi and ZigBee to all types of home devices, from lighting and environment control to monitoring and entertainment, is making these devices dependent on communication with one another.

 

Summary

The Wind River Intelligent Device Platform promises to raise the capability and quality of device software to levels most software developers could not reach on their own in a timely, cost-effective way. The platform's features let software teams focus on value-added software that sits on top of the platform, leveraging the work of experts to ensure that your devices are properly connected, managed, and secured.

 

Learn More
Solutions in this blog:

 

Related Topics

 

Wind River is an Associate member of the Intel® Intelligent Systems Alliance.
Contact Wind River >>


Jerry Gipper

OpenSystems Media®, by special arrangement with Intel® Intelligent Systems Alliance

Editorial Director, OpenSystems Media, VITA Technologies

The combination of intelligent appliances, from light switches to trucks, and the prospect of pervasive connectivity is creating a network of things, the “Internet of Things,” that will share the telecom and cloud infrastructure with user devices that have very different requirements. The market opportunity for networks that support machine-to-machine communication is expected to be at least as large as that for user-to-user communication. The challenge for the telecom and cloud infrastructure industries is to enable services that can both meet the exacting requirements of industries such as energy, health, and automotive, and handle the vast volume of data, “Big Data,” that will be generated and consumed by the huge number of machines connected to the Internet.

 

In this blog I am going to explore the impact of the Internet of Things and Big Data on the telecom and cloud infrastructure industries and some of the steps being taken to deliver the required services. For this blog I have asked Sven Freudenfeld, Telecom Business Development Manager at Kontron, and Mike Langlois, Vice President, General Manager Networking Strategy at Wind River, for their views on this topic. Kontron is a Premier member of the Intel® Intelligent Systems Alliance. Wind River Systems is an Associate member of the Intel® Intelligent Systems Alliance. The 200-plus members of the Alliance collaborate closely with Intel® to create hardware, software, tools, and services to help speed intelligent systems to market.

 

How will the emergence of the Internet of Things and Big Data impact the telecom and cloud infrastructure industries?

 

Mike Langlois: Intelligent & secure mass connectivity is the short answer – this is the bigger story around the internet of things and the creation of big data. Billions more devices and machines will connect to a network in the next decade.  And, these connected elements will be big generators of data and in turn the consumers of big data (intelligent information).

 

When regulated industries such as energy, health, automotive, and others start connecting their devices and machines to the global communication networks, they will place far more requirements on the network than unregulated smart phones, tablets, and PCs do.

 

Sven Freudenfeld: The “Internet of Things,” or IoT, continues to emerge as an enigma to the telecom and cloud infrastructure industries. In the realm of IoT, the industry expects to see a semantic web that delivers content user-to-user, machine-to-machine (M2M), machine-to-user, and user-to-machine in ways never thought of before. The expansion of this technology into the new frontier of unattended “embedded” devices will drive overall Internet traffic to triple, and mobile Internet traffic to grow 11-fold. The data generated by M2M communication will be on an even grander scale, leading to the buzz behind what the industry has appropriately deemed the “Gigantic Data” problem.

 

However, even with the increase of data throughout the network, the profile of data packets passing through the network will differ in size from that of the traditional mobile devices connected to the network, and therefore a distributed computing approach in the cloud can be a more efficient way to manage the cloud infrastructure. The new structure of IoT introduces Web 3.0, with a simplified and structured interaction for M2M communication without any human interaction.

 

Where do you see the greatest challenges for these industries?

 

Mike Langlois: There are two huge challenges: 1) maximizing traffic capacity, performance, and quality of service of existing networks, and 2) monetization of the network services. Within these two challenges are the other issues of security, reliability, and cost savings.

 

Sven Freudenfeld: Security and manageability are critical factors in creating a secure environment for deploying an M2M solution. From power plants to medical devices to moving objects and machinery, the use of communication between two machines without human interaction will continue to expand and diversify in the future, and could become a weak point for security threats.

 

How are you using Intel technology to solve these challenges?

 

Sven Freudenfeld: Current and predicted market conditions are driving vendors to innovate new design approaches to cloud computing platforms. Kontron is one example with the introduction of its SYMKLOUD series of cloud platforms. It took an entirely new approach to hardware design by integrating switching and load balancing with a modular and distributed framework of Intel® processors.

SYMKLOUD-REFL_900PX.png

Figure 1. SYMKLOUD MS2900 Fully Integrated Cloud Computing Platform.

 

Partnering with Intel, Kontron leverages the low-power, high-performance Intel® Xeon® E3-1200 Processor Series to easily scale and share the workloads of web, M2M, and mobile applications deployed in cloud infrastructure. Moreover, these Intel processors have also enabled Kontron to design a more comprehensive power management suite that permits more dynamic powering up and down as workloads change.

 

Mike Langlois: Wind River has integrated and fine-tuned our entire software platform and development tools portfolio to take full advantage of the advanced capabilities of Intel processors. Wind River Intelligent Network Platform includes the Intel® Data Plane Development Kit (Intel® DPDK), and it is the only software platform that offers this processing technology integrated as part of its solution. The platform takes full advantage of the DPDK and the built-in hardware capabilities to enable a consolidated management and data plane software environment that delivers high-performance Layer 3 packet processing and deep packet inspection functionality.

Wind River INP.png

Figure 2. Wind River Intelligent Network Platform.

 

Wind River Simics, a full system simulator, is a development tool that has been fully optimized for Intel® architecture and Xeon processors.  Design teams that want a time-to-market advantage can use Simics to accelerate all phases of the development life cycle, prototyping both hardware and software before physical Intel hardware is available and before making huge investments.

 

What do you see as the next steps for the telecom and cloud infrastructure industries?


Sven Freudenfeld: Planning to face these new challenges head-on and understanding how they will impact the business of cloud service providers and hosted services will require a shift from legacy, purely processor driven hardware to more scalable and highly versatile cloud-enabled Web 3.0 infrastructure equipment.  As the requirements are changing with the advent of next generation data centers, network equipment providers and cloud service providers should seek out new hardware and software solutions that are fully integrated and application ready, provide improved power and cluster management, and more cost-effectively deliver High-Availability (5 nines) capabilities.

 

What will evolve in telecom and cloud infrastructure is greater compute density with better power efficiency and different methods for data transport, along with the platform cost of next-generation platforms.  The future may also hold a path to using pure PCI Express as the transport method within platforms, as this will help lower cost.  As previously discussed, cloud infrastructure equipment must be truly scalable. Processing technology will continue to advance, and upgrades will most likely take place more frequently as demand for cloud services increases.

 

Mike Langlois: Software and virtualization will play a greater role in defining and enabling network services. SDN is at its beginning. As more of the ecosystem participates it will start to take shape and we’ll see a new generation of network architectures. Through SDN, the industry will begin to take full advantage of the Carrier Cloud and the new applications that have yet to be defined.

 

Preparing for SDN and the Carrier Cloud means greater collaboration with the ecosystem that provides the underlying high-performance intelligent network platforms.  The industry needs to take full advantage of software to deliver greater value, find new sources of revenue, and keep services competitive, and to leverage the expertise of the technical teams at companies like Wind River and Intel to fully utilize the capabilities of their integrated hardware and software platforms.

 

Solutions in this blog:

 

Related topics:

 

Kontron is a Premier member of the Intel® Intelligent Systems Alliance. Contact Kontron >

Wind River is an Associate member of the Alliance. Contact Wind River >

 

Simon Stanley

Roving Reporter (Intel® Contractor), Intel® Intelligent Systems Alliance

Principal Consultant, Earlswood Marketing

Follow me on Twitter: @simon_stanley

Within both public and private cloud data centers, the number of virtual machines (VMs) per server blade is increasing rapidly, leveraging ongoing improvements in the performance of the x86 processors used on those blades. Today, a typical server blade in a service provider data center hosts at least 50 VMs, with that number expected to grow to hundreds within a few years.

 

Because of this growth in the number of VMs running on each server blade, the data center network needs to expand beyond its current limit at the Top-of-Rack, to a model where a virtual switch on each server blade is used to distribute the increasing volume of network traffic to virtualized applications. This function is typically implemented using the open-source Open vSwitch (OVS) or an equivalent proprietary virtual switch.
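The Layer 2 behavior at the heart of such a virtual switch can be sketched with a toy MAC-learning table (illustrative only, not Open vSwitch code): for each frame, learn which port the source MAC arrived on, then forward to the destination's known port or flood when the destination is unknown.

```python
class LearningSwitch:
    """Toy Layer 2 learning switch, sketching the per-frame
    learn-then-forward behavior a virtual switch performs."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port          # learn the source
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}       # unicast to known port
        return self.ports - {in_port}              # flood everywhere else

sw = LearningSwitch(ports=[1, 2, 3])
print(sw.handle_frame("aa", "bb", in_port=1))  # "bb" unknown: flood {2, 3}
print(sw.handle_frame("bb", "aa", in_port=2))  # "aa" learned: unicast {1}
```

Production virtual switches add flow caching, VLAN/tunnel handling, and an OpenFlow-style control channel on top of this basic mechanism.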

 

For multi-tenant data centers, high-bandwidth VM-to-VM communication (VM2VM) is mandatory. To isolate and secure VM2VM, however, requires extensive routing, firewalling and load balancing services that extend beyond the basic Layer 2 features provided by a typical virtual switch. Also, multi-tenant architectures require traffic engineering (ACL, tunneling, QoS etc.) to be performed at the server edge in order to provide users with individual, differentiated services.
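The server-edge traffic engineering described above ultimately reduces to per-packet policy evaluation. As a toy illustration (the rule format here is invented for the example, not taken from any product), a first-match ACL with a default-deny fallback might look like:

```python
def acl_decision(rules, packet):
    """Return the action of the first rule whose fields all match
    the packet; default-deny if nothing matches (a common ACL
    convention in multi-tenant environments)."""
    for rule, action in rules:
        if all(packet.get(field) == value for field, value in rule.items()):
            return action
    return "deny"

rules = [
    ({"dst_port": 22}, "deny"),                  # block SSH between tenants
    ({"tenant": "A", "proto": "tcp"}, "allow"),  # permit tenant A's TCP traffic
]

print(acl_decision(rules, {"tenant": "A", "proto": "tcp", "dst_port": 443}))  # allow
print(acl_decision(rules, {"tenant": "B", "proto": "tcp", "dst_port": 22}))   # deny
```

Real server-edge implementations evaluate rules like these in optimized lookup structures so the per-packet cost stays low at line rate.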

 

Advanced security policies are necessary to secure both physical and virtual traffic. Within a cloud, these ensure that only certain VMs can access the applications and data owned by other VMs. Between clouds, policies are enforced to ensure that data and traffic for one cloud is not visible to another.

 

Adding to the overall security- and networking-related workload that must now be supported on server blades is the increasing trend towards overlay network technologies, which avoid the 4,094-tunnel limitation of traditional VLANs. Data centers are now adopting the VXLAN and NVGRE protocols, whose 24-bit IDs allow for 16 million tunnels.
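In VXLAN's case, the tenant segment ID is a 24-bit VNI carried in an 8-byte header (RFC 7348). A small sketch of packing that header, illustrative rather than production code:

```python
import struct

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header (RFC 7348): a flags byte with
    the I bit (0x08) set, 3 reserved bytes, the 24-bit VNI, and a
    final reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # The trailing reserved byte is the low byte of the final
    # 32-bit word, hence the VNI is shifted left by 8.
    return struct.pack("!B3xI", 0x08, vni << 8)

print(2**24)  # 16777216 possible tenant segments
print(vxlan_header(5000).hex())
```

The 24-bit field is exactly where the "16 million tunnels" figure comes from: 2^24 = 16,777,216 segment IDs versus the 12-bit (4,094 usable) VLAN ID space.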

 

From the point of view of the data center operator, it is critical that the solutions they select to achieve the virtual switch enhancements described above are fully-compatible with emerging options for data center orchestration such as OpenStack and for Layer-2/-3 management such as OpenFlow. In addition, Carrier Grade reliability is mandatory for enterprise-class data centers.

 

At 6WIND, we recognized these security-driven virtual switch challenges and enhanced the 6WINDGate™ software, already widely deployed in network appliances, to deliver performance improvements and advanced networking features for virtual switches (details here). We’ll be discussing our solutions at booth 854 at RSA Conference; we hope to see you there!
