
This week I spent a day at the Cloud Expo conference in New York. Among several fascinating presentations on different cloud-related topics, John Engates, CTO of Rackspace Hosting, provided a very interesting perspective on OpenStack during his keynote address.

 

In his keynote, Engates observed that, despite the many business drivers for migrating to cloud computing, 80% of computing is still performed in private data centers, primarily because of the risks involved in moving to cloud environments, whether those risks are real or just perceived.

 

One of the main risk items is the lack of standards for interoperability between clouds implemented by different service providers. Engates identified OpenStack as a key platform for achieving this interoperability, pointing out that it has already been adopted by large organizations as diverse as CERN, Cisco, Dell, HP, NASA, Sony, Wikipedia and of course Rackspace.

 

Positioning OpenStack as the optimum “Cloud Operating System”, Engates described how it exposes APIs that connect applications to compute, storage and networking resources, enabling the appropriate level of end-user control and configuration.
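
To make that concrete, here is a minimal sketch of booting a compute instance through the OpenStack Compute (Nova) v2 REST API from Python, using Keystone v2 password authentication. The endpoint URLs, credentials, and image/flavor IDs are hypothetical placeholders, not values from any real deployment.

    import json
    import requests

    # Hypothetical endpoints -- substitute your own cloud's URLs.
    KEYSTONE = "http://cloud.example.com:5000/v2.0"
    NOVA = "http://cloud.example.com:8774/v2"

    # 1. Authenticate against Keystone (v2 password credentials).
    auth_body = {"auth": {"tenantName": "demo",
                          "passwordCredentials": {"username": "demo",
                                                  "password": "secret"}}}
    resp = requests.post(KEYSTONE + "/tokens", data=json.dumps(auth_body),
                         headers={"Content-Type": "application/json"})
    access = resp.json()["access"]
    token = access["token"]["id"]
    tenant_id = access["token"]["tenant"]["id"]

    # 2. Ask Nova to boot a server from an image and flavor (IDs are placeholders).
    server_body = {"server": {"name": "web-01",
                              "imageRef": "IMAGE_UUID",
                              "flavorRef": "1"}}
    resp = requests.post("%s/%s/servers" % (NOVA, tenant_id),
                         data=json.dumps(server_body),
                         headers={"Content-Type": "application/json",
                                  "X-Auth-Token": token})
    print("Build request accepted:", resp.status_code)  # expect 202

The same pattern of authenticated REST calls applies to the storage and networking APIs, which is what makes end-user control and configuration scriptable.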

 

Engates mentioned that Rackspace has been using OpenStack for two years. By running their cloud services on OpenStack, they are able to spin up hundreds of servers in just a few minutes in response to customer requests or rising traffic. OpenStack enables them to implement networks that are fully controllable via APIs, including fully isolated Layer 2 networks optimized for both multicast and broadcast. They also run their block storage subsystems and cloud databases (e.g. MySQL) on OpenStack.

 

A key enabler for the widespread adoption of OpenStack is its inclusion in commercial Linux distributions such as SUSE, Red Hat, and Ubuntu.

 

Engates summed up OpenStack's value by calling it the only viable Data Center Operating System.

 

Were you at Cloud Expo this week? Do you agree with Rackspace’s assessment of OpenStack? What are the main benefits and risks in adopting it for data centers?

The concept of “virtual reality” has come a long way since the days of Star Trek’s holodeck. In this digital age, more and more activities – including something as mundane as grocery shopping – can be performed virtually via Skype, FaceTime, and other mobile or videoconferencing apps running on VoIP or cloud-based platforms.

 

The health care industry in particular has benefited from ubiquitous computing and wireless Machine-to-Machine (M2M) technologies that enable medical professionals to remotely monitor and in some cases manage their patients’ health. Virtual care coordination is providing cost-effective monitoring and treatment for patients with long-term illness and chronic conditions through telehealth devices such as the Care Innovations QuietCare wireless resident monitoring system and the Care Innovations Guide, which combines traditional vital signs capture with advanced videoconferencing and customizable multimedia education. And with Electronic Health Records (EHRs) linking providers and patients to current and past medical data, health care delivery is becoming more accurate, timely, and efficient than ever before.

Figure 1. Care Innovations Guide

 

One major drawback to greater connectivity in the medical world is greater risk of security breaches. The increasing use of wireless technology in medical equipment has caught the attention of the U.S. Department of Homeland Security (DHS), which recently issued a bulletin warning that life-critical wireless medical devices, if compromised, pose a significant threat to the public and private sectors. While medical systems that use commercial Operating Systems (OSs) are as open to attack as standard computers, proprietary systems can also be exploited through their software update mechanisms, the DHS report stated.

 

Hospital IT organizations today face the challenge of managing and securing a wide variety of hardware and software systems. Many are based on nonstandard or proprietary components, which makes it difficult to determine whether these systems comply with the security policies of the purchasing hospital, says Santhosh Nair, senior director of intelligent platforms at Wind River.

 

“In the medical industry, the security discussion revolves around HIPAA, and in the absence of more regulation, medical device manufacturers are defining their own security,” Nair says. “This is one of the reasons why the medical industry is behind in terms of security – there hasn’t been a ‘medical Stuxnet’ that has prompted the industry to make more progress here.”

 

Medical systems based on nonstandard platforms present other shortcomings as well, including the downtime associated with sending them back to the manufacturer for upgrades and a reduced ability to capitalize on the latest security advances developed for standards-based IT infrastructure, Nair says.

 

In addition to managing these complexities, medical device development teams must consider intended use validation, design assurance, verification, and data integrity requirements as stipulated by medical regulations. OS vendors must be able to identify risk and operational interactions, functions, and capabilities across the platform to provide visibility throughout the development and test cycles, says Alan Boucher, director of software architecture and engineering at Intel-GE Care Innovations.

 

“Instrumentation of the application is critical to understanding and properly characterizing your platform design and development approach, the platform’s unit test framework implementation, test automation design, and implementation,” Boucher says. “It makes all the difference in the long-term viability and sustainability of your product in the field.”

 

Wind River’s Workbench Integrated Development Environment (IDE) is one example of a development environment that enables developers to instrument core stack capabilities such as connectivity, device management, security, OS middleware management, file systems, I/O, and multicore (threading models, tasks, processors, exception and error handling, etc.) in a standard way, Boucher says. Furthermore, the Wind River Platform for Medical Devices, based on VxWorks, provides a Vendor Qualification Summary (VQS) in accordance with FDA quality system regulation 21 CFR 820.50 to offer visibility into the controls and processes Wind River uses in developing its platform components, thus helping medical system manufacturers meet safety requirements and standardize on open platforms.

Wind River Platform for Medical Devices

 

To prevent hackers and malware from breaching OS platforms used in health care applications, security must be built into a device from inception to release, Nair says.

 

“Embedded device security needs to be integrated into the development life cycle of the medical device rather than being an afterthought,” he says. “That is why medical device developers are now taking a more holistic approach to device security. Development teams are considering security issues at every layer of the development stack: the hardware platform, the virtualization technology, the operating system, the network stack or other communications middleware, the packets of data being sent across the network, and the applications.”

 

Nair recommends several steps embedded systems designers can take to help ensure the highest possible levels of design assurance and data integrity at the embedded OS level:

 

  • End-to-end threat assessment: Evaluate the security threats to the device in the various contexts of its life cycle – development, operation, and maintenance.
  • Security-optimized design: Make security a No. 1 requirement and design consideration. Leverage modern separation and partitioning techniques, secure communications, and intrusion protection.
  • Secure runtime selection: Build your device from known secure components such as Commercial-Off-The-Shelf (COTS) OSs, middleware, and tools.
  • Application protection: Utilize whitelisting technology to block malware from being installed on the device (see the sketch after this list).
  • Development life cycle and tools: Consider security to be part of the entire life cycle of the device and plan for updates and security fixes well into the product's lifespan.
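
As a concrete illustration of the application-protection step above, the following minimal Python sketch implements hash-based whitelisting: an executable launches only if its SHA-256 digest appears on an approved list. The digest shown is a hypothetical placeholder, and production systems enforce this at the OS or kernel level rather than in a wrapper script.

    import hashlib
    import subprocess
    import sys

    # Hypothetical approved digests; a real deployment would sign and
    # protect this list, and enforce it at the OS or kernel level.
    APPROVED_SHA256 = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def sha256_of(path):
        """Compute the SHA-256 digest of a file, reading in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def launch_if_whitelisted(path, *args):
        """Run the executable only if its digest is on the approved list."""
        if sha256_of(path) not in APPROVED_SHA256:
            raise PermissionError("blocked: %s is not whitelisted" % path)
        return subprocess.call([path] + list(args))

    if __name__ == "__main__":
        launch_if_whitelisted(sys.argv[1], *sys.argv[2:])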

 

In addition to these critical considerations, designers should perform design feasibility assessments and explicitly prototype critical core subsystems to understand and characterize platform risk and design complexity, Boucher says. Evaluation of the OS and data interactions could include the following questions:

 

  • What are the regulatory assessments of the device and its classification?
  • What are its intended uses?
  • What are the risks associated with the platform’s operation, operating environments, and environmental conditions?
  • What are the recovery mechanisms (local, network-based, cloud)?
  • Can the OS isolate application operation, thread behaviors, memory use, and operations from other elements of the platform and OS?
  • What are the types of faults that can happen?
  • How will data integrity be verified and validated?
  • What are the data encryption types and levels? (A minimal encryption sketch follows this list.)
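
As one concrete answer to the encryption question, here is a minimal sketch that protects a patient record at rest using the Fernet recipe (AES in CBC mode plus HMAC authentication) from the open-source Python cryptography package. The record contents are invented for illustration.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, keep this in an HSM or key vault
    fernet = Fernet(key)

    # Invented record content, for illustration only.
    record = b'{"patient_id": "P-0001", "bp": "120/80", "pulse": 72}'
    token = fernet.encrypt(record)  # ciphertext with HMAC tag and timestamp

    # Decryption verifies integrity before returning plaintext; a tampered
    # token raises cryptography.fernet.InvalidToken.
    assert fernet.decrypt(token) == record

Because decryption fails loudly on any modification, a recipe like this addresses the integrity and encryption questions together rather than treating them as separate mechanisms.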

 

While no system is ever completely secure, medical embedded developers can perform these types of assessments and make security improvements to current and future projects, Nair says. Security concerns are also being addressed by the Medical Device Innovation, Safety, and Security Consortium (MDISS), a nonprofit organization created earlier this year that aims to advance computer risk management practices. The formation of this consortium and its goal of optimizing the relationship between the quality of health care and the process of ensuring that systems are secure are great steps in the right direction, but more efforts are needed, Nair asserts.

 

Developers can learn more about how to assess risks and implement defenses across a medical system by listening to a newly recorded Intel-Wind River webcast, “Eight Step Approach to Address the Multifaceted Security Threats Targeting Medical Devices.” And read more about how Care Innovations is using embedded computing technologies to enable virtual care coordination applications in a recent Embedded Computing Design Q&A with Alan Boucher.

To view other community content on healthcare applications, see “Top Picks - Medical.”


Jennifer Hesse

OpenSystems Media®, by special arrangement with Intel® Intelligent Systems Alliance

 

Wind River is an Associate member of the Intel® Intelligent Systems Alliance.

This week I attended the Linley Tech Carrier Conference in San Jose. There were many interesting presentations on trends in Cloud RAN and HetNets, along with details of new basestation SoCs and network processors. Given 6WIND’s focus on solutions for Software-Defined Networking (SDN), I was especially interested in a presentation by Christos Kolias, Research Scientist at Orange, on “Service Providers in the Era of Software-Defined Networking”.

 

Kolias started his presentation by outlining some of the key trends for service providers worldwide. He explained that, while CAPEX is declining for wireline services, it is increasing for wireless, with overall wholesale revenue margins shrinking as a result. Mobile data traffic has now surpassed mobile voice traffic, while smartphone sales in 2011 were double PC sales and are projected to grow to 1.5 billion units in 2016.

 

Given these trends, Kolias explained that service providers face major challenges: more traffic does not mean more revenue; Telco networks carry a sheer number of overlays (MPLS, ATM, VPN, IMS, SONET, CDN, etc.); OTT providers (Skype, Google Voice, YouTube, Hulu, Netflix, Facebook, etc.) capture value on top of the network; value-added services (managed hosting, security, conferencing, cloud services, etc.) remain an untapped opportunity; and there is an overwhelming need to move to a business model that is ARPU-driven rather than subscriber-driven. An ARPU-driven business model would be based on bytes rather than minutes and could also leverage the wealth of subscriber data available within the service providers. Ideally, bandwidth would be sold as a utility, like electricity. These are all valid concepts, but in general the industry does not yet have strategies to achieve them.

 

In terms of SDN, Kolias presented this as a new paradigm and a shift in networking that allows for true innovation. The ability to remotely control network hardware, using software, leads to real programmable networks, while the decoupling of the control and data planes creates an abstraction and logical map of the overall topology. The use of standard (i.e. vendor-independent) interfaces to the hardware has the potential to change the way that service providers architect and build networks.
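
The best-known of these standard interfaces is OpenFlow. As a hedged illustration of what remotely controlling network hardware with software looks like, the sketch below expresses an OpenFlow 1.0-style flow rule (match HTTP traffic bound for a given subnet and forward it out port 2) as plain Python data. The push_flow function is a hypothetical stand-in for a real controller API (e.g., NOX or Floodlight), not part of any shipping library.

    # Illustrative only: an OpenFlow 1.0-style flow rule expressed as data.
    flow_rule = {
        "match": {
            "in_port": 1,
            "dl_type": 0x0800,        # IPv4
            "nw_proto": 6,            # TCP
            "tp_dst": 80,             # HTTP
            "nw_dst": "10.0.2.0/24",  # subnet served by the offload path
        },
        "priority": 100,
        "idle_timeout": 30,           # rule expires after 30 idle seconds
        "actions": [{"type": "OUTPUT", "port": 2}],
    }

    def push_flow(switch_dpid, rule):
        """Hypothetical stand-in for a controller's flow-programming API."""
        raise NotImplementedError("replace with a real controller call")

    # push_flow("00:00:00:00:00:00:00:01", flow_rule)

The point is that the rule is just data: a controller with a macro-level view can install, modify, or withdraw it across many switches without touching any vendor-specific CLI.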

 

So what are the business drivers for SDN, from the perspective of a service provider? Kolias focused on cost (increasing ARPU while minimizing both CAPEX and OPEX), performance (in terms of customer service quality, uptime, automation and resiliency), security, compliance and the ability to innovate and grow.

 

Kolias mentioned three interesting use cases for SDN. The first was traffic offloading: unpredictable traffic spikes are a serious network problem and SDN enables traffic flows to be managed intelligently based on criteria set (and modified dynamically) by operators with a macro-level view of the network.

 

Second, mobility management enables seamless roaming across heterogeneous wireless networks. Kolias showed a fascinating video of a car driving around Stanford with uninterrupted video streaming due to GPS-assisted handovers between WiFi and WiMAX, as well as flow rerouting and replication.

 

Finally, Kolias discussed the use of virtual networks (or network slicing). This concept enables the network administrator to set up and manage virtual networks where, for example, one would be dedicated to IPTV, one to VoIP, one to DSL and one to mobile traffic.

 

In summary, Kolias stressed that SDN provides major benefits to service providers by bringing intelligence, programmability and flexibility into their networks, while noting that SDN architectures will co-exist with and complement the existing infrastructure through a hybrid approach (hybrid switches and hybrid networks). There are no plans to “rip and replace existing boxes” any time soon.

 

Were you at the Linley Tech Carrier Conference this week? What discussions did you find most interesting? Which of the many new product announcements will be most significant for carrier networks?

If you’re looking for the next big innovation that will propel new purchases of point-of-sale (POS) devices, look no further than your smartphone, tablet computer, or ultraportable. Manufacturers are already making mobile devices capable of near field communication (NFC)—the technology used to establish radio communication between devices by touching them together or bringing them into very close proximity (see Figure 1).

 

Figure 1. A woman in Tokyo purchases candy using an NFC self-checkout stand. (Photo courtesy NFC Forum.)

 

According to a recent article, analysts and other observers say that nearly 40 million NFC phones shipped in 2011 and around 100 million will ship in 2012. A recent press release from the Smart Card Alliance reports that Sony, Intel, and other leading companies at the NFC Solutions Summit 2012 agreed that “NFC is coming soon, and when it does, it will enable a broad array of applications that will enhance the consumer experience—at the point-of-sale (POS) and in everyday interactions.” Recognizing the importance of NFC, Intel has moved up from Principal member of the NFC Forum to the top Sponsor level (see Figure 2).

 

Figure 2. The NFC Forum is a non-profit industry association of leading mobile communications, semiconductor, and consumer electronics companies, embedded manufacturers, and other interested parties. The Forum's mission is to advance the use of NFC technology by developing specifications, ensuring interoperability among devices and services, and educating the market.

 

The major credit card companies are pushing for widespread adoption. Visa, MasterCard and Discover are mandating that retailers adopt the technology. For example, after October 1, 2015, these companies say that any merchant that cannot accept a contactless card or payment presented by a customer will be liable for any fraudulent transaction, instead of the issuer. To encourage migration, Visa is incentivizing merchants with the Visa Technology Innovation Program (TIP). Starting in October 2012, merchants can receive incentive benefits once 75 percent of their total transactions (across all sales venues) are initiated through a POS device capable of NFC and EMV. (EMV stands for Europay, MasterCard and Visa, a global standard for the interoperation of integrated circuit cards.)

 

For developers this suggests a coming boom for NFC-enabled POS devices. Fortunately, Intel and Intel® Intelligent Systems Alliance members have just the platform to build them on. All 3rd generation Intel® Core™ processors support NFC. In fact, you’ve probably read a lot of press about all the NFC-enabled ultrabooks coming out with these processors (see Figure 3). The inclusion of NFC turns these ultrabooks into potential payment devices (or, for that matter, payment-accepting devices).
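
To give a feel for the developer side, here is a minimal sketch of reading an NDEF message (say, a coupon URI) from an NFC tag using the open-source nfcpy library. It assumes a USB NFC reader is attached, and the callback behavior follows nfcpy's documented reader/writer interface.

    import nfc

    def on_connect(tag):
        """Print any NDEF records the tag carries, e.g. a coupon URI."""
        if tag.ndef:
            for record in tag.ndef.records:
                print(record)
        return True  # keep the connection until the tag is removed

    clf = nfc.ContactlessFrontend('usb')  # first USB NFC reader found
    try:
        clf.connect(rdwr={'on-connect': on_connect})
    finally:
        clf.close()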

 

Figure 3. Intel vice president Mooly Eden led a CES press conference in which Intel highlighted its ultrabook efforts, including a touch interface, NFC, and gesture-based recognition.

 

The real story, though, is going to be in the next-generation POS devices that use these 3rd generation Intel Core processors. Why? Because these processors will enable a new breed of intelligent POS devices that, in addition to NFC, provide better performance at lower power, better graphics, support for up to three displays, and improved I/O throughput for faster transactions.

 

The first processors built on Intel’s 22nm fabrication process, 3rd generation Intel Core processors deliver up to 20 percent better performance in the same thermal envelope as the previous generation. This makes them ideal for quiet, fanless designs. The same fabrication process enables a serious upgrade to the integrated graphics engine, which provides up to a 2X boost in 3D performance and can drive three independent displays. This pretty much eliminates the need for a separate graphics card, and it lowers system power and component counts. What will retailers do with three displays? Think one for the cashier, one for the customer being checked out, and one providing advertising and other information to the customers waiting in line.

 

This new generation of Intel Core processors also gets a major I/O upgrade. With support for PCI Express* (PCIe) 3.0 and USB 3.0, it can more than double I/O throughput: PCIe 3.0 roughly doubles per-lane bandwidth over PCIe 2.0 (8 GT/s with 128b/130b encoding versus 5 GT/s with 8b/10b), and USB 3.0’s 5 Gbps is roughly ten times USB 2.0’s 480 Mbps. Such performance helps keep transactions fast and speeds up communications with the back end.

 

Naturally, any time you’re talking about something that will handle people’s personal information and money, security is a top concern. This latest version of the Intel Core processor platform includes Intel® OS Guard and Intel® Secure Key. Intel Secure Key, a hardware-based random number generator, works with Intel® AES New Instructions to protect media, data and assets from loss. Intel OS Guard protects an operating system (OS) from applications that have been tampered with or hacked by preventing an attack from being executed from application memory. Intel OS Guard also protects the OS from malware by blocking application access to critical OS vectors.

 

The 3rd generation Intel Core processors also carry forward a number of important features introduced in earlier generations, such as the Intel® vPro™ suite of hardware-assisted security and management technologies (see Figure 4). These technologies include Intel® Active Management Technology (Intel® AMT) for remote management, Intel® Virtualization Technology (Intel® VT), and Intel® Trusted Execution Technology (Intel® TXT).

Figure 4. 3rd generation Intel® Core™ processors include built-in, hardware-level security features.

 

Remote management is a big deal for retail chains using POS devices because of the sheer number of devices per store and the number of stores. For some good information on how Intel vPro technology enables and enhances remote management in a retail setting, I suggest you read a recent blog by my colleague, Maury Wright.

 

It’s important to remember that POS devices are going to be just one application for NFC in retail. There’s also going to be a market for NFC-based digital signs and kiosks that enable customers to download coupons or purchase out-of-stock items right in the store for home delivery. Nonetheless, it’s the need to enable payment at the checkout stand that will have retailers looking to update their POS devices in the near future.

 

Alliance members are just now coming out with boards based on 3rd generation Intel Core processors that can help manufacturers speed intelligent POS solutions to market. What do you think about the NFC revolution in retail payments? What kind of features do you think retailers will be looking for in new POS devices using Intel’s latest NFC-equipped processors?

 


To view other community content on retail applications, see “Top Picks – Retail.”


Mark Scantlebury

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Associate Editor, Embedded Innovator magazine
