
Market Applications


By Elizabeth Press, Global Business Intelligence Lead for Telecom, Media & Entertainment and Web

 

Mobile operators are under pressure to prove that their services are superior to those of over-the-top (OTT) and other non-traditional challenger service providers. Many operators, especially in Western Europe, are investing in rich communication suites (RCS) as a means of positively differentiating their services through sticky content and evolved communications capabilities such as enhanced instant messaging, video calling, document sharing and social media connectivity.

Application modernization will be critical for technology vendors and mobile operators as they adopt RCS. Many mobile operators have grown through mergers and acquisitions, creating hundreds of duplicate applications on various infrastructures. Rationalizing and modernizing legacy applications will be important for deploying RCS efficiently and cost-effectively. Mobile operators have been offering RCS in the form of enterprise software packages as well as downloadable applications, and the applications supporting RCS run both in cloud-hosted environments and on-premise. A solid application modernization strategy needs to take all of these considerations into account.

How RCS will create a competitive advantage for mobile operators:

  • As LTE networks get rolled out, operators will look to recoup their investments by targeting additional value-added services and niche products to a growing base of LTE end users. Voice over LTE (VoLTE) will become an important offering. Mobile video services will become the hook, attracting users and enabling operators to compete with challenger voice over IP (VoIP) providers.
  • The first wave of operators, mostly in Western Europe, have launched RCS services and are bundling them into their full service offerings. RCS strengthens operators’ platforms and allows them to compete as a premium service provider.
  • The ability to integrate value-added services into a strong communications platform is necessary for satisfying next-generation user demand. End users expect seamless interactive capabilities and a broad choice of applications. Application Programming Interfaces (APIs) will be made available to third-party vendors so they can offer their services to end users through the RCS platform, meeting these growing expectations.
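As a sketch of that API-driven model, a third-party service registration flow might look like the following. Everything here, including the OperatorRcsApi class, its methods and the capability strings, is a made-up illustration rather than any real operator's API:

```python
# Hypothetical sketch: a third-party vendor registers a service with an
# operator's RCS platform, and end users discover services by capability.
# All names here are invented for illustration.

class OperatorRcsApi:
    """In-memory stand-in for an operator's RCS platform gateway."""

    def __init__(self):
        self._services = {}

    def register_service(self, vendor, name, capabilities):
        # A real platform would authenticate the vendor and validate the
        # capabilities against what the operator's RCS stack supports.
        service_id = f"{vendor}:{name}"
        self._services[service_id] = {
            "vendor": vendor,
            "name": name,
            "capabilities": list(capabilities),
        }
        return service_id

    def services_with(self, capability):
        # Discovery by capability, e.g. video calling or document sharing.
        return [sid for sid, svc in self._services.items()
                if capability in svc["capabilities"]]


api = OperatorRcsApi()
sid = api.register_service("acme", "doc-share", ["document-sharing", "im"])
print(api.services_with("document-sharing"))  # ['acme:doc-share']
```

The point of the sketch is the platform's role as a broker: the operator keeps the customer relationship while vendors plug capabilities into it.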

Interested in learning more about Dell Application Modernization Services? Please click here to contact our team.

Follow me on Twitter at @ElizabethAtDell.

By Marcel Moelands, OEM Enterprise Lead EMEA, Dell OEM Solutions

When I read about the Telecommunications market, I notice a few recurring trends.

For instance, LTE (Long Term Evolution, or 4th Generation Networks) is setting a new standard in transporting voice calls. Mobile phone users are accessing rich content, which in turn creates a balancing act between OTT (over-the-top) traffic and players on one side and, on the other, carriers looking to monetize OTT.

The proliferation of smart devices has led to a new ecosystem of machine-to-machine platforms and applications, which contributes to data growth explosion.

Evolution in Telecommunications happens faster with each generation.  The cool factor of fashionable features, generational performance leaps in handset equipment, increased bandwidth and ease of ‘getting connected’ all contribute to the speed of evolution.

This pace is unlike that of traditional network functions such as SMS relays and content delivery, and support functions like OSS (Operations Support) and BSS (Business Support), which serve individual purposes and aren't scalable.

As a result, the Telecom Cloud is increasingly appealing. It allows one shared pool of resources capable of running ‘softwarized’ network and support functions while meeting the service level agreements (SLA) of the Telecom network.

It’s like the difference between a buffet restaurant and having a personal gourmet chef with a fridge full of ingredients. The buffet restaurant offers you what has been prepared for the day — no more, no less. The chef can make you whatever you want — in specific portions — whenever you like (and more efficiently).
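The shared-pool idea can be sketched in a few lines. The capacity figures and function names below are invented for the example; a real Telecom cloud scheduler would also track SLAs, placement and failover:

```python
# Illustrative sketch of "one shared pool of resources": softwarized
# network functions draw capacity from a common pool instead of running
# on dedicated silos. Capacity numbers and function names are made up.

class TelecomCloudPool:
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.allocations = {}

    def deploy(self, function_name, cores_needed):
        used = sum(self.allocations.values())
        if used + cores_needed > self.total_cores:
            # An SLA-aware scheduler would preempt, queue, or scale out here.
            return False
        self.allocations[function_name] = cores_needed
        return True

    def retire(self, function_name):
        # Freed capacity returns to the shared pool for any other function.
        self.allocations.pop(function_name, None)


pool = TelecomCloudPool(total_cores=64)
assert pool.deploy("sms-relay", 8)
assert pool.deploy("content-delivery", 32)
pool.retire("sms-relay")       # unlike a dedicated appliance, capacity is reusable
assert pool.deploy("bss-billing", 24)
```

The contrast with siloed appliances is the `retire` step: capacity handed back by one function is immediately available to any other.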

Characteristics

The “Cloud” remains a word with many definitions and implied uses.  There are many different cloud deployment models and, it can be argued, not all are really leveraging the cloud.

True cloud deployments should meet several criteria. I would describe the ideal cloud as controlled, scalable, flexible, pay-per-use resources, exposed through APIs, offering a fully transparent platform for virtual machine and software deployments.

Key players in the Telecom market envisioned this a long time ago and have been vocal on such cloud-oriented topics as the socialization of communities through technology adoption, the future growth of networked devices and the introduction of the Telecom cloud infrastructure.

Dell

Dell’s cloud offering is very broad, consisting of Public Cloud Services, on- and off-premise Private Cloud Solutions and Services, Converged Infrastructure Solutions, Cloud Infrastructure Design, Software, Consultancy, Security, and Integration platforms (Boomi).

These are supported by a range of products, engagement practices, reference designs and programs to offer our customers the best experience in their paths to transformation. The common philosophy amongst the offerings is the Open Standards based approach to deliver capable and affordable solutions that suit our customers’ needs.

Open Source and OpenStack

Back to looking at the Telecommunications market, especially the equipment providers in this market.

I foresee a large-scale adoption of open source components on standards-based hardware to offer Software-as-a-Service solutions and network components, as well as turnkey Infrastructure-as-a-Service solutions. Carriers will eventually leverage existing network, business and operations-support assets, allowing them to further enhance their offerings.

The choice of leveraging open source platforms makes a lot of sense as it allows flexibility without affecting the ubiquitous nature of a true cloud. OpenStack is a great example of this. OpenStack is on its 6th release and offers all of the building blocks required for the Telecom cloud, especially when taking the ecosystem of value-add software vendors into account. OpenStack is already widely used in test plans and conceptual designs despite its absolute need for teams of skilled software engineers to create the environments.

Cloud Hardware

Making the right choices when selecting hardware is important as hardware will greatly influence operational expenses. Dell has been a very active player in datacenter solutions for years. More specifically, Dell leads in market share in the hyperscale server category (according to IDC).

With the experience gained in designing datacenter and high-performance servers, Dell began designing and delivering server products specifically featured for cloud deployments with the PowerEdge C products. Additional components, such as Force10 and PowerConnect networking, complete the offering for end-to-end, turnkey cloud infrastructure solutions.

Dedication

Alongside the development of cloud server products, a dedicated team was assembled with the sole purpose of developing enablers for OpenStack design and deployment like reference designs, integration of Cloud based Block Storage, Software Defined Networking and Big Data Solutions based on Hadoop. Today, this group of experts is represented in the OpenStack Foundation board of Directors and has submitted components to the OpenStack Community such as the Crowbar software framework.

Rob Hirschfeld offers a non-technical overview of how OpenStack offers a unique combination of features for building hyperscale clouds.

Application overhaul

For OEM customers in the process of moving to the cloud, there are some unique considerations for a successful deployment. From designing for elasticity and scalability to ensuring fault-tolerance, license and infrastructure management, security,  multi-tenant readiness and more, Dell’s OEM Solutions team works closely with the broader Dell teams to deliver revenue-generating end-to-end cloud solutions.

Have you initiated any cloud deployments in your organization? If not, is it something you’re considering over the next 12-18 months?

High performance embedded computing (HPEC) has made tremendous leaps in performance and capability, with new implementations adding to the advancements. Processors like the 3rd generation Intel® Core™ i7 are making HPEC a powerful and cost-effective solution, eliminating the compute performance bottleneck of previous generations.  However, a major challenge for HPEC platforms remains, “How do I get more high-bandwidth data to the processors?” Applications such as radar, sonar, and other signal processing systems generate gigabytes of data that need high GFLOP computational power.

 


 


This week Kontron took a major step towards solving this problem with its HPEC proof-of-concept (PoC) platform, which delivers 40 Gigabit Ethernet (GbE) equivalent data transfer rates via PCI Express 3.0. The platform delivers breakthrough I/O data bandwidth in a small footprint for next-generation embedded radar/sonar applications. The PoC platform is based on VPX, which is known for its high performance, harsh environment capability, and small size. Designed as a complete system, the Kontron HPEC PoC integrates their 3U VPX SBC VX3044, the VPX PCIe Switch VX3905 and the high-end L2/L3 Ethernet switch VX3910 into one platform.


The Kontron PoC is unusual for its use of PCIe instead of 10 GbE, a popular serial fabric option for high performance computing platforms. Ethernet has the benefit of widespread adoption and excellent software support for the TCP/IP protocol. However, some applications require even higher throughput than Ethernet offers. That throughput is available through serial fabrics like PCIe 3.0, but these solutions have been hindered by their programming challenges and more limited support of communication protocols.

 

What sets the Kontron platform apart is the use of PCIe 3.0 to deliver 40GbE-equivalent data transfer rates using common TCP/IP protocols. The combination of PCIe 3.0 and TCP/IP is achieved through Kontron’s VXFabric* middleware, which implements the TCP/IP protocol over the PCIe infrastructure to boost transmission bandwidth to nearly 40 GbE speeds. This allows the I/O data bandwidth to match up nicely with the capabilities of the 3rd generation Intel Core i7 processors on the VPX blades while running a well-established transfer protocol, minimizing software impact during system development and enabling the quick transfer of legacy applications to the new platform with little or no modification. The result makes it easier to fully utilize the processing potential of the 3rd generation Intel Core i7. Routing of the PCIe fabric is provided by the Kontron VX3905, one of the industry’s first PCIe 3.0 VPX switches. This switch roughly doubles the per-lane throughput compared to PCIe 2.0, providing a major performance boost.


 

With VXFabric, the use of standard communication protocols (TCP/IP or UDP/IP) based on the socket API protects the application software investment. Legacy software can operate now, and new software based on TCP/IP is assured of support for years to come. OEMs and developers can enjoy an optimized total cost of ownership (TCO) and a direct migration path from their existing applications deployed today. VXFabric addresses all fast, low-latency peer-to-peer communication between compute nodes within a chassis, and can deliver up to 4.2 gigabytes per second (GB/s) of data throughput between VPX boards in a rack (over PCI Express).
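Because VXFabric preserves the standard socket API, ordinary application code like the plain TCP exchange below would run unchanged whether the bytes travel over Ethernet or over TCP/IP-on-PCIe. This is a generic loopback demo, not Kontron code:

```python
# Plain socket-API code: exactly the kind of application VXFabric aims
# to carry unmodified. The program neither knows nor cares what physical
# fabric sits under the TCP/IP stack.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # bind to any free local port
server.listen(1)
host, port = server.getsockname()

client = socket.create_connection((host, port))
conn, _ = server.accept()

client.sendall(b"radar frame 42")       # application payload
data = conn.recv(1024)
print(data)  # b'radar frame 42'

client.close(); conn.close(); server.close()
```

Swapping the transport under a socket-based application is what lets legacy code move to the new platform with little or no modification.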

 

In addition to the high performance interconnection between blades, there is the tremendous amount of processing power provided by the 3rd generation Intel Core i7-3612QE processor (4M cache, 4 execution cores, 8 threads) with its integrated graphics core, the Intel® HD Graphics 4000. The Intel HD Graphics 4000 provides 16 graphics execution units (EUs) that produce a noticeable improvement in 3D performance of as much as 2x. Built on the 22nm process, this combination keeps power consumption low, and the integration enables even smaller packages, increasing the blade functional density that is so important to HPEC.

 

HPEC applications require high GFLOP performance. This is achieved with the Core/GPU combination of the 3rd generation Intel Core i7-3612QE processor and Intel HD Graphics 4000. This low voltage processor has a very favorable GFLOPS/watt ratio with the Intel® Advanced Vector Extensions (Intel® AVX) technology.

 

Graphics computing offers unprecedented application performance by offloading compute-intensive portions of the application to the Intel HD Graphics 4000 execution units, while the remainder of the code still runs on the CPU cores. The CPU cores are optimized for serial processing, while the graphics EUs are more efficient for parallel processing. Many radar and sonar applications can be broken down into serial and parallel algorithms that take advantage of this combination. The graphics EUs provide a massively parallel processing subsystem that can handle many threads and large data sets in parallel, boosting GFLOPS to high performance levels.
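The serial/parallel split can be illustrated with a toy pipeline: each channel is transformed independently (the parallel part, offloaded to the graphics EUs in the real design), then the results are fused sequentially (the serial part, kept on the CPU cores). Thread workers stand in for the EUs here, and the data and functions are synthetic:

```python
# Toy sketch of a radar/sonar-style serial/parallel decomposition.
# Parallel stage: independent per-channel transforms (stand-in for an
# FFT or matched filter). Serial stage: fuse the per-channel results.

from concurrent.futures import ThreadPoolExecutor

def per_channel_transform(samples):
    # Placeholder for the parallel, per-channel computation.
    return sum(s * s for s in samples)   # toy "energy" of the channel

def fuse(channel_energies):
    # Serial stage: combine per-channel results into one detection score.
    return max(channel_energies)

channels = [[1, 2, 3], [4, 5], [0, 6]]   # synthetic sample data

with ThreadPoolExecutor() as pool:
    energies = list(pool.map(per_channel_transform, channels))

print(fuse(energies))  # 41
```

The decomposition, not the thread pool, is the point: whatever maps cleanly over independent channels is a candidate for the massively parallel EUs, while the fusion step stays on the serial-optimized cores.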

 

Looking forward, the performance story will get even better with the 4th generation Intel® Core™ processor family. These chips introduce the Intel® Advanced Vector Extensions (Intel® AVX) 2.0 instruction set, which doubles peak floating-point throughput, enabling a quad-core mobile-class processor to achieve up to 307 GFLOPS at 2.4 GHz. The graphics engine is also upgraded, offering another 352 GFLOPS of raw performance through OpenCL 1.2 programming (more than doubling overall compute potential) while adding only a few watts of power consumption.
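A back-of-the-envelope check shows where the 307 GFLOPS figure plausibly comes from, assuming the usual AVX2 peak of 32 single-precision FLOPs per core per cycle (8 lanes x 2 operations per fused multiply-add x 2 FMA units). The per-cycle figure is our assumption; the text states only the total:

```python
# Peak single-precision throughput estimate for a quad-core AVX2 part
# at 2.4 GHz, using the common 32 FLOPs/cycle/core assumption.

cores = 4
clock_ghz = 2.4
flops_per_cycle = 8 * 2 * 2   # SIMD lanes x FMA ops x FMA units (assumed)

peak_gflops = cores * clock_ghz * flops_per_cycle
print(peak_gflops)  # 307.2
```

As with all peak figures, real application throughput depends on how well the code keeps the vector units fed.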

 

The Kontron PoC platform is also notable for integrating a wealth of hardware and software that simplifies development. The PoC platform includes a Linux distribution, diskless node support and parallel workload management software. Also integrated are compilers, optimized FFT benchmark code samples and a stress-test application framework for benchmarking. Computer health management is enabled through a chassis management board, system-wide PBIT and power/performance management at the system level.

 


Processor Graphics add GFLOPs to HPEC

 

The goal of the Kontron HPEC PoC is to help developers dramatically streamline the process from design to field deployment of next-generation radar and sonar systems, which are expected to make a tremendous jump in processing power and bandwidth.

 

The VPX-based PoC is a flexible design that can be specifically optimized for the most demanding applications. The configurable PCI Express switch fabric interconnect can be routed in the ways most appropriate for the data transfer needs of HPEC platforms. Legacy application support with TCP/IP makes the platform even more attractive, leaving it to the imagination of designers to utilize the 10x increase in bandwidth. Based on mainstream IT technology (TCP/IP, PCIe, Intel® processors), the Kontron HPEC PoC is also developed to address the U.S. military’s smart procurement initiatives, which put into place more rapid and agile purchasing processes.

 


 

Kontron is a Premier member of the Intel® Intelligent Systems Alliance.

     Contact Kontron>>

 

Jerry Gipper

OpenSystems Media®, by special arrangement with Intel® Intelligent Systems Alliance

Editorial Director, OpenSystems Media, VITA Technologies

By Jeffrey Volkert, OEM Solution Consultant BeNeLux, Dell OEM Solutions

Today’s dynamic healthcare environment relies heavily on the latest technology to increase the speed and accuracy of patient diagnosis and treatment. Physicians are now required to interact with a wide range of end-user devices to access their medical applications. From a Dell Precision workstation with locally hosted applications to Latitude tablets with applications running in a cloud, healthcare professionals may have to wait before using a specific application at an endpoint, delaying medical treatment.

For these physicians it would be most beneficial to have an environment where they can access their applications from any device with a single sign-on, reducing time-consuming actions during the course of a normal workday.

Dell’s Mobile Clinical Computing (MCC) solution addresses these issues and mitigates many of hospitals’ key challenges today:

  • Limiting the amount of time clinicians and nurses spend searching for available endpoints, as well as the time spent accessing and then re-accessing applications
  • Providing a single sign-on instance that clinicians and nurses can use to authenticate seamlessly against endpoints and multiple applications
  • Providing quick and secure access to patient information on demand using proximity and/or biometric authentication methods
  • Reducing and limiting the time IT staff spend managing client systems in their environment


In recent trials held in Europe, Dell demonstrated that its MCC solution can deliver:

  • A productivity gain of more than 215 minutes per user per week (a 9% improvement)
  • An economic value of productivity of up to £10,000 per user per annum
  • Appropriate information security that remains workable for users
  • Improved patient safety, quality of care and patient satisfaction
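As a quick sanity check on how the first figure relates to the quoted percentage: 215 minutes per week is roughly 9% of a 40-hour working week. The 40-hour baseline is our assumption; the trial summary does not state it:

```python
# Relating the two quoted trial figures, assuming a 40-hour working week.

minutes_saved = 215
working_week_minutes = 40 * 60   # assumed baseline, not stated in the trials

gain = minutes_saved / working_week_minutes
print(round(gain * 100, 1))  # 9.0
```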

 

I think you would agree with me that the last of these is the most important.

The Dell Mobile Clinical Computing Solution is a desktop virtualization and identity access management solution built on top of the DVS Enterprise 6020 Solution and powered by the latest Dell 12th generation servers. The Dell MCC Solution is offered in either rack or blade server configurations, with an option for storage on Dell EqualLogic or Compellent solutions, which allow for automatic load balancing, tiering, snapshots and replication.

Dell PowerEdge 12th generation servers are designed with more memory capacity and more integrated I/O (input/output) than the previous generation and can provide up to 18 times more Microsoft® SQL Server® transactions per second when using PowerEdge Express Flash Storage solid-state drives (SSDs) (directly connected to the CPU and memory bus).

Healthcare environments require end-user devices to be up and running 24/7, as there is no timeframe on patient care. Dell Wyse thin clients have no moving parts that can break down, such as fans or hard drives. Wyse thin clients have a lifecycle twice that of a traditional PC, reducing validation and transitioning costs. An added bonus is that with a thin-client environment, centralized IT management becomes possible, ensuring optimal system performance.

A solution is never complete without the right support for your environment. Dell offers customers Dell ProSupport for MCC. The ProSupport MCC solution helpdesk and field support technicians are highly trained specialists who are knowledgeable about every component of the lab-validated MCC solution configurations, including enterprise hardware and software technology, endpoint devices, desktop virtualization software, and identity access management software. These technicians are certified in VMware vSphere and View, Imprivata, and Wyse technology, and receive ongoing training.

To learn more, contact our OEM team.

This week, 6WIND announced extensions to the 6WINDGate™ networking software that deliver a 10x performance improvement for virtual switches. These new capabilities address three critical scalability challenges faced by service providers operating public/private cloud data centers as well as telecom infrastructure architected around Network Functions Virtualization (NFV) concepts.

The first scalability challenge relates to the number of Virtual Machines (VMs) per server blade. This VM density is increasing rapidly, leveraging on-going improvements in the performance of the x86 processors used on those blades. Today, a typical server blade in a service provider data center hosts at least 50 VMs, with that number expected to grow to hundreds within a few years.

Because of this growth in the number of VMs running on each server blade, the data center network needs to expand beyond its current limit at the Top-of-Rack (ToR), to a model where a virtual switch on each server blade is used to distribute the increasing volume of network traffic to virtualized applications. This function is typically implemented using the open-source Open vSwitch (OVS) or an equivalent proprietary virtual switch.

The second scalability challenge is the network bandwidth required by VMs. With the constant growth in rich media applications, individual VMs can require sustained network bandwidth of 1Gbps or more. As VM density increases, this bandwidth requirement can quickly outstrip the capacity of a standard virtual switch, constraining either the number of VMs that can be instantiated on a blade or the performance seen by the users of those VMs.

The final major challenge is a result of evolving traffic patterns within data centers. In traditional client-server data centers, network traffic was predominantly “North-South”: from the Internet, through the core switch and aggregation layer, to the ToR switch and then to the server blades. Within multi-tenant Web 2.0 data centers, however, VM mobility and VM sprawl cause the majority of traffic to be “East-West”: between VMs located on different physical server blades.

This growth in East-West traffic means that high-bandwidth VM-to-VM communication (VM2VM) is mandatory. To isolate and secure VM2VM, however, requires secure tunneling services, running on the virtual switch, that extend beyond the basic Layer 2 features provided by a typical virtual switch. Within a cloud, these ensure that only specific VMs can access the applications and data owned by other VMs. Between clouds, policies are enforced to ensure that data and traffic for one cloud is not visible to another.

Adding to the overall security- and networking-related workload that must now be supported on server blades is the increasing trend towards overlay network technologies, which avoid the 4,094-segment limit of traditional VLANs. Data centers are now adopting the VXLAN and NVGRE protocols, whose 24-bit IDs allow for 16 million tunnels.
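The ID-space arithmetic behind those limits is straightforward: a VLAN tag carries a 12-bit ID with two reserved values, while VXLAN and NVGRE carry 24-bit identifiers:

```python
# Segment-ID space: traditional VLANs vs. VXLAN/NVGRE overlays.

vlan_ids = 2**12 - 2      # 12-bit VLAN ID, values 0 and 4095 reserved
vxlan_ids = 2**24         # 24-bit VXLAN Network Identifier (VNI)

print(vlan_ids)   # 4094
print(vxlan_ids)  # 16777216
```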

 

The new extensions to the 6WINDGate networking software provide solutions to these three challenges, delivering a data plane solution that achieves 5x - 10x acceleration for the baseline Layer 2 switching function.

 

Thanks to this increase in Layer 2 switching performance, data center operators can achieve the increases in VM density that are enabled by on-going improvements in the performance of server processors. They can also deliver high network bandwidth to individual VMs, addressing the performance needs of users running streaming media applications or other bandwidth-hungry workloads.

At the same time, 6WINDGate delivers high performance on the necessary secure tunneling protocols such as IPsec, GRE, NVGRE, VLAN and VXLAN.

Within the virtual switch, the 6WINDGate data plane spies on configuration messages from the virtual switch control plane to its data plane, intercepts the appropriate packets and processes them within the 6WINDGate fast path, thereby achieving the 5x to 10x performance acceleration.

These high-performance switching, tunneling and security functions are achieved with no changes required to the virtual switch software itself.

6WINDGate is also fully-compatible with OpenFlow and completely transparent to the applications running on the platform, which do not need to be recompiled or re-verified in order to work with this high-performance solution.

We will be demonstrating the use of 6WINDGate to accelerate the standard Open vSwitch, in partnership with Big Switch Networks and Ixia, in our booth number 401 at the Open Networking Summit in Santa Clara, CA from April 15th through 17th. We hope to see you there!

If the lights in your home have ever flickered briefly during a storm—and didn’t then stay out—a lot of Smart Grid technology kicked in between the time the lights blinked out and the few hundred milliseconds later when they came back on again.

 

During that brief time an operation control center detected the loss of power to your area and redirected power from another substation to compensate, perhaps diverting power from a third source to compensate for the additional load on the second substation. When the break was repaired, the transformer replaced, or the breaker in your local substation automatically reset after a lightning strike, the control center automatically brought your substation back online and rebalanced the loads between all substations. In that way the Smart Grid is said to be self-healing, though humans will always have to repair line breaks and blown transformers.


Figure 1: Electrical power distribution and transmission.

 

There are two types of substations: primary and distribution (Figure 1). Primary substations work on the supply side, taking power from a variety of primary sources—hydroelectric, solar, wind, geothermal, and nuclear—and putting it out on the grid. This involves synchronizing highly variable inputs such as solar—which is clearly only available during the day—with wind power, which peaks at night. The substations must also regulate the loads on the power sources, which may vary considerably in capacity.

 

For each primary substation there may be dozens of distribution substations, which work on the demand side, ensuring load sharing between residential, industrial, and transportation end users. When a substation starts nearing its peak capacity it signals the control center to bring other sources online to get it through peak demand, avoiding the ‘rolling blackouts’ that preceded the Smart Grid.


The Smart Grid works because substations can all communicate with each of the elements under their control, sending that information back to a master control center that controls all the substations. IEC 61850 is the IEC standard for substation automation, replacing a myriad of proprietary protocols whose lack of interoperability delayed the advent of the Smart Grid.

 

On the Level

There are three different levels in Smart Substation architecture: the Station Level, the Bay Level, and the Process Level. Advantech provides numerous Intel-based IEC 61850 certified Smart Substation solutions in each of these areas. Its UNO-4600 series Substation Automation Computers can operate as HMI/SCADA, Terminal (serial-port) Servers, Protocol or Communication Gateways, Cyber Security Servers (UTM), and Substation/Networking Recorders.

 

At the Station Level the Advantech UNO-4683 provides the communication gateway between the remote control center and all the environmental monitoring and control devices at the substation; it also provides cyber security for the substation. The UNO-4683 Automation Computer is based on an Intel® Core™ i7 running at 2.0 GHz with 4 GB of DDR3 SDRAM. It provides two RS-232/422/485 isolated serial ports with automatic flow control; 2 x 10/100/1000Base-T and 4 x 10/100Base-T Ethernet ports; and six USB 2.0 ports with three domain I/O expansions.

 

At the Bay Level (Figure 2) the Advantech UNO-4673A protocol server provides a data gateway between intelligent devices and the station-level controller. The UNO-4673A is based on a 1.66 GHz dual-core Intel Atom processor with 2 GB of DDR2 SDRAM. Sitting on the Ethernet backbone the Advantech UNO-4672 acts as a network recorder and analyzer, passing device data back up to the station level. The UNO-4672 is powered by either an Intel® Pentium® M running at 1.4 GHz or an Intel® Celeron® M at 1.0 GHz, each with 1 GB of on-board DDR DRAM.


Figure 2: Substation automation at the Bay Level.

 

Finally, at the Process Level either the Advantech UNO-4671A (Intel® Atom™ D510 @ 1.66 GHz) or UNO-4673A (dual-core Intel® Atom™ D510 @ 1.66 GHz) acts as an Intelligent Electronic Device (IED) that continuously monitors the status of transformers, circuit breakers, and switchgear, warning of excessive temperature, vibration, leakage or other issues that could cause device failure.

 

Getting Smarter

When the lights go out they don’t just blink for everyone—sometimes they go out for hours. The basic design of the electrical power grid is over 100 years old, and it’s only gradually being computerized. Most utilities have begun to automate the restoration process by installing supervisory control and data acquisition (SCADA) systems that monitor and control line reclosers and switches, but the system is still a long way from being completely automated. Smaller cities and other customers are usually connected to their local substation by a single radial feeder. Outages to these feeders are called in by a customer to the control center, which then dispatches a person to the area to manually restore power to customers.


Implementation of automated devices such as SCADA-enabled switches and line reclosers would cut outages. Distribution circuits could also be sectionalized with SCADA-operated devices between each section. Open points that connect to other circuits could be replaced with SCADA-enabled switches. Then in the event of a failure the system could automatically isolate the problem, opening adjacent switches and rerouting power to unaffected sections by closing connections to adjacent circuits.
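The isolate-and-back-feed sequence described above can be sketched as a toy function: the switches adjacent to the faulted section open to isolate it, and a normally-open tie switch to a neighboring circuit closes to re-energize the healthy downstream sections. The three-section radial feeder and the switch behavior are simplified assumptions, not a model of any real SCADA product:

```python
# Simplified fault-isolation and restoration logic for a radial feeder
# divided into sectionalized segments, as described in the text.

def restore(sections, faulted):
    """Return (isolated, re-energized) section lists after a fault."""
    idx = sections.index(faulted)
    isolated = [faulted]                 # open the switches adjacent to the fault
    upstream = sections[:idx]            # still fed from the substation
    downstream = sections[idx + 1:]      # back-fed via the closed tie switch
    return isolated, upstream + downstream

sections = ["A", "B", "C"]
isolated, live = restore(sections, "B")
print(isolated, live)  # ['B'] ['A', 'C']
```

A real implementation would also verify that the adjacent circuit has spare capacity before closing the tie, which is exactly the load-sharing role the control center plays.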


The Smart Grid is getting smarter, and substation automation is the key element to its success. Advantech already has a wide range of Intel-based products that can provide a complete, automated solution. It’s just a matter of time before the Smart Grid all comes together and your lineman will need to find another line of work.



 

Advantech is a Premier member of the Intel® Intelligent Systems Alliance.

 

John Donovan
Roving Reporter (Intel® contractor), Intel® Intelligent Systems Alliance
Editor/Publisher,
Low-Power Design
Follow me on twitter: @jdonovan43

New machine-to-machine (M2M)-based cloud applications for home monitoring can help patients manage their health conditions at home, while giving healthcare providers access to up-to-date patient information. But these professional monitoring devices have consumer-like demands. They need to be easy to set up, connect, and use. They need to protect patients’ sensitive personal health information. And developers of these systems need to be able to quickly respond to market changes with new products, using standards-based platforms and network protocols. Our roundtable discussion explores how the Intel® Intelligent Systems Framework supports remote monitoring systems that deliver the management, connectivity, and security features required for effective patient care. Our roundtable participants are:

 


 

Intel Roving Reporter: The market for remote medical monitoring devices is heating up, putting additional pressure on developers to get to market faster with competitive products. How can you help them get there?


Satish Ram, Kontron: Kontron provides a suite of application-ready platforms that are Intel Intelligent Systems Framework (ISF) ready, by offering various connectivity, scalability and security options as well as meeting healthcare-specific compliance requirements (PCI, HIPAA, EN50134, etc.). Our human-machine interface (HMI) and M2M products are examples of such products targeted at the healthcare vertical. These platforms come with a development platform that is already pre-integrated to connect up to other ISF-ready systems.

Kontron M2M platform.png

 

The Kontron M2M Developer Kit is a compact platform that integrates its COM Express® mini Pin-out Type 10 compatible COMe-mTT family of Computer on Modules (COMs) based on the Intel® Atom™ processor E640 1 GHz. The external USB port simplifies use of M2M SDKs, and the 4 gigabyte (GB) internal MicroSD card offers sufficient storage for M2M smart service applications, middleware and the OS. The kit also supports a full range of wireless interfaces, protocols and network topologies.

 

Mike Ueland, Telit Wireless Solutions: Telit offers an extensive portfolio of quality cellular, short-range, and GNSS modules, available in over 80 countries. By supplying business-scalable products that are interchangeable across families, technologies, and generations, Telit is able to keep development costs low and uniquely protect customers' design investments. Telit provides customer support, design-in expertise through its sales and support offices, and offers a global distributor network of wireless experts in over 30 Telit-designated competence centers that work with customers to develop concepts and ideas to realize their solutions.

 

Telit offers a form factor family concept, allowing developers to develop one application for different markets. M2M modules belonging to a family have the same form factor and functionalities—the same size and shape, same pin-outs, and same AT command set. Design once, deploy globally. The advantage for remote medical monitoring device developers and electronics manufacturers is that all modules in a family are interchangeable. When the time comes, developers can easily replace family modules with successor products, such as in a switch from 2G to 3G. The unified form factor reduces development effort, time, and costs, protecting investment in core designs.
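The interchangeability Telit describes can be sketched in code. This is an illustrative Python sketch, not Telit's actual API: the class names, AT command, and return strings are hypothetical. The point is that an application written against the family's shared interface is unchanged when a 2G module is swapped for its 3G successor.

```python
# Hypothetical sketch (not Telit's API): modules in a form-factor family
# share the same AT command interface, so application code is module-agnostic.

class CellularModule:
    """Common interface shared by all modules in a form-factor family."""
    def send_at(self, command: str) -> str:
        raise NotImplementedError

class Module2G(CellularModule):
    def send_at(self, command: str) -> str:
        return "OK (sent over 2G/GPRS)"

class Module3G(CellularModule):
    def send_at(self, command: str) -> str:
        return "OK (sent over 3G/HSPA)"

def report_vitals(module: CellularModule, payload: str) -> str:
    # The monitoring application depends only on the shared interface, so
    # swapping the 2G module for its 3G successor needs no code changes.
    return module.send_at(f'SEND="{payload}"')

print(report_vitals(Module2G(), "hr=72"))
print(report_vitals(Module3G(), "hr=72"))
```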

Telit xE910 Family.jpg

 

Steve Popovich, Digi International: Digi provides several easy-to-integrate connectivity products and services used for connecting medical devices and sensors in thousands of hospitals and home healthcare applications. We have commercial medical device customers that create prototype remote monitoring systems in just a few days.

Digi hmhealthmonitor.jpg

RR: What do designers need to know about connectivity standards—from wireless to cellular—as they develop these devices that may be sold for use anywhere?


Satish Ram, Kontron: Solutions designers need to become familiar with WPAN, WLAN, and WWAN connectivity options and the current adoption trend towards Bluetooth Low Energy for medical devices. In addition, for cellular WWAN solutions, designers need to ensure compliance with regulatory certifications such as FCC, CE, PTCRB, GCF or others, depending on the country and deployment regions. Also, wireless data services can vary significantly from one country to another, as do wireless frequency bands for cellular networks. Kontron can help customers navigate this complex ecosystem, making it easier to deploy solutions quickly.

 

Mike Ueland, Telit Wireless Solutions: There are a number of requirements to consider, including speed, coverage, deployment plans, costs, data plans and hardware, carrier relations, and the expected product lifecycle. In the majority of cases, the cellular module is the most important component in the process of integrating M2M into electronic devices like those used for remote medical monitoring.

 

Cellular data transmission speeds range from a few tens of kbps for 2G GPRS up to 100 Mbps for the current release of LTE (4G). There are a number of trade-offs. The higher the connection speed you require, the higher the price of the module and associated connectivity data plans. Today, 3G enjoys nearly as much geographical and population coverage as 2G, and it will have a larger footprint than GSM once U.S. carriers start sunsetting their 2G service. Therefore, 3G will remain the long-term “everywhere” cellular coverage for many more years.
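A quick back-of-envelope calculation shows why these speed tiers matter for device designers. The 3G figure below is an assumed typical rated downlink, not a number from the discussion above:

```python
# Rough transfer-time comparison for a 2 MB payload at nominal peak rates.
# Real-world throughput is lower; the point is the order-of-magnitude gap.

PAYLOAD_BITS = 2 * 8 * 1_000_000  # 2 megabytes expressed in bits

nominal_rates_bps = {
    "2G (GPRS)": 40_000,       # "a few tens of kbps"
    "3G (HSPA)": 7_200_000,    # assumed typical rated downlink
    "4G (LTE)": 100_000_000,   # current LTE release cited above
}

for tech, rate in nominal_rates_bps.items():
    seconds = PAYLOAD_BITS / rate
    print(f"{tech}: {seconds:.1f} s")
```

At these rates the same payload takes minutes on 2G but well under a second on LTE, which is why module price and data-plan cost must be weighed against the application's actual bandwidth needs.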

 

Steve Popovich, Digi International: One of the cool benefits of our iDigi connectivity architecture is that we support all of the domestic and international wireless standards, so little or no wireless experience is required of the developer. Developers using our iDigi architecture only need to create the business application and need not be concerned with the details of which wireless technology is needed for deployment. For example, we have healthcare customers that create a single business application incorporating several different wireless connectivity technologies to economically support multiple unique end-customer use cases.

 

RR: Since these remote devices are being used by consumers, not healthcare or technical professionals, what are some strategies you advise developers to use to make sure these are easy to set up, connect, maintain, and use?


Satish Ram, Kontron: Focusing on ease of use and ‘ready out of the box’ solutions is key for consumer adoption. Leveraging frameworks such as ISF will help developers ensure seamless integration to various subsystems and automate much of the functionality to improve user experience. As an example, home healthcare devices should automatically detect new devices and ‘pair’ with them without any user intervention. So, the consumer can purchase a BLE weight scale that auto-connects and displays the data on his/her smartphone with a simple downloaded app.
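The auto-pairing behavior described above can be sketched as simple hub logic. Everything here is hypothetical and illustrative (the device IDs, profile names, and HomeHub class); a real design would sit on top of an actual BLE stack:

```python
# Illustrative sketch of 'ready out of the box' pairing: a hub whitelist of
# known health-device profiles lets new sensors connect with no user steps.

KNOWN_PROFILES = {
    "ble-weight-scale": "Weight Scale",
    "ble-bp-monitor": "Blood Pressure Monitor",
}

class HomeHub:
    def __init__(self):
        self.paired = {}

    def on_advertisement(self, device_id: str, profile: str) -> bool:
        # Auto-pair only devices advertising a recognized health profile.
        if profile in KNOWN_PROFILES and device_id not in self.paired:
            self.paired[device_id] = KNOWN_PROFILES[profile]
            return True
        return False

hub = HomeHub()
assert hub.on_advertisement("scale-01", "ble-weight-scale")   # auto-pairs
assert not hub.on_advertisement("toaster-9", "ble-toaster")   # ignored
print(hub.paired)  # {'scale-01': 'Weight Scale'}
```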

 

Mike Ueland, Telit Wireless Solutions: There are development tools that can make it easier for developers to design applications. Many firms have been successful using the Agile software development process. Agile promotes adaptive planning and evolutionary development and design, with an iterative approach that encourages faster responses and flexibility to change. The Agile manifesto establishes that developers limit the amount of work in progress to reduce the impact of sudden business changes.

 

Customer collaboration and the user experience with the application are critical in the development process. Therefore, continuous customer and stakeholder involvement is required. To deliver on this key aspect, Agile establishes smaller, faster development cycles that deliver functioning software sooner, which in turn limits the total body of work in progress. This has the effect of testing hypotheses about required functionality much earlier than other methods, allowing developers and business sponsors to quickly prove or disprove them. The bottom line is that developers need to be strategic about developing applications for the consumer.

 

Steve Popovich, Digi International: Our iDigi Manager Pro services provide an almost infinitely scalable connectivity platform for our customers. Our iDigi Manager Pro services enable our customers to automate the remote configuration, commissioning, and ongoing management of millions of devices. Via our iDigi APIs, developers can also elegantly integrate our features into custom applications.

 

RR: Security is always an issue when discussing the management of personal health information. How are you helping developers address this in their products?


Satish Ram, Kontron: Security is of paramount importance in many applications, and especially so in healthcare applications. Kontron’s products leverage the ISF security framework components, including Trusted Platform Module (TPM) and WindRiver Secure Remote Management (SRM) software, to ensure data privacy and security.

 

Mike Ueland, Telit Wireless Solutions: Cellular data communication is inherently very secure—substantially more so than traditional data communication. In addition to the security already built into cellular data, we provide the remote medical monitoring device integrator with a comprehensive set of IP protocols designed to enhance this security. A key element of this toolkit is the embedded SSL protocol available in all our modules. The use of SSL over cellular IP data, augmented by best practices in two-way challenges and other identification, authentication and security measures ensures that personal health information is communicated in the most secure manner possible.
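As a concrete illustration of this kind of transport security, here is a minimal sketch using Python's standard ssl module (not Telit's embedded SSL toolkit). Certificate verification and hostname checking are on by default in a default context, and the commented-out line shows where a device certificate would be loaded for two-way (mutual) authentication. The host, port, and file paths are placeholders.

```python
import socket
import ssl

# Build a client TLS context with strict defaults: server-certificate
# verification and hostname checking are both enabled out of the box.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

# For mutual (two-way) authentication, the device would also present its own
# certificate; these paths are placeholders, not real files.
# context.load_cert_chain(certfile="device.pem", keyfile="device.key")

def send_reading(host: str, port: int, payload: bytes) -> None:
    """Send a health reading over an authenticated, encrypted channel."""
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(payload)
```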

 

Steve Popovich, Digi International: Digi provides several security options for both wired and wireless products and services. For example, even our most basic wireless modules integrated inside medical devices support complex encryption methods that prevent data from being read by network analyzers. For security purposes, medical device and patient identification information is only combined beyond the connectivity services platform.

 

 

 

Learn More

 

Solutions in this blog:

• Intel Intelligent Systems Framework

• Kontron M2M Smart Services Developer Kit

• Telit Wireless Solutions HE910 Family

• Telit Wireless Solutions GE910 Family

• Digi International x2E Gateway

• Digi International iDigi Manager Pro

 

Related topics:

• Medical - Top Picks (blogs, white papers, and more)

• Connectivity - Top Picks (blogs, white papers, and more)

 

 

Kontron is a Premier member of the Intel® Intelligent Systems Alliance.

    Contact Kontron>>

Digi International is an Associate member of the Alliance.

    Contact Digi>>

Telit Wireless Solutions is an Associate member of the Alliance.

    Contact Telit>>

 

Cheryl Coupé

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Freelance technology writer & editor (scribo.net)

Cloud-based telecom services are creating new opportunities for service providers to manage costs and deliver enhanced services. Off-the-shelf, carrier-grade servers supporting virtualization and high-speed network interfaces offer a cost-effective alternative to conventional telecom systems. Cloud-based services can be rapidly deployed to meet customer demands, and software defined networking (SDN) will add further flexibility and extend the value of this approach.

 

In this blog I am going to explore the benefits of using servers based on Intel® Xeon® Processors E5-2600 Series for cloud-based telecom services. For this blog I am using implementation examples from Dell, an Associate member of the Intel® Intelligent Systems Alliance. The 250-plus members of the Alliance collaborate closely with Intel to create hardware, software, tools, and services to help speed intelligent systems to market.

 

Telecom Cloud

The telecommunications infrastructure is developing quickly to meet the demands of business and personal customers. Video-on-demand, social networking and cloud storage are key applications driving demand for enhanced services and high-speed connectivity. The combination of server virtualization and cloud architectures has revolutionized the provision of these applications to businesses and consumers. By using virtual servers and cloud technologies in the telecom infrastructure, carriers can quickly build flexible and scalable networks.

 

Carrier grade servers are already used in some elements of the network infrastructure deployed for 4G LTE wireless networking and high-speed broadband. The latest Intel processors deliver significantly enhanced packet processing performance, dramatically increasing the number of network elements that can be implemented using standard server systems. SDN has been developed to enhance cloud architectures: it uses real-time intelligence, application information and virtualization to optimize server and networking resources to meet user demand. Carriers now have the opportunity to significantly increase return on investment (ROI) by shifting to cloud architectures and standard server systems to deliver telecom services.

 

Key Technologies
The Intel® Xeon® Processors E5-2600 Series support several technologies that are critical for high performance cloud-based telecom services. The Intel® Xeon® Processors E5-2600 Series integrate up to 16 cores in a dual-socket configuration with large Layer 2 and Layer 3 caches and high-performance DDR3 memory controllers delivering fast memory access. The processors provide up to 40 PCI Express® Gen 3 lanes per socket, supporting total I/O bandwidth significantly in excess of 100 Gbit/s.
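That headline I/O figure can be sanity-checked from the per-lane numbers: PCIe Gen 3 runs each lane at 8 GT/s with 128b/130b line encoding. A quick calculation, per processor and per direction:

```python
# Back-of-envelope check of the aggregate PCIe Gen 3 bandwidth claim.
GT_PER_S = 8.0            # PCIe Gen 3 raw signaling rate per lane
ENCODING = 128 / 130      # 128b/130b line-encoding efficiency
LANES = 40                # lanes per E5-2600 processor

gbits_per_lane = GT_PER_S * ENCODING   # ~7.88 Gbit/s usable per lane
total_gbits = gbits_per_lane * LANES   # aggregate, per direction
print(f"{total_gbits:.0f} Gbit/s")     # ~315 Gbit/s, well over 100
```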

The Intel® Xeon® Processors E5-2600 Series support Intel® Virtualization Technology that boosts virtualization performance by allowing the operating system more direct access to the processor hardware. The Intel® Xeon® Processors E5-2600 Series deliver a significant step forward over previous generations in terms of performance, reduced power consumption and support for packet processing functions.

Carrier Grade Servers

Carrier grade servers provide NEBS and ETSI certified platforms for telecom systems that use standard components and benefit from the high volume data center market. Dell is shipping both rack mount and blade server solutions for telecom applications. The systems are scalable from a single rack mount server with 8 cores up to a 42U rack with over 1,000 cores in four blade server enclosures.

 

Dell R720t Cropped.png

Figure 1. Dell™ PowerEdge™ R720t.

 

Figure 1 shows the Dell™ PowerEdge™ R720t, a NEBS Level 3 and ETSI certified 2U rack mount server. The system integrates dual 4-, 6- or 8-core Intel® Xeon® E5-2600 series processors and the Intel® C600 Series Chipset. The system is available with up to 384GB memory and 1/10 Gigabit Ethernet (GE) or 4/8 Gigabit Fibre Channel (GFC) interfaces. The PowerEdge™ R720t has 16 2.5" drive bays and will support up to 14.4TB of storage.


Dell M620t and M1000et.png

Figure 2. Dell™ PowerEdge™ M620t & M1000et.

 

Figure 2 shows the Dell PowerEdge™ M620t half-height blade. The blade integrates dual 2-, 4-, 6- or 8-core Intel® Xeon® E5-2600 series processors and the Intel® C600 Series Chipset. The Dell PowerEdge™ M620t supports two 2.5" SAS HDD or SAS/SATA SSD drives. The Dell™ PowerEdge™ M1000et enclosure has sixteen half-height slots and shared power supplies and cooling fans. The solution supports 10 GE or 8 GFC network interfaces.

 

The Dell PowerEdge™ M620t supports Microsoft Windows Server® and SUSE® or Red Hat® Linux server operating systems. Virtualization options include Citrix® XenServer™ and VMware® vSphere™. Two internal SD cards are dedicated for embedded hypervisor support.

 

Delivering Cloud Based Telecom Services
The combination of standard components and carrier grade systems provides a cost-effective solution for cloud-based telecom services. The systems covered in this blog scale from a few cores to over 1,000 cores using the same building blocks. Virtual servers and cloud architectures are set to provide a key part of the telecom infrastructure, reducing carrier cost of ownership and accelerating time to market.

 


Dell is an Associate member of the Intel® Intelligent Systems Alliance.

 

Simon Stanley

Roving Reporter (Intel® Contractor), Intel® Intelligent Systems Alliance

Principal Consultant, Earlswood Marketing

Follow me on Twitter: @simon_stanley

While editing articles for the upcoming April issue of Embedded Innovator magazine, I noticed that there is a major effort underway to make it easier to implement and scale digital signage systems. Where once there was just the Open Pluggable Specification (OPS), now there’s also the Intel® Intelligent Systems Framework.

 

This is the dynamic duo in industry specifications for digital signage systems. OPS provides a modular standard for displays and media players, enabling the design of media players that fit neatly into a slot in the back of a display to facilitate installation, maintenance and upgrades. The Intel Intelligent Systems Framework provides a common set of interoperable guidelines to address the fundamental capabilities required for connecting, managing, and securing intelligent devices in a consistent and scalable manner.

 

Plugging into the Open Pluggable Specification

OPS is a great convenience for matching the level of performance you want from a media player with a display that can take advantage of that performance. If you’re just showing static images, you can choose a low-end OPS-compliant media player designed for that. If you want high definition (HD) images and video, you can choose a high-end OPS-compliant media player.

 

OPS also provides excellent flexibility. If down the line you decide you want a bigger display, you can buy a newer, bigger OPS-compliant display and just reuse the media player from the old display. OPS also provides some insurance on being able to utilize new technology. For example, if you want to take advantage of the improved integrated graphics, performance, manageability and security of a new generation of Intel® processors, an OPS display/player combo allows you to just pull out the old media player and insert a new OPS-compatible media player with the latest processor.

 

The Challenges in Scaling Out Digital Signage Systems

As great a step as OPS is in modularization, the digital signage industry still faces challenges in assembling large networked systems that connect hundreds of signs to the back-end or cloud, as well as other retail systems. Large signage systems require remote manageability and security to be cost effective and protected from malware. What’s more, large signage systems require some form of data collection and analysis to provide the audience metrics, proof of play, and delivery of targeted content that advertisers want.

 

This has been a tall order. Developers and system integrators have made laudable efforts to cobble together relatively secure and manageable digital signage systems. Unfortunately, it’s been a pretty labor-intensive process involving finding and spec’ing interoperable components, selecting and getting all the software and licenses together, and then testing and hoping the whole thing doesn’t become obsolete or hard to expand in several years. The customized nature of these systems can make them expensive to implement. Dependence on proprietary hardware or software that may not be offered or maintained in the future can be frightening when investing hundreds of thousands of dollars or more.

 

A Sign for the Times: The Intel Intelligent Systems Framework

For digital signage systems, the Intel Intelligent Systems Framework is a wish come true. Designed for use across the full spectrum of connected devices, the framework gives developers access to a range of validated components from the Intel® Intelligent Systems Alliance, a global ecosystem of 250+ member companies that collaborate closely with Intel and each other to innovate with the latest technologies. These framework-ready components can be mixed and matched to deliver comprehensive solutions spanning from media players to the cloud.

 

Framework-ready hardware includes boards, modules, and market-ready systems. Complementing these hardware solutions are software components from McAfee, Microsoft, and Wind River. These components include:

 

  • Operating system and middleware, such as Microsoft Windows, Wind River Linux, and Wind River VxWorks
  • Security solutions, including McAfee Embedded Control and McAfee Deep Defender™ designed to protect the edge and the network
  • Remote manageability capabilities supporting third-party management consoles
  • Virtualization solutions enabling consolidation of multiple functions on a single platform—such as point of sale (POS) and digital signage
  • Integrated software development tools, such as Intel® System Studio, that help software developers quickly and efficiently code for systems based on Intel® processors
  • Firmware solutions, such as Intel® Firmware Support Package, that provide low-level Intel processor, Intel® chipset, and memory firmware initialization capabilities using a standardized interface

 

Using these framework-ready components developers can confidently and efficiently connect and secure devices and data from different platforms and applications (see Figure 1). In addition, the framework uses Intel® Active Management Technology (Intel® AMT)—part of Intel® vPro™ technology available through select 2nd and 3rd generation Intel® Core™ processors—to deliver value-add capabilities like hardware-based virtualization and remote management. The latter is particularly valuable for replacing expensive in-person service calls with out-of-band control, update and repair capabilities. For sign system owners, this lowers total cost of ownership (TCO) and enables faster ROI by maximizing system uptime and lowering maintenance costs.

 

ISF Retail Diagram.JPG

Figure 1. Through the Intel® Intelligent Systems Framework, the retail industry will more easily be able to connect, manage and secure a wide range of devices, laying the groundwork for new consumer experiences, enhanced productivity, better inventory management, and brand optimization.

 

An OPS-Compliant, Framework-Ready Solution

When it comes to media players, the best of both worlds are OPS-compliant devices that are Intel Intelligent Systems Framework-ready. A good example is the Advantech ARK-DS262. This tiny (200 x 119 x 30 mm) unit slides easily inside OPS-compliant displays, but packs all the processing and advanced graphics you’d expect from its 3rd generation Intel® Core™ i7 processor (see Figure 2). There’s integrated support for HDMI output for full HD content display and 3D animation applications with no need for a third-party graphics card.

 

OPS.jpg

Figure 2. The Advantech ARK-DS262 fits into a slot in an OPS-compliant monitor to make an all-in-one digital signage solution that can be reconfigured easily if necessary in the future.

 

The ARK-DS262 also is ideal for remote management through Intel AMT and a third-party console. In fact, as a framework-ready component, it comes pre-loaded with an embedded OS, along with SUSIAccess remote device management software and system security software by McAfee. In addition, security is easily implemented on both a device and network basis through framework-ready McAfee products that take advantage of Intel® processor-based security technologies. With such a turnkey signage solution, developers and system integrators can save important development time and focus on their own applications.

 

Giving Advertisers the Control and Metrics They Want

The use of framework-ready media players and boards ensures the ability to deliver viewership metrics and targeted content. Such players and boards can run the Intel Audience Impression Metrics Suite (Intel® AIM Suite), an anonymous viewer analytics (AVA) software product. Easily deployed in digital signage systems, Intel AIM Suite installs on nearly any device based on Intel® processors and utilizes off-the-shelf consumer web camera sensors and cloud-based reporting. By providing the ability to recognize viewer gender and approximate age bracket, Intel AIM Suite enables delivery of viewer-targeted content and can collect viewership data such as dwell time and user interactions to report back on advertising effectiveness.
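The kind of anonymous reporting described can be illustrated with a short aggregation sketch. This is not the Intel AIM Suite API; the impression records and field names are hypothetical:

```python
from collections import defaultdict

# Illustrative only (not the Intel AIM Suite API): aggregate anonymous
# impressions into the kind of per-demographic report advertisers want.
impressions = [
    {"gender": "F", "age_bracket": "25-34", "dwell_s": 4.2},
    {"gender": "M", "age_bracket": "35-44", "dwell_s": 1.1},
    {"gender": "F", "age_bracket": "25-34", "dwell_s": 6.8},
]

report = defaultdict(lambda: {"count": 0, "dwell_s": 0.0})
for imp in impressions:
    key = (imp["gender"], imp["age_bracket"])
    report[key]["count"] += 1
    report[key]["dwell_s"] += imp["dwell_s"]

for (gender, bracket), stats in sorted(report.items()):
    avg = stats["dwell_s"] / stats["count"]
    print(f"{gender} {bracket}: {stats['count']} impressions, "
          f"avg dwell {avg:.1f} s")
```

No personally identifying data is stored; only demographic buckets and dwell times, matching the "anonymous viewer analytics" model.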

 

A New Era for Digital Signage

Strategically combining key framework ingredients, developers and OEMs can now more quickly design digital signage solutions that connect over a network through a variety of wired and wireless technologies. This integration and the performance of Intel processor-based systems will enable signage solutions that can collect and analyze data system-wide in near real time to deliver customized messages to specific audiences at each sign and record their reaction.

 

This is a new, exciting era for digital signage. The ability to monitor and increase advertising effectiveness alone should increase the number of organizations willing to invest in extensive signage systems and inspire new revenue models in their relationships with advertisers. The Intel Intelligent Systems Framework also opens the door to new usage models as the number of framework-ready products grows. It’s easy to imagine developers quickly becoming adept and creative at using these ingredients to launch their own solutions for success. One might well ask: what will you do with the framework today?

 

 


 

Advantech is a Premier member of the Intel® Intelligent Systems Alliance. McAfee, Microsoft and Wind River are Associate members of the Alliance.

 

Mark Scantlebury

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Associate Editor, Embedded Innovator magazine

Network appliances based on mobile processors enable low power and cost effective solutions for network security and other functions, including firewall, gateway, anti-virus/spam and load balancing. The 3rd Generation Intel® Core™ i7 Processors have several enhancements that make them particularly attractive for network appliances and other communications applications.

 

In this blog I am going to explore the benefits of using commercial off-the-shelf (COTS) Mini-ITX motherboards based on the Intel® Core™ i7-3xxx Mobile Processor Series to build network appliances. For this blog I am using implementation examples from Super Micro Computer Inc., an Affiliate member of the Intel® Intelligent Systems Alliance. The 200-plus members of the Alliance collaborate closely with Intel to create hardware, software, tools, and services to help speed intelligent systems to market.

 

Network appliances

Network appliances are stand-alone systems that implement one or more networking support functions. These include network security functions, such as firewall and anti-virus/spam filtering, media gateways and load balancing systems. Network appliances are often 1U or 2U rack mounted systems with plug-in modules for storage and networking interfaces.

 

Most network appliances have been built using custom motherboards and dedicated processors selected for the particular networking functions supported by the appliance. The introduction of high performance multicore processors with four or more cores with support for low level packet processing functions has opened up new opportunities for developers to build network appliances using standard motherboards and common processors.

 

Packet processing, security and virtualization

3rd Generation Intel® Core™ processors have changed the dynamics of this market. These processors benefit from higher frequencies, larger L2 caches and an integrated GPU. They also integrate several features that support network appliance functions, including virtualization (Intel® VT-x, VT-d and VT-c), AES encryption acceleration (Intel® AES-NI, useful for IPsec), Intel® Advanced Vector Extensions (AVX), Turbo Boost, Intel® AMT for remote management, and Intel® Trusted Execution Technology (TXT).

 

Companies developing network appliances based on 3rd Generation Intel® Core™ i7 processors can take advantage of the Intel® Data Plane Development Kit (Intel® DPDK). This is a set of libraries designed to implement high speed data packet networking functions with outstanding small packet (64 byte) performance. The Intel® DPDK offers a simple software programming model that is easily integrated with standard Linux implementations.
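The Intel DPDK itself is a C library, so as a language-neutral illustration, here is the per-packet classification logic a simple firewall appliance applies to each arriving packet; the rules, prefix matching, and default-deny policy are illustrative only, not DPDK API calls:

```python
# Sketch of per-packet classification in a firewall appliance. A DPDK-based
# data plane would run equivalent match logic in C at line rate; this only
# shows the first-match-wins, default-deny rule structure.

RULES = [
    # (source-address prefix, destination port, action)
    ("10.0.0.", 22, "DROP"),   # block SSH from the 10.0.0.x segment
    ("", 80, "ACCEPT"),        # allow HTTP from anywhere ("" matches all)
]

def classify(src_ip: str, dst_port: int) -> str:
    # First matching rule wins; drop anything no rule explicitly allows.
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return action
    return "DROP"

assert classify("10.0.0.7", 22) == "DROP"
assert classify("192.0.2.5", 80) == "ACCEPT"
assert classify("192.0.2.5", 443) == "DROP"  # no rule: default deny
```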

 

Mini-ITX motherboard for embedded applications

The Supermicro Mini-ITX X9SPV-M4 motherboard, shown in Figure 1, provides an off-the-shelf solution for network appliances and other embedded applications. The X9SPV-M4 is available with two- or four-core Intel® Core™ i7-3xxx Mobile Series processors and up to 16GB ECC DDR3 1600/1333MHz memory. The Mobile Intel® QM77 Express Chipset PCH supports dual display, PCI Express expansion, USB 2.0 and 3.0, and SATA 3.0 Gb/s and 6.0 Gb/s expansion options. The motherboard integrates four Gigabit Ethernet (GbE) LAN ports and expansion for additional GbE or 10GbE ports via an x16 PCIe 3.0 slot. Storage support includes up to four 2.5" SATA2 HDD or SSD drives.

 

Mini-ITX X9SPV M.png

Figure 1. Supermicro Mini-ITX X9SPV-M4 Block Diagram.

 

The Mini-ITX X9SPV-M4 motherboard is enterprise-grade and supports Intel embedded processors and chipset SKUs for long-life availability (typically 5 to 7 years). The solution is energy efficient with a low-TDP mobile CPU, which reduces power consumption, enables quiet operation and supports harsher temperature environments. The Mini-ITX X9SPV-M4 motherboard is shown in Figure 2.

X9SPV-M4.png

Figure 2. Supermicro Mini-ITX X9SPV-M4 Motherboard.

 

Network appliance solution

System OEMs can either develop their own network appliance platform or use a standard server platform. Supermicro will supply motherboards or complete server solutions. Figure 3 shows a Supermicro 1U server.

 

1U Server.jpg

Figure 3. Supermicro 1U Server (Model 5017P-TLN4F with X9SPV-LN4F-3QE).


Off-the-shelf motherboards and server platforms provide a very cost-effective approach to developing network appliances. System developers add a secure operating system, firewall or other applications, and centralized management, taking advantage of the integrated AMT remote management.

 

Best of both worlds

This alternative approach to developing network appliances takes advantage of readily available hardware that is affordable and powerful enough to address customer needs for packet inspection, traffic management, data encryption and security. The standard Mini-ITX motherboard has integrated networking interfaces that are expandable up to 10GbE LAN, and its 3rd Generation Intel® Core™ processors are supported by the Intel DPDK and other readily available software, enabling system OEMs to quickly bring new network appliances to market.

 


Super Micro Computer Inc. is an Affiliate member of the Intel® Intelligent Systems Alliance.

 

Simon Stanley

Roving Reporter (Intel® Contractor), Intel® Intelligent Systems Alliance

Principal Consultant, Earlswood Marketing

Follow me on Twitter: @simon_stanley

6WIND recently made a number of announcements involving key industry partners, including:

 

The transformation of telecom and networking enabled by SDN (Software-Defined Networking) and NFV (Network Function Virtualization) is often compared to the server virtualization and cloud revolution that happened a few years ago.

 

There is, however, a major difference. Standard applications “only” need processing capabilities, memory and storage to run efficiently in a virtualized environment. Designing cloud-based network architectures and virtualized networking functions also requires high-speed, secure I/O.

 

I believe the combination of high performance generic hardware platforms based on the latest generation of multi-core processors, leading open source software platforms, 40GbE solutions and the 6WINDGate data plane software is a great foundation for addressing the transformation of telecom and networking.

 

6WIND is committed to being a key player in the ecosystem that brings compelling solutions for the industry to accelerate network virtualization. We will lead other major industry initiatives and announce breakthrough products soon.

 

Stay tuned…

train-photo.jpg

Until recently, the United States has taken a decidedly old-fashioned approach to running its rail systems. Most of the country’s signaling, switching, and train operation has been handled manually, leading to inefficiencies and unnecessary hazards. The Metrolink commuter train accident in 2008 highlighted just how dangerous manual controls could be, prompting Congress to pass the U.S. Rail Safety Improvement Act of 2008. This act mandated that approximately 73,000 miles of rail and transit infrastructure have positive train control (PTC) systems in place by 2015.


PTC is a computerized system for monitoring and controlling the movement of trains. The top priority is collision avoidance, but PTC systems can also automatically regulate the speed of trains in response to rail conditions, on-board equipment malfunctions, temporary speed restrictions, and other safety considerations. While the Federal Railroad Administration envisions a National Differential Global Positioning System (NDGPS) to enable seamless train tracking and control, to date the infrastructure is still a patchwork of legacy systems and disparate approaches.

 

SDR vs. the Tower of Babel

One company trying to address that shortcoming is Santa Clara-based Lilee Systems. Lilee’s unique software defined radio (SDR) technology and Intel®-based hardware provide complete end-to-end wireless mobility management, enabling trains moving cross-country, for example, to interact intelligently with a wide range of legacy safety systems along the way.

 

According to Jon Adams, Lilee’s VP of Strategic Development, “There are many components to Positive Train Control. We do the onboard radios, the onboard networking processors, the wayside radios and messaging processors, and the back office mobile IP abstraction. It’s all standards-based and high security. Every train, every piece of equipment in the field—whether it’s fixed or moving—has a fixed IP address, so it becomes straightforward to manage your assets.”

 

Why resort to something as complex as SDR to handle what would seem to be a relatively straightforward problem? “The answer has less to do with technology than with FCC regulatory domains,” Adams explained. “If you look at the 217-222 MHz band (Figure 1), which is where much of the industry has decided it’s going to put their PTC systems, it’s under four different parts of the FCC [regulations]: it’s under Part 80, which is Maritime Mobile; it’s under Part 90, which is Business/Industrial; it’s under Part 95, which is Citizens Band; and it’s under Part 97, which is the Amateur Radio Service. You can’t operate under the Amateur Radio Service, but you can operate under the other three parts.”


Figure 1: The U.S. radio spectrum from 217-222 MHz is multi-layered.

 

“The challenge,” continued Adams, “is even if you build a radio that’s flexible in frequency, it still needs to meet the special requirements of whichever part in which it’s operating. But in those parts they don’t specify modulation type, data rates, coding, or other things. So having a fixed radio means you can only service one segment of a pretty small market. But a software defined radio enables you to throw a virtual switch and suddenly you’re completely compliant with Part 80 and are at 16 kbps; or you’re completely compliant with Part 90 and you’re at 9600 baud. That’s why we took the SDR approach.”
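The regulatory-profile switching Adams describes can be illustrated with a short sketch. This is not Lilee’s actual firmware: the Part 80 and Part 90 data rates come from the quote above, while the modulation entries are hypothetical placeholders, since the FCC parts leave modulation unspecified.

```python
# Illustrative sketch of SDR regulatory-profile selection (hypothetical API).
# Data rates are taken from the quote above; modulation names are assumed.
FCC_PROFILES = {
    "part80": {"service": "Maritime Mobile", "data_rate_bps": 16000, "modulation": "FSK"},
    "part90": {"service": "Business/Industrial", "data_rate_bps": 9600, "modulation": "FSK"},
}

def configure_radio(part: str) -> dict:
    """Return the radio configuration for a given FCC regulatory part.

    A fixed-function radio would need different hardware per part; an SDR
    just loads a new profile -- the "virtual switch" Adams mentions.
    """
    if part not in FCC_PROFILES:
        raise ValueError(f"no compliant profile for {part}")
    return FCC_PROFILES[part]

print(configure_radio("part90")["data_rate_bps"])  # 9600
```

The point of the design is that compliance becomes a software state rather than a hardware variant, so one radio can serve every addressable segment of the market.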

 

Intel Inside (and Outside) the Train

How do Lilee’s solutions leverage Intel technology? “If you look inside our Lilee Mobility Controllers—that go in the back office—or our Wayside Messaging Servers you’ll find an x86 Intel processor that’s running the whole application space. It’s a very robust architecture, and we chose it because it’s so well supported by operating systems and by the customer base. You need to look at the cost of maintaining a platform, and we felt that the Intel architecture really does help to mitigate the unknowns.”


Figure 2: Lilee's LMC-5500 Mobility Controllers provide the backbone for an integrated PTC system.

 

Lilee’s LMC-5500 Series Mobility Controllers (Figure 2) provide radio device management with roaming control and act as a conduit between the remote network and the back office servers. LMC-series controllers are built around the Intel® Core™2 Quad processor Q9400 (6M cache, 2.66 GHz, 1333 MHz FSB) and the Intel® 3210 chipset with the 82801IB I/O Controller Hub (ICH9). SDR radios within the network establish tunnels with the LMC-5000 so that mobile radios can move across different segments of the network without having to be aware of the underlying network topology changes.
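The mobile-IP abstraction behind those tunnels can be sketched as follows. This is a simplified illustration, not Lilee’s software: the class and method names are invented, but the idea matches the description above, in which the controller maps each device’s fixed IP address to its current tunnel endpoint so the back office never needs to know the network topology.

```python
# Sketch (hypothetical API) of a mobility controller's tunnel registry:
# fixed device IP -> current network-segment endpoint.
class MobilityController:
    def __init__(self) -> None:
        self._tunnels: dict[str, str] = {}

    def register(self, device_ip: str, segment_endpoint: str) -> None:
        """Called when a radio (re)establishes its tunnel after roaming."""
        self._tunnels[device_ip] = segment_endpoint

    def route(self, device_ip: str) -> str:
        """Resolve where to forward back-office traffic for a fixed IP."""
        return self._tunnels[device_ip]

lmc = MobilityController()
lmc.register("10.1.0.7", "wayside-segment-A")
lmc.register("10.1.0.7", "wayside-segment-B")  # train roams; fixed IP unchanged
print(lmc.route("10.1.0.7"))  # wayside-segment-B
```

Because the device keeps its fixed IP across segments, asset management reduces to addressing that IP, exactly as Adams describes.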

 

Lilee’s Intel® Atom™-based WMS-2000 Connectivity and Application Controllers give the back office visibility into wayside status and alarm messages, providing an interoperable gateway between PTC and legacy train control systems.

 

One rail system that has completely committed to Lilee’s approach to PTC is Southern California’s Metrolink. “Metrolink is the commuter heavy rail link in Southern California with 219 miles of right of way with over 200 wayside locations for signals and switches where they need to talk to a train,” concluded Adams. “Lilee’s WMS-2000 messaging server is in every one of those. These units manage all the communications from the back office network to the train. In the back office Lilee LMC-5000 mobility controllers extract the IP address so the back office can always send a message to any particular device throughout their entire system.”

 

While engineers will continue to drive Metrolink’s trains, PTC backup systems are now in place to ensure that temporary distractions never again lead to disastrous consequences.

 


Lilee Systems is a general member of the Intel® Intelligent Systems Alliance. Lilee Systems is dedicated to delivering the highest quality, most reliable products and solutions for mobile connectivity across multiple market areas including railway.


John Donovan
Roving Reporter (Intel® contractor), Intel® Intelligent Systems Alliance
Editor/Publisher,
Low-Power Design
Follow me on Twitter: @jdonovan43

By Franz Walkembach, Senior Product Manager, Wind River

 

The GENIVI Alliance has recently released its latest compliance specification, GENIVI 3.0, and Wind River Platform for Infotainment was among the first products to achieve it. Why does this matter? It’s all about trust, scalability and quality.

 

GENIVI®, AUTOSAR, the Car Connectivity Consortium…these are all groups created to set standards across the auto industry. A long-time expert in this domain, Wind River is a founding member of the GENIVI Alliance, keeping pace with the latest in-vehicle infotainment (IVI) developments and even helping to drive some of the changes coming from the Alliance. Wind River participates on the GENIVI board, the marketing council and several expert groups. Wind River’s engineers have led initiatives such as including Android inside a GENIVI design, and they currently lead the BIT (base integration team).

 

As the BIT lead, Wind River plans and manages the ongoing team releases (every six weeks!) and organizes and coordinates the BIT projects (e.g. baseline automated testing). In addition, as the Meta-IVI maintainer, Wind River releases and maintains a GENIVI baseline built on the Yocto Project (also every six weeks). This baseline has to align with other GENIVI goals, such as the IP policy, and with the needs of various contributors, from silicon vendors to users of the baselines.

 

GENIVI has been working on a compliance program that further helps align the industry and simplify automotive software requirements. In October, the Alliance updated its compliance program, introducing the latest specifications for GENIVI Compliance 3.0. In the automotive world, where everybody knows everybody and quality is of critical importance, compliance is increasingly becoming an effective way for a vendor to show that it is:

A) Well connected within the ecosystem,

B) Trusted, with a solid reputation,

C) Up to date and driving innovative change,

D) Offering software that meets the highest quality standards and latest requirements and is scalable across different platforms.

 

There are 19 parties registered in the compliance program so far. While most have achieved compliance with earlier GENIVI specifications (versions 1.0 or 2.0), just a few have achieved 3.0 compliance. With every new compliance specification release, a variety of new requirements are introduced for the different stacks inside an IVI platform, such as CE connectivity, the HMI application framework, system infrastructure, networking and so on.

 

Wind River Platform for Infotainment has now achieved GENIVI 3.0 compliance. However, compliance is just one part of the story. Our platform also comes with a wealth of other features, such as:

  • A Linux kernel (based on Yocto Meta-IVI)
  • Optional solution accelerators, such as an iPod software solution
  • Downstream support
  • An ADK (application development kit) for user-space development, which helps teams build applications and product demos in a couple of weeks
  • A PDK (platform development kit) for kernel-space development and software platform configuration
  • Patches that improve the quality of the open source components

 

Additional silicon vendor BSP compliance statements will follow soon.

Wind River Platform for Infotainment is a Linux-based, GENIVI-compliant run-time platform specifically tuned for the IVI market. It can be used as a fast track for demo development or as a solid core for platform development. The platform is designed, packaged and tested to support a full set of automotive features.

 

The GENIVI compliance program was launched in August 2011, and it has been gaining serious momentum ever since. And of course, Wind River will continue to track closely with the compliance program at every step!

 

For additional information from Wind River, visit us on Facebook.

As we all know, the Internet isn’t just for people anymore. But that doesn’t mean that bringing intelligent systems onto the Internet of Things and getting them to work well together is an easy task. Fortunately, a new article in Boards & Solutions magazine shows us it doesn’t have to be rocket science. You just have to look for solutions based on the new Intel® Intelligent Systems Framework.

 

The article’s author is Jim Robinson, General Manager, Marketing and Business Operations, Intel® Intelligent Systems Group. In the article, he explains how this new framework provides a blueprint for hardware, operating systems, and tools that simplifies the delivery of interoperable solutions. He talks about the players behind this new specification and what they’re doing to support it. He shows how the specification enables scalability and cutting-edge connectivity, manageability and security across applications. And he explains the framework’s role in helping support solutions designed to turn big data into business intelligence.

 

Download the article now to learn how you can use this framework to streamline the development of your designs for the Internet of Things.

NFV (Network Function Virtualization) was obviously one of the hottest topics at MWC 2013. Several use cases were demoed by tier-1 platform vendors, telecom equipment manufacturers, service providers and operators to show the benefits of the NFV concept.

 

These use cases can be classified into two categories. The first is the migration of functions traditionally located at the customer premises into the operator’s infrastructure.

 

One example is the concept of a “virtual CPE,” which can be applied to either consumer or enterprise markets. The new architecture relies on a very simple CPE (modem, switch and antenna), with all services running on a central server as virtual network appliances. CAPEX is considerably reduced. Software updates, service configuration, the introduction of new services and maintenance are also greatly simplified, so service providers and operators can significantly reduce their OPEX as well.
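The virtual CPE model can be pictured as a chain of virtual network functions running on the central server, while the physical CPE only bridges packets. The sketch below is purely illustrative (the function names and the packet fields are invented, and real VNFs operate on raw frames, not dictionaries), but it shows why adding a service becomes a server-side software change rather than a CPE replacement.

```python
# Illustrative virtual-CPE service chain (hypothetical, simplified):
# each value-added service is a function applied in sequence on the
# operator's central server.
def firewall(pkt):
    # Drop traffic to a hypothetical blocked port (telnet, 23).
    return None if pkt.get("dst_port") == 23 else pkt

def nat(pkt):
    # Rewrite the private source address to a public one (illustrative value).
    pkt["src"] = "203.0.113.1"
    return pkt

SERVICE_CHAIN = [firewall, nat]  # introducing a new service = appending here

def process(pkt):
    for vnf in SERVICE_CHAIN:
        pkt = vnf(pkt)
        if pkt is None:
            return None  # packet dropped by a VNF
    return pkt

print(process({"src": "192.168.1.10", "dst_port": 80}))
```

Because the chain lives in the operator’s infrastructure, deploying or reconfiguring a service touches one server image instead of thousands of boxes in the field, which is where the OPEX saving comes from.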

 

The cloud RAN (Radio Access Network) concept similarly reduces both CAPEX and OPEX for mobile networks. In the conventional network architecture, each antenna connects to a co-located eNodeB from which traffic is backhauled to the Evolved Packet Core (EPC) over an IP link. With the Cloud RAN approach, however, the eNodeB at the antenna is replaced by a low-cost Remote Radio Head. The digitized RF data is then connected over a fiber link to a centralized Baseband Unit pool in which the baseband processing is performed.
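The cost of that fiber link can be estimated with a back-of-the-envelope calculation using typical LTE/CPRI parameters. The sample width and overhead factors below are the commonly cited values for a 20 MHz LTE carrier, stated here as assumptions rather than figures from this article.

```python
# Rough fronthaul bandwidth for the digitized RF link between a Remote
# Radio Head and a centralized Baseband Unit (typical LTE/CPRI numbers).
sample_rate_hz = 30.72e6    # sampling rate for a 20 MHz LTE channel
sample_bits = 15            # assumed bit width per I and per Q sample
iq = 2                      # I and Q components per sample
control_overhead = 16 / 15  # one CPRI control word per 15 data words
line_coding = 10 / 8        # 8b/10b line coding overhead

rate_bps = sample_rate_hz * iq * sample_bits * control_overhead * line_coding
print(f"{rate_bps / 1e6:.1f} Mbit/s per antenna-carrier")  # 1228.8 Mbit/s
```

Roughly 1.2 Gbit/s per antenna-carrier explains why Cloud RAN fronthaul requires fiber: the digitized RF stream is far larger than the user traffic it carries.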

 

A second category of use cases addresses core network functions. In contrast to the first category, this equipment is already located in the core infrastructure. For instance, virtualizing EPC functions for mobile networks in virtual machines on generic platforms, instead of dedicated equipment, reduces CAPEX. Dynamic allocation and management of network resources bring more flexibility and better usage of those resources, thereby reducing OPEX and increasing ARPU.

 

Will NFV be the first real transformation of the network and telecom business? These use cases seem very attractive for operators, enabling them to increase their margins and deploy differentiating services faster. To make this happen, end users will also need to see real reductions in their Internet expenses.

 

However, all of these concepts have to come back to reality. Virtualizing hundreds of CPEs or complex EPC functions on a single server requires high-performance network bandwidth to be distributed to the virtualized functions. Today, standard virtualized architectures that work perfectly well for computing won’t be able to deliver the required network performance without optimized solutions like our 6WINDGate software. Just consider how many software layers a network packet has to traverse before reaching an application running in a virtual machine.
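To make that layer count concrete, the sketch below lists the generic components of a standard virtualization stack next to a bare-metal path. The layer names describe a typical KVM/virtio setup, not a 6WIND measurement; the exact path varies by hypervisor and configuration.

```python
# Generic packet path comparison (illustrative, typical KVM/virtio stack).
BARE_METAL_PATH = [
    "NIC driver",
    "host kernel network stack",
    "application socket",
]

VIRTUALIZED_PATH = [
    "NIC driver",
    "host kernel network stack",
    "virtual switch (e.g. Open vSwitch)",
    "vhost/virtio backend in the host",
    "virtio driver in the guest",
    "guest kernel network stack",
    "application socket",
]

extra_layers = len(VIRTUALIZED_PATH) - len(BARE_METAL_PATH)
print(f"{extra_layers} extra layers in the virtualized path")  # 4 extra layers
```

Each extra layer can add packet copies and context switches, which is precisely the overhead that optimized data plane software aims to bypass.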
