Embedded computing is moving into the world of “Big Data,” and that means lots of scalable, high performance computing. Radar, signals intelligence (SIGINT), and electronic warfare (EW) are data-intensive applications that need embedded computers able to process vast amounts of data, and to do it quickly. Many applications in defense and aerospace must process their way through this “Big Data.”
Fortunately, the capability of high performance computing continues to grow as technology advances. Processors get faster, memories get larger, and interconnect pipes get bigger. All of this adds up to greater performance in smaller packages, packages that make it easier to embed these platforms in more applications. What once took racks of computers can now be reduced to a single rack, or a system the size of a shoe box or smaller.
High performance embedded computing (HPEC) takes much of its lead from the high performance computing (HPC) technology found in data centers, taking concepts and extending them to the more environmentally demanding conditions in embedded computing applications. Processor off-loading, cluster computing, distributed architectures, and open source software are but a few of the traits of high performance computing that are being passed on to HPEC.
For the military and aerospace embedded markets, additional requirements must be met. Size, weight, and power (SWaP) must be optimized for embedding in smaller systems. Long product life cycles and long-term product support must match the life cycle of the deployed systems. Reliable, deterministic operation is mandatory, and the platforms must be deployable in everything from benign to harsh environments. As a result, boards and systems are made smaller and modified to run in extended thermal ranges and rugged environments.
[Figure: High Performance Computing Requirements. Source: CSPI, Embedded Tech Trends 2013]
Two organizations in particular have driven the efforts to evolve high performance embedded computing platforms. Both started their efforts in industrial computing but have since moved down separate paths to define the alternate platforms commonly used for HPEC.
VITA was the first to organize, with members focused on the development of VMEbus in the early 1980s. VMEbus started out as a solution for industrial platforms, providing a parallel bus architecture that could be scaled to 20 boards in a backplane. As VMEbus became a prime choice for many defense applications that demanded a more rugged solution, the members extended the original VMEbus specification to include options for alternate cooling strategies such as conduction cooling.
PICMG rolled out CompactPCI* in the mid-1990s, implementing the parallel PCI bus in a 6U form factor. Again, the initial focus was on industrial applications, but the communications industry quickly picked up on the architecture and drove it toward the specific needs of that industry, primarily moving large blocks of data to multiple destinations. AdvancedTCA* was soon defined, using a scalable switch fabric approach instead of a parallel bus. The form factor was enlarged to better address the needs of the communications industry.
Members of VITA responded with the VPX* initiative, bringing the scalable switch fabric architecture to the smaller 6U form factor. The primary influence for VPX came from the defense industry.
Both AdvancedTCA and VPX have greatly extended the ability to develop impressive HPEC platforms. The use of serial switch fabrics enables designs ranging from three-module systems to complex mesh systems with point-to-point connections, in configurations suited to specific application processing needs. While data center blade systems are occasionally used in embedded applications, it is AdvancedTCA and VPX that specifically target markets and applications requiring HPEC platforms. VPX offers a smaller packaging option than AdvancedTCA and is specifically designed for military and aerospace applications, including those that need conduction or liquid cooling options.
To give us a better feel for what is possible today, let’s look at what some of the Intel® Intelligent Systems Alliance members are offering.
At Embedded Tech Trends 2013, MEN Mikro Elektronik GmbH presented a multi-cluster HPEC solution using CompactPCI Serial. This specification adds greater support for serial point-to-point fabrics like PCI Express, SATA, Ethernet, and USB in the classic CompactPCI form factor. In the MEN configuration, Ethernet is added to the CompactPCI specification so that traditional star, ring, or mesh configurations (up to eight slots in a standard CompactPCI chassis) can be built. Additional clusters can be added to build very large scalable systems using low-cost modules. MEN builds systems with a single Intel quad-core CPU system slot module and up to seven GPGPU modules directly controlled via PCIe x8 or PCIe x1.
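The practical difference between those topologies is how the number of backplane links grows with slot count. A small sketch (the function is illustrative, not a MEN API) of the standard graph-theory link counts for an eight-slot chassis:

```python
# Point-to-point link counts for the topologies mentioned above,
# where each link is one serial connection routed through the backplane.

def links(topology, n):
    """Number of point-to-point links needed to connect n slots."""
    if topology == "star":
        return n - 1             # system slot wired to each peripheral slot
    if topology == "ring":
        return n                 # each slot wired to its two neighbors
    if topology == "full mesh":
        return n * (n - 1) // 2  # every slot wired to every other slot
    raise ValueError(f"unknown topology: {topology}")

for t in ("star", "ring", "full mesh"):
    print(f"{t:9s}: {links(t, 8):2d} links")  # eight-slot chassis
```

The full mesh's 28 links buy any-to-any bandwidth at the cost of backplane routing density, which is why star configurations with a single system slot remain the common starting point.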
At the same forum, Mercury Systems discussed the need for advanced fabric innovations to maximize the performance of HPEC systems. Their concern is that existing choices such as PCI Express and 10G Ethernet are running out of steam, and that we should be preparing to implement 40G Ethernet and InfiniBand. Mercury Systems has been building HPEC platforms for many years, and their extensive experience with the interconnect implementations of these platforms makes them especially knowledgeable about what it takes to build them. They are not as concerned about computing as they are about how to handle the huge amounts of I/O data collected by HPEC platforms.
Mercury Systems has been researching what it will take to get to the next generation of infrastructure for HPEC platforms. They feel that advances in connector and backplane technology will be needed to get beyond 8 Gbaud transfer rates; in their words, “The connector is key to unlocking speed.”
Mercury Systems uses Serial RapidIO and 10G Ethernet for most data plane transactions today, but they are quickly developing InfiniBand and 40GigE solutions for tomorrow. The technology that most intrigues them is InfiniBand, which is used extensively in HPC platforms. Advances in connector, backplane, and cooling technologies will kick VPX into the next gear, allowing them to develop massive many-core HPEC engines optimized for SWaP.
GE Intelligent Platforms puts such an emphasis on HPEC that they formed an HPEC Center of Excellence. Through this center, GE Intelligent Platforms can engage early with customers to help with architecture definition, develop application-ready systems, and then provide post-sales application support and services, all with an emphasis on the needs of HPEC applications.
GE Intelligent Platforms discussed the impact of new architectures on SWaP through the example of a radar processing system they recently developed. The original system, based on VMEbus, was 4 cubic feet in size, weighed 105 pounds, and consumed 2000W of power. The newest system, based on 3U VPX, is 0.8 cubic feet in size, weighs 10 pounds, and consumes 120W of power. The original system delivered 576 GFLOPS peak, while the new system delivers 770 GFLOPS peak with a combination of 3rd generation Intel® Core™ i7 processors and GPGPUs.
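Those numbers are easiest to compare as compute density. A quick back-of-the-envelope calculation from the figures GE cited:

```python
# Compute density comparison for GE's VMEbus-based radar processor
# and its 3U VPX replacement, using the peak GFLOPS and power
# figures quoted above.

def gflops_per_watt(gflops, watts):
    """Peak compute density in GFLOPS per watt."""
    return gflops / watts

old = gflops_per_watt(576, 2000)  # VMEbus system: 576 GFLOPS, 2000 W
new = gflops_per_watt(770, 120)   # 3U VPX system: 770 GFLOPS, 120 W

print(f"VMEbus: {old:.2f} GFLOPS/W")
print(f"3U VPX: {new:.2f} GFLOPS/W")
print(f"Improvement: {new / old:.0f}x")
```

Roughly a 22x jump in GFLOPS per watt, alongside the five-fold reductions in volume and weight.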
I recently discussed the Kontron HPEC strategy in Roving Reporter: Kontron HPEC Proof-of-Concept delivers breakthrough I/O data bandwidth. What sets the Kontron platform apart is its use of PCI Express 3.0 to deliver 40 Gigabit Ethernet (GbE)-equivalent data transfer rates using common TCP/IP protocols. This combination of PCIe 3.0 and TCP/IP is achieved through Kontron’s VXFabric* middleware, which implements the TCP/IP protocol over the PCIe infrastructure to boost transmission bandwidth to nearly 40GbE speeds. The proof-of-concept platform is based on VPX, which is known for its high performance, harsh environment capability, and small size.
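The appeal of exposing a PCIe fabric through TCP/IP is that application code needs no fabric-specific API at all. A minimal sketch of what such an application looks like, using only ordinary sockets (the loopback interface and port assignment here stand in for the fabric-assigned addresses a real system would use; this is generic socket code, not a VXFabric sample):

```python
# One board receives a block of sensor data from another over plain
# TCP sockets; middleware like VXFabric would carry the same traffic
# over PCIe transparently. Loopback stands in for the fabric link.
import socket
import threading

# "Processing node": listen on an OS-assigned loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def sensor_node():
    """'Sensor board': stream a 1 KiB block of samples to the peer."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(b"\x01\x02\x03\x04" * 256)

t = threading.Thread(target=sensor_node)
t.start()

peer, _ = server.accept()
received = b""
while len(received) < 1024:
    chunk = peer.recv(4096)
    if not chunk:
        break
    received += chunk
peer.close()
server.close()
t.join()

print(f"received {len(received)} bytes")
```

Because the interface is standard TCP/IP, the same code runs unchanged whether the bytes travel over Ethernet or over a PCIe backplane; the middleware only changes what is underneath the socket.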
Concurrent Technologies advocates a similar approach. The Concurrent Technologies Fabric Interconnect Networking Software (FIN-S) provides a rich software ecosystem that lets applications on multiple processor boards communicate efficiently with each other over supported interconnect fabrics, enabling the development of HPEC systems. FIN-S provides a variety of communication mechanisms to address the needs of different embedded applications. Support for the widely accepted IP socket interface enables portability and ease of use at the application level, while a high performance, low latency message-based library enables direct, zero-copy application-level communication between boards. This provides the additional benefit of the same application programming interfaces across multiple operating systems and multiple supported fabrics, so high performance computing applications can easily be implemented in the embedded domain. FIN-S supports various system architectures, including VPX, VXS, and AMC, along with their associated interconnect fabrics such as PCI Express, Serial RapidIO, and 10 Gigabit Ethernet. It is available on a range of Concurrent Technologies boards utilizing Intel processors in commercial, industrial, ruggedized air-cooled, and ruggedized conduction-cooled operating environments.
Curtiss-Wright Controls Defense Solutions has published a must-read technology white paper, “Understanding HPEC Computing: The Ten Axioms.” An HPEC system must meet the criteria outlined in the Ten Axioms to satisfy the requirements of today’s most demanding defense and aerospace rugged embedded applications. To help system designers better understand what to look for in an HPEC system, and what to avoid in less capable alternatives, the paper discusses the Ten Axioms, organized by the industry (HPC, embedded systems, and sensor) from which each is derived, and explains how each of these essential elements relates to and meets the unique requirements of these high performance processing solutions.
[Figure: HPEC vs non-HPEC. Courtesy of Curtiss-Wright Controls Defense Solutions]
Themis Computer is a longtime provider of high performance computing to the embedded community, and in particular the defense and aerospace markets. They have a strong background in server technology designed to withstand the extreme operating environments of many defense applications. Themis Computer has been defining and prototyping a new form factor called the NanoPAK Small Form Factor Computer, based on VITA 74. While targeted at smaller embedded systems, this small package has some big capabilities. Leveraging work done with serial switched fabrics, meshed computers such as those using VPX can be built on a smaller scale. With the right processor payload, these NanoPAKs occupy the low end of the HPEC processing range.
HPEC is a rapidly growing market opportunity, with no end in sight for the amount of performance needed for embedded computing in this class. When searching for HPEC platforms, there is no shortage of options. All of these Alliance members focus on supporting the HPEC needs of defense and aerospace applications. You can be sure that all of them will have new products to announce with the roll-out of the 4th generation Intel® Core™ processors, which will add a much appreciated performance boost to process the “Big Data” encountered in embedded computing.
Solutions in this blog:
- Kontron HPEC Platform based on 3U VPX Single Board Computer VX3044
- GE Intelligent Platforms CUDA6USK1 Starter Kit
- Concurrent Technologies 6U VPX SBC VR 737/08x
- MEN Mikro Elektronik GmbH G22 - 3U CompactPCI® Serial Intel® Core™ i7 CPU Board
- Curtiss-Wright Controls Defense Solutions - Understanding HPEC Computing: The Ten Axioms
- Mercury Systems Ensemble® Series 4th Generation Intel® Core™ i7 HDS6502 Module
- Themis Computer NanoPAK