2013

On the surface, a digital signage system seems simple. You have an embedded computer designed to be a media player, a monitor to display images and video, and software to enable users to create and manage their digital content. What’s difficult is when you try to design a digital signage network with tens, hundreds or thousands of these systems and attempt to connect them to the back end or cloud for data collection and management. It’s so complicated that developers and system integrators end up doing a lot of the heavy lifting in figuring out how to find interoperable components, connect them, and make the entire signage network manageable and secure.

 

This much work doesn’t make much sense for such sought-after technology as digital signage systems. Why should everybody have to invent their own wheel when so many retailers, airports, museums, and other organizations are purchasing these systems?

 

This is the problem that Intel and many Intel® Intelligent Systems Alliance members decided to solve when they came up with the Intel® Intelligent Systems Framework that I discussed in a previous blog. Now one of these members, Arrow Electronics, a global distributor of electronic components and computer products, has taken this all a step further and is offering a comprehensive framework-ready ingredient store for edge-to-enterprise digital signage systems based on Alliance offerings.

 

Headquartered in Melville, New York, Arrow serves as a supply channel partner for more than 600 suppliers and 200,000 original equipment manufacturers and commercial customers through more than 225 sales facilities and 23 distribution centers in 39 countries. It has the scale, product diversity and knowledge base to be a powerful ally in making a big play in the digital signage space.

 

Recently, Arrow was part of a webinar with Intel and Kontron entitled “A Framework for the Internet of Things,” where Arrow talked about their Intel Intelligent System Framework stack and its application in the digital signage space. By collecting all the components in one place and ensuring all the pieces fit and work together, Arrow saves integrators from having to do work at a micro level. This means more time to focus on tailoring the end application to the customer to ensure the right technology fit, solution, and ability to scale.

 

Arrow carries edge-to-cloud ingredients based on Alliance offerings that include interconnects, SBCs, box-level products, aggregation gateways, operating systems, connectivity, cloud services, and displays (see Figure 1). In addition, Arrow offers extensive technical and value-added services to help reduce time to market, lower total cost of ownership, and enhance the overall competitiveness of a solution. This includes help with system architecture, operational efficiency, standard procedures, data collection and analysis, data revenue models, ROI, and other needs a developer, integrator, or end customer might have.

 


Figure 1. Arrow makes assembling digital signage systems from COTS components easy by basing solutions on Intel® Intelligent System Framework products and then providing value-add services covering selection, installation and integration.

 

Through framework-ready Alliance solutions, Arrow provides a common base of connected, managed, and secure capabilities that simplify the Internet of Things transformation across the digital signage value chain (see Figure 2). Whether your concerns are the fine points of connectivity, security, or manageability, Alliance members providing framework-ready solutions have already done the homework.

 


Figure 2. Arrow’s Intel® Intelligent System Framework Stack gives developers a fast track for assembling digital signage solutions that connect back to the enterprise or cloud.

 

 

Taking this a step further, Arrow brings together the necessary component groups—sensors, embedded intelligence for collecting sensor information, secure wired and wireless connectivity, operating systems, middleware that bridges the system onto the IP network, display technology, content creation software, and more. This means if you’re wondering something like how to add cellular connectivity for remote manageability, it’s already been worked out and is ready to go. Instead of having to focus on such basics, developers and system integrators can work on customization, such as applications needed by a client and the scripts for connecting to the enterprise.

 

This kind of one-stop shopping for matched ingredients using COTS components really shortens the design window. And at the end of the day, that’s what it’s all about—getting your solution in place quickly, efficiently, with minimum design time, so you can move on to the next project. The example Arrow gives in their presentation is the system integrator who came to them on a short deadline and wanted to assemble a fully designed intelligent digital signage solution to display at a trade show. The solution designed by Arrow using framework-ready components met the deadline and included:

 

  • Multiple options for hardware and other components through the Alliance
  • Bundled cloud and data services provided with the hardware
  • Multiple new SKUs supported in manufacturing
  • Ability to achieve fast time to revenue with minimal design risk

 

One thing not mentioned in the Arrow presentation is that the use of framework-ready media players and boards ensures the ability to deliver a key requirement for most large digital signage systems today: viewership metrics and targeted content. Such players and boards can run the Intel® Audience Impression Metrics Suite (Intel® AIM Suite), an anonymous viewer analytics (AVA) software product. Easily deployed in digital signage systems, Intel AIM Suite installs on nearly any device based on Intel® processors and utilizes off-the-shelf consumer web camera sensors and cloud-based reporting. By providing the ability to recognize viewer gender and approximate age bracket, Intel AIM Suite enables delivery of viewer-targeted content and can collect viewership data such as dwell time and user interactions to report back on advertising effectiveness.
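Intel AIM Suite’s actual APIs aren’t covered here, but the metrics involved are easy to picture. The following is a minimal, hypothetical sketch of computing dwell time and impressions from timestamped face-detection events; the event format and thresholds are invented for illustration only:

```python
from dataclasses import dataclass

# Hypothetical detection events as a camera pipeline might emit them:
# one record per anonymous viewer whose face entered the camera's view.
@dataclass
class Detection:
    viewer_id: int
    first_seen_s: float
    last_seen_s: float

def dwell_time(d: Detection) -> float:
    """Seconds a viewer's face stayed within the camera's view."""
    return d.last_seen_s - d.first_seen_s

def impressions(events, min_dwell_s=1.0):
    """Count detections long enough to qualify as an ad impression."""
    return sum(1 for d in events if dwell_time(d) >= min_dwell_s)

events = [Detection(1, 0.0, 4.5), Detection(2, 2.0, 2.4), Detection(3, 3.0, 9.0)]
print(impressions(events))                  # viewers 1 and 3 qualify -> 2
print(max(dwell_time(d) for d in events))   # longest dwell -> 6.0
```

A real AVA product would add age-bracket and gender classification on top of this, but the reporting side reduces to aggregations of exactly this kind.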

 

What excites me about what Arrow has done is how it has built an entire store around the Intel Intelligent System Framework and what it can do for making digital signage systems something you can put together from off-the-shelf components. This is definitely taking the Internet of Things a full step further. No longer do you have to do deep thinking on the things themselves. Instead, comprehensive solutions with all the pieces are available. If this is the way all systems are going, then sometime in the future, instead of the Internet of Things, we’ll be talking about the Internet of Solutions.

 

Learn More

Solutions in this blog:

·      Arrow Integration Services

 

Related Topics

·      Interoperability - Top Picks (blogs, white papers, and more)

·      Retail - Top Picks (blogs, white papers, and more)

·      Digital Signage - Top Picks (blogs, white papers, and more)

 

Kontron is a Premier member of the Intel® Intelligent Systems Alliance. McAfee, Microsoft and Wind River are Associate members of the Alliance. Arrow is a General member of the Alliance.

 

Mark Scantlebury

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Associate Editor, Embedded Innovator magazine

In a recent post, we summarized a number of announcements that 6WIND recently made involving key industry partners. We also announced that 6WIND will lead other major industry initiatives to bring compelling solutions that accelerate networking and telecom transformation.

 

6WIND is proud to announce the establishment of the dpdk.org open-source project to accelerate development of Intel® Data Plane Development Kit (Intel® DPDK) architecture-based networking and telecoms products.

 

In telecom, enterprise and cloud applications, the Intel® DPDK, which is a set of optimized libraries and drivers provided to OEMs under a BSD licensing model, is an increasingly viable foundation for packet processing. However, the development of a complete networking and telecom application requires strong expertise.

 


As a major contributor to the Intel® DPDK, 6WIND has developed valuable enhancements to the software, including crypto acceleration via AES-NI, virtualization support, and support for the Mellanox 40GbE ConnectX®-3 EN series and Cavium Nitrox® processors.

 

6WIND also developed strong expertise in leveraging the Intel® DPDK as a result of optimizing its 6WINDGate™ data plane solution (comprising virtual switch acceleration, IP forwarding, virtual routing, IPsec, firewall, Large Scale NAT, TCP termination etc.) to run at maximum performance on the Intel® DPDK platform. 6WIND also provides professional services and technical support to developers. As a result, 6WIND enables use of Intel® DPDK in Virtual Network Appliances, Software-Defined Networks and Network Function Virtualization.

 

At dpdk.org, engineers can find libraries, drivers, documentation, development mailing list, git repository and support relevant to high-performance packet processing using the Intel® DPDK software.

 

The dpdk.org open-source project is active now and openly available to any developers using the Intel® DPDK. It will provide an openly-accessible repository for released versions, while encouraging innovative collaboration and enabling participating companies to offer specialized support.

By Elizabeth Press, Global Business Intelligence Lead for Telecom, Media & Entertainment and Web

 

Mobile operators are under pressure to prove that their services are superior to those of over-the-top (OTT) and other non-traditional challenger service providers. Many operators, especially in Western Europe, are investing in rich communications suites (RCS) as a means of positively differentiating their services through sticky content and evolved communications abilities such as enhanced instant messaging, video calling, document sharing and social media connectivity.

Application modernization will be critical for technology vendors and mobile operators as they adopt RCS. Many mobile operators have grown through mergers and acquisitions, creating hundreds of duplicate applications on various infrastructures. Rationalization and modernization of legacy applications will be important for deploying RCS efficiently and cost effectively. Mobile operators have been offering RCS in the form of enterprise software packages, as well as downloadable applications. Moreover, applications supporting RCS run both in cloud-hosted environments and on-premise. A solid application modernization strategy will need to take all of these considerations into account.

How RCS will create a competitive advantage for mobile operators:

  • As LTE networks get rolled out, operators will look to recoup their investments by targeting additional value-added services and niche products to a growing base of LTE end users. Voice over LTE (VoLTE) will become an important offering. Mobile video services will become the hook, attracting users and enabling operators to compete with challenger voice over IP (VoIP) providers.
  • The first wave of operators, mostly in Western Europe, has launched RCS services and is bundling them into its full service offerings. RCS strengthens operators’ platforms and allows them to compete as premium service providers.
  • The ability to integrate value-added services into a strong communications platform is necessary for satisfying next-generation user demand. End users expect seamless interactive abilities and a broad choice of applications. Application Programming Interfaces (APIs) will be made available to third-party vendors, enabling them to offer their services to end users through the RCS platform and meet growing expectations.

Interested in learning more about Dell Application Modernization Services? Please click here to contact our team.

Follow me on Twitter at @ElizabethAtDell.

By Marcel Moelands, OEM Enterprise Lead EMEA, Dell OEM Solutions

When I read about the Telecommunications market, I notice a number of trends reshaping the industry.

For instance, LTE (Long Term Evolution, or 4th generation networks) is setting a new standard in transporting voice calls. Mobile phone users are accessing rich content, which in turn creates a balancing act between OTT (over-the-top) traffic and players on one side and carriers looking to monetize OTT on the other.

The proliferation of smart devices has led to a new ecosystem of machine-to-machine platforms and applications, which contributes to data growth explosion.

Evolution in Telecommunications happens faster with each generation.  The cool factor of fashionable features, generational performance leaps in handset equipment, increased bandwidth and ease of ‘getting connected’ all contribute to the speed of evolution.

This rapid evolution is unlike traditional network functions such as SMS relays, content delivery, and support functions like OSS (Operations Support) and BSS (Business Support), which serve individual functions and aren’t scalable.

As a result, the Telecom Cloud is increasingly appealing. It allows one shared pool of resources capable of running ‘softwarized’ network and support functions while meeting the service level agreements (SLA) of the Telecom network.

It’s like the difference between a buffet restaurant and having a personal gourmet chef with a fridge full of ingredients. The buffet restaurant offers you what has been prepared for the day — no more, no less. The chef can make you whatever you want — in specific portions — whenever you like (and more efficiently).

Characteristics

The “Cloud” remains a word with many definitions and implied uses.  There are many different cloud deployment models and, it can be argued, not all are really leveraging the cloud.

True cloud deployments should meet several criteria. I would describe the ideal cloud as controlled, scalable, flexible, pay-per-use resources offering a fully transparent platform for virtual machine and software deployments.

Key players in the Telecom market envisioned this a long time ago and have been vocal on such cloud-oriented topics as the socialization of communities through technology adoption, the future growth of networked devices and the introduction of the Telecom cloud infrastructure.

Dell

Dell’s cloud offering is very broad, consisting of Public Cloud Services, on- and off-premise Private Cloud Solutions and Services, Converged Infrastructure Solutions, Cloud Infrastructure Design, Software, Consultancy, Security, and Integration platforms (Boomi).

These are supported by a range of products, engagement practices, reference designs and programs to offer our customers the best experience in their paths to transformation. The common philosophy amongst the offerings is the Open Standards based approach to deliver capable and affordable solutions that suit our customers’ needs.

Open Source and OpenStack

Back to looking at the Telecommunications market, especially the equipment providers in this market.

I foresee a large-scale adoption of open source components on standards-based hardware to offer Software-as-a-Service solutions and network components as well as turnkey Infrastructure-as-a-Service solutions. Carriers will eventually leverage existing network, business and operations-support assets, allowing them to further enhance offerings.

The choice of leveraging open source platforms makes a lot of sense, as it allows flexibility without affecting the ubiquitous nature of a true cloud. OpenStack is a great example of this. OpenStack is on its 6th release and offers all of the building blocks required for the Telecom cloud, especially when taking the ecosystem of value-add software vendors into account. OpenStack is already widely used in test plans and conceptual designs, even though it still requires teams of skilled software engineers to create the environments.

Cloud Hardware

Making the right choices when selecting hardware is important as hardware will greatly influence operational expenses. Dell has been a very active player in datacenter solutions for years. More specifically, Dell leads in market share in the hyperscale server category (according to IDC).

With the experience gained in designing datacenter and high-performance servers, Dell started designing and delivering server products specifically-featured for Cloud deployments with PowerEdge C products. Additional components, such as Force10 and PowerConnect Networking complete the offering for end-to-end, turnkey cloud infrastructure solutions.

Dedication

Alongside the development of cloud server products, a dedicated team was assembled with the sole purpose of developing enablers for OpenStack design and deployment like reference designs, integration of Cloud based Block Storage, Software Defined Networking and Big Data Solutions based on Hadoop. Today, this group of experts is represented in the OpenStack Foundation board of Directors and has submitted components to the OpenStack Community such as the Crowbar software framework.

Rob Hirschfeld offers a non-technical overview of how OpenStack offers a unique combination of features for building hyperscale clouds.

Application overhaul

For OEM customers in the process of moving to the cloud, there are some unique considerations for a successful deployment. From designing for elasticity and scalability to ensuring fault-tolerance, license and infrastructure management, security,  multi-tenant readiness and more, Dell’s OEM Solutions team works closely with the broader Dell teams to deliver revenue-generating end-to-end cloud solutions.

Have you initiated any cloud deployments in your organization? If not, is it something you’re considering over the next 12-18 months?

High performance embedded computing (HPEC) has made tremendous leaps in performance and capability, with new implementations adding to the advancements. Processors like the 3rd generation Intel® Core™ i7 are making HPEC a powerful and cost-effective solution, eliminating the compute performance bottleneck of previous generations.  However, a major challenge for HPEC platforms remains, “How do I get more high-bandwidth data to the processors?” Applications such as radar, sonar, and other signal processing systems generate gigabytes of data that need high GFLOP computational power.

 


 


This week Kontron took a major step towards solving this problem with its HPEC proof-of-concept (PoC) platform, which delivers 40 Gigabit Ethernet (GbE) equivalent data transfer rates via PCI Express 3.0. The platform delivers breakthrough I/O data bandwidth in a small footprint for next-generation embedded radar/sonar applications. The PoC platform is based on VPX, which is known for its high performance, harsh environment capability, and small size. Designed as a complete system, the Kontron HPEC PoC integrates their 3U VPX SBC VX3044, the VPX PCIe Switch VX3905 and the high-end L2/L3 Ethernet switch VX3910 into one platform.


The Kontron PoC is unusual for its use of PCIe instead of 10 GbE, a popular serial fabric option for high performance computing platforms. Ethernet has the benefit of widespread adoption and excellent software support for the TCP/IP protocol. However, some applications require the even higher throughput available through serial fabrics like PCIe 3.0, and such solutions have historically been hindered by their programming challenges and more limited support for communication protocols.

 

What sets the Kontron platform apart is the use of PCIe 3.0 to deliver 40GbE-equivalent data transfer rates using common TCP/IP protocols. The combination of PCIe 3.0 and TCP/IP is achieved through Kontron’s VXFabric* middleware, which implements the TCP/IP protocol over the PCIe infrastructure to boost transmission bandwidth to nearly 40 GbE speeds. This allows the I/O data bandwidth to match up nicely with the capabilities of 3rd generation Intel Core i7 processors on the VPX blades while running a well-established transfer protocol, minimizing software impact during system development, and enabling the quick transfer of legacy applications to the new platform with little or no modification. This technology makes it easier to fully utilize the processing potential of the 3rd generation Intel Core i7. Routing of the PCIe fabric is provided by the Kontron VX3905, one of the industry’s first PCIe 3.0 VPX switches. This switch roughly doubles the per-lane throughput compared to PCIe 2.0, providing a major performance boost.


 

With VXFabric, the use of standard communication protocols, TCP/IP or UDP/IP, based on the socket API protects the application software investment. Legacy software can operate unchanged today, and new software based on TCP/IP is assured of support for years to come. OEMs and developers can enjoy an optimized total cost of ownership (TCO) and have a direct migration path from the applications they deploy today. VXFabric addresses fast, low-latency peer-to-peer communication between compute nodes within a chassis, and can deliver up to 4.2 gigabytes per second (GB/s) of data throughput (over PCI Express) between VPX boards in a rack.
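Because VXFabric sits beneath the standard socket API, application code written for Ethernet needs no changes. As a minimal sketch, the ordinary TCP client/server below (plain Python over loopback, standing in for two VPX blades) is exactly the kind of code that could migrate unmodified; only the peer address would differ on a VXFabric system:

```python
import socket
import threading

# A one-shot echo server: accept a connection, echo one message back.
def echo_once(srv):
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_once, args=(srv,)).start()

# The "remote blade" side: standard sockets, nothing fabric-specific.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"radar frame 0001")
    reply = c.recv(1024)
srv.close()
print(reply.decode())  # radar frame 0001
```

The point of the sketch is the absence of anything special: no fabric-aware calls appear, which is what makes the legacy-migration claim credible.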

 

In addition to the high-performance interconnection between blades, there is the tremendous processing power of the 3rd generation Intel Core i7-3612QE processor (4M cache, 4 execution cores, 8 threads) with its integrated graphics core, the Intel® HD Graphics 4000. The Intel HD Graphics 4000 provides 16 graphics execution units (EUs) that produce a noticeable improvement in 3D performance of as much as 2x. Built on the 22nm process, this combination keeps power consumption low, and the integration enables smaller packages, helping to increase the blade functional density that is so important to HPEC.

 

HPEC applications require high GFLOP performance. This is achieved with the Core/GPU combination of the 3rd generation Intel Core i7-3612QE processor and Intel HD Graphics 4000. This low voltage processor has a very favorable GFLOPS/watt ratio with the Intel® Advanced Vector Extensions (Intel® AVX) technology.

 

Graphics computing offers unprecedented application performance by offloading compute-intensive portions of the application to the Intel HD Graphics 4000 execution units, while the remainder of the code still runs on the CPU cores. The CPU cores are optimized for serial processing, while the graphics EUs are more efficient at parallel processing. Many radar and sonar applications can be broken down into serial and parallel algorithms that take advantage of this combination: the graphics EUs provide a massively parallel subsystem that can run many threads over large data sets, boosting GFLOPS to high performance levels.

 

Looking forward, the performance story will get even better with the 4th generation Intel® Core™ processor family. These chips introduce the Intel® Advanced Vector Extensions (Intel® AVX) 2.0 instruction set, which doubles peak floating-point throughput, enabling a quad-core mobile-class processor to achieve up to 307 GFLOPS at 2.4 GHz. The graphics engine is also upgraded, offering another 352 GFLOPS of raw performance through OpenCL 1.2 programming, more than doubling overall compute potential while adding only a few watts of power consumption.
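That 307 GFLOPS figure can be sanity-checked from AVX2’s throughput: assuming two FMA units per core (as in that processor generation), each core retires 8 single-precision lanes × 2 operations per fused multiply-add × 2 units = 32 FLOPs per cycle:

```python
# Peak single-precision FLOPS for a quad-core AVX2 part at 2.4 GHz.
cores = 4
ghz = 2.4
flops_per_cycle = 8 * 2 * 2   # 8 fp32 lanes x FMA (mul+add) x 2 FMA units
peak_gflops = cores * ghz * flops_per_cycle
print(round(peak_gflops, 1))  # 307.2
```

This is a theoretical peak, of course; sustained GFLOPS in real radar/sonar kernels depend on memory bandwidth and how well the code vectorizes.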

 

The Kontron PoC platform is also notable for integrating a wealth of hardware and software that simplifies development. The PoC platform includes a Linux distribution, diskless node support and parallel workload management software. Also integrated are compilers, optimized FFT benchmark code samples and a stress test application framework for benchmarking. Computer health management is enabled through a chassis management board, system-wide PBIT and power/performance management at the system level.

 


Processor Graphics add GFLOPs to HPEC

 

The goal of the Kontron HPEC PoC is to help developers dramatically streamline the process from design to field deployment of next-generation radar and sonar systems that are expected to make a tremendous jump in processing power and bandwidth.

 

The VPX-based PoC is a flexible design that can be specifically optimized for the most demanding applications. The configurable PCI Express switch fabric interconnect can be routed in ways most appropriate for the data transfer needs of HPEC platforms. Legacy application support with TCP/IP makes the platform even more attractive, leaving it to the imagination of designers to utilize the 10x increase in bandwidth. Based on mainstream IT technology (TCP/IP, PCIe, Intel® processors), the Kontron HPEC PoC is also developed to address the U.S. military’s smart procurement initiatives that put into place more rapid and agile purchasing processes.

 

Related Topics

 

Kontron is a Premier member of the Intel® Intelligent Systems Alliance.

     Contact Kontron>>

 

Jerry Gipper

OpenSystems Media®, by special arrangement with Intel® Intelligent Systems Alliance

Editorial Director, OpenSystems Media, VITA Technologies

By Jeffrey Volkert, OEM Solution Consultant BeNeLux, Dell OEM Solutions

Today’s dynamic healthcare environment relies heavily on the latest technology to increase the speed and accuracy of patient diagnosis and treatment. Physicians are now required to interact with a wide range of end-user devices to access their medical applications. From a Dell Precision workstation with locally hosted applications to Latitude tablets with applications running in the cloud, healthcare professionals may have to wait before using a specific application at an endpoint, delaying medical treatment.

For these physicians it would be most beneficial if they could have an environment where they can access their applications from any device with a single sign-on, reducing time consuming actions during the course of a normal workday.

Dell’s Mobile Clinical Computing (MCC) solution was designed with these issues in mind and addresses many of today’s key hospital challenges:

  • Limiting the amount of time clinicians and nurses spend
    searching for available end points as well as the time spent accessing and then re-accessing applications
  • Providing a single sign-on instance that clinicians and nurses can use to authenticate seamlessly against endpoint
    and multiple applications
  • Providing quick and secure access to patient information on-demand using proximity and/or biometric authentication methods
  • Reducing the time spent by IT staff managing client systems in their environment


In recent trials held in Europe, Dell has proven that the MCC solution can deliver:

  • A productivity gain of more than 215 minutes per user per week (a 9% improvement)
  • Economic value of productivity can be up to £10,000 per user per annum
  • Appropriate Information Security delivered that’s workable for users
  • Improved patient safety, quality of care and patient satisfaction

 

I think you would agree with me that the latter is the most important one.

Dell Mobile Clinical Computing Solution is a desktop virtualization and identity access management solution built on top of the DVS Enterprise 6020 Solution and powered by the latest Dell 12th generation servers. The Dell MCC Solution is offered in both rack and blade server configurations, with an option for storage on Dell EqualLogic or Compellent solutions, which allow for automatic load balancing, tiering, snapshots and replication.

Dell PowerEdge 12th generation servers are designed with more memory capacity and more integrated I/O (input/output) than the previous generation and can provide up to 18 times more Microsoft® SQL Server® transactions per second when using PowerEdge Express Flash Storage solid-state drives (SSDs) (directly connected to the CPU and memory bus).

Healthcare environments require end-user devices to be up and running 24/7, as there is no timeframe on patient care. Dell Wyse thin clients have no moving parts that can break down, such as fans or hard drives. Wyse thin clients also have a lifecycle twice that of a traditional PC, reducing validation and transitioning costs. An added bonus is that with a thin client environment, centralized IT management becomes possible, ensuring optimal system performance.

A solution is never complete without the right support for your environment. Dell offers customers Dell ProSupport for MCC. Dell ProSupport MCC solution helpdesk and field support technicians are highly trained specialists who are knowledgeable about every component of the lab-validated MCC solution configurations, including enterprise hardware and software technology, endpoint devices, desktop virtualization software, and identity access management software. These technicians are certified in VMware vSphere & View, Imprivata, and Wyse technology and receive ongoing training.

To learn more contact our OEM team.

This week, 6WIND announced extensions to the 6WINDGate™ networking software that deliver a 10x performance improvement for virtual switches. These new capabilities address three critical scalability challenges faced by service providers operating public/private cloud data centers as well as telecom infrastructure architected around Network Functions Virtualization (NFV) concepts.

The first scalability challenge relates to the number of Virtual Machines (VMs) per server blade. This VM density is increasing rapidly, leveraging on-going improvements in the performance of the x86 processors used on those blades. Today, a typical server blade in a service provider data center hosts at least 50 VMs, with that number expected to grow to hundreds within a few years.

Because of this growth in the number of VMs running on each server blade, the data center network needs to expand beyond its current limit at the Top-of-Rack (ToR), to a model where a virtual switch on each server blade is used to distribute the increasing volume of network traffic to virtualized applications. This function is typically implemented using the open-source Open vSwitch (OVS) or an equivalent proprietary virtual switch.

The second scalability challenge is the network bandwidth required by VMs. With the constant growth in rich media applications, individual VMs can require sustained network bandwidth of 1Gbps or more. As VM density increases, this bandwidth requirement can quickly outstrip the capacity of a standard virtual switch, constraining either the number of VMs that can be instantiated on a blade or the performance seen by the users of those VMs.

The final major challenge is a result of evolving traffic patterns within data centers. In traditional client-server data centers, network traffic was predominantly “North-South”: from the Internet, through the core switch and aggregation layer, to the ToR switch and then to the server blades. Within multi-tenant Web 2.0 data centers, however, VM mobility and VM sprawl cause the majority of traffic to be “East-West”: between VMs located on different physical server blades.

This growth in East-West traffic means that high-bandwidth VM-to-VM communication (VM2VM) is mandatory. To isolate and secure VM2VM, however, requires secure tunneling services, running on the virtual switch, that extend beyond the basic Layer 2 features provided by a typical virtual switch. Within a cloud, these ensure that only specific VMs can access the applications and data owned by other VMs. Between clouds, policies are enforced to ensure that data and traffic for one cloud is not visible to another.

Adding to the overall security- and networking-related workload that must now be supported on server blades is the increasing trend towards the use of overlay network technologies, which avoid the 4,094-tunnel limitation of traditional VLANs. Data centers are now adopting the VXLAN and NVGRE protocols, whose 24-bit IDs allow for 16 million tunnels.
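The tunnel-count arithmetic is easy to verify: the 802.1Q VLAN ID is a 12-bit field with two reserved values, while VXLAN’s VNI and NVGRE’s VSID are 24-bit fields:

```python
# 802.1Q VLAN ID: 12 bits, with 0x000 and 0xFFF reserved.
vlan_limit = 2**12 - 2
# VXLAN VNI / NVGRE VSID: 24-bit segment identifiers.
overlay_limit = 2**24

print(vlan_limit)     # 4094
print(overlay_limit)  # 16777216  (~16 million)
```

That four-thousand-fold jump in segment count is what makes overlays viable for large multi-tenant data centers, at the cost of extra encapsulation work on each server blade.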

 

The new extensions to the 6WINDGate networking software provide solutions to these three challenges, delivering a data plane solution that achieves 5x - 10x acceleration for the baseline Layer 2 switching function.

 

Thanks to this increase in Layer 2 switching performance, data center operators can achieve the increases in VM density that are enabled by on-going improvements in the performance of server processors. They can also deliver high network bandwidth to individual VMs, addressing the performance needs of users running streaming media applications or other bandwidth-hungry workloads.

At the same time, 6WINDGate delivers high performance on the necessary secure tunneling protocols such as IPsec, GRE, NVGRE, VLAN and VXLAN.

Within the virtual switch, the 6WINDGate data plane spies on configuration messages from the virtual switch control plane to its data plane, intercepts the appropriate packets and processes them within the 6WINDGate fast path, thereby achieving the 5x – 10x performance acceleration.

These high-performance switching, tunneling and security functions are achieved with no changes required to the virtual switch software itself.

6WINDGate is also fully compatible with OpenFlow and completely transparent to the applications running on the platform, which do not need to be recompiled or re-verified to work with this high-performance solution.

We will be demonstrating the use of 6WINDGate to accelerate the standard Open vSwitch, in partnership with Big Switch Networks and Ixia, in our booth number 401 at the Open Networking Summit in Santa Clara, CA from April 15th through 17th. We hope to see you there!

If the lights in your home have ever flickered briefly during a storm but didn't stay out, a lot of Smart Grid technology kicked in between the moment the lights blinked out and the few hundred milliseconds later when they came back on.

 

During that brief time an operation control center detected the loss of power to your area and redirected power from another substation to compensate, perhaps diverting power from a third source to compensate for the additional load on the second substation. When the break was repaired, the transformer replaced, or the breaker in your local substation automatically reset after a lightning strike, the control center automatically brought your substation back online and rebalanced the loads between all substations. In that way the Smart Grid is said to be self-healing, though humans will always have to repair line breaks and blown transformers.


Figure 1: Electrical power distribution and transmission.

 

There are two types of substations: primary and distribution (Figure 1). Primary substations work on the supply side, taking power from a variety of primary sources—hydroelectric, solar, wind, geothermal, and nuclear—and putting it out on the grid. This involves synchronizing highly variable inputs such as solar—which is clearly only available during the day—with wind power, which peaks at night. The substations must also regulate the loads on the power sources, which may vary considerably in capacity.

 

For each primary substation there may be dozens of distribution substations, which work on the demand side, ensuring load sharing between residential, industrial, and transportation end users. When a substation starts nearing its peak capacity it signals the control center to bring other sources online to get it through peak demand, avoiding the ‘rolling blackouts’ that preceded the Smart Grid.
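That peak-demand signaling amounts to a threshold check on the substation's load. A minimal sketch (the 90% threshold and the capacity and load figures are illustrative assumptions, not values from any standard):

```python
# Sketch of a distribution substation requesting help from the control
# center as load nears peak capacity. All numbers are illustrative.

PEAK_CAPACITY_MW = 50.0
ALERT_THRESHOLD = 0.9  # signal when load reaches 90% of peak capacity

def needs_supplemental_power(load_mw: float) -> bool:
    """True when the substation should ask the control center to bring
    additional sources online to ride through peak demand."""
    return load_mw >= ALERT_THRESHOLD * PEAK_CAPACITY_MW

print(needs_supplemental_power(46.0))  # True  - nearing peak, request help
print(needs_supplemental_power(30.0))  # False - normal operation
```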


The Smart Grid works because substations can all communicate with each of the elements under their control, sending that information back to a master control center that controls all the substations. IEC 61850 is the IEC standard for substation automation, replacing a myriad of proprietary protocols whose lack of interoperability delayed the advent of the Smart Grid.

 

On the Level

There are three different levels in Smart Substation architecture: the Station Level, the Bay Level, and the Process Level. Advantech provides numerous Intel-based IEC 61850 certified Smart Substation solutions in each of these areas. Its UNO-4600 series Substation Automation Computers can operate as HMI/SCADA, Terminal (serial-port) Servers, Protocol or Communication Gateways, Cyber Security Servers (UTM), and Substation/Networking Recorders.

 

At the Station Level the Advantech UNO-4683 provides the communication gateway between the remote control center and all the environmental monitoring and control devices at the substation; it also provides cyber security for the substation. The UNO-4683 Automation Computer is based on an Intel® Core™ i7 running at 2.0 GHz with 4 GB of DDR3 SDRAM. It provides two RS-232/422/485 isolated serial ports with automatic flow control; 2 x 10/100/1000Base-T and 4 x 10/100Base-T Ethernet ports; and six USB 2.0 ports with three domain I/O expansions.

 

At the Bay Level (Figure 2) the Advantech UNO-4673A protocol server provides a data gateway between intelligent devices and the station-level controller. The UNO-4673A is based on a 1.66 GHz dual-core Intel Atom processor with 2 GB of DDR2 SDRAM. Sitting on the Ethernet backbone the Advantech UNO-4672 acts as a network recorder and analyzer, passing device data back up to the station level. The UNO-4672 is powered by either an Intel® Pentium® M running at 1.4 GHz or an Intel® Celeron® M at 1.0 GHz, each with 1 GB of on-board DDR DRAM.


Figure 2: Substation automation at the Bay Level.

 

Finally, at the Process Level either the Advantech UNO-4671A (Intel® Atom™ D510 @ 1.66 GHz) or UNO-4673A (dual-core Intel® Atom™ D510 @ 1.66 GHz) acts as an Intelligent Electronic Device (IED) that continuously monitors the status of transformers, circuit breakers, and switchgear, warning of excessive temperature, vibration, leakage or other issues that could cause device failure.

 

Getting Smarter

When the lights go out they don’t just blink for everyone—sometimes they go out for hours. The basic design of the electrical power grid is over 100 years old, and it’s only gradually being computerized. Most utilities have begun to automate the restoration process by installing supervisory control and data acquisition (SCADA) systems that monitor and control line reclosers and switches, but the system is still a long way from being completely automated. Smaller cities and other customers are usually connected to their local substation by a single radial feeder. Outages to these feeders are called in by a customer to the control center, which then dispatches a person to the area to manually restore power to customers.


Implementation of automated devices such as SCADA-enabled switches and line reclosers would cut outages. Distribution circuits could also be sectionalized with SCADA-operated devices between each section. Open points that connect to other circuits could be replaced with SCADA-enabled switches. Then in the event of a failure the system could automatically isolate the problem, opening adjacent switches and rerouting power to unaffected sections by closing connections to adjacent circuits.
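The isolate-and-reroute sequence described above can be sketched for a simple sectionalized feeder. The circuit topology and switch names here are hypothetical:

```python
# Sketch of automated fault isolation on a sectionalized distribution
# circuit. Topology and switch names are hypothetical:
#   feeder -sw0- A -sw1- B -sw2- C -tie- adjacent circuit
# The tie switch is normally open.

def isolate_and_restore(fault_section: str) -> dict:
    """Open the SCADA-operated switches bounding the faulted mid-feeder
    section, then close the tie to back-feed healthy downstream sections."""
    sections = ["A", "B", "C"]
    states = {"sw0": "closed", "sw1": "closed", "sw2": "closed", "tie": "open"}
    i = sections.index(fault_section)
    # Open the switches on either side of the fault to isolate it.
    states[f"sw{i}"] = "open"
    states[f"sw{i + 1}"] = "open"
    # Close the normally-open tie to restore the unaffected sections
    # from the adjacent circuit.
    states["tie"] = "closed"
    return states

print(isolate_and_restore("B"))
# {'sw0': 'closed', 'sw1': 'open', 'sw2': 'open', 'tie': 'closed'}
```

With a fault in section B, section A stays fed from its own feeder while section C is back-fed through the tie, so only section B's customers wait for the repair crew.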


The Smart Grid is getting smarter, and substation automation is the key element to its success. Advantech already has a wide range of Intel-based products that can provide a complete, automated solution. It’s just a matter of time before the Smart Grid all comes together and your lineman will need to find another line of work.



Advantech is a Premier member of the Intel® Intelligent Systems Alliance.

 

John Donovan
Roving Reporter (Intel® contractor), Intel® Intelligent Systems Alliance
Editor/Publisher, Low-Power Design
Follow me on Twitter: @jdonovan43
