In a November 2, 2010 E-cast on "40G and Beyond: Next-Gen Network Design with AdvancedTCA", sponsored by the Intel® Embedded Alliance, a panel of experts discussed the latest hardware, software, and standards necessary to support the rapid rise in the number of connected devices.


Curt Schwaderer, Technology Editor, moderated the event for OpenSystems Media. The panelists were John Long, Product Line Manager for ATCA Products at RadiSys Corporation; Brian Carr, Strategic Marketing Manager at Emerson Network Power; Paul Stevens, Telecom Sector Marketing Director at Advantech; and Sven Freudenfeld, Business Development for Telecom at Kontron.


Curt opened the E-cast with insight into the enormous growth of mobile internet bandwidth usage. He cited projections from Cisco that forecast a compound annual bandwidth growth of over 100% in almost every region worldwide from 2009 to 2014. Curt also referenced an iSuppli Corporation report stating that capital expenditures for wireless infrastructure will once again be on the rise in 2011 after having been delayed due to the global recession.
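
For perspective on what such a growth rate implies, compound growth over a five-year window can be sketched in a few lines; the rates below are illustrative assumptions, not Cisco's exact figures:

    # Back-of-the-envelope compound growth: total traffic multiplier after
    # n years at a given compound annual growth rate (CAGR).
    # Rates are illustrative, not taken from the Cisco forecast.
    def traffic_multiplier(cagr: float, years: int) -> float:
        """Growth factor after `years` at `cagr` (1.0 = 100% per year)."""
        return (1.0 + cagr) ** years

    for cagr in (0.5, 1.0, 1.2):  # 50%, 100%, 120% per year
        print(f"{cagr:.0%} CAGR over 5 years -> "
              f"{traffic_multiplier(cagr, 5):.0f}x traffic")
    # At 100% CAGR, traffic doubles every year: 32x over five years.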


John Long of RadiSys then described how LTE (Long Term Evolution) applications are driving requirements for 40G. Although LTE represents a simplified network architecture, it introduces complications and potential security risks that must be addressed. John addressed the question of using Intel architecture in current LTE control plane applications. He also provided a timeline for using Intel architecture to address higher-complexity packet processing applications in the future.


Brian Carr of Emerson Network Power presented an introduction to the AdvancedTCA (ATCA) open standard and how it covers shelves, boards, mezzanines, power distribution, and system management. Brian also described the technical differences between 10G and 40G architectures and the associated challenges. An evolution path to 40G was detailed for customers that already employ 10G ATCA. This path includes the deployment of 40G backplanes, switch hubs, and payload blades as necessary to support new services or higher-capacity options for existing systems.


Paul Stevens of Advantech covered system scalability in enterprise and carrier networking. He outlined the network and software elements to consider when scaling from 10G to 40G and beyond, possibly to 100G throughput. He dissected the anatomy of a current network appliance and covered the techniques and resources necessary to move to a scalable blade topology and then to extend that with full 40G interconnects.


The last speaker, Sven Freudenfeld of Kontron, offered additional approaches to taking full advantage of multi-core and ATCA. Starting with multi-core, Sven covered the hardware and software extensions that are bringing new opportunities and capabilities to ATCA. Virtualization allows users to consolidate multiple applications onto one platform and share resources such as processor capacity and storage between individual blades. Users can also apply virtualization on top of ATCA to combine multiple applications on a single blade.
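
To picture that consolidation, a rough capacity check shows how several virtualized applications might be packed onto one multi-core blade. The blade size and application footprints below are invented for illustration, not vendor figures:

    # Hypothetical sketch: can a set of virtualized applications be
    # consolidated onto a single multi-core ATCA blade? All figures are
    # invented; real sizing depends on the hypervisor and the workload.
    BLADE = {"cores": 16, "mem_gb": 64}

    apps = [
        {"name": "signaling-gw", "cores": 4, "mem_gb": 16},
        {"name": "media-proxy",  "cores": 6, "mem_gb": 24},
        {"name": "oam-server",   "cores": 2, "mem_gb": 8},
    ]

    used_cores = sum(a["cores"] for a in apps)
    used_mem = sum(a["mem_gb"] for a in apps)

    if used_cores <= BLADE["cores"] and used_mem <= BLADE["mem_gb"]:
        print(f"Fits: {used_cores}/{BLADE['cores']} cores and "
              f"{used_mem}/{BLADE['mem_gb']} GB consolidated on one blade")
    else:
        print("Does not fit; spread the applications across blades")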


Register to view the archived event, with slides and audio. After the presentations, the question and answer portion of the E-cast began with a panel discussion of questions submitted by the attendees. Following are excerpts from that session.


CS: What do you see as some of the biggest challenges for the success of 40G ATCA moving forward?

SF: There are a couple of items in terms of launching products, including having all the components available. Designers must start with the backplane itself, implementing the IEEE specification. PICMG is currently finalizing its specification, and that platform is becoming widely available. The latest ATCA framework is really just infrastructure to support bandwidth to each of the individual blades. When it comes down to the blades, the software component is the key element to make it happen and to take full advantage of the infrastructure of the ATCA platform. The challenge is to combine all of this and achieve the fastest time to market with the platform itself.


CS: When do you see volume 40G adoption kicking in and how fast or slow is 100G behind it?

JL: When you look at 40G adoption, you have to look at how Brian set it up. Chassis that support 40G are being deployed today. You will see switches starting to be deployed next year, and payload blades also next year. I don’t think we will see mass deployment in the carrier network until sometime in 2012 or 2013. Before you see mass adoption you will see a year or two in the development labs, and we think the start of that curve will be next year. Much like the move from 1G to 10G, it takes time.


CS: Are there any wireless and wireline service providers doing any kind of in-lab tests and trials at this point, or are the pieces just not there yet?

JL: We are just starting to get there so it is too early for the carriers to get a full 40G platform with the components that are available. And with 100G we are starting to see people really pushing the technology. We still have challenges on the silicon side. Our expectation is that five years from now we will start seeing some migration to 100G.


CS: What is the industry doing to address the interoperability concerns at 40G for software, blades, backplanes, and shelves?

PS: I really believe this is where PICMG and CP-TA are working with their members to create the best environment to make interoperability work. The CP-TA interoperability and compliance documents were put together to define test methodologies for various platforms. They are really the result of multivendor cooperation to ensure that the products work together without major challenges to any of them. We all work together at plugfests as well; another one was held just a couple of weeks ago in Germany. That also helps to ensure that the widest range of products work together and to mitigate problems.


CS: Since 40G payload blades won’t be available until next year, why should service providers worry about 40G ATCA right now?

BC: It all has to do with smooth migration. The earlier you prepare, the smoother it will be. So if you are hoping to take advantage of 40G ATCA, and you are currently deploying 10G ATCA, then now is the time to think about getting ready. Remember, as I said, the first thing you need to do is introduce 40G-ready platforms with suitable backplanes, and I think most people understand that. After that you can gracefully introduce 40G switches and payloads as you are ready. Remember, you can get those platforms now. The 40G switches will support existing 10G payloads as well as a mix of 40G and 10G payloads. So yes, you should think about it now.
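
That backward compatibility is straightforward to picture: each fabric link runs at the highest rate both ends support, so existing 10G payload blades keep working behind a 40G hub. A minimal sketch, with hypothetical slot names and rates:

    # Minimal sketch of mixed 10G/40G fabric links in one shelf: each link
    # settles at the highest rate supported by both the hub and the payload
    # blade. Slot names and rates are hypothetical, for illustration only.
    HUB_RATE_GBPS = 40

    payload_blades = {
        "slot-3": 10,   # existing 10G payload blade
        "slot-4": 10,
        "slot-5": 40,   # new 40G packet processing blade
    }

    for slot, blade_rate in payload_blades.items():
        link_rate = min(HUB_RATE_GBPS, blade_rate)
        print(f"{slot}: hub 40G <-> blade {blade_rate}G "
              f"-> link runs at {link_rate}G")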


CS: Is there a plan to support 100G over the backplane?

BC: We’ve already seen situations where people are talking about it. We’re looking to the future. I’ve already mentioned that the people pushing the edges are already deciding what the next step is. But we need to be careful: there is no IEEE standard for 100G on the backplane yet. The switches themselves would need another major increase in capacity to do that, and we are a fair way away from payload technology that could effectively process 100G’s worth of traffic for any slot. So I don’t see 100G ATCA superseding 40G ATCA any time soon. I do believe we will see 100G coming in, but it will be as options for network terminations on the switch hub. That’s likely to be the first.

JL: I think Brian summed it up very well. Network interfaces will be first, but over time you will see customers demand 100G over the backplane. There are technical issues and standards issues to address, but in time it will come.

PS: I agree with what my colleagues just said. What we are beginning to see now is people are looking at bringing in 100G through the switch. We’re probably still a long way off from seeing 100G within the system itself.


CS: What percentage of the new TEMs (Telecom Equipment Manufacturers) will remain proprietary vs. ATCA? I know the ATCA evolution has been happening within telecom for quite a while. How do you see the conversion to ATCA among TEMs over the next few years?

JL: If you look at TEMs overall, a vast majority of them have ATCA somewhere in their portfolio. You also have to look at where ATCA makes sense. For applications like the eNodeB, ATCA is probably not the right solution. Where we think it makes sense, where the bandwidth and cost points make sense, we expect ATCA to continue to grow.

BC: Yes, I go along with that. I think the other point to bring out is that ATCA is an open standard, and there are quite a lot of TEMs who have adopted ATCA but still make it all themselves. So a lot of the companies on this call don’t necessarily sell to those people. But the whole point is that ATCA is an open standard, so you can start and take advantage of the ecosystem, and the ecosystem is an incredibly powerful thing.

SF: Yes, I think the adoption rate and the fact that ATCA is an open standard are very well recognized, and multiple TEMs have been launching products. There has been a lot of consolidation within the TEM market itself through mergers and acquisitions. But the advantage is that most of them have already used ATCA in the past, and now moving to 40G opens a much broader application arena for ATCA. It helps them to consolidate platforms but also to get new platforms to market on time to generate revenue.


CS: How about blade computer platforms using ATCA with 40G? We saw a lot of architecture diagrams from you folks with 1G or 10G into the blade. Are any of them going to use 40G, or is that just a switch pipe?

JL: I’m not sure if you’ll see 40G on a compute blade in the next generation. I think that if you look at the packet processing blades, you will see 40G.

PS: Yes, we are seeing 40G at the switch end of things now. It is becoming mainstream in PCI now, with 40G into the switch over multiple connections.

BC: We can see both situations in place. Fundamentally, there’s still going to be a fair cost differential between 10G and 40G, which means there is going to be a place for both 10G and 40G blades in the portfolio. The ones that are focused clearly and heavily on server-side applications will generally stay on 10G, whereas the ones focused more clearly on packet processing applications will move to 40G more quickly.

SF: There is a combination of that leveraging 40G. If you look at MME platforms, for example, the security and data gateway functions are the key, and the best combination is already being deployed at 10G. 40G gives much broader bandwidth and takes full advantage of the all-IP infrastructure. There are a couple of items to consider; one of them is the synchronization infrastructure. But I agree on the cost point. Some of the data plane functions are really not needed, but they end up in deep packet inspection functionality, where the combination of multi-core and software can help to keep the cost reasonable and also to reuse some of the hardware architecture that is already deployed.


CS: Where is the point at which Intel architecture can capably handle the packet processing data plane functions? Is that today, or is it next year?

JL: If you look out there, in some applications Intel architecture can handle the packet processing today; it depends on the functionality and the data plane. And over time, the number of applications and the density that the x86 architecture can address will increase.

BC: I think John has put it very well there. In fact, we are already in a situation today where Intel architecture is doing packet processing. So it depends on the rate and the size of the packets as to how far that goes. We can certainly see the next generation coming through and that’s going to be very interesting indeed.

PS: I think the combination of the latest software from companies like 6WIND and Wind River is going to help accelerate that. They’ve already delivered pretty much a 10x improvement in performance over current applications. On the other side, x86 processors are getting faster; technology refreshes are bringing more cores and accelerating the technology. x86 processors are pretty good today for flow analysis, database integration, management, and general applications. Fast NPUs are better for packet processing applications and things like operations and payload identification. I think the pendulum is swinging, there are more refreshes coming with x86, and there’s a lot of exciting technology coming out in the next year or so.

SF: I think it has already been well said. It matters what you are going to do with a packet, and the packet length, in terms of performance and dedicated resources on the blade itself. Yes, I completely agree that some of the functions can be done with x86, and the nice thing about x86 is that some of the operating systems work with packet processing functions, though most likely it needs additional software development. It depends on what types of applications are being created. So we have a niche market where vendors can take advantage of the x86 platform for some of the data processing functions already.
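
The panel’s point about packet rate and size comes down to standard Ethernet arithmetic: every frame also occupies 20 bytes of preamble and inter-frame gap on the wire, so minimum-size packets push the per-packet processing budget hardest. A quick sketch of the worst-case rates:

    # Packets per second at Ethernet line rate. Each frame carries 20 bytes
    # of wire overhead (8B preamble + 12B inter-frame gap) beyond the frame.
    def max_pps(line_rate_gbps: float, frame_bytes: int) -> float:
        wire_bits = (frame_bytes + 20) * 8
        return line_rate_gbps * 1e9 / wire_bits

    for rate in (10, 40):
        for size in (64, 1518):
            print(f"{rate}G, {size}B frames: "
                  f"{max_pps(rate, size) / 1e6:.2f} Mpps")
    # 64B frames: ~14.88 Mpps at 10G, ~59.52 Mpps at 40G, which is why
    # small-packet workloads are the hard case for any packet processor.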


Advantech, Emerson, Kontron, and RadiSys are Premier members of the Intel® Embedded Alliance. Wind River is an Associate member and 6WIND is an Affiliate member of the Alliance.


Warren Webb
OpenSystems Media®, by special arrangement with Intel® Embedded Alliance