
Earlier this month industry group CompTIA reported that one in four doctors will buy a tablet PC in the next year. As shown in Figure 1, doctors are looking to improve care and their cost/revenue balance, and medical tablets can help on both fronts. The growing interest in tablets is also motivated by the arrival of low-cost hardware and tablet-friendly operating systems like Microsoft* Windows* 7.

 


Figure 1. Top priorities of doctors and health care providers (Source: CompTIA, via mobihealthnews)

 

Although price is a key motivator for the medical tablet buying spree, it is important to remember that price isn’t everything. Health care providers must consider other factors such as durability and functionality in order to meet their goals of improved service and a better financial picture. I recently read an excellent discussion of the issues written by Panasonic (PDF, registration required). Let’s review the main issues now:

 

Durability: Many doctors are considering consumer-oriented tablets like the Apple iPad. These devices have low up-front costs, but they are easily damaged. As a result, consumer-grade devices suffer from a high total cost of ownership (TCO). Figure 2 shows the results of a study by VDC Research that compares the five-year costs of mobile devices with varying levels of ruggedization. As shown in the figure, ruggedized devices have significantly higher up-front costs (shown in green) but they quickly pay for themselves through lower repair and replacement costs (shown in gray).
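The arithmetic here is easy to underestimate: a device that costs half as much up front but is damaged or replaced several times as often can easily cost more over five years once repair, replacement, and lost productivity are counted.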

 


Figure 2. TCO for mobile devices over 5 years, including direct costs (green) and indirect costs (gray). (Source: VDC Research data, via Handheld Group)

 

Ruggedization is also important because hospital equipment must be sanitized frequently. To make sure your tablet can tolerate repeated cleaning, look for IP-rated equipment. Depending on where the tablet will be used, it may also need to meet a variety of standards such as those listed in Figure 3.
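For reference, the first digit of an IP rating grades protection against solids such as dust, and the second grades protection against liquids; IP54, for example, denotes dust protection and resistance to splashing water from any direction.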

 


Figure 3. Important medical standards; click to enlarge. (Source: Embedded Innovator)

 

These concerns make purpose-built medical tablets like the Arbor Technologies Gladius G0710 shown in Figure 4 a better choice. This 7” tablet features an IP54-compliant enclosure and is certified to FCC Class B, EN60601-1, EN60601-1-2, and UL60601-1. The Gladius G0710 can withstand a 4-foot drop, shock up to 10 G for 11 ms, and vibration up to 1 Grms from 5 to 500 Hz.

 


Figure 4. The 7” Arbor Technologies Gladius G0710 features an Intel® Atom™ processor Z510.

 

Functionality: Medical facilities are already packed with electronic devices, and adding tablets to the mix can complicate matters – for example, it gives the IT department another set of devices to manage. One way to get around this problem is to have the tablet play multiple roles. A convertible tablet PC, for instance, can do double duty as a physician’s desktop computer, reducing the load on the IT department. As another example, Mobile Clinical Assistants (MCAs) have integrated features such as barcode scanners, RFID readers, and cameras to improve point-of-care productivity.

 

When considering the feature set, be sure to determine whether you need touch input, pen input, or both. Most consumer-grade tablets have transitioned to capacitive touch screens, but these screens fall short if you need to hand-write notes. Look for tablets that accept both touch and pen input for maximum flexibility and usability.

 

Also make sure the tablet supports the right combination of connectivity options, such as Wi-Fi*, cellular, and Bluetooth. Some tablets offer these wireless features as options, allowing you to pay only for the features you need.

 

Finally, it is critical to ensure that the tablet has the appropriate level of security features. Options like fingerprint scanners are must-have items for many applications. Also consider BIOS-level security technology like Computrace to ensure that you can lock down the machine in case of theft.

 

The Advantech MICA-101 shown in Figure 5 is an example of a feature-rich tablet. This Mobile Clinical Assistant offers:

  • Integrated camera and RFID reader
  • Optional image-based barcode scanner (no lasers)
  • Dual resistive and digitizer panels for touch operation and digital pen recognition
  • Wi-Fi, Bluetooth, and optional 3.5G mobile communication
  • Optional fingerprint reader

 


Figure 5. The Advantech MICA-101 offers a long list of features including support for both touch and pen input.

 

Ergonomics: Don’t forget that the tablet must be easy to carry! Look for features such as straps and supports for comfortable all-day operations. Weight is another concern—a lighter tablet is easier to carry. However, make sure the light weight does not come with a short battery life. A too-short battery life will significantly reduce the usefulness of the tablet PC.

 

As an example of an ergonomic design, consider the Arbor Technologies M1255. As shown in Figure 6, this Mobile Clinical Assistant features a no-slip grip that can also be used with a shoulder strap. The tablet comes with a hot-swappable external battery that extends service life to 3.5 hours, yet it weighs only 1.6 kg (3.5 lbs) with the external battery included.

 


Figure 6. The Arbor Technologies M1255 features an ergonomic design.

 

The bottom line is that not all tablets are alike. Tablets vary widely in areas such as durability, functionality, and ergonomics. Before selecting a medical tablet, make sure you know which features are important for your unique requirements.

 

Advantech is a Premier member of the Intel® Embedded Alliance. Arbor Technologies is an Affiliate member of the Alliance.

 

Kenton Williston

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance

Editor-In-Chief

Embedded Innovator magazine

In a November 2nd, 2010 E-cast on "40G and Beyond: Next-Gen Network Design with AdvancedTCA", sponsored by the Intel® Embedded Alliance, a panel of experts discussed the latest hardware, software, and standards necessary to support the rapid rise in the number of connected devices.

 

Curt Schwaderer, Technology Editor, moderated the event for OpenSystems Media. Panelists presenting were: John Long, Product Line Manager for ATCA Products at RadiSys Corporation, Brian Carr, Strategic Marketing Manager at Emerson Network Power, Paul Stevens, Telecom Sector Marketing Director at Advantech, and Sven Freudenfeld, Business Development for Telecom at Kontron.

 

Curt opened the E-cast with insight into the enormous growth of mobile internet bandwidth usage. He cited projections from Cisco that forecast a compound annual bandwidth growth of over 100% in almost every region worldwide from 2009 to 2014. Curt also referenced an iSuppli Corporation report stating that capital expenditures for wireless infrastructure will once again be on the rise in 2011 after having been delayed due to the global recession.
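To put that growth rate in perspective, a 100 percent compound annual growth rate means traffic doubles every year; compounded over the five years from 2009 to 2014, that works out to a 2^5 = 32x increase.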

 

John Long of RadiSys then described how LTE (Long Term Evolution) applications are driving requirements for 40G. Although LTE represents a simplified network architecture, there are a lot of complications and potential security risks that must be addressed. John addressed the question of using Intel architecture in current LTE control plane applications. He also provided a timeline for using Intel architecture to address higher-complexity packet processing applications in the future.

 

Brian Carr of Emerson Network Power presented an introduction to the AdvancedTCA (ATCA) open standard and how it covers shelves, board, mezzanines, power distribution, and system management. Brian also described the technical differences between 10G and 40G architectures and the associated challenges. An evolution path to 40G was detailed for customers that already employ 10G ATCA. This path includes the deployment of 40G backplanes, switch hubs, and payload blades as necessary to support new services or higher capacity options for existing systems.

 

Paul Stevens of Advantech covered system scalability in enterprise and carrier networking. He outlined the network and software elements to consider when scaling from 10G to 40G and beyond to possibly 100G throughputs. He dissected the anatomy of a current network appliance and covered the techniques and resources necessary to translate to a scalable blade topology and then to extend that with full 40G interconnects.

 

The last speaker, Sven Freudenfeld of Kontron offered additional approaches to take full advantage of multi-core and ATCA. Starting with multi-core, Sven covered the hardware and software extensions that are bringing new opportunities and capabilities to ATCA. Virtualization allows users to consolidate multiple applications into one platform and share resources such as processor capabilities and storage between individual blades. Users can also apply virtualization on top of ATCA to combine multiple applications into a single blade.

 

Register to view the archived event, with slides and audio at http://bit.ly/blFol7. After the presentations, the question and answer portion of the E-cast began with panel discussion of questions submitted by the attendees. Following are excerpts from that session.

 

CS: What do you see as some of the biggest challenges for the success of 40G ATCA moving forward?

SF: There are a couple of items in terms of launching products including having all the components available. Designers must start with the backplane approach itself, implementing the IEEE specification. PICMG is currently finalizing their specification and that platform is becoming widely available. The latest ATCA framework is really just considered infrastructure to support bandwidth to each of the individual blades. When it comes down to the blades, the software component is the key element to make it happen and to take full advantage of the infrastructure of the ATCA platform. The challenge is to basically combine all of this together and achieve the fastest time to market with the platform itself.

 

CS: When do you see volume 40G adoption kicking in and how fast or slow is 100G behind it?

JL: When you look at 40G adoption you have to look at how Brian set it up. Chassis that support 40G are being deployed today. You will see switches starting to be deployed next year, and payload blades also next year. I don’t think we will see mass deployment in the carrier network until sometime in 2012 or 2013. Before you see mass adoption you will see a year or two in the development labs, and we think the start of that curve will be next year. Much like the adoption of 1G to 10G – it takes time.

 

CS: Are there any wireless and wire line service providers doing any kind of in-lab tests and trials at this point or are the pieces just not there yet?

JL: We are just starting to get there so it is too early for the carriers to get a full 40G platform with the components that are available. And with 100G we are starting to see people really pushing the technology. We still have challenges on the silicon side. Our expectation is that five years from now we will start seeing some migration to 100G.

 

CS: What is the industry doing to address the interoperability concerns at 40G for software, blades, backplanes, and shelves?

PS: I really believe this is where PICMG and cTCA are working with their members to create the best environment to make interoperability work. The cTCA interoperability and compliance documents were put together to define test methodologies for various platforms. They are really the result of multivendor cooperation to ensure that the products work together without major challenges to any of them. We all work together at plugfests as well; another one was held just a couple of weeks ago in Germany. That also helps to ensure that the widest range of products work together and mitigates problems.

 

CS: Since 40G payload blades won’t be available until next year, why should service providers worry about 40G ATCA right now?

BC: It all has to do with smooth migration. The earlier you prepare, the smoother it will be. So if you are hoping to take advantage of 40G ATCA, and you are currently deploying 10G ATCA, then now is the time to think about getting ready. Remember, as I said, the first thing that you need to do is to introduce 40G-ready platforms with suitable backplanes, and I think most people understand that. After that you can gracefully introduce 40G switches and payload as you are ready. Remember, you can get those platforms now. The 40G switches will support existing 10G payloads and a mix of 40G and 10G payloads. So yes, you should think about it now.

 

CS: Is there a plan to support 100G over the backplane?

BC: We’ve already seen situations where people are talking about it. We’re looking to the future. I’ve already mentioned that the people who are pushing the edges are already deciding what the next step is. But what we need to be careful of is that there is no IEEE standard for 100G on the backplane yet. The switches themselves would need another major increase in capacity to do that, and we are a fair way away from payload technology that could effectively process 100G’s worth of traffic for any slot. So I don’t see 100G ATCA superseding 40G ATCA any time soon. I do believe we will see 100G coming in, but it will be as options for network terminations on the switch hub. That’s likely to be the first.

JL: I think Brian summed it up very well. Network interfaces will be first, but over time you will see customers demand 100G over the backplane. There are technical issues and standards issues to address, but in time it will come.

PS: I agree with what my colleagues just said. What we are beginning to see now is people are looking at bringing in 100G through the switch. We’re probably still a long way off from seeing 100G within the system itself.

 

CS: What percentage of the new TEMs (Telecom Equipment Manufacturers) will remain proprietary vs. ATCA?  I know the ATCA evolution has been happening within telecom for quite a while. How do you see the conversion to ATCA within TEMs within the next few years?

JL: If you look at TEMs overall, a vast majority of them have ATCA somewhere in their portfolio. You also have to look at where ATCA makes sense. For applications like the eNodeB, ATCA is probably not the right solution. Where we think it makes sense, where the bandwidth and cost points make sense, we expect ATCA to continue to grow.

BC: Yes, I go along with that. I think the other point to bring out is that ATCA is an open standard and there are quite a lot of TEMs who have adopted ATCA and they still make it all themselves. So a lot of companies that are on this call don’t necessarily sell to those people. But the whole point is that ATCA is an open standard so you can start and you can take advantage of the ecosystem and the ecosystem is an incredibly powerful thing.

SF: Yes, I think that the adoption rate and the fact that ATCA is an open standard is very well recognized and multiple TEMs have been launching products. There has been a lot of consolidation within the TEMs market itself by mergers and acquisitions. But the advantage is that most of them have already used ATCA in the past and now moving to 40G gives a much broader application arena for ATCA.  It helps them to consolidate platforms but also to get new platforms to the market on time to make revenue.

 

CS: How about blade computer platforms using ATCA with 40G? We saw a lot of architecture diagrams from you folks showing 1G or 10G into the blade. Are any of them going to use 40G, or is that just a switch pipe?

JL: I’m not sure if you’ll see 40G on a compute blade in the next generation. I think that if you look at the packet processing blades, you will see 40G.

PS: Yes, we are seeing 40G at the switch end of things now. It is becoming mainstream in PCI now. 40G into the switch with multiple connections.

BC: We can see both situations in place. Fundamentally, there’s still going to be a fair cost differential between 10G and 40G, which means that there is going to be a place for both 10G and 40G blades in the portfolio. So the blades that are focused clearly and heavily on server-side applications will generally stay on 10G, whereas the ones that are focused more clearly on packet processing applications will move to 40G more quickly.

SF: There is a combination of applications leveraging 40G. If you look at MME platforms, for example, the security and data gateway functions are the key. The best combination is already in the space of being deployed for 10G. The 40G gives much broader bandwidth and takes full advantage of the all-IP infrastructure. There are a couple of items to consider; one of them is the over-speed synchronization infrastructure. But I agree on the cost point. Some of the data frame functions are really not needed, but they end up in deep packet inspection functionality, where the combination of multi-core and the software can help to keep the cost reasonable and also reuse some of the hardware architecture which is already deployed.

 

CS: Where is the point at which Intel architecture can capably handle the packet processing data plane functions? Is that today, or is it next year?

JL: If you look out there, in some applications Intel architecture can handle the packet processing; it depends on the functionality and the data planes. And over time, the number of applications and the density that the x86 architecture can address will increase.

BC: I think John has put it very well there. In fact, we are already in a situation today where Intel architecture is doing packet processing. So it depends on the rate and the size of the packets as to how far that goes. We can certainly see the next generation coming through and that’s going to be very interesting indeed.

PS: I think the combination of the latest software from companies like 6WIND and Wind River is going to help accelerate that. They’ve already delivered pretty much a 10x improvement in performance over current applications. On the other side, x86 is getting faster, and technology refreshes are bringing more cores and accelerating the technology. x86 processors are pretty good today for flow analysis, database integration, management, and general applications. Fast MPUs are still better for packet processing applications and things like operations and payload identification. I think the pendulum is swinging, and there are more refreshes coming with x86. And there’s a lot of exciting technology coming out in the next year or so.

SF: I think it has already been well said. It matters what you are going to do with a packet, and the packet length, in terms of performance and dedicated resources on the blade itself. Yes, I completely agree that some of the functions can be done with x86, and the nice thing about x86 is that some of the operating systems work with packet processing functions. Most likely it needs additional software development; it depends on what types of applications are being created. So we have a niche market where vendors can take advantage of the x86 platform to do some of the data processing functions already.

 

Advantech, Emerson, Kontron, and RadiSys are Premier members of the Intel® Embedded Alliance. Wind River is an Associate member and 6WIND is an Affiliate member of the Alliance.

 

Warren Webb
OpenSystems Media®, by special arrangement with Intel® Embedded Alliance

One of the most anticipated airplanes of its era, Boeing’s 777 incorporates a two-pilot flight deck with a five-screen electronic flight information system. If you’ve ever looked into the cockpit of a commercial airliner, you have seen the bewildering array of screens, dials, and indicators used to control the airplane. The 777’s flight screens replace a large number of indicators, and serve to reduce the amount of information the pilots need to deal with at one time.

 

To better appreciate the complexity of automatic navigation systems, it is important to understand the three basic control surfaces that affect an airplane's attitude. Elevators are devices on the tail of a plane that control pitch: the movement of an aircraft around a horizontal axis perpendicular to the direction of motion. The rudder turns the airplane left and right relative to its forward motion, called yaw. Ailerons, the third set of control surfaces, are attached to the trailing edges of the wings and control roll. Roll occurs when the ailerons move in opposite directions, causing the airplane to rotate around the axis that points in the direction of travel; it is the motion you see when a pilot performs a barrel roll.

So, in order to control aircraft attitude, we must control yaw, pitch, and roll. If the aircraft is equipped with throttle control, we can also control its speed. Single-axis autopilots manage one set of controls, typically the ailerons. Such systems are known as "wing levelers" because by controlling roll they keep the aircraft's wings level. A full autopilot manages all three basic control systems: ailerons, elevators, and rudder. Other aircraft, such as helicopters, are controlled differently, and their control algorithms differ from those of fixed-wing aircraft.

You can get an idea of the specific control software needed to control a hobbyist aircraft by reviewing sites like http://diydrones.com/. Many of the hobby autopilots offer a set of operational modes intended to make operating a Radio Controlled (RC) aircraft easier. A Carnegie Mellon University project "An Autonomous Autopilot Control System Design for Small-Scale UAVs" equipped an RC model airplane with a ground-based autopilot.

 


 

Open Source products like ARDUpilot (http://code.google.com/p/ardupilot/wiki/FlightModes) provide the hobbyist with multiple control modes:

 

  • MANUAL
  • STABILIZE (RC control with wing leveler; let go of the sticks and it will level)
  • FLY BY WIRE_A (More autonomous control, including airspeed.)
  • FLY BY WIRE_B (More autonomous control, no airspeed.)
  • AUTO (Aircraft will follow GPS waypoints set by configuration utility.)
  • RTL (Aircraft will return to launch point and circle.)
  • LOITER (Aircraft will circle in current position.)

All of these functions in an RC airplane are contained in a single low-end microcontroller. Studying the code for the RC airplane is a straightforward task. Even if you are not familiar with PID control algorithms, the code base for ARDUpilot is fairly small – each function takes a few pages of code. PID control was discussed briefly in an earlier article.
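To get a feel for what a wing leveler actually computes, the sketch below shows a bare-bones PID roll loop: measure the roll angle, then compute an aileron command that drives the aircraft back toward wings-level. This is illustrative C, not ARDUpilot's actual code; the gains, the 50 Hz loop rate, and the one-line airframe response are all assumptions chosen only for demonstration.

/* Toy PID wing leveler: drives the aileron command toward zero roll.
   Gains and scaling are illustrative, not tuned for any real airframe. */
#include <stdio.h>

typedef struct {
    float kp, ki, kd;   /* proportional, integral, derivative gains */
    float integral;     /* accumulated error */
    float prev_error;   /* error from the previous step */
} pid_state;

/* One control step: error = desired roll (0 = wings level) - measured. */
static float pid_step(pid_state *s, float roll_deg, float dt)
{
    float error = 0.0f - roll_deg;
    s->integral += error * dt;
    float derivative = (error - s->prev_error) / dt;
    s->prev_error = error;
    return s->kp * error + s->ki * s->integral + s->kd * derivative;
}

int main(void)
{
    /* prev_error seeded to the initial error to avoid a derivative kick */
    pid_state roll_pid = { 0.8f, 0.05f, 0.01f, 0.0f, -15.0f };
    float roll_deg = 15.0f;                 /* start banked 15 degrees */
    for (int i = 0; i < 10; i++) {          /* 10 steps of a 50 Hz loop */
        float aileron = pid_step(&roll_pid, roll_deg, 0.02f);
        roll_deg += aileron * 0.5f;         /* crude airframe response */
        printf("step %2d: roll %6.2f deg, aileron %6.2f\n", i, roll_deg, aileron);
    }
    return 0;
}

A real autopilot would add integrator clamping, surface-deflection limits, and sensor filtering, but the core of each control loop really is this small.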

 

How does a hobbyist RC autopilot turn into a commercial autopilot that is hardware intensive with incredible software complexity? In part the answer lies in risk. If a small single-engine RC aircraft crashes, there is little risk to human life. When a several-hundred-thousand-pound aircraft crashes, there is risk to both life and property. To mitigate risk, designers of life-critical systems often add processors to their designs. With added processors come more power consumed, more memory used, more peripheral control required, and more intrinsic complexity.

 

The growth in commercial airplane control hardware complexity and weight led to a decision to integrate all of the flight deck functions into a unified system. The 777 was the first aircraft with an ARINC data bus linked to the main and standby navigation systems. It also includes a Traffic Collision Avoidance System (TCAS) and a twelve-channel Global Positioning System (GPS), and the aircraft is equipped with color weather radar. These last three systems give a glimpse of the complexity present in the current generation of commercial aircraft flight systems.

 


The Boeing 777 has a triple-redundant digital autopilot and flight director. The flight control system includes envelope protection commands, which prohibit maneuvers that would push the airplane beyond its flight limits. Each of the three primary flight computers contains three different and separately programmed 32-bit microprocessors, sourced from three different manufacturers (including an Intel processor), selected to manage the fly-by-wire functions.

Let’s look at why multiple processors would be used in a commercial autopilot. The first obvious reason to have more than one processor in such a system is redundancy: when one processor fails, the redundant processor(s) can take over operations. But multiple processors on a single chip don’t address all of the processor failure mechanisms. We’ve talked before about hot standby, which is one mechanism to ensure continuous operation. When hardware encounters a failure that does not result in a processor-stopping hardware fault, how do you tell? The classic method is to implement a three-or-more-processor design with voting logic: if two of the three systems agree and the third produces a different result, the dissenting system is presumed faulty and voted out.

Guarding against processor data sensitivity and processor logic errors requires additional effort by engineers, much of it falling to the software engineers. Many processor families have suffered from arithmetic errors intrinsic to a specific implementation. Using different processors, possibly designed by different engineering teams, makes it unlikely that the same set of errors will be replicated under the same circumstances. So it’s easy to see the rationale behind including three physically different processors from different manufacturers.

Errors can also be introduced at several places in the software development process. Most people don’t think about failures of compilers and other software tools, but software development systems introduce a whole new wrinkle into the design of critical systems. It’s easy to see how you could implement three or more software control systems operating on a single processor. Of course, each of the software systems must be programmed in different ways yet still arrive at the same answer. Arbitration software can vote any single process offline if it produces answers different from the other two processes.
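Here is a toy sketch of the 2-of-3 voting just described, assuming each channel's output has already been reduced to a comparable integer command. The function and variable names are illustrative, not drawn from any avionics code base.

/* Toy 2-of-3 majority voter, the classic triple-modular-redundancy
   pattern. Inputs stand in for the outputs of three independently
   programmed processors. */
#include <stdio.h>

/* Returns the majority value and flags the dissenting channel
   (-1 if all three agree). A total disagreement returns -1 as the
   command, which a real system would treat as a system-level fault. */
int vote(int a, int b, int c, int *dissenter)
{
    *dissenter = -1;
    if (a == b && b == c) return a;            /* unanimous */
    if (a == b) { *dissenter = 2; return a; }  /* channel 2 disagrees */
    if (a == c) { *dissenter = 1; return a; }  /* channel 1 disagrees */
    if (b == c) { *dissenter = 0; return b; }  /* channel 0 disagrees */
    return -1;                                 /* no majority */
}

int main(void)
{
    int bad;
    int cmd = vote(42, 42, 17, &bad);
    printf("voted command %d, dissenting channel %d\n", cmd, bad);
    return 0;
}

In a real system the three results arrive over independent buses and the voter itself must be made redundant, but the majority test at the heart of the scheme is this simple.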

 

Certified Real Time Operating Systems (RTOSes) can provide designers with a foundation for qualified software development. For example, Green Hills Software, Inc. (1), QNX (2), and Wind River Systems (3) all offer embedded RTOS products. Green Hills offers a specific aerospace package comprising the Integrity real-time operating system and the AdaMULTI Integrated Development Environment (IDE). The AdaMULTI IDE is an integrated set of tools for the development of embedded applications using Ada 95, C, C++, Embedded C++, and FORTRAN. Many aerospace systems must employ software developed using Ada and Ada-based tools. Wind River also provides an aerospace platform that conforms to many of the world’s standards for critical avionics systems, including RTCA DO-178B, EUROCAE ED-12B (“Software Considerations in Airborne Systems and Equipment Certification”), IEC 61508 (http://www.iec.ch/functionalsafety/), and other related software standards.

 

The Boeing 777 flight control capabilities come with some hardware and software complexity. Five screens provide the two-pilot flight deck with instant access to essential information: two primary flight displays, two navigation displays, and an engine indication monitor. RTOSes from software vendors like Green Hills, QNX, and Wind River are the tip of the iceberg for aeronautics software, but compared to the hardware choices, they are a well-contained set of alternatives. The proliferation of hardware alternatives is staggering. For example, GE Intelligent Platforms (4) in Charlottesville, Va., offers the ACR301 rugged 3U CompactPCI CPU board based on the Intel® Atom™ E6XX processor for harsh-environment aerospace, defense, and industrial embedded computing applications. Chief among these applications is unmanned aircraft, with their demands for minimal power consumption.

On the other end of the scale, Kontron (5) offers the ACE Flight 600 ruggedized airborne server based on the Intel® Core™ 2 or Atom™ processor. Kontron also offers a selection of 3U CompactPCI cards and Computer-on-Module form factor systems.

 


 

 

From hobby RC airplanes, remote photography, and autonomous aircraft, to commercial airplane flight control systems, there is a continuum of systems capabilities. Basic RC autopilot facilities can be implemented in a single low end microcontroller, but commercial passenger airplanes demand more from auto-navigation systems. More means more power systems, more processors, more error detection, and more complex software.

 

How can you employ aircraft reliability and safety in your non-aircraft designs?   

 

___________________________________________________________________________

1. Green Hills Software is an Affiliate Member of the Intel Embedded Alliance

2. QNX Software Systems is an Associate Member of the Intel Embedded Alliance

3. Wind River Systems is an Associate Member of the Intel Embedded Alliance

4. GE Intelligent Platforms is an Associate Member of the Intel Embedded Alliance

5. Kontron is a Premier Member of the Intel Embedded Alliance

 

Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

Although the resulting product was decried by many converts to the open source movement, the developers successfully asserted their right under the GPL to keep portions of the code proprietary. But it also prompted multiple authors to fork new code bases aimed at satisfying a wide range of requirements.


Most retail wireless routers are based on Systems-on-Chip (SoC) that incorporate a processor, memory, and specialized peripherals. Such systems restrict the number of hardwired LAN connections, the optional software, and router-based packet processing. Open Source gives us a way to address these issues – we can port wireless router software to a new target processor. But we need a radio to interface to the WiFi transmissions.

 



WiFi radio units are available from a number of vendors. Marvell’s 88W8786U radio module is typical of module functionality. The device provides the combined functions of the IEEE Standard 802.11/802.11b Direct Sequence Spread Spectrum (DSSS), 802.11g and 802.11n Orthogonal Frequency Division Multiplexing (OFDM), baseband modulation, Medium Access Controller (MAC), CPU, memory, host interfaces, direct-conversion WLAN RF radio, and Bluetooth coexistence on a single integrated chip. The CPU used in these modules provides control of the radio elements themselves. Making a full featured wireless router requires additional processing power, memory, and peripherals.

ADI Engineering (1) also offers a WiFi-capable radio, the RM1 High-Power MiniPCI 802.11a/b/g Radio, as part of its network boards. ADI also offers customized hardware based on standard Intel processors, or customized single-chip Atom-based computers, as a solution to needs outside its standard offerings.

 


No matter what WiFi radio is chosen for your design, there is still a lot of software needed to operate a large-scale wireless router. DD-WRT and OpenWRT are probably the most popular wireless router software packages available. Even though these packages will implement a wireless router with minimal effort, there is a porting effort involved to make them operate on an Atom-type CPU. Still, existing ports to other embedded processors simplify the job – OpenWRT is a Linux distribution configured for embedded devices, including members of the Intel architecture processor family.


OpenWRT provides these basic wireless router services:


  • port forwarding of external traffic to computers inside the LAN (see the sketch after this list)
  • UPnP for dynamically configured port forwarding
  • static DHCP leases
  • extensive firewall and router configuration
  • QoS for applications such as VoIP, online gaming, and streaming media
  • configuration of the device as a wireless repeater, wireless access point, and wireless bridge 
  • mesh networking
  • use of Dynamic DNS services to maintain a fixed domain name with an ISP that does not provide a static IP address
  • for devices with USB ports, it supports printer sharing, Windows-compatible file sharing (via SAMBA), USB audio, and most any device that can be connected
  • realtime network monitoring
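To make the first item above concrete: port forwarding ultimately comes down to a destination NAT (DNAT) lookup, in which the router matches the external port of an inbound connection against a rule table and rewrites the destination address and port to a host inside the LAN. In OpenWRT these rules live in the firewall configuration and are enforced by Linux netfilter; the C below is only a toy model of the lookup, with made-up addresses and ports.

/* Toy DNAT rule table of the sort behind "port forwarding": map an
   external TCP port on the WAN side to a host/port inside the LAN.
   All addresses and ports here are illustrative. */
#include <stdio.h>
#include <stdint.h>

struct fwd_rule {
    uint16_t ext_port;      /* port as seen on the WAN interface */
    const char *lan_ip;     /* internal destination host */
    uint16_t lan_port;      /* internal destination port */
};

static const struct fwd_rule rules[] = {
    { 8080, "192.168.1.10", 80 },   /* web server */
    {   22, "192.168.1.20", 22 },   /* ssh box    */
};

/* Find where an inbound connection to ext_port should be rewritten. */
static const struct fwd_rule *lookup(uint16_t ext_port)
{
    for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
        if (rules[i].ext_port == ext_port)
            return &rules[i];
    return NULL;  /* no rule: the firewall drops the connection */
}

int main(void)
{
    const struct fwd_rule *r = lookup(8080);
    if (r)
        printf("rewrite to %s:%u\n", r->lan_ip, (unsigned)r->lan_port);
    return 0;
}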


The code base for OpenWRT includes all retail wireless router functions and adds some capabilities usually found on enterprise-class routers. Since OpenWRT is the product of the Open Source movement, there are additions that developers either thought were “cool” or had a personal interest in developing. A third-party offering from EXPlanetOne adds load balancing, normally an enterprise-class capability. There are a number of third-party Open Source code forks of the original software. Some of these can be added to OpenWRT, while others require complete rewrites.


Still other Open Source code bases can add more capability, possibly while maintaining the OpenWRT base. Zeroshell is one software package that provides a great number of Internet service routines. Zeroshell is primarily focused on LAN services, but many of these are also of critical importance to enterprise-class wireless routers. Zeroshell offers a Wireless Access Point mode with multiple SSID and VLAN support using WiFi network cards.

 

A Zeroshell box with one WiFi card can become an IEEE 802.11a/b/g Access Point providing reliable authentication and dynamic key exchange via the 802.1X and WPA protocols. The authentication takes place using EAP-TLS and PEAP over the RADIUS server integrated into Zeroshell.

 

Porting any one of the wireless router software packages may become a large undertaking, given the relatively limited number of supported CPU architectures. The amount of work required to port a specific piece of software is difficult to predict in advance. Without doubt, combining disparate code bases for wireless routers will be a significant effort, but the effort will be less than developing the entire application from scratch. For many wireless routers, a single-chip hardware platform is preferred for cost reasons. Other wireless routers can easily support a single-board-computer solution. A listing of Intel-based COM boards may be found in an overview blog of COM boards.

 

Open Source may save development time and effort. But it isn’t free. How will you decide whether to use Open Source code or develop the application completely yourself? 

 

__________________________________________________________________________

  1. ADI Engineering is an Associate Member of the Intel Embedded Alliance

Henry Davis
Roving Reporter (Intel Contractor)
Intel® Embedded Alliance

 
