
Testing for security flaws is difficult because it fundamentally differs from the functional testing performed on most applications. Most functional testing involves “positive testing” to ensure correct responses to expected inputs. In contrast, security testing revolves around “negative testing,” which subjects the system to unexpected inputs and conditions to see if it will fail (Figure 1). The inputs needed for this negative testing can’t easily be derived from requirement documents alone.

 


Figure 1. Security testing focuses on the unexpected and unplanned.

 

To make things more challenging, security vulnerabilities may not be apparent outside a real-time environment. Consider the denial-of-service (DoS) attack as a simple example. These attacks saturate a system with inputs (typically network traffic) so that the system resources are overloaded and cannot function as intended. The key point about DoS attacks is that each individual input may appear to be a valid, expected event. It’s only when the inputs accumulate in real time that the problem becomes visible.
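To make the point concrete, here is a minimal C sketch (hypothetical code, not drawn from any real product) of a request handler that passes positive testing yet harbors exactly this kind of flaw: every individual request is valid, but nothing bounds the resources they accumulate.

```c
#include <stdlib.h>
#include <string.h>

#define SESSION_BUF 4096

struct session {
    char buf[SESSION_BUF];
    struct session *next;
};

static struct session *sessions = NULL;

/* Each call handles one well-formed request, so every positive test
 * passes. But nothing caps the number of live sessions, so a flood of
 * individually valid requests eventually exhausts memory -- a DoS flaw
 * that only shows up under accumulated real-time load. */
int handle_request(const char *payload)
{
    struct session *s = malloc(sizeof *s);
    if (s == NULL)
        return -1;              /* out of memory: system already degraded */
    strncpy(s->buf, payload, SESSION_BUF - 1);
    s->buf[SESSION_BUF - 1] = '\0';
    s->next = sessions;
    sessions = s;               /* sessions are never freed in this sketch */
    return 0;
}
```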

 

Security testing is also hindered by the fact that testing tools are typically designed for developers, not testers. For example, code coverage tools used by developers are often too “heavy,” complicated, and slow for quality assurance (QA) testing. What’s more, these tools typically require instrumented code rather than production code. On the other hand, QA-oriented testing tools often treat code as a black box, making it difficult to determine exactly where the security vulnerabilities are.

 

These problems are exacerbated for projects that employ agile or iterative development, where code is updated frequently, often close to the launch date. Simply testing the entire code base on every revision can be impractical under these circumstances, yet testers must maintain comprehensive coverage to ensure that updates to one part of the code have not exposed new vulnerabilities in other functions.

 

Wind River Test Management was designed to address challenges like these. It provides four capabilities that are critical to catching security flaws:

  • Code coverage analysis of real-time production code
  • Automated performance regression testing
  • Binary code change analysis to minimize re-testing and flag suspicious code
  • The ability to simulate attacks through both pinpoint fault injection and “brute force” fuzz testing

 

Let’s start with code coverage. Unlike typical code coverage tools, Wind River Test Management does not require a special build. This is a critical distinction, because most coverage methods work by inserting special instructions into the code. This instrumentation can cause a large performance hit. In addition to slowing testing—potentially stretching run times from hours to days—instrumentation can cause time-sensitive tests to fail, mask problems, or introduce bugs.

 

In contrast, Wind River Test Management takes production binary code—no special builds required—and analyzes the structure to instrument the code on the fly. Once a section of code has been covered, the instrumentation for that code is removed. Thus, the performance impact of instrumentation is minimized—in principle, there is no performance hit to steady-state execution.
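Wind River has not published the internals of this mechanism, but the general technique of one-shot, self-removing instrumentation can be sketched in C as follows (all names here are hypothetical). The idea: patch the first byte of each basic block with a trap; on the first hit, mark the block covered and restore the original byte, so steady-state execution runs unmodified code.

```c
#include <stdint.h>
#include <stdbool.h>

#define BREAKPOINT_OPCODE 0xCC   /* x86 int3 trap instruction */

/* One entry per basic block found by structural analysis of the binary. */
struct coverage_point {
    uint8_t *addr;       /* first byte of the basic block */
    uint8_t  saved;      /* original byte, replaced by the trap */
    bool     covered;
};

/* Arm a coverage point: save the original byte and plant a trap.
 * (A real tool would also adjust memory protection and flush the
 * instruction cache; omitted here for brevity.) */
void arm(struct coverage_point *cp)
{
    cp->saved = *cp->addr;
    *cp->addr = BREAKPOINT_OPCODE;
    cp->covered = false;
}

/* Called from the trap handler the first time the block executes.
 * Restoring the original byte removes the instrumentation, so every
 * later execution of this block pays no overhead at all. */
void on_trap(struct coverage_point *cp)
{
    cp->covered = true;
    *cp->addr = cp->saved;
}
```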

 

This low-impact approach makes it practical to measure code coverage on production code. This is critical for negative testing, because non-covered code is inherently suspicious. Unexecuted functions might be “dead code” that contains security vulnerabilities. For example, developers can forget to remove test or debug APIs. Leaving this code in is a bad idea. Debug code is usually subject to less scrutiny—and thus more likely to have holes—when the development team assumes it will be removed.

 

It’s also fairly common for developers to leave debug code in intentionally, e.g., to aid in diagnostics after shipping. This is also a bad practice because hackers can take advantage of this back door. Finally, unexecuted functions could be malicious code inserted by a disgruntled coder, an infected development machine, etc.
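To make the risk concrete, here is a purely hypothetical illustration (not taken from any real code base) of a command dispatcher shipping with a forgotten debug entry point. Functional tests never exercise it, which is precisely why coverage analysis of the production binary would flag it as suspicious.

```c
#include <stdio.h>
#include <string.h>

/* Debug helper that was meant to be stripped before release. */
static int dump_all_memory(void)
{
    printf("dumping RAM to console...\n");   /* stands in for the real dump */
    return 0;
}

static int do_normal_command(const char *cmd)
{
    printf("handling: %s\n", cmd);
    return 0;
}

/* Command dispatcher with a forgotten debug back door. Any attacker
 * who discovers the magic command gets privileged access. */
int dispatch(const char *cmd)
{
    if (strcmp(cmd, "__dbg_dump") == 0)      /* hypothetical magic command */
        return dump_all_memory();
    return do_normal_command(cmd);
}
```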

 

Next, let’s consider performance profiling. Wind River Test Management’s real-time testing capabilities enable realistic performance profiling. Among other benefits, this feature enables testers to identify performance degradation across builds. Such anomalies can indicate that good code has been replaced with malware. To illustrate the significance of this feature, consider Stuxnet. This worm overwrote code that was supposed to monitor motor speeds, replacing it with code that oscillated the motors from 2 Hz to 1 kHz. The design of the worm made it difficult to detect in the field, but oscillating the motors requires significantly more processor cycles than simply monitoring them. Thus, performance regression testing can help testers identify malicious code.
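The underlying check is simple to express. Here is a minimal C sketch (hypothetical structures, not Wind River’s API) that compares each test’s measured cycle count against the previous build’s baseline and flags anything that degrades past a threshold:

```c
#include <stdint.h>
#include <stdio.h>

struct perf_record {
    const char *test_name;
    uint64_t    baseline_cycles;   /* from the previous build */
    uint64_t    current_cycles;    /* from this build */
};

/* Flag any test whose cost grew by more than threshold_pct percent.
 * A change like a monitoring loop suddenly driving motor oscillation
 * would appear here as a large, unexplained regression. */
void report_regressions(const struct perf_record *r, int n, int threshold_pct)
{
    for (int i = 0; i < n; i++) {
        uint64_t limit = r[i].baseline_cycles +
                         r[i].baseline_cycles * threshold_pct / 100;
        if (r[i].current_cycles > limit)
            printf("REGRESSION: %s: %llu -> %llu cycles\n",
                   r[i].test_name,
                   (unsigned long long)r[i].baseline_cycles,
                   (unsigned long long)r[i].current_cycles);
    }
}
```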

 

Code change analysis is another key feature. Wind River Test Management automatically inspects the binary code to identify which sections changed between builds. The system can use this “binary build differencing” and its structural code analysis to determine which test cases must be re-run, and to flag tests that may need to be enhanced to verify the changed code (see Figure 2). This information helps minimize testing on new builds, which is critical to achieving comprehensive coverage. It can also alert testers to unauthorized changes. For example, suppose functions A, B, and C all change, but only A and B are on the authorized change list. This throws up a red flag: why did C change? Has the code been compromised?

 


Figure 2. Automated change-driven testing.
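A rough C sketch shows how binary build differencing can drive both test selection and tamper detection (hypothetical code; a real tool works from structural analysis of the binaries rather than simple per-function hashes):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

struct func_info {
    const char *name;
    uint64_t    hash;        /* hash of the function's machine code */
    bool        authorized;  /* appears on the authorized change list? */
};

/* Compare per-function hashes between two builds. (This sketch assumes
 * both builds enumerate the same functions in the same order; a real
 * tool matches functions by symbol or structure.) Changed-but-authorized
 * functions drive test selection; changed-but-unauthorized functions
 * are red flags that the code may have been tampered with. */
void diff_builds(const struct func_info *old_b,
                 const struct func_info *new_b, int n)
{
    for (int i = 0; i < n; i++) {
        if (old_b[i].hash == new_b[i].hash)
            continue;                         /* unchanged: tests can be skipped */
        if (new_b[i].authorized)
            printf("re-run tests covering %s\n", new_b[i].name);
        else
            printf("RED FLAG: unauthorized change in %s\n", new_b[i].name);
    }
}
```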

 

Finally, the tool includes features that help simulate attacks, using both pinpoint and brute-force methods. The pinpoint method uses fault injection to mimic a hacker forcing the system into an edge condition. For example, fault injection can be used to simulate a DoS attack that fills memory, causing the system to crash and leave vulnerable files on disk. Rather than forcing the tester to actually fill memory, the tool lets testers simulate the condition by injecting fault signals into the system (Figure 3). These faults could include conditions like memory full, database full, or no network connection. Because the tool gives testers white-box visibility, they can watch how the system responds to make sure any failures are graceful and do not expose vulnerabilities.

 


Figure 3. Fault injection can simulate an attack.
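One common way to realize this kind of pinpoint injection, shown here as a generic C sketch rather than Wind River’s actual mechanism, is to route allocations through a wrapper that the test harness can tell to fail on demand:

```c
#include <stdlib.h>
#include <stdbool.h>

static bool simulate_memory_full = false;

/* The test harness flips this switch instead of actually exhausting RAM. */
void inject_memory_full(bool on) { simulate_memory_full = on; }

/* All application allocations go through this wrapper, so a single
 * injected fault makes every allocation behave as if memory were
 * exhausted -- letting testers watch whether the failure path is
 * graceful and leaves nothing exposed. */
void *app_malloc(size_t n)
{
    if (simulate_memory_full)
        return NULL;
    return malloc(n);
}
```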

 

Wind River Test Management’s Security Pack extends attack simulation with fuzz testing licensed from Codenomicon. This technology deliberately feeds the software malformed inputs. Unlike fault injection, which requires knowledge of fault conditions, fuzz testing doesn’t require knowledge of the code. Instead, it uses a brute-force approach in an attempt to locate unknown risks.
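A toy mutational fuzzer illustrates the brute-force idea (hypothetical code; Codenomicon’s technology is far more sophisticated). Here, parse_message is an assumed stand-in for whatever input-handling entry point is under test:

```c
#include <stdlib.h>
#include <string.h>

/* The code under test; any input-handling entry point works. */
int parse_message(const unsigned char *buf, size_t len);

/* Brute-force mutational fuzzing: start from a valid message and flip
 * random bits, feeding each malformed variant to the parser. No
 * knowledge of the code is required -- crashes and hangs reveal
 * previously unknown risks. */
void fuzz(const unsigned char *seed, size_t len, int iterations)
{
    unsigned char *buf = malloc(len);
    if (buf == NULL || len == 0)
        return;
    for (int i = 0; i < iterations; i++) {
        memcpy(buf, seed, len);
        buf[rand() % len] ^= 1u << (rand() % 8);   /* flip one random bit */
        (void)parse_message(buf, len);             /* watch for crashes */
    }
    free(buf);
}
```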

 

The bottom line for testers is that security is an increasing concern for embedded software, but test software is often poorly suited to detecting vulnerabilities. Wind River Test Management provides an alternative approach that both addresses security concerns and solves many common problems in QA testing.

 

 

Of course, even the best testing may not reveal every vulnerability. That's why it is wise to choose a hardware platform with built-in security features. One such platform is the 2nd generation Intel® Core™ processor family, which features the Intel® vPro™ suite of hardware-assisted security and management technologies. These technologies include:

 

  • Intel® Active Management Technology (Intel® AMT), which provides remote diagnosis and repair
  • Intel® Trusted Execution Technology (Intel® TXT), which supplies security protection over and above ordinary software solutions
  • Intel® Virtualization Technology (Intel® VT), which enables secure task separation, e.g., for systems that mix open-source and high-security software.

 

Together, these technologies provide a foundation for proactive threat deterrence, an approach that stops threats before they breach your system and isolates compromised systems so the damage is contained. See my recent Intel® vPro™ overview for details.

For more on securing connected devices, see intel.com/go/embedded-security.

 

Wind River is an Associate member of the Intel® Embedded Alliance.

 

Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine

Follow me on Twitter: @kentonwilliston

As multi-core processors become an integral part of the embedded landscape, designers of real-time systems are looking for the best way to take advantage of the improved performance and flexible architecture while minimizing software rework. Designers want to incorporate multi-core architecture to boost performance through parallel processing and to minimize hardware costs by combining platforms. Because of the long life cycle of many embedded applications, designers must also work across multiple generations of technology and are often faced with combining legacy platforms with the latest high-speed real-time applications. Although developers have access to multiple techniques to enable this performance gain, embedded virtualization offers the most flexibility by isolating and allocating system resources between operating environments so that real-time, general-purpose, and legacy software can be readily integrated. In embedded applications, virtualization software is widely used to combine a real-time, deterministic operating system with a high-level, interactive operating system such as Windows.

 

Virtualization is achieved by adding a virtual machine monitor (VMM) software layer that isolates individual partitions and manages the execution of guest operating systems. In order to speed up the interaction between virtual components, Intel has incorporated hardware-assisted virtualization in the latest processor architectures tailored for extended-lifecycle embedded applications. The 2nd generation Intel® Core™ processors support Intel® Virtualization Technology (Intel® VT) to improve the performance and security of software-based virtualization operations by offloading execution of certain VMM instructions to system hardware. In addition, specialized Intel® functions such as Extended Page Tables (EPT) and Page Attribute Table (PAT) provide a hardware assist to the partitioning and allocation of physical memory among virtual machines.
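Software can check for these hardware assists before relying on them. For example, VMX (the core of Intel VT-x) is reported in CPUID leaf 1, ECX bit 5; a minimal check in C using GCC’s <cpuid.h> looks like this:

```c
#include <stdio.h>
#include <cpuid.h>

/* CPUID leaf 1 reports VMX support (the core of Intel VT-x) in ECX bit 5. */
int has_vmx(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;                /* CPUID leaf 1 not available */
    return (ecx >> 5) & 1;
}

int main(void)
{
    printf("Intel VT-x (VMX): %s\n", has_vmx() ? "supported" : "not supported");
    return 0;
}
```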

 

In addition to memory and hardware device allocation, virtualization software allows the developer to combine multiple real-time processes without having to establish priorities between deterministic processes, and to assign multiple cores to compute-intensive applications as needed to maximize overall system performance. Taking advantage of the virtualization benefits of the 2nd generation Intel® Core™ architecture, TenAsys Corporation recently introduced a scalable real-time operating system called INtime Distributed RTOS, which allows programmers to write applications that run without modification on configurations ranging from single-core and multi-core processor systems to multi-platform systems with multi-core processors (see Figure 1). This scalability is made possible by the introduction of global objects and an underlying network called GOBSnet (Global Objects Network). This inter-process communication technology enables real-time embedded applications to leverage the performance potential of the 2nd generation Intel® Core™ processor family, allowing performance-critical application code to be executed on dedicated cores and essentially eliminating the need for separate, application-specific processors in a system.

Figure 1. The INtime Distributed RTOS scales from single-core systems to multi-platform configurations.

 

The TenAsys INtime Distributed RTOS also targets another trend in the evolution of embedded systems: distributed computing. Designers can build scalable applications with the flexibility to concentrate multi-core processing on one platform or to distribute applications across a network so that processing sits closer to the control point. The built-in GOBSnet features allow processes running on different instances of the INtime RTOS kernel to intercommunicate as if they were running on the same kernel. This means the same application can run on adjacent cores of a multi-core processor or on a core in another platform with no program changes. INtime Distributed RTOS applications are edited, compiled, and debugged using the same off-the-shelf Microsoft Visual Studio software used to build standard Windows applications.
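TenAsys’s actual API is not reproduced here; the following purely hypothetical C sketch (all names invented, with stubs standing in for the real runtime) only illustrates the location-transparent property described above: the sender names a global object, and the same call works whether the receiver runs on an adjacent core or on another platform.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical handle and calls -- illustrative only, not the real
 * GOBSnet/INtime API. */
typedef int global_mailbox_t;

static global_mailbox_t lookup_global_mailbox(const char *name)
{
    (void)name;
    return 1;   /* stub: a real lookup resolves the name network-wide */
}

static int send_message(global_mailbox_t mbx, const void *msg, size_t len)
{
    (void)mbx; (void)msg;
    printf("sent %zu bytes\n", len);
    return 0;   /* stub: a real send may cross cores or platforms */
}

/* The sender names the mailbox, not its location. Whether the receiver
 * runs on an adjacent core or on a core in another platform, this code
 * is unchanged -- which is the scalability point made above. */
int publish_sample(const double *sample)
{
    global_mailbox_t mbx = lookup_global_mailbox("motor_control");
    return send_message(mbx, sample, sizeof *sample);
}
```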

 

Hardware-assisted virtualization (Intel® VT) improves the flexibility and robustness of software-based virtualization technology and offers the tools needed to isolate and secure critical applications. You can find more information and technical articles on Intel architecture and virtualization at the Intel® Embedded Community page on Virtualization. If you are starting a new virtualization project with multiple operating systems and you have security questions, please share your concerns with fellow followers of the Intel® Embedded Community. You can also keep up with the latest technical details and product announcements at the Embedded Computing Design archives on Virtualization.

 

To view other community content on virtualization, see “Virtualization - Top Picks.”


Warren Webb
OpenSystems Media®, by special arrangement with Intel® Embedded Alliance

 

Microsoft is an Associate member of the Intel® Embedded Alliance. TenAsys is an Affiliate member of the Alliance.