
By AJ Shipley, Senior Security Architect, Wind River

 

Security cannot be bolted on; it must be built in. This statement proved especially true in the recent hack of the New York Times, where, during a four-month-long cyberattack by Chinese hackers, the company's antivirus system from Symantec missed 44 of the 45 pieces of malware the attackers installed on the network. Cases like this highlight the danger of relying on a single security product to keep you safe from hackers. According to a written statement by Symantec, "Advanced attacks like the ones the New York Times described underscore how important it is for companies, countries and consumers to make sure they are using the full capability of security solutions. Antivirus software alone is not enough."


Incidents like this show that products must be developed with multiple layers of security, both to keep the “bad guys” out of customer systems and to minimize the danger and exposure to device resources and data if they do get in.


Products like the Wind River Intelligent Device Platform (IDP), a software platform built for developing next-generation smart systems, are a perfect example of what I mean by building security into the product. IDP provides multiple layers of security that are all highly configurable and customizable. Security policies are developed to match the specific deployment scenario with the correct level of access control and integrity monitoring, providing a true architectural, defense-in-depth approach to securing embedded devices and machine-to-machine applications.


Specifically, IDP includes Secure Remote Management (SRM), which ensures the integrity of the system with boot-time security using Trusted Platform Modules (TPM) and Trusted Software Stacks (TSS). Advanced security features like address space layout randomization (ASLR) and non-executable memory pages make it exceedingly difficult to penetrate IDP. Run-time security and integrity are monitored with the built-in integrity measurement architecture, which immediately detects when system files have been tampered with. SRM provides a robust access control infrastructure for limiting access to system resources based on the privilege levels of specific users or groups of users, ensuring that if a bad guy gets in, he cannot access any of the critical resources or information. Finally, secure package management, deployed remotely over secure communication channels, ensures that software updates are intact and trusted prior to installation.
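
To make the integrity-monitoring idea concrete, here is a minimal sketch of the underlying technique in Python: hash files against a trusted baseline and flag any mismatch. This illustrates the concept only, not IDP's actual implementation, and the file names are hypothetical.

    import hashlib
    import json
    import sys
    from pathlib import Path

    def sha256(path):
        # Hash the file in chunks so large files do not exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_baseline(paths, out="baseline.json"):
        # Record trusted hashes while the system is in a known-good state.
        Path(out).write_text(json.dumps({p: sha256(p) for p in paths}, indent=2))

    def check(baseline="baseline.json"):
        # Any mismatch means a file changed since the baseline was taken.
        trusted = json.loads(Path(baseline).read_text())
        tampered = [p for p, digest in trusted.items() if sha256(p) != digest]
        if tampered:
            sys.exit("TAMPERED: " + ", ".join(tampered))
        print("all files match baseline")

A real integrity measurement architecture does this at the operating system level and can anchor its measurements in hardware such as a TPM, so the measurement itself cannot be tampered with.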


Anti-malware products, like the one deployed at the New York Times, are an important component of an overall security strategy, but are nowhere near sufficient to protect systems, resources, and information from the advanced persistent threats that are on the rise.  We can no longer assume that we can successfully keep the bad guys out of our systems.  A good security strategy must first understand the multiple attack vectors and then deploy solutions that provide multiple layers of defense to deter, detect, and defend our critical resources.


For additional information from Wind River, visit us on Facebook.

By Steve Konish, Director, Product Management, Wind River

 

In just three short months, the engineering team at Wind River has added new, amazing capabilities to the Wind River Intelligent Network Platform. These enhancements to the platform allow applications to go even deeper and faster than ever before.  

 

When we first launched the platform, it included two data plane engines for packet acceleration and pattern matching.  The two engines together created an incredible software-enabled solution to provide high-performance deep packet inspection (DPI) for network applications. We’ve now added a third data plane engine, the Flow Analysis Engine (FAE), which allows applications to incorporate even greater network intelligence.  The Flow Analysis Engine offers complete visibility into network traffic in real time, including flow classification, protocol and application identification, and metadata extraction.
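
As a rough illustration of what flow classification means, here is a Python sketch that groups packets into flows by their 5-tuple and tallies simple per-flow metadata. This only illustrates the concept; it is not the Flow Analysis Engine's API, and the packet fields are hypothetical.

    from collections import defaultdict

    def flow_key(pkt):
        # A flow is conventionally identified by its 5-tuple.
        return (pkt["src_ip"], pkt["dst_ip"],
                pkt["src_port"], pkt["dst_port"], pkt["proto"])

    def classify(packets):
        flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
        for pkt in packets:
            stats = flows[flow_key(pkt)]
            stats["packets"] += 1
            stats["bytes"] += len(pkt["payload"])
        return flows

    # Example: two packets from the same TCP connection fall into one flow.
    pkts = [
        {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 12345,
         "dst_port": 80, "proto": "tcp", "payload": b"GET / HTTP/1.1"},
        {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 12345,
         "dst_port": 80, "proto": "tcp", "payload": b"Host: example.com"},
    ]
    print(dict(classify(pkts)))

A production engine goes much further, identifying the protocol and application behind each flow and extracting richer metadata in real time.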

 


 

With the addition of the Flow Analysis Engine, Wind River Intelligent Network Platform delivers the most comprehensive and integrated DPI solution on the market today.  Imagine being able to use one platform to consolidate management and data plane applications and also have the ability to accelerate, analyze, and secure network applications that help you deliver even greater value to your customers.

 

But wait…it gets better. We’ve added some very cool innovations to the platform that allow for the transparent acceleration of Linux-based applications. This means any existing application can leverage the fast path within the Application Acceleration Engine without having to modify a single line of code. That’s right, unmodified applications can run up to 300% faster. This capability is so extraordinary we had to patent it. Or, if you make the extra effort to also port and optimize your application for the platform, you can achieve even greater performance…up to 500% faster.

 

For additional information from Wind River, visit us on Facebook.

Debugging Simics -- on Simics

Posted by jennysuh Jan 16, 2013

By Jakob Engblom, Senior Technical Manager, Tools & Lifecycle Solutions at Wind River

 


I often write and talk about how useful Simics is for debugging concurrency bugs and glitches in multithreaded and multicore systems. Recently, we had a case where we proved this on a very complex application: Simics itself. This nicely demonstrated both the recursive completeness of Simics and its usefulness for conquering tricky bugs in complex software.


The beginning of this story is a bug in Simics, triggered by a certain Simics configuration. The Simics target is a Power Architecture machine running bare-metal test code that exercises the processor simulation. Occasionally, this setup would crash Simics, due to some bug in Simics or the models. It was a difficult bug to track down, as it happened in only about one run out of 50. When a debugger was attached to try to diagnose it, it invariably did not happen (a classic Heisenbug).


Simics is the perfect tool to diagnose these kinds of issues, but in order to do that, we had to get the failing program into Simics, i.e., run Simics on Simics. The first step was to create a duplicate of the development host inside of Simics. This was fairly simple, just a matter of installing a standard Fedora 16 Linux on an 8-core Intel target. Once Linux was installed and booted, a checkpoint of the system was taken.

Next, the development code tree from the host was packaged up as a tar file and put on a DVD image file. Simics was started from the checkpoint of the booted target system, and the DVD image was inserted into the virtual DVD drive and mounted by the Fedora Linux running on Simics. The tar file was copied to the file system on the target and unpacked. With the Simics installation thus completed, a new checkpoint was taken, and Simics could run on Simics. The result at this point was a completely self-contained, controllable, and repeatable environment.

The screenshot below shows Simics running on Simics, with the same desktop wallpaper being used for both the host and outer Simics Fedora system:

[Screenshot: Simics running on Simics]

The next step was to replicate the bug inside of Simics. To this end, a shell command was used that repeatedly ran the inner Simics until the bug hit (obviously, this session was started from the checkpoint after the Simics installation).
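
The post does not show the exact command, but a minimal sketch of such a retry loop might look like the following. The script path, the target script name, and the assumption that a crashing inner Simics exits with a non-zero status are all hypothetical.

    import subprocess

    # Keep launching the inner Simics until one run crashes.
    run = 0
    while True:
        run += 1
        result = subprocess.run(["./simics", "targets/ppc-bare-metal-test.simics"])
        print(f"run {run}: exit status {result.returncode}")
        if result.returncode != 0:  # assumed to indicate a crash
            break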

The result was this setup, ready to run Simics until the bug hit:


To recap, we have Simics running on Simics. The “inner Simics” is configured with the Power Architecture setup that resulted in a crash on the host, and the “outer Simics” is running Fedora 16, providing a virtual replica of the development host (but inside of Simics).

Additional scripting in the outer Simics was used to make the search for and replication of the bug more efficient.


  • The Simics script varied the time slices given to the processors in the IA (Intel Architecture) target system. This caused greater variation in the scheduling of concurrent processes and threads in the Simics-simulated Fedora 16 OS, which in turn helped provoke the bug so that it appeared faster (after fewer runs of the inner Simics).
  • A checkpoint was taken after the inner Simics had been started and the timing variation applied to the IA processors – but before it had started executing the test case. This meant that a checkpoint would be available that led straight to the bug, with no need to do any warm-up of the target or particular configuration of Simics. The checkpoint would in effect be a self-contained bug report for the issue.
  • A magic instruction was planted in the segfault handler of the inner Simics, making it very simple to catch the crash of the inner Simics. Often, using a magic instruction like this is simpler than trying to capture the right page fault or putting a breakpoint at the right place. A magic instruction is a live marker in the code that will always trigger, regardless of debug information or OS awareness. Furthermore, it has no overhead until it hits (see the sketch after this list).
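
For readers unfamiliar with magic instructions: they are ordinary target instructions with no effect on the target itself, but they raise a "hap" (event) in the simulator when executed. A rough sketch of catching one from a Simics Python script follows; the hap name and functions are from the Simics scripting API, but the callback body is illustrative and is not the exact script used in this debug session.

    # Run inside Simics: stop the simulation when any magic instruction hits.
    import simics

    def on_magic(user_data, cpu, magic_number):
        print(f"magic instruction {magic_number} hit on {cpu.name}")
        simics.SIM_break_simulation("inner Simics reached its segfault handler")

    simics.SIM_hap_add_callback("Core_Magic_Instruction", on_magic, None)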

Eventually, after some 20 runs of the inner Simics, the bug was triggered. Thanks to the checkpoint and Simics repeatability, reproducing the bug was now trivial. The Simics crash could be reproduced any number of times, and it was time to go debug and figure out why Simics crashed. An occasional Heisenbug had been converted into a 100% reproducible Bohrbug.

 

The first step of debugging was to figure out the mapping of the many dynamically loaded modules in the inner Simics. This was done by running the outer Simics and sending a Ctrl-Z to the Fedora shell, pausing the inner Simics. Then, the /proc file system on the Fedora Linux running on Simics was interrogated to find the load addresses. Since the checkpoint was taken after Simics was started, we know that this is the mapping in the software setup found in the checkpoint. Every time the checkpoint is opened, the same mapping applies – so the information was saved and used to set up symbolic debug information for the Simics modules used.
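
As an illustration of that step, the load address of each shared object can be read out of /proc on any Linux system. The PID below is hypothetical.

    # Print the base load address of each shared object mapped by a process.
    pid = 1234  # hypothetical PID of the inner Simics
    with open(f"/proc/{pid}/maps") as f:
        seen = set()
        for line in f:
            fields = line.split()
            # The first mapping listed for each .so is its base address.
            if len(fields) >= 6 and ".so" in fields[-1] and fields[-1] not in seen:
                seen.add(fields[-1])
                print(f"0x{fields[0].split('-')[0]}  {fields[-1]}")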

 

The next step of debugging was to open the checkpoint again, turn on reverse execution, and run forward until the magic instruction hit. Then, OS awareness was used to back up until the last time that the inner Simics was running prior to hitting the segfault handler. This placed the execution of the outer Simics at the precise instruction where the inner Simics crashed.

 

It turned out that Simics was trying to execute code in a location (BCDE) where no code was to be found.

 

Stepping back one instruction led to a JMP instruction to the location BCDE.

 

So where did this JMP BCDE come from? It was clearly not part of the static code of Simics, but something that was generated at run time by Simics itself (Simics contains a JIT compiler, so modifying running code at run time is perfectly expected behavior).

To find out how the bad JMP was created, a memory write breakpoint was put on the instruction (JMP BCDE), and execution reversed. Simics stopped at the point where the “JMP” part of the instruction was written to memory. Doing a stack back trace at this point showed the code that was trying to write a five-byte “JMP XYZQ” instruction into the JIT-generated code stream. The breakpoint had hit on the write of the byte containing the JMP opcode, indicating that the other four bytes (containing the actual JMP target location of XYZQ) had yet to be written when the instruction was executed and Simics crashed.

 

Stepping forward (on the processor) revealed that a thread switch happened in the inner Simics, and that the incoming thread immediately executed the five-byte JMP instruction, such as it was. Since only the JMP byte had been written, this was a jump to location BCDE, rather than the intended XYZQ (it would also have been OK to execute the original ABCDE code). Thus, the issue was diagnosed as a read-write race condition, with the twist that the read was an execution of the memory as code, while the write was a regular data write. As soon as the problem was identified, it was of course very easy to fix.

 

With the same setup, another race condition in Simics was also found and fixed, involving the more common case of multiple concurrent threads updating and reading a shared data structure without sufficient synchronization.

 

In summary, this blog post has described one instance where Simics was used to find and fix concurrency bugs in a real-world complex software system called Simics. The key to the success was the repeatability that Simics provides, even for timing-related occasional events, along with checkpoints, scripting, reverse execution, and debug facilities.

 

For additional information from Wind River, visit us on Facebook.

By Davide Ricci, Senior Product Manager, Open Source Platforms, Wind River

 

As I work heavily with embedded Linux on a daily basis, I’m often asked about the Yocto Project. While the Yocto Project is gaining traction and folks in the industry are generally aware of it, there is still a bit of unfamiliarity with the details. What exactly is the Yocto Project? Is it a Linux distribution? How would I use it in my embedded Linux projects?

 

Let’s dig in. The Yocto Project is an open source collaboration project that provides sorely needed standardized, high-quality infrastructure, tools, and methodologies to help decrease the complexity and increase the portability of embedded Linux implementations. It is not itself a Linux distribution of the kind that Wind River and other Linux distributors provide.

 

A lot of vendors across the hardware and software ecosystem participate in the Yocto Project, and together this community is helping embedded Linux developers realize greater cross-platform compatibility and component interoperability.   How the heck does the Yocto Project accomplish that?

 

Consider this: in the embedded world, building a scalable Linux relies on having the right software and a set of associated tools. Take the usual open source inputs (a kernel, userspace packages, patches) and your own inputs (the size of the filesystem, the kernel configuration): run through different cross-toolchains and build environments, the same inputs will produce different outputs.

 

It's not unlike cooking. With the same ingredients (flour, water, salt, sugar) and set of tools (mixer, spoons, rolling pin), you can come out with different results just by slightly changing your recipe (different temperature, proportions, etc.), and in the end you might pull off pizza or bread! Now, think about trying to combine recipes or perform some fine-tuning without a common set of metrics. One recipe may measure in tablespoons, another in teaspoons, or in liters or grams. This challenge is what you often saw in the embedded Linux world before standardized environments like the Yocto Project: no common repository for ingredients (kernel, userspace), no common set of recipes for cross-architecture builds, no common tools (toolchain, makefiles, compilers, debuggers), and no common build system to bake all the ingredients together.

 

Wind River is an embedded Linux distributor that built and maintained its own build infrastructure, so I am excited to say that by adopting the Yocto Project as our standard build environment, we’ve removed a major part of the effort and cost of packaging a user-friendly, standards-based embedded Linux distribution. Now we can focus on developing cool new features and cutting-edge capabilities. In other words, our engineering team is having a lot more fun these days!

 

For customers, the value proposition is no vendor lock-in if they adopt the freely available Yocto Project infrastructure or engage with vendors like Wind River who embrace the Yocto Project.   We pride ourselves on delivering the leading embedded Linux solution, while helping our customers manage risk. It’s just good business sense to keep your technology options open.  

 

The benefits of better interoperability, portability, and reuse in our Linux implementation mean we can build better stuff cheaper and faster. How great is that? For our customers, that translates to Wind River’s ability to deliver far greater value at a lower cost. You can see that everyone wins with the Yocto Project!

 

Wind River is dedicated to advancing the state of embedded Linux, and will continue to support open standard efforts like the Yocto Project to deliver commercial-grade, optimized, open standard offerings that allow our customers to get the greatest value out of their open-source platforms.

 

I’d be interested to hear your feedback. Please feel free to post comments on what you’re hearing in relation to Yocto.

 

To stay posted, sign up here.

 

For additional information from Wind River, visit us on Facebook.

By Brian Vezza, Director of M2M Solutions, Wind River

Within the next few years, Machine-to-Machine (M2M) devices will account for more than half of all connected devices shipped annually. Think about that for a minute. There will be more M2M devices shipped each year than PCs, cell phones, tablets, set-top boxes, and gaming platforms put together. There are already hundreds of millions of M2M devices (if not over 1 billion) at work in sensors, smart meters, industrial control systems, SCADA devices, video surveillance, smart buildings … the list goes on. But ask 100 business executives to tell you what M2M is and you’ll get 100 very different answers. Ask those same executives what M2M means for their business, and those who follow the space will say that it will make their businesses smarter.

 

I can’t think of a technology that is so potentially transformational for business, and yet so poorly understood as M2M. It’s time to bring some clarity and focus to M2M so that businesses can transform the “promise” of M2M and smart solutions into real business results.

 

So I’ll start with a quick review of what M2M is and where it fits in the big picture of today’s technological landscape.

 

Making Sense of Data to Make Systems Smarter

To understand M2M you first have to grasp a key concept: The Internet is no longer a network of computers. It has evolved into a network of devices of all types and sizes, all connected, all communicating and sharing information all the time: cars, refrigerators, smart phones, toys, cameras, medical instruments, industrial systems, even trees embedded with sensors.

 

In this “Internet of Things,” devices, systems, and machines don’t only communicate with humans; they also communicate with each other. And they can generate an enormous volume of data about all kinds of things: temperature, GPS position, speed, humidity, vibration, altitude, and so on.

 

This data can be used in all kinds of smart ways when it is collected, analyzed, and applied to products, systems, and services. In the medical arena, for example, M2M devices can provide heartbeat-to-heartbeat monitoring and give doctors the data they need to diagnose and treat people with chronic heart conditions. On the highways, M2M systems can provide dynamic traffic control to help prevent accidents and optimize traffic flow so that more people can reach their destinations sooner. On the factory floor, automation systems can use M2M data to more quickly and flexibly respond to changing tastes and preferences of consumers.

 

Smarter Business Models via M2M Devices

In short, organizations of all types can take advantage of the intelligence generated by M2M to build smarter business models, create new sources of revenue and competitive advantage, and often reduce costs by improving productivity. And once more companies understand the transformative potential of M2M, the misunderstandings and misconceptions about M2M begin to disappear.

It’s easy to see why M2M has been hard to embrace. Intuitively we know about “human” devices because we see them, touch them, or use them. When the Xbox, Kindle, iPhone, or tablet computing device hits the streets (or even before), we almost always have some idea of what the device is and how we might use it. With M2M, our experience is much more limited. Unless you have worked with telematics, industrial controls, the smart grid, connected health monitoring devices, smart buildings, embedded computing, or related areas, it has been difficult to learn about M2M beyond the hype and the buzzwords.

 

However, broader acceptance and adoption of M2M is clearly underway. The growth of connected devices is skyrocketing, according to multiple analyst reports. Forecasts of 20 billion, 50 billion, or even 1 trillion connected devices by 2020 have been made by Ericsson, Cisco, IBM, McKinsey, and other leading companies. If the 50 billion connected device forecasts are accurate, then M2M could be over 50% of all connected devices shipped as soon as 2015. And according to McKinsey Quarterly June 2010, “More than two-thirds of new products feature some form of smart technology.” McKinsey estimates that at least $3 trillion in current spending could be impacted by M2M.

 

The challenge for business leaders is that creating and/or implementing M2M with a do-it-yourself approach has been filled with potential pitfalls. Take something as seemingly straightforward as wireless connectivity. Should you use cellular, 802.11, 802.15.4, WiMAX, or satellite? If cellular, should you use 2G, 3G, or 4G? 2G may have wider coverage globally and lower costs, but will it be around for the life of your device or service? 4G has much higher bandwidth, but costs more. What coverage and technologies are supported by wireless service providers in your areas? You get the idea. With limited M2M rollouts this probably isn’t a huge issue, but at scale making poor choices now can cause huge issues operationally and financially. Making smarter choices can dramatically increase your value from M2M-based smart solutions.

 

Wind River is working to make it much easier for both device manufacturers and companies to adopt M2M. Last week at Intel’s Software Media Day and more recently at the Intel Developer Forum (IDF), I spoke about M2M and some of its challenges, and described a few of the areas Wind River is focusing on to accelerate the adoption of M2M along with our partners, such as the Wind River Embedded Development Kit for M2M Smart Services, which was designed in collaboration with Intel, Kontron, and Eurotech. This is a broad space with almost every vertical market making use of M2M at some level. One thing that is common to just about every M2M implementation is that building a solid foundation today will put you in better shape for years to come.

 

In future blog postings I’ll explore some of these areas and how the many billions of smart connected devices, and specifically M2M, will need a common “kit” of capabilities.  Once this foundation is in place, it will be much easier to focus on extracting value created by M2M devices, turning that into knowledge, and then into action.

 

I’m looking forward to further exploring M2M and helping us all realize the immense potential of M2M and smart solutions.

 

 

 

 

For additional information about Wind River, visit:

http://www.windriver.com/

http://www.facebook.com/WindRiverSystems

 

Wind River is an Associate member of the Intel® Embedded Alliance

How to Get Virtual

Posted by jennysuh Aug 9, 2011

By Jakob Engblom, Technical Marketing Manager, Simics, Wind River

 

My blog posts about Simics tend to focus on how virtual platforms are used and what you can do with them. However, underlying all Simics use is the issue of getting a virtual platform in place. You need some kind of platform to run your software on in order to get the full benefit of Simics. Thus, creating virtual platforms is an important part of practical Simics use for many of our users.

 

We have been working with Simics for more than a decade, and we have learnt a lot about modeling along the way. The approach we have ended up with is a top-down methodology, where we start by investigating the target system as a whole from the perspective of the intended use cases, and then explore the details of each subsystem in order to determine how best to model it.  We then switch to bottom-up, applying an iterative modeling flow that delivers something useful as quickly as possible. During development, the goal is to always keep a useful (albeit probably incomplete) platform available at all times.

 

It is easy to get distracted by the details when creating a virtual platform.  If you start with a list of all the components of a system without considering the system context, you are likely to assume you need to model everything, as well as every function of everything. This is usually an overwhelming task, resulting in a protracted development project without clear focus or any chance of delivering something early. So, we need a better way.

 

The method we typically use with Simics is to start by investigating how the system is going to be used, and which hardware units are involved in these use cases. In this way, we can often exclude parts of the target system from consideration. Certain subsystems or subsystem functions might not be needed at all, or needed only at a later stage of the project.  Even when modeling an SoC with the eventual goal of a complete model, it is possible to create a priority list and staging sequence that quickly gets something into the hands of users that applies to their initial use cases.

 

Once this analysis is complete, the existing library of Simics models might well cover a large part of what is needed. There could also be "near misses", where a small adjustment to an existing model is all that is needed. In this way, the ever-growing library of Simics models provides significant short-cuts towards a final system model.

 

Next, it is time to start building the new models needed to complete the target system. This process is run in an iterative, agile way. We strive to define a minimal subset of the target system that we start with, so that software can be run on the target as soon as possible. The outer loop of the modeling effort is the extension of the overall system model and the testing of software on it.  The inner loop is the creation and changing of individual device models, using both model-level unit tests and software tests to drive the development.
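
As a toy illustration of that inner loop, here is a register-level device model and a model-level unit test in Python. This is a sketch of the idea only; real Simics models are written against the Simics device APIs, and this timer device is entirely hypothetical.

    class TimerDevice:
        # Toy model of a memory-mapped timer with two 32-bit registers.
        COUNT = 0x0  # current counter value
        CTRL = 0x4   # bit 0 enables counting

        def __init__(self):
            self.regs = {self.COUNT: 0, self.CTRL: 0}

        def read(self, offset):
            return self.regs[offset]

        def write(self, offset, value):
            self.regs[offset] = value & 0xFFFFFFFF

        def tick(self):
            # Called once per simulated clock cycle.
            if self.regs[self.CTRL] & 1:
                self.regs[self.COUNT] += 1

    # Model-level unit test: the counter advances only when enabled.
    timer = TimerDevice()
    timer.tick()
    assert timer.read(TimerDevice.COUNT) == 0
    timer.write(TimerDevice.CTRL, 1)
    timer.tick()
    assert timer.read(TimerDevice.COUNT) == 1

Tests like this form the inner loop; running real target software against the assembled platform forms the outer loop.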

 

The net result is a flexible project where the virtual platform can be updated in reaction to changes in the target system design. Often, modeling projects begin before the details of the eventual target hardware are completely known. As shown in the picture below, the result is a series of incremental model deliveries that represent the current design state of the hardware.

 

[Diagram: incremental model deliveries tracking the evolving hardware design]

This modeling method applies regardless of whether the modeling is being performed pre-hardware (for a new chip that might not even be announced yet, or a new board built on existing components), post-hardware (for a board or system that exists right now), or post-obsolescence (for an old system that still needs to be maintained).

 

Modeling can be done by Wind River, Simics users themselves, or third-party services firms. All Simics users are equal in what they can model and their access to the Simics modeling tools and APIs.

 

To learn more about system modeling with Simics, please read our new white paper on creating virtual platforms with Simics.

 

Happy modeling! And remember - resistance is futile, you will be simulated!

 

 

 

For additional information about Wind River, visit:

http://www.windriver.com/

http://www.facebook.com/WindRiverSystems

 

Wind River is an Associate member of the Intel® Embedded Alliance

By Marc Brown, Vice President of Tools and Marketing Operations, Products Group at Wind River

 

Security is a heavily overloaded term, meaning different things to different people. In the embedded systems world, news on security is often about Stuxnet or similar attacks, where an enemy can control devices remotely and shut them down, make them behave abnormally, or worse, cause equipment failure and destruction. In the enterprise world there is news of information breaches and sensitive data exposure, which often seems to be unrelated to embedded systems. However, for embedded security it’s important to consider both the safe and correct behavior of the system and the data it may store or transmit. The two key areas of security concern and priority are as follows.

Information Assurance

First is information assurance, which is all about protecting and preserving data. The top priority for information assurance is the data the device stores or transmits; typically this data is confidential or, in government and military applications, highly sensitive, possibly secret or top secret. The embedded device must be designed in such a way that it is very, very difficult for an attacker to gain access to the stored or transmitted information on the device. This protection must cover not only external, network-based attacks but also attackers with physical access to the device. Correct and safe operation of the device is not necessarily the top priority for information assurance.

Cyber Security

The second key area is cyber security, where the aim is to protect the device from intrusion from the outside world in order to prevent failure and incorrect or unsafe behavior. The chief aim of cyber security is to prevent unwanted manipulation of the device's behavior. Leaking or exposing data is not a primary concern of cyber security.

The key areas of embedded device security are shown in the following diagram:

[Diagram: the key areas of embedded device security]

 

Managing device security risk is about prioritization and balance between these two key aspects of information assurance and cyber security. It’s not an either-or scenario: no manufacturer wants their device to protect data but be easily disrupted by attacks. Alternatively, many classes of devices do not handle sensitive data, but their correct and safe operation is paramount. However, as machine-to-machine communication grows into important parts of our infrastructure (e.g., the Smart Grid for our electrical system), automation devices may be safety-critical and handle confidential information at the same time. Understanding these two key areas of security and managing the risks in both is critical for secure embedded device design.

The Next Steps

Understanding the security priorities for your embedded device is important, but it is only part of a secure device development approach, which needs to start before your device is designed and built.

These include:

  • Threat assessment – understanding the security threats to your device, the attack vectors and the desired response from the device under such attacks.
  • Secure design – designing security into your device as a key requirement. Employing secure design and development techniques.
  • Proper runtime selection – selecting and building upon a secure runtime platform. Embedded operating systems need to support advanced security techniques, communication and data security and robust and reliable operation.
  • Securing the applications – applications must be designed and architected for security. Importantly, the device should restrict application execution through whitelisting and greylisting techniques (see the sketch after this list).
  • Product lifecycle support – security needs to be designed into a product with platform, process and tools support throughout its lifespan.
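
A minimal sketch of the whitelisting idea in Python follows. The hash value is a placeholder, and a real device enforces this in the OS loader rather than in a wrapper script, but the principle is the same: refuse to run anything whose cryptographic hash is not on the approved list.

    import hashlib
    import subprocess
    import sys

    # SHA-256 digests of the binaries this device may run (placeholder value).
    WHITELIST = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def run_if_whitelisted(path, *args):
        # Hash the binary and refuse execution on any mismatch.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest not in WHITELIST:
            sys.exit(f"refusing to execute {path}: not on the whitelist")
        subprocess.run([path, *args])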

 

 

 

For additional information about Wind River, visit:

http://www.windriver.com/

http://www.facebook.com/WindRiverSystems

 

Wind River is an Associate member of the Intel® Embedded Alliance

Testing for Security

Posted by jennysuh Aug 2, 2011

By Ido Sarig, Vice President of Product Marketing, Wind River

 

Last summer brought a watershed event for security-consciousness in the embedded systems world: Stuxnet, a highly sophisticated worm, exploited no fewer than four zero-day vulnerabilities in Windows in order to attack a specific Siemens PLC and its associated SCADA system. The target was reportedly the Iranian nuclear facility at Natanz, where uranium-enrichment centrifuges were taken out of commission by the worm’s malicious payload. It was perhaps not the first, but certainly the most well-publicized, successful attack on critical infrastructure systems.

 

The software security industry has been discussing such an attack for years, mostly as a theoretical possibility, but now it appears this is no longer the stuff that Hollywood scripts are made of; it is all too real. Understandably, this raised concerns and awareness among other possible targets, from electrical grid operators to water management facilities. But surprisingly, there was little corresponding increase in spending on security at these organizations. A recent study by McAfee showed only a very modest 1-3% increase in the security budgets of critical infrastructure operators following Stuxnet.

 

I think one of the reasons for this disturbingly low increase is a lack of awareness of available solutions. There are tools available today, specifically designed for the embedded software world, that help address some of these threats. Wind River Test Management, for example, can flag “suspicious” areas of untested code as potential sources of malicious code infections. It also enables testers to work like a hacker, by using sensorpoint technology to simulate cyber-attacks as part of the testing process.

 

One common technique used by hackers to expose vulnerabilities and attack systems is to create adverse conditions, such as no memory or no disk space. Attackers correctly rely on the fact that, since these conditions are hard to set up in a test lab, the code designed to handle the resulting errors has not been properly tested -- if it even exists in the first place! But with sensorpoint technology, it is very easy to inject these kinds of faults directly into binary production code and thoroughly test every such path through the code.
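
Sensorpoints inject faults into binary production code; as a rough analogy at the source level (this is plain Python, not the sensorpoint API), a test can simulate a full disk by forcing writes to fail and then verify that the error-handling path behaves:

    import builtins
    import errno
    import unittest
    from unittest import mock

    def save_log(path, data):
        # Code under test: must survive a full disk gracefully.
        try:
            with open(path, "w") as f:
                f.write(data)
            return True
        except OSError:
            return False  # the rarely exercised error-handling path

    class DiskFullTest(unittest.TestCase):
        def test_disk_full_is_handled(self):
            # Inject the fault: every open() raises ENOSPC ("no space left on device").
            enospc = OSError(errno.ENOSPC, "No space left on device")
            with mock.patch.object(builtins, "open", side_effect=enospc):
                self.assertFalse(save_log("/tmp/log.txt", "hello"))

    if __name__ == "__main__":
        unittest.main()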

 

We recently ran a series of seminars dedicated to safety & security where I gave a presentation on these threats and the way we address them at Wind River. At the break between sessions, one of the attendees shared his insight about these vulnerabilities  – “When we built those systems, ten or twenty years ago, we didn’t plan for security – heck, people wanted easy accessibility, remote flashing of ROM – nobody wants to make a special trip into the Alaskan wilderness just to update SCADA software – now they realize it helps the bad guys get access to those PLCs, too.”

 

 

For additional information about Wind River, visit:

http://www.windriver.com/

http://www.facebook.com/WindRiverSystems

 

Wind River is an Associate member of the Intel® Embedded Alliance