
How to Get Virtual

Posted by jennysuh Aug 9, 2011

By Jakob Engblom, Technical Marketing Manager, Simics, Wind River


My blog posts about Simics tend to focus on how virtual platforms are used and what you can do with them. However, underlying all Simics use is the issue of getting a virtual platform in place. You need some kind of platform to run your software on in order to get the full benefit of Simics. Thus, creating virtual platforms is an important part of practical Simics use for many of our users.


We have been working with Simics for more than a decade, and we have learnt a lot about modeling along the way. The approach we have ended up with is a top-down methodology, where we start by investigating the target system as a whole from the perspective of the intended use cases, and then explore the details of each subsystem in order to determine how best to model it. We then switch to bottom-up, applying an iterative modeling flow that delivers something useful as quickly as possible. During development, the goal is to keep a useful (albeit probably incomplete) platform available at all times.


It is easy to get distracted by the details when creating a virtual platform.  If you start with a list of all the components of a system without considering the system context, you are likely to assume you need to model everything, as well as every function of everything. This is usually an overwhelming task, resulting in a protracted development project without clear focus or any chance of delivering something early. So, we need a better way.


The method we typically use with Simics is to start by investigating how the system is going to be used, and which hardware units are involved in those use cases. In this way, we can often exclude parts of the target system from consideration. Certain subsystems or subsystem functions might not be needed at all, or needed only at a later stage of the project. Even when modeling an SoC with the eventual goal of a complete model, it is possible to create a priority list and staging sequence that quickly puts something covering users' initial use cases into their hands.


Once this analysis is complete, the existing library of Simics models might well cover a large part of what is needed. There could also be "near misses", where a small adjustment to an existing model is all that is needed. In this way, the ever-growing library of Simics models provides significant short-cuts towards a final system model.


Next, it is time to start building the new models needed to complete the target system. This process is run in an iterative, agile way. We strive to define a minimal subset of the target system that we start with, so that software can be run on the target as soon as possible. The outer loop of the modeling effort is the extension of the overall system model and the testing of software on it.  The inner loop is the creation and changing of individual device models, using both model-level unit tests and software tests to drive the development.
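To make the inner loop concrete, here is a sketch of what a model-level unit test might look like: a toy memory-mapped UART and a test that drives its register interface the same way target software eventually will. The class, register layout, and names are invented for illustration only; they are not the real Simics modeling API (real Simics models are typically written in DML or Python against the Simics API).

```python
# Hypothetical sketch of a minimal memory-mapped device model plus a
# model-level unit test. All names and the register layout are invented
# for illustration; this is not the actual Simics device API.

class UartModel:
    """Toy UART with a status register and a transmit register."""
    STATUS = 0x00   # bit 0: transmit-ready
    TXDATA = 0x04

    def __init__(self):
        self.tx_log = []          # characters "sent" by the model

    def read(self, offset):
        if offset == self.STATUS:
            return 0x1            # always ready in this simple model
        raise ValueError(f"unmapped read at offset {offset:#x}")

    def write(self, offset, value):
        if offset == self.TXDATA:
            self.tx_log.append(chr(value & 0xFF))
        else:
            raise ValueError(f"unmapped write at offset {offset:#x}")

def test_uart_transmit():
    # Inner-loop unit test: poll the status register, then transmit,
    # exactly as a target driver would.
    uart = UartModel()
    for ch in "ok":
        assert uart.read(UartModel.STATUS) & 0x1   # wait for ready
        uart.write(UartModel.TXDATA, ord(ch))
    assert "".join(uart.tx_log) == "ok"

test_uart_transmit()
```

Tests like this form the fast inner loop; the outer loop then runs real software stacks against the assembled platform.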


The net result is a flexible project where the virtual platform can be updated in reaction to changes to the target system design. Often, modeling projects are begun before the details of the eventual target hardware are completely known. As shown in the picture below, the result is a series of incremental model deliveries that represent the current design state of the hardware.


[Figure: a series of incremental model deliveries tracking the current design state of the hardware]

This modeling method applies regardless of whether the modeling being performed is pre-hardware (for a new chip that might not even be announced yet, or a new board built on existing components), post-hardware (for a board or system that exists right now), or post-obsolescence (for an old system that still needs to be maintained).


Modeling can be done by Wind River, Simics users themselves, or third-party services firms. All Simics users are equal in what they can model and their access to the Simics modeling tools and APIs.


To learn more about system modeling with Simics, please read our new white paper on creating virtual platforms with Simics.


Happy modeling! And remember - resistance is futile, you will be simulated!




For additional information about Wind River, visit:


Wind River is an Associate member of the Intel® Embedded Alliance

By Marc Brown, Vice President of Tools and Marketing Operations, Products Group at Wind River


Security is a heavily overloaded term, meaning different things to different people. In the embedded systems world, news on security is often about Stuxnet or similar attacks where an enemy can control devices remotely and shut them down, make them behave abnormally, or worse, cause equipment failure and destruction.  In the enterprise world there is news of information breaches and sensitive data exposure which often seems to be unrelated to embedded systems. However, for embedded security it’s important to consider both the safe and correct behavior of the system and the data it may store or transmit. These two key areas of security concerns and priorities are as follows.

Information Assurance

First is Information Assurance, which is all about protecting and preserving data. The top priority here is the data the device stores or transmits; typically this data is confidential or, in government and military applications, highly sensitive, possibly secret or top secret. The embedded device must be designed in such a way that it is very difficult for an attacker to gain access to the information stored on or transmitted by the device. This protection must cover not only external, network-based attacks but also cases where the attacker has physical access to the device. Correct and safe operation of the device is not necessarily the top priority for information assurance.

Cyber Security

The second key area is cyber security, where the aim is to protect the device from intrusion from the outside world and thereby prevent failure or incorrect or unsafe behavior. The chief aim of cyber security is to prevent unwanted manipulation of the behavior of the device. Leaking or exposing data is not a primary concern of cyber security.

The key areas of embedded device security are shown in the following diagram:

[Figure: the key areas of embedded device security]


Managing device security risk is about prioritization and balance between these two key aspects of information assurance and cyber security. It's not an either-or scenario: no manufacturer wants their device to protect data but be easily disrupted by attacks. Alternatively, many classes of devices do not handle sensitive data, but their correct and safe operation is paramount. However, as machine-to-machine communication grows into important parts of our infrastructure (e.g., the Smart Grid for our electrical system), automation devices may be safety-critical and handle confidential information at the same time. Understanding these two key areas of security and managing the risks in both is critical for secure embedded device design.

The Next Steps

Understanding the security priorities for your embedded device is important, but it is only part of a secure device development approach, which needs to start before your device is designed and built.

These include:

  • Threat assessment – understanding the security threats to your device, the attack vectors and the desired response from the device under such attacks.
  • Secure design – designing security into your device as a key requirement. Employing secure design and development techniques.
  • Proper runtime selection – selecting and building upon a secure runtime platform. Embedded operating systems need to support advanced security techniques, communication and data security and robust and reliable operation.
  • Securing the applications – applications must be designed and architected for security. Importantly, the device should restrict application execution through whitelisting and greylisting techniques.
  • Product lifecycle support – security needs to be designed into a product with platform, process and tools support throughout its lifespan.
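As a miniature illustration of the whitelisting idea from the list above: before launching an executable, the platform compares a cryptographic hash of the image against a set of approved hashes and refuses anything else. The policy, data, and function names below are invented for illustration and are not any particular product's API.

```python
import hashlib

# Hypothetical application-whitelisting check: only images whose
# SHA-256 digest appears in the approved set may be executed.
APPROVED_SHA256 = {
    # Digest of the only "binary" this toy policy allows.
    hashlib.sha256(b"trusted firmware image").hexdigest(),
}

def may_execute(image_bytes: bytes) -> bool:
    """Return True only if the image's digest is whitelisted."""
    return hashlib.sha256(image_bytes).hexdigest() in APPROVED_SHA256

print(may_execute(b"trusted firmware image"))   # True
print(may_execute(b"tampered firmware image"))  # False
```

A real implementation would hook the OS loader and protect the hash list itself, but the core decision is this simple comparison.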





As you can read in many posts on the Intel® Embedded Community web site, virtualization is a popular topic because the technology allows a single processor to partition and host two or more separate sets of tasks that have traditionally required multiple processors or boards. Many of the posts have focused on high-end Intel® Architecture (IA) processors, but power-optimized Intel® Atom™ processors can also support virtualization – both processors with and without explicit Intel® Virtualization Technology (Intel VT) support in hardware. Today, let’s consider the type of virtualization implementation that you might deploy on Atom and discuss a methodology for evaluating the level of processor performance required. Indeed, virtualization can maximize system efficiency, and fewer ICs and a smaller system footprint are almost always better, provided the system meets the application requirements.


One key application for virtualization is the combination of real-time tasks and an intuitive user interface on one processor for applications such as industrial control. Design teams find it much easier to implement a compelling user interface using a general-purpose operating system such as Windows. But Windows can’t support the real-time requirements of the control application. Virtualization allows a real-time operating system (RTOS) to coexist with Windows on one processor whereas legacy systems have required two separate processors and often two separate boards or modules.


You have likely seen the theme I’ve laid out above relative to high-end multi-core processors such as the second-generation Intel® Core™ i5 and i7 processors. But you can combine Windows and an RTOS on Atom processors as well. Indeed, you can even use virtualization on some single-core Atom processors, although dual-core models may offer superior real-time performance. In control applications, performance is measured as much or more by how fast the processor can respond to an external event that triggers an interrupt as by the aggregate performance of the processor expressed in MIPS or by a benchmark such as Dhrystone.


Consider the Atom 400 and 500 processor series announced last year, formerly code-named Luna Pier. Members of the family include the dual-core D510 and D525, and the single-core D410, N425, N450, and N455. None of the family members includes Intel VT support, but all include Intel® Hyper-Threading (Intel HT) capability.


In general, hyper-threading or multiple-threading technology allows two separate threads to run simultaneously on a single core. A dual-core processor with Intel HT can simultaneously execute four threads. Embedded design teams can leverage Intel HT technology in multiple ways. You can simply boost application performance by using threaded-programming techniques to add parallelism to the applications. I covered the potential of such performance in a post on Xeon processors working on an imaging application.
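The threaded-programming idea is simply to decompose the work into independent chunks that the OS can schedule onto the available hardware threads. Here is a minimal, language-agnostic sketch of that decomposition in Python (note that for pure-Python compute the CPython GIL limits actual speedup; the partition-map-reduce pattern, not the timing, is the point, and the workload is invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def checksum(chunk):
    # Stand-in for per-chunk work, e.g. filtering one strip of an image.
    return sum(chunk) & 0xFFFFFFFF

data = list(range(1_000_000))
# Split the workload into four independent work units, one per
# hardware thread on a dual-core processor with Intel HT.
strips = [data[i::4] for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(checksum, strips))

# Reduce the per-thread partial results; modular sums combine cleanly.
total = sum(partial) & 0xFFFFFFFF
assert total == sum(data) & 0xFFFFFFFF
```

In C or C++ the same structure (partition, run workers, reduce) maps directly onto hardware threads and does deliver the aggregate speedup described above.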


You can also use Intel HT to support virtualization, for instance running Windows on one thread and an RTOS on the other. For example, TenAsys* software supports virtualization on single- and dual-core IA processors that feature Intel HT support. And of course the company supports Intel-VT-enabled processors as well.


TenAsys published a white paper last year entitled “Delivering deterministic, real-time performance with a low cost Intel Atom processor-based platform” that focuses on a comparison of D410- and D510-based systems. TenAsys offers a software product called INtime® for Windows that allows the company’s INtime RTOS to run alongside Windows on single- or multi-core IA processors.


TenAsys tested the D410 and D510 first in what the company referred to as an unloaded state, where the processor only ran industrial-control benchmark code. The company measured response to interrupts and the thread response, which it defines as the interrupt-response time plus the time required for the interrupt service routine to start a new thread. That means that thread response includes the time it takes the operating-system scheduler to allocate memory and set up the thread. TenAsys then repeated the tests on the single- and dual-core processors with a Windows graphics application running concurrently.
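The distinction TenAsys draws can be mimicked in miniature on any OS: measure the time from raising an event (standing in for the interrupt) until an already-waiting handler wakes, versus the time until a freshly created thread starts running. This sketch is illustrative only; absolute numbers depend entirely on the host OS, load, and timer resolution, and it has nothing to do with the INtime implementation itself.

```python
import threading
import time

def measure_thread_response():
    """Time from 'event' to a newly created thread beginning work.
    Includes thread creation and scheduler setup, like TenAsys's
    thread-response metric."""
    started = threading.Event()
    t0 = time.perf_counter()
    worker = threading.Thread(target=started.set)  # new thread per event
    worker.start()
    started.wait()
    worker.join()
    return time.perf_counter() - t0

def measure_handler_response():
    """Time from 'event' to an already-waiting handler waking up,
    analogous to plain interrupt response."""
    fired, done, result = threading.Event(), threading.Event(), {}

    def handler():
        fired.wait()                      # parked, waiting for the event
        result["t1"] = time.perf_counter()
        done.set()

    threading.Thread(target=handler).start()
    time.sleep(0.01)                      # let the handler reach its wait
    t0 = time.perf_counter()
    fired.set()                           # "interrupt" fires
    done.wait()
    return result["t1"] - t0

print(f"handler wakeup: {measure_handler_response() * 1e6:.0f} us")
print(f"thread start:   {measure_thread_response() * 1e6:.0f} us")
```

Because thread response folds in creation and scheduling work, it is normally the larger of the two, which is the shape of the TenAsys results as well.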




The above figure summarizes the results. In the loaded case that is typical of a virtualized environment, the dual-core processor performed considerably better. A deep examination of the exact details is beyond our scope here; if you want to know more, please read the entire whitepaper. But the point is that even the single-core, loaded test revealed results that would be suitable in some real-time applications, although the response time would fall short of the requirements of many others.


TenAsys has continued to add support for newer Atom family members. After the whitepaper was published, the company announced support for the Atom E6XX series processors that include both Intel VT and Intel HT support. Embedded teams can use both INtime for Windows and the TenAsys eVM™ for Windows Embedded Virtualization Manager with the Intel-VT-enabled processors. The eVM software allows you to run Windows alongside RTOS products from other software vendors in a virtualized environment.


With both TenAsys software products, teams will realize better performance using a processor that supports Intel VT. That doesn’t mean that an Intel-VT-enabled processor is the right choice for every project where a team might use virtualization. It does mean that design teams should evaluate a processor in terms of virtualized performance on code similar to their specific application requirements and relative to other factors such as power consumption, cost, and I/O features.


I’ll leave you with a link to a related post that you might find interesting. RadiSys** has also utilized its Hypervisor software with Atom processors and documented sub-10-µsec interrupt response. The Atom family is quite capable in virtualized real-time applications.


How do you leverage Intel HT? Have you focused on parallel, threaded code, or have you used it with virtualization software? What kind of interrupt-response requirements have you faced recently? Please share your experience via comments with the Intel® Embedded Community and tell other community followers how you evaluate real-time-response requirements relative to IA processor capabilities.


To view other community content focused on virtualization, see “Virtualization – Top Picks.”



Maury Wright

Roving Reporter (Intel Contractor)

Intel® Embedded Alliance


*TenAsys is an Affiliate member of the Intel® Embedded Alliance

** RadiSys is a Premier member of the Alliance.

As embedded systems increase in complexity while combining dissimilar functionality, designers are turning to virtual processors hosting multiple operating systems to ensure unimpeded, deterministic response to real-time events while simultaneously providing users and operators with a high-level, graphics-based interface. Although this virtualization approach allows the development team to independently allocate system resources including memory and I/O to each operating environment, security, stability, and reliability are still major design issues. For example, any platform that combines real-time, safety-critical embedded functions with a large graphics-based operating system must contain security provisions to allow unaffected partitions to continue operation in the event of a software failure or even a cyber attack.


Virtualization is achieved by adding a virtual machine monitor (VMM) software layer, or hypervisor, that isolates individual partitions and manages the execution of guest operating systems. In order to speed up the interaction between virtual components, Intel has incorporated hardware-assisted virtualization in the latest processor architectures tailored for extended-lifecycle embedded applications. Both the 2nd generation Intel® Core™ and the Intel® Atom™ E6xx processors support Intel® Virtualization Technology (Intel® VT) to improve the performance and security of software-based virtualization, using hardware assists to trap certain instructions for handling by the VMM. Intel® VT allows the VMM to allocate memory and I/O devices to specific partitions to decrease the processor load and reduce virtual machine switching times. Applications requiring a higher level of security can be isolated and protected by hardware features that only allow access to memory boundaries specified by the VMM.


Several embedded software suppliers provide virtualization products that take advantage of the enhanced isolation and security provisions provided by Intel® VT. For example, the Wind River Hypervisor allows designers to configure and partition hardware devices, memory, and cores into virtual boards, each with its own operating system while maintaining the necessary separation. (See figure 1). These virtual boards can be run on a single processor core or distributed across multiple cores based on the needs of the system.  The Wind River Hypervisor has been applied in safety-critical applications where the safety-certified and noncertified components of the system traditionally must be physically separate. However, embedded virtualization allows system designers to isolate the safety-certified components while still operating on a single hardware platform utilizing a certified hypervisor. Virtualization also improves the potential uptime of embedded applications since individual partitions can be rebooted, even reprogrammed, while other services on the same device are not affected.




Also adding support for Intel® VT-based embedded multicore processors, LynuxWorks recently updated the LynxSecure separation kernel and hypervisor for various virtual machine configurations as shown in figure 2. This virtualization software is designed to operate in secure defense environments where data and applications with different security levels need to co-reside on a single device without corruption. LynxSecure uses a hypervisor to create a virtualization layer that maps physical system resources to each guest operating system. Each guest operating system is assigned certain dedicated resources, such as memory, CPU time, and I/O peripherals. Another key feature is the ability to run fully-virtualized, 64-bit guest operating systems such as Windows 7, Linux, and Solaris across multiple cores while managed by the security of LynxSecure.




TenAsys Corporation offers the eVM for Windows embedded virtualization platform that hosts an embedded or real-time operating system alongside Windows on the same multi-core processor platform. eVM partitions the platform to ensure that critical hardware interfaces are not virtualized, guaranteeing maximum performance and deterministic response to real-time events. eVM is installed as a standard Windows application and includes all of the integration tools needed to set up, start, and stop multiple RTOS guest configurations. The Windows-based control panel also allows the user to assign interrupts, allocate I/O devices, and set up disk boot images. After the system is set up, eVM provides the guest RTOS with the lowest possible interrupt latency, direct access to I/O, and non-paged RAM. eVM for Windows has been tested with a wide range of guest operating systems including QNX®, VxWorks®, Linux®, Windows CE, and T-Kernel.


Hardware-assisted virtualization (Intel® VT) improves the flexibility and robustness of software-based virtualization technology and offers the tools needed to isolate and secure critical applications. You can find more information and technical articles on Intel architecture and virtualization at the Intel® Embedded Community page on Virtualization. If you are starting a new virtualization project with multiple operating systems and you have security questions, please share your concerns with fellow followers of the Intel® Embedded Community. You can also keep up with the latest technical details and product announcements at the Embedded Computing Design archives on Virtualization Security.


To view other community content on virtualization, see “Virtualization – Top Picks.”


Warren Webb

OpenSystems Media®, by special arrangement with Intel® Embedded Alliance


Wind River Systems is an Associate member of the Intel® Embedded Alliance. LynuxWorks and TenAsys are Affiliate members of the Alliance.


Testing for Security

Posted by jennysuh Aug 2, 2011

By Ido Sarig, vice president of product marketing at Wind River


Last summer saw a watershed event for security-consciousness in the embedded systems world: Stuxnet, a highly sophisticated worm, exploited no fewer than four zero-day vulnerabilities in Windows in order to attack a specific Siemens PLC and its associated SCADA system. The target was reportedly the Iranian nuclear facility at Natanz, where uranium-enrichment centrifuges were taken out of commission by the worm’s malicious payload. It was perhaps not the first, but certainly the most well-publicized, successful attack on critical infrastructure systems.


The software security industry has been discussing such an attack for years, mostly as a theoretical possibility – but now it appears this is no longer the stuff that Hollywood scripts are made of - it is all too real. Understandably, this raised concerns and awareness among other possible targets, from electrical grid operators to water management facilities. But surprisingly, there was no corresponding increase in spending on security at these organizations.  A recent study by McAfee showed only a very modest 1-3% increase in security budgets of critical infrastructure operators following Stuxnet.


I think perhaps one of the reasons for this disturbingly low increase is lack of awareness of available solutions. There are tools available today, specifically designed for the embedded software world, that help address some of these threats. Wind River Test Management, for example,  can flag “suspicious” areas of untested code as potential sources of malicious code infections. It also enables testers to work like a hacker - by using sensorpoint technology to simulate cyber-attacks as part of the testing process.


One common technique used by hackers to expose vulnerabilities and attack systems is to create adverse conditions, such as no memory or no disk space. They correctly rely on the fact that, because these conditions are hard to set up in a test lab, the code designed to handle the resulting errors has not been properly tested -- if it even exists in the first place! But with sensorpoint technology, it is very easy to inject these kinds of faults directly into binary production code and thoroughly test every such path through the code.
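The idea behind this kind of fault injection can be shown in miniature: instead of actually exhausting memory, the test substitutes a failing allocator and verifies that the error-handling path really works. The sketch below uses plain Python dependency substitution as a stand-in; Wind River's sensorpoint technology does this at the binary level, which this toy example does not attempt to reproduce, and all names here are invented.

```python
# Hypothetical component under test: it must survive allocation failure
# gracefully rather than crash.
class PacketBuffer:
    def __init__(self, allocator=bytearray):
        self._alloc = allocator   # injectable allocator for testing
        self.buf = None

    def reserve(self, n):
        """Return True on success, False (not a crash) on no-memory."""
        try:
            self.buf = self._alloc(n)
            return True
        except MemoryError:
            self.buf = None       # error path: stay in a safe state
            return False

def failing_alloc(n):
    # Injected fault: simulate the out-of-memory condition that is
    # nearly impossible to trigger reliably in a test lab.
    raise MemoryError("injected fault: simulated out-of-memory")

# Normal path.
assert PacketBuffer().reserve(64) is True
# Injected-fault path: the error-handling code actually executes.
pb = PacketBuffer(allocator=failing_alloc)
assert pb.reserve(64) is False and pb.buf is None
print("error path exercised")
```

The same principle, applied at the binary level to production code, lets testers deliberately walk the rarely-exercised error paths an attacker would probe.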


We recently ran a series of seminars dedicated to safety & security where I gave a presentation on these threats and the way we address them at Wind River. At the break between sessions, one of the attendees shared his insight about these vulnerabilities  – “When we built those systems, ten or twenty years ago, we didn’t plan for security – heck, people wanted easy accessibility, remote flashing of ROM – nobody wants to make a special trip into the Alaskan wilderness just to update SCADA software – now they realize it helps the bad guys get access to those PLCs, too.”




Machine-to-machine (M2M) communications is an emerging concept for connecting embedded devices to one another and to back-end systems using open technology such as cellular networks. This connectivity enables smart services that can lower costs, raise revenues, and improve service for end customers.


To get a better understanding of this technology, I asked a panel of industry experts to share their perspectives. Here’s what I learned.



What is M2M communications? How does this technology differ from traditional approaches to networking embedded devices?


Fred Yentz, President and CEO of ILS Technology: I think Peggy Smedley defined M2M quite succinctly for Connected World Magazine’s July/August issue when she wrote, “M2M provides machines with the ability to communicate data, connecting people, devices, networks, and everyday objects, while interpreting much-needed information that can be acted upon in a timely manner.”


Kevin Rhoads, Vice President Embedded Products, Kontron: The simplest definition for M2M is any technology that supports wired or wireless communication between machines. M2M streamlines communications for a network of machines that traditionally would have to route information back to a central hub for analysis and then be rerouted into another system or PC.


Peter Thompson, Chief Scientist, GoS Networks: The term 'M2M' is very broad, but the main sense in which it is being used today is to mean autonomous communications between embedded devices (smart meters, refrigerators, cars, etc.) and some central platform, across a telecoms operator's network. Traditional approaches used only local interconnections and local management, and thus had no economy of scale.


Kurt Hochanadel, Product Manager, Eurotech Inc.: M2M communications happens transparently to the people or systems that consume the data. The technology is moving away from strictly embedded computing and connectivity into more of an IT solution that allows data to move freely to consumers and applications.


Traditional approaches were very focused and distinct, with protocols, interfaces, applications, and delivery methods tied to a specific vertical market. M2M communication in general is building common standards so devices can communicate information more freely.



What are the benefits of M2M? What applications are taking advantage of this technology, and how?


Kurt Hochanadel: M2M benefits depend on the application, but in essence, relaying actionable data cost-effectively and in near real time throughout an ecosystem of consumers allows for increased efficiency and new services. M2M allows companies to be more situationally aware, so they can act as an agile organization based on near real-time data. M2M also allows companies to manage their devices, configure and provision them, and track those enterprise assets remotely and cost-effectively.


Applications such as Telehealth, Asset Management, Location-based Services, Security, and most any telemetry application are the initial beneficiaries of this technology. For instance, a security company is using the Device Cloud offering from Eurotech to monitor and manage cash transfers, to effectively “follow the money” for greater accountability to their banking customer.


Kevin Rhoads: M2M offers increased computing, networking, and data-aggregation efficiencies. The advantage of M2M networks is that collected data becomes actionable data that can be used in real time. Industries that can truly leverage M2M capabilities include:

  • Industrial automation – overall equipment effectiveness (OEE) is frequently used as a key performance indicator (KPI) by manufacturers
  • Building automation, water, and wastewater management – equipment fault detection and diagnostics
  • Energy – analytics can be used so a business can view the cost and carbon footprint of its energy usage
  • Medical – keeping accurate track of patient information from in-home care to hospital care or assisted living


Fred Yentz: The objective is to create additional business value for machine owners, machine makers and 3rd party service providers. Today, M2M is allowing applications such as performance based services and predictive maintenance solutions to expand rapidly. For example, equipment service routes can be modified to address only the field assets which are in need of support or service.


Peter Thompson: Another benefit of M2M is economies of scale: one platform can manage a large number of embedded devices; one operator (e.g., a cellular network) can manage different types of devices; one provider (e.g. a utility company) can manage devices connected to a number of different operators. Further benefits are: mobility (if connecting via a cellular network); and separation of concerns (the operator provides the network and makes sure it works while the M2M provider focuses on the application). Early applications include vehicle fleet management and energy management, but many more will follow.



Is M2M the same thing as embedded cloud computing? How are these concepts related?


Kurt Hochanadel: M2M is not entirely the same thing as embedded cloud computing. Embedded cloud computing, or the Device Cloud as Eurotech refers to it, is one way to accomplish M2M using IT centric software tools and frameworks.


M2M is the broader term, since it encompasses all types of M2M applications, including the so-called traditional approaches. Embedded cloud computing approaches M2M with a more IT-centric mindset. In essence, embedded cloud computing is a subset of M2M.




Fred Yentz: Cloud computing can be leveraged in an M2M solution. Any well-designed M2M solution will have a balance of edge-based intelligence in combination with centralized intelligence and control. The centralized intelligence can reside in a private network, a public network, or “cloud,” depending on the needs of the customer. For example, our M2M Application Platform is capable of hosting applications in the cloud such as SCADA, HMI, ERP, or PLM applications from a variety of 3rd party vendors, but the solution is also available in a Customer Located Managed Service Model for those not ready to move to a full cloud implementation.


Kevin Rhoads: Cloud computing is just one technology that allows M2M communication between devices. The growth for M2M connected computing is accelerating as OEMs launch smart services that utilize cloud computing to communicate and aggregate data.



What should developers look for in an M2M solution? What role does Intel technology play in enabling these solutions?


Kurt Hochanadel: Enterprises want their M2M communications system to be IT-centric, low-cost, flexible, open source, data agnostic, and application agnostic. IT developers are looking for security, scalability, redundancy, configurability, flexibility – everything they demand of a traditional IT data center. To meet these goals, device developers should look for application portability for reusability of code, a software framework for simplicity of development, built-in applications, and testing tools.


Intel plays a role in providing chipset functionality at a cost affordable to the M2M market. Low power, long life devices need low power, long life processors and chipsets to support M2M applications, which Intel provides. Intel’s role in promoting a new M2M paradigm is also very valuable.


Peter Thompson: Reliability and flexibility are important, especially since many M2M applications will have a long replacement cycle. Intel technology offers the capacity to grow the infrastructure and add new capabilities as time goes on.


Kevin Rhoads: OEMs should look for production-ready solutions that help them accelerate smart services deployment opportunities. For example, Kontron is providing M2M intelligent devices, as well as the infrastructure building blocks to enable M2M technology data from the point of collection through the cloud to the point of aggregation and decision making.


A major benefit of selecting an open Intel Architecture platform is the rich ecosystem of software partners that have already developed solutions supporting M2M smart services. Intel has a support partner for every aspect an OEM would need, from the OS through middleware; software companies have ready, proven operating systems and the horizontal hosting services needed for provisioning or updating an M2M system.


Fred Yentz: Business and technical innovators should look for a few key attributes when evaluating M2M solutions. These attributes include an integrated solution with compute and communications resources on board, as well as one that can quickly tie into your existing back-end systems. This combination of hardware, software, and service defines the ILS Technology “M2M Deployment Kit.” The kit combines the Intel M2M gateway with ILS Technology’s deviceWISE M2M agent, a deviceWISE M2M Application Platform subscription, and a broadband subscription from one of the ILS Technology carrier partners, enabling a solution proof point to be completed within days.



M2M solutions can provide infrastructure, applications, and even field devices as a service. This can greatly speed deployment, but it also gives developers less control over the system details. What would you say to developers who are worried about relinquishing control?


Peter Thompson: The essence of M2M is to pass responsibility for the communications infrastructure to someone else and to concentrate on developing the application.


Kurt Hochanadel: From an IT perspective, if the administrators and programmers are no longer responsible for a data center for a specific M2M application because they are using the Device Cloud, they do give up control of things like server purchasing, maintenance, programming, and other data center functions. Frankly, few enterprises, if any, can put anywhere near the resources into securing and building their data center that cloud companies like Amazon, Google, and Microsoft can. What the enterprise IT team gains is the ability to focus their attention on what brings the most value to the organization: what interfaces they can build to internal systems, how they can make the best use of the data they’re bringing in through the Device Cloud, and how they’re saving money for their company.


Kevin Rhoads: Each individual developer will have a distinct value proposition in the M2M deployment of smart services. The systems that are deployed include hardware gateways, the OS, M2M software, security features, and transactional revenue from device-to-cloud and cloud-to-client business models. Control of the system details will be dictated by the total solution proposed to each customer.



If a developer wants to explore the possibilities of M2M, where can they start? How can you help developers get up to speed on this new technology?


Kurt Hochanadel: Device developers want to work with M2M reference platforms as a starting point, which Eurotech is working with Intel and others to provide. The platform offers several I/O options to start with, along with a software framework and a Wind River operating system on a hardware reference platform.


For developers who are ready to explore the benefits of the Device Cloud as a way to perform M2M communications, Eurotech offers an Intel-based development kit that includes a 90-day trial of Eurotech’s Everyware Device Cloud for connectivity between the device and the enterprise. Developers can quickly try their application on the device and share data over the device cloud.
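The device-to-cloud pattern described here generally boils down to packaging sensor readings as structured messages and publishing them to a cloud endpoint. The sketch below illustrates that pattern in Python; it is a generic, hypothetical example (the endpoint URL, payload schema, and `publish` behavior are placeholders, not the actual Everyware Device Cloud API).

```python
import json
import time

# Hypothetical cloud ingest endpoint -- a real deployment would use the
# URL and credentials provisioned by the device cloud vendor.
CLOUD_ENDPOINT = "https://example-device-cloud.invalid/api/telemetry"

def build_payload(device_id, metric, value, timestamp=None):
    """Package one sensor reading as a JSON telemetry message.

    The field names here are illustrative, not a vendor schema.
    """
    return json.dumps({
        "device": device_id,
        "metric": metric,
        "value": value,
        "ts": timestamp if timestamp is not None else int(time.time()),
    }, sort_keys=True)

def publish(payload, endpoint=CLOUD_ENDPOINT):
    """Simulate publishing a message to the cloud.

    A real device agent would POST this over TLS (or use MQTT);
    here we only return a receipt so the flow can be exercised offline.
    """
    return {"endpoint": endpoint, "bytes": len(payload), "status": "queued"}

# One gateway reading flowing device -> cloud:
msg = build_payload("gw-01", "temperature_c", 21.5, timestamp=1312848000)
receipt = publish(msg)
print(receipt["status"])
```

In a trial like the one described, the vendor's agent replaces the `publish` stub, and the developer's work is mostly deciding what to put in the payload and how the enterprise side consumes it.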


Fred Yentz: I would recommend picking up a deviceWISE M2M Deployment Kit. It’s like a pilot in a box that allows you to get your asset connected and have a glimpse into the potential of full-scale M2M solutions in hours or days rather than weeks or months.


Kevin Rhoads: Kontron has developed a white paper to help designers better understand M2M market trends and usage models, available hardware and software platforms, industry challenges, and how Kontron’s M2M solutions can facilitate development of Internet smart services applications. This useful white paper can be downloaded at


Kontron also has additional details on all these subjects on its website.


Peter Thompson: You can also find joint white papers we’ve created with Intel at our website.



More information


For more on extending the Internet to embedded devices, see


For more information on the solutions from our panelists, please visit the following sites:



Kontron is a Premier member of the Intel® Embedded Alliance. Eurotech is an Associate member of the Alliance. ILS Technology is an Affiliate member of the Alliance. GoS Networks is a General member of the Alliance.



Kenton Williston

Roving Reporter (Intel Contractor), Intel® Embedded Alliance

Editor-In-Chief, Embedded Innovator magazine


Follow me on Twitter at @kentonwilliston