
By Davide Ricci, Senior Product Manager, Open Source Platforms, Wind River

 

As I work with embedded Linux on a daily basis, I’m often asked about the Yocto Project. While the Yocto Project is gaining traction and folks in the industry are generally aware of it, there is still some unfamiliarity with the details. What exactly is the Yocto Project? Is it a Linux distribution? How would I use it in my embedded Linux projects?

 

Let’s dig in. The Yocto Project is an open source collaboration project that provides sorely needed standardized, high-quality infrastructure, tools, and methodologies to help decrease the complexity and increase the portability of embedded Linux implementations. It is not itself a Linux distribution like those that Wind River and other Linux distributors provide.

 

A lot of vendors across the hardware and software ecosystem participate in the Yocto Project, and together this community is helping embedded Linux developers realize greater cross-platform compatibility and component interoperability.   How the heck does the Yocto Project accomplish that?

 

Consider this: in the embedded world, building a scalable Linux depends on a build system and a set of associated tools. Take the usual open source inputs (a kernel, userspace packages, patches), add your own inputs (filesystem size, kernel configuration), run everything through the appropriate cross-toolchain, and each combination produces a different output.

 

It’s not unlike cooking. With the same ingredients (flour, water, salt, sugar) and set of tools (mixer, spoons, rolling pin), you can come out with different results just by slightly changing your recipe (different temperature, proportions, etc.), and in the end you might pull off pizza or bread! Now, think about trying to combine recipes or perform some fine-tuning without a common set of metrics: one recipe measures in tablespoons, another in teaspoons, liters, or grams. That is the challenge you often see in the embedded Linux world before adopting a standardized environment like the Yocto Project: no common repository for ingredients (kernel, userspace), no common set of recipes for cross-architecture builds, no common tools (toolchain, makefiles, compilers, debuggers), and no common build system to bake all the ingredients together.
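
To make the recipe metaphor concrete, here is a minimal sketch of what a Yocto Project recipe actually looks like. BitBake, the Yocto Project’s build engine, reads files like this one; the package name and source file below are illustrative, not taken from a real layer:

    # hello_1.0.bb -- a hypothetical, minimal BitBake recipe
    SUMMARY = "Minimal hello-world application"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

    SRC_URI = "file://hello.c"
    S = "${WORKDIR}"

    # Cross-compile with whatever toolchain the build was configured for.
    do_compile() {
        ${CC} ${CFLAGS} ${LDFLAGS} hello.c -o hello
    }

    # Install into the staging root that becomes the target filesystem.
    do_install() {
        install -d ${D}${bindir}
        install -m 0755 hello ${D}${bindir}
    }

Every recipe in every layer uses the same variables and the same task structure, regardless of target architecture, which is precisely the common measuring system the kitchen analogy lacks.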

 

Wind River is an embedded Linux distributor that built and maintained its own build infrastructure, so I’m excited to say that by adopting the Yocto Project as our standard build environment, we’ve removed a major part of the effort and cost of packaging a user-friendly, standards-based embedded Linux distribution. Now we can focus on developing cool new features and cutting-edge capabilities. In other words, our engineering team is having a lot more fun these days!

 

For customers, the value proposition is freedom from vendor lock-in: they can adopt the freely available Yocto Project infrastructure directly or engage with vendors like Wind River who embrace the Yocto Project. We pride ourselves on delivering the leading embedded Linux solution while helping our customers manage risk. It’s just good business sense to keep your technology options open.

 

The benefits of better interoperability, portability, and reuse in our Linux implementation mean we can build better stuff cheaper and faster. How great is that? For our customers, that translates to Wind River’s ability to deliver far greater value at a lower cost. You can see that everyone wins with the Yocto Project!

 

Wind River is dedicated to advancing the state of embedded Linux, and will continue to support open standard efforts like the Yocto Project to deliver commercial-grade, optimized, open standard offerings that allow our customers to get the greatest value out of their open-source platforms.

 

I’d be interested to hear your feedback. Please feel free to post comments on what you’re hearing in relation to Yocto.

 

To stay posted, sign up here.

 

For additional information from Wind River, visit us on Facebook.

Modern factory support and production test systems must be general purpose, remotely programmable, and flexible enough to handle a variety of input ranges and types, speeds, and functions.  In addition, many in-process inspection stations now include high-speed digital signal analysis and video product analysis.  In order to present the latest trends, I asked two experts from the Intel® Intelligent Systems Alliance to share their views on the management and optimization of production test systems. Here are their responses:

 

How does the latest wave of multi-core processors affect industrial design and production optimization?

 

Dr. Yong Luo, General Manager of the Embedded Computer Segment at ADLINK Technology: Industrial design and production benefit significantly from the latest wave of multi-core processors, not only from the unprecedented performance packed into them, but also from the potential to manage power consumption through new techniques such as dynamic core allocation, going beyond traditional SpeedStep technology.

 

Sarah Schlonsky, Product Marketing Manager for PXI Embedded Controllers at National Instruments: With the number of processor cores on a chip quickly increasing, engineers and scientists have new techniques available to make the most of this hardware. Beyond the raw performance benefits, multi-core processors also enable applications to take advantage of virtualization technology. To adapt to evolving applications, NI Real-Time Hypervisor software consolidates systems by running the NI LabVIEW Real-Time OS and Windows or Linux on a single controller through virtualization. For example, one core of a quad-core system can be dedicated to the NI LabVIEW Real-Time OS while the remaining three cores run Windows. Leveraging the NI Real-Time Hypervisor in an industrial application reduces hardware cost and system footprint while packing more capabilities into a consolidated system.

 

What software tools and development aids are available for factory automation and industrial management projects?

 

Sarah Schlonsky, National Instruments: Traditional sequential language applications must be explicitly broken up into separate pieces, or threads, to run in parallel on multi-core processors. Although the concept of threads is straightforward, working with them can be time-consuming and tedious. Each thread must be carefully managed, and data accessed by threads is very susceptible to race conditions if not protected carefully.

NI LabVIEW system design software implements a dataflow programming paradigm. LabVIEW applications inherently run in parallel where possible, without requiring specific multi-threading programming techniques. The dataflow nature of LabVIEW means that any time there is a branch in the data flow, or a parallel process on the block diagram, the underlying LabVIEW compiler creates a thread to execute the code in parallel. These independent pieces of code are assigned to run in a fixed number of threads that LabVIEW automatically creates. This process helps an application take advantage of multi-core processors while avoiding the creation of too many threads, which can increase context-switching overhead and lead to inefficient execution.
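
To see what that manual bookkeeping looks like, here is a minimal C sketch of the explicit approach Schlonsky describes: the programmer creates the threads, joins them, and guards shared data with a mutex by hand. The shared counter stands in for any shared state; this illustrates the general technique, not NI code.

    /* Manually managed threads in C (build with a POSIX toolchain:
       cc -pthread counter.c). */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        int i;
        for (i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* without this, updates race */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        /* The programmer must explicitly create, join, and synchronize. */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %ld\n", counter); /* 2000000 only thanks to the mutex */
        return 0;
    }

Remove the mutex and the two threads race, silently producing a wrong total. That is exactly the hazard a dataflow language takes off the programmer’s plate.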

 

With cloud computing and connectivity dominating embedded designs, what security precautions are available to prevent unauthorized access?

 

Dr. Yong Luo, ADLINK Technology: Widely connected devices and cloud-based embedded design solutions have certainly improved industry efficiency and automation, but net connectivity has also introduced security challenges that never existed for stand-alone systems. All kinds of software- and hardware-based security solutions have to be implemented to prevent attacks and breaches, which increases system load and cost. The newly introduced McAfee whitelisting solution looks like an attractive and efficient approach for embedded designs, since such devices have narrowly focused application targets (as compared to generic internet devices).
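
For readers unfamiliar with the approach, the whitelisting idea is simple enough to sketch in a few lines of C. This toy launcher (not McAfee’s implementation, and the program paths are hypothetical) refuses to run anything not on a fixed allow-list, which is practical precisely because a fixed-function embedded device runs a small, known set of applications:

    /* Toy illustration of application whitelisting. */
    #include <stdio.h>
    #include <string.h>

    static const char *whitelist[] = {
        "/usr/bin/plc_control",  /* hypothetical fixed-function apps */
        "/usr/bin/hmi_display",
    };

    static int is_whitelisted(const char *path)
    {
        size_t i;
        for (i = 0; i < sizeof(whitelist) / sizeof(whitelist[0]); i++)
            if (strcmp(path, whitelist[i]) == 0)
                return 1;
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <program>\n", argv[0]);
            return 1;
        }
        if (!is_whitelisted(argv[1])) {
            fprintf(stderr, "blocked: %s is not on the whitelist\n", argv[1]);
            return 1;
        }
        printf("allowed: %s\n", argv[1]);
        /* A production enforcement point would also verify a cryptographic
           hash of the binary, not just its path. */
        return 0;
    }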

 

Sarah Schlonsky, National Instruments: Intel® multi-core processors feature Intel® vPro™ technology, a collection of hardware technologies that enable management features independent of the operating system (OS) along with many security features. Because these technologies are designed into the PC’s chipset and other system hardware, they are less vulnerable to the standard software threats that affect an OS or software application.

From the software standpoint, an initial consideration during embedded design projects should be the operating system: evaluating a Windows operating system versus a real-time operating system. Real-time operating systems, such as NI LabVIEW Real-Time, can guarantee that time-critical tasks execute precisely, and they are not as susceptible to everyday software threats. A real-time operating system can be designed to run critical applications reliably and deterministically, whereas a Windows operating system’s primary task is to maintain user responsiveness.

 

What are the technical challenges that designers face as they implement high speed image analysis systems for production inspection applications?

 

Dr. Yong Luo, ADLINK Technology: While the new Intel® Media Software Development Kit (Intel® Media SDK) is developing in the right direction to replace dedicated DSP solutions for video encoding and transcoding, software performance for image analysis and processing on Intel® Architecture (IA) still falls far short of handling high-resolution, high-speed image processing, and the resolution, accuracy, and processing-speed requirements keep rising. Therefore, a hybrid solution mixing IA (mostly for control and applications) and DSPs (mostly for high-speed, high-resolution image analysis) may still be needed. This certainly remains a challenge for both designers and software developers.

 

Sarah Schlonsky, National Instruments: As instrumentation for high-speed image analysis expands its capability with incredibly fast sampling rates and high bandwidths, the bus that interfaces the instrument with the PC is often overlooked. The throughput capacity of a communication bus can directly limit the instrumentation’s realizable bandwidth; this is known as bottlenecking a system, and it lengthens test times. As PC-based measurement hardware continues to adopt progressively higher-performance data buses, more demanding applications can be solved.

The evolution to the PCI Express/PXI Express bus is enabling even faster data transfers through direct data links. Implementing streaming from the instrument, through the PC, and onto a hard disk increases the available memory of the instrument from megabytes to terabytes. PXI platforms are utilizing the high-bandwidth PCI Express bus architecture to effortlessly stream data to and from a hard disk at rates high enough to support advanced, high-bandwidth instrumentation. With increasingly faster read/write speeds and increased storage capacity, data streaming enables faster sampling rates over longer testing periods than ever before.
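
The bottleneck question comes down to simple arithmetic: multiply sample rate by sample width and channel count, then compare against what the bus can actually sustain. The C sketch below runs that check for a hypothetical 100 MS/s, 16-bit, two-channel digitizer against rough, illustrative bus figures (none of these numbers describe a specific product):

    /* Back-of-the-envelope bus bottleneck check. */
    #include <stdio.h>

    int main(void)
    {
        double sample_rate   = 100e6; /* hypothetical digitizer: 100 MS/s */
        double bytes_per_smp = 2.0;   /* 16-bit samples */
        double channels      = 2.0;

        double required = sample_rate * bytes_per_smp * channels; /* bytes/s */

        /* Rough sustained throughput of a few common buses, in bytes/s. */
        double usb2    = 35e6;   /* USB 2.0, ~35 MB/s in practice */
        double gige    = 100e6;  /* Gigabit Ethernet, ~100 MB/s */
        double pcie_x4 = 1e9;    /* PCI Express Gen 1 x4, ~1 GB/s raw */

        printf("instrument needs %.0f MB/s\n", required / 1e6);
        printf("USB 2.0: %s\n", required > usb2    ? "bottleneck" : "ok");
        printf("GigE   : %s\n", required > gige    ? "bottleneck" : "ok");
        printf("PCIe x4: %s\n", required > pcie_x4 ? "bottleneck" : "ok");
        return 0;
    }

At 400 MB/s, such an instrument saturates USB 2.0 and Gigabit Ethernet outright, while even a modest PCI Express link keeps up, which is why PXI Express platforms can stream these instruments to disk continuously.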

 

As you look ahead, which technologies and applications present the most interesting opportunities for industrial designers?

 

Dr. Yong Luo, ADLINK Technology: Personally, I believe the 4-in-1 strategy is developing in the right direction, but how far and how broad the Intel® Data Plane Development Kit (Intel® DPDK) and the Intel® Media SDK can go remains to be seen. It is definitely a very interesting trend, but the industry would love to see a set of real hybrid hardware chips from Intel® successfully replace the need for most DSPs and dedicated packet processors, converging all of them into the most popular x86 programming model. The key here is not just replacing the hardware, but replacing the costly and complicated hybrid programming model.

 

Sarah Schlonsky, National Instruments: PCI Express Gen 4 and FPGA capabilities are the most promising technologies looking forward. Instrumentation will continue to evolve, and the amount of data that must be passed to and from the instrument will continue to rise. The PCI-SIG continues to evolve the PCI Express standard, with the expectation that the PCI Express 4.0 specification will improve the bandwidth capabilities of the bus, enabling even more applications.

 

To view other community content on manageability, see "Manageability - Top Picks."


Warren Webb

OpenSystems Media®, by special arrangement with the Intel® Intelligent Systems Alliance

 

National Instruments and ADLINK Technology are Associate members of the Intel® Intelligent Systems Alliance.