
We can agree, in unison and without debate, that consumers want In-Vehicle Infotainment (IVI) solutions that integrate seamlessly with their cell phones, DVDs, CDs, and other devices. It is also undeniable that consumers want multi-purpose IVI solutions that give the driver one type of IVI experience (emphasizing information and safety) while kids in the back seat enjoy pure entertainment. All of which explains why the IVI system option in an automobile is selling like beer in a college town.

 

Another, somewhat less self-evident truth is that consumers increasingly want an IVI display function that adds nothing to the entertainment side of the unit but contributes mightily to peace of mind for drivers and parents alike: a rear-view camera to prevent automotive backover incidents. According to the advocacy group Kids and Cars, every week in the U.S. about 50 children are backed over by a vehicle; 48 are treated in emergency rooms and at least two die. The group counts approximately 228 fatalities and 17,000 injuries annually in backover incidents, and in over 70 percent of them the person behind the wheel is a parent or close relative.

 

A backup camera system employing a small camera mounted on the rear of the car can help a driver with visibility problems, especially when driving in reverse; it can also help to see objects in a car’s naturally occurring blind spots. The resulting image is presented on an IVI display screen mounted on the dashboard or in a center console.

 

These hard-to-see areas have been at the root of much vehicular damage over the years, so much so that government agencies have been considering new rear visibility standards for light vehicles sold in the United States; the National Highway Traffic Safety Administration (NHTSA) estimates that backup cameras would save about 100 lives a year.

 

NHTSA has already said backup cameras will be included in its New Car Assessment Program, which means it will list all the car models that include backup cameras on a public web site and then begin listing those vehicles as having “recommended” safety technology.

 

One common necessity of all vehicular camera systems (which may also include infrared (IR) night-vision cameras that can detect pedestrians or animals in the road ahead that radar may not be able to identify) is computing power for image processing and the capability to present a high-quality graphics display.


Enter the Intel® Atom™ processor E3800 product family with built-in Gen 7 Intel® Graphics Technology, which enables playback and 3D visual performance for interactive content. Graphics performance of the Intel Atom processor E3800 product family is based on Intel® HD Graphics 4000 technology and features support for HDMI 1.4a and DisplayPort 1.1, with a maximum resolution of 2560x1600 @ 60 Hz and dual-display support. Hardware acceleration for video decoding is enabled for the H.264, MVC, VP8, JPEG/mJPEG, VC1/WMV9, and MPEG2 standards. The processor family is capable of decoding 10 or more streams of 1080p video, which can be implemented simply using the <video> tag in HTML5 in Microsoft Internet Explorer 10.

 

Advancements in visual processing capabilities enable faster media conversions, stereoscopic 3D and enhanced HD video transcoding. For example, the processor features an image and signal processor (ISP) that supports multiple image processing functions. The ISP is connected to multiple MIPI-CSI interfaces, allowing up to two 1080p cameras plus one 720p camera, depending on the raw image color format.
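As a rough sanity check on that camera mix, the raw bandwidth the ISP must ingest can be estimated from resolution, frame rate, and bytes per pixel. The sketch below uses illustrative assumptions (30 fps and a RAW10-style 1.25 bytes per pixel); as the text notes, the real figures depend on the sensor and raw image color format.

```python
# Back-of-the-envelope estimate of raw sensor bandwidth for the camera mix
# described above: two 1080p streams plus one 720p stream over MIPI-CSI.
# Frame rate and bytes-per-pixel are illustrative assumptions (RAW10 packed
# is about 1.25 bytes/pixel; actual values vary by sensor and format).

def stream_mbps(width, height, fps=30, bytes_per_pixel=1.25):
    """Raw bandwidth of one camera stream in megabytes per second."""
    return width * height * fps * bytes_per_pixel / 1e6

total = 2 * stream_mbps(1920, 1080) + stream_mbps(1280, 720)
print(f"approximate aggregate raw bandwidth: {total:.0f} MB/s")
```

Doubling the frame rate or moving to a 2-byte raw format scales the result linearly, which is why the supported camera count depends on the color format.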

 

A fundamental requirement for IVI devices is that a user must see the rear camera video in the IVI system screen immediately after putting the vehicle in reverse gear. As a result, very fast boot speeds are needed to display the rear camera output. In January Intel engineers published a white paper (“Early Camera Presentation on Intel® Atom™ Processor E38xx Series” by Bruce Liao et al.) describing a method to minimize the time of every boot stage to meet these requirements (see Fig. 1 below). The new method includes steps for optimizing the boot loader, fine-tuning the kernel and camera drivers, and optimizing the Tizen user space.


Fig1camera.png

Fig. 1 Block diagram for the proposed fast-boot camera.

 

The goal of the project is to decrease the boot time for the Intel Atom processor E38xx product family so that the rear camera is available to the driver as soon as the car is switched into reverse gear regardless of the current state of the IVI box. In general, the authors report, this process should occur in less than two seconds. This means that, even in the worst case scenario in which the box is powered off, it should still boot to the operating system (OS) and bring up the camera within a very short time.

 

 

The solution presented employs the Intel® Atom™ processor E38xx product family as well as the hardware and software components listed below:

• Intel® Atom™ processor E38xx product family customer reference board

• TW6865/69 PCIe camera

• Intel® Embedded Media and Graphics Driver (Intel® EMGD)

• Reference Boot Loader from Intel

 

The BIOS image released by Intel for the Intel Atom processor E38xx product family is usually a UEFI BIOS. This image takes 5-10 seconds to start the OS loader, depending on the boot device. The authors note that, as a full-featured boot loader, a UEFI BIOS is not the best candidate to implement the early camera feature, which requires minimal boot time. Instead, they propose using the Reference Boot Loader from Intel or another lightweight boot loader to improve boot speed. The Reference Boot Loader from Intel saves the memory parameters in flash after the first boot, so it no longer needs to train the memory modules on subsequent boots.
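The "train once, then reuse" idea behind that memory-parameter cache can be pictured with a small sketch. This is an illustrative Python stand-in (a temp file plays the role of the boot loader's flash region, and the parameter values are made up), not the actual boot loader code.

```python
# Sketch of the memory-training cache described above: the first boot runs
# (slow) memory training and saves the resulting parameters; later boots
# restore them from "flash" and skip training entirely.

import json, os, tempfile

FLASH = os.path.join(tempfile.gettempdir(), "mrc_cache.json")

def train_memory():
    """Stand-in for full memory training (expensive on real hardware)."""
    return {"timings": [14, 14, 14, 34], "frequency_mhz": 1333}

def boot():
    if os.path.exists(FLASH):
        with open(FLASH) as f:
            return "fast", json.load(f)   # reuse cached parameters
    params = train_memory()
    with open(FLASH, "w") as f:
        json.dump(params, f)              # save for subsequent boots
    return "slow", params

if os.path.exists(FLASH):
    os.remove(FLASH)                      # simulate a never-trained board
first, p1 = boot()
second, p2 = boot()
print(first, second)                      # first boot is slow, the next fast
```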

 

The proposed implementation involves the use of a lightweight boot loader, Linux kernel optimization, and early camera module optimization along with Tizen user space fine-tuning. With this implementation, the early camera can be displayed at around 1.4 seconds after the power button is pressed, and the Tizen UI can be displayed in 2.9 seconds. For their next step, the authors plan to port the implementation to the IVI dedicated development kit, which is also based on the Intel Atom processor E38xx product family.
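To put those numbers in context, here is an illustrative boot timeline in the same shape as the reported results. The per-stage figures are assumptions chosen to sum to roughly 1.4 s for the early camera and 2.9 s for the Tizen UI; they are not measurements from the white paper.

```python
# Illustrative fast-boot budget: cumulative time per stage, with the camera
# required on screen before the 2-second mark. Stage durations are assumed.

stages = [
    ("boot loader (memory training skipped)", 0.3),
    ("kernel + early camera driver",          0.8),
    ("camera pipeline + first frame",         0.3),
    ("Tizen user space to UI",                1.5),
]

elapsed = 0.0
for name, seconds in stages:
    elapsed += seconds
    print(f"{elapsed:4.1f} s  {name}")

camera_up = sum(s for _, s in stages[:3])  # camera is visible after stage 3
assert camera_up < 2.0                     # meets the two-second requirement
```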

 

Safety is not the only reason for employing cameras in a vehicle. In the 2013 American Le Mans Series, the Corvette C6.R race car competing in the GT class featured an innovative rear-view camera system that contributed to the success of drivers Tommy Milner, Oliver Gavin, and Richard Westbrook. The system relayed to the driver the distance of approaching vehicles, even in the blind spots. And it worked in the rain, and in the dark.

 

The Corvette race car’s system runs on a custom Linux platform with an Intel® Core™ i3 processor and uses a rear-facing radar sensor capable of tracking up to 32 objects while working in tandem with the camera. Different colors and symbols are displayed on the rear-view screen, which allows the driver to easily see which race cars are behind, how close they are, their closing speeds, and even the approaching vehicle’s racing class.

 

For instance, a green marker on screen means the car behind the Corvette C6.R is falling back, while yellow means it’s moving at the same rate of speed; a red marker indicates that the car is closing the gap and about to pass, with a blinking arrow indicating on which side the C6.R will be passed. To differentiate racing classes, an extra line in the marker indicates when the computer believes the encroaching vehicle to be of a faster class.
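The marker logic just described can be sketched in a few lines, under the assumption that the system derives a closing speed for each tracked car from successive radar range readings. The threshold, deadband, and class flag here are illustrative, not the actual Corvette implementation.

```python
# Toy version of the rear-view marker logic described above: map a tracked
# car's closing speed (positive = gaining on us) to a marker color, with an
# extra line for a car believed to be in a faster racing class.

def marker(closing_speed_mps, faster_class=False, deadband=0.5):
    """Return the marker color for one tracked car (speeds in m/s)."""
    if closing_speed_mps < -deadband:
        color = "green"      # falling back
    elif closing_speed_mps <= deadband:
        color = "yellow"     # holding the same speed
    else:
        color = "red"        # closing the gap, about to pass
    return color + ("+class-line" if faster_class else "")

print(marker(-2.0))          # car dropping away
print(marker(0.2))           # car pacing us
print(marker(5.5, True))     # faster-class car closing fast
```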

 

The Intel Atom processor E38xx has been widely adopted in various Intel® Intelligent Systems Alliance partner products capable of supporting the video processing needs of IVI displays with and without camera input. Many of these operate across a wide range of temperatures, a must for vehicles driven in all types of weather.

 

Here are just two examples of low-power components on small-form-factor boards that can fit into slim enclosures:

 

The new Kontron COMe-mBT10 COM Express® mini Computer-on-Module family (55 mm x 84 mm) is equipped with Intel Atom processor E3800 or Intel® Celeron® processors. The processor offers up to three times the graphics performance of previous Intel Atom processor parts and supports DirectX 11, OpenGL 3.1, and OpenCL 1.1, as well as two independent displays with 1x DP++ (DP/HDMI/DVI) up to 2560x1600 @ 60 Hz and 1x single-channel LVDS 18/24-bit with DP to LVDS up to 1920 x 1200.

 

Fig2camera.png

Fig. 2 Kontron COMe-mBT10

 

These features, in combination with the low-power, credit-card-sized footprint, make the new COM Express® mini Computer-on-Modules a good fit for an extremely wide range of new, graphics-rich multi-touch applications. Seven module variants are included in the range, scaling from a low-power single-core Intel Atom processor (1.46 GHz / 5 W TDP) for energy-sensitive applications through a quad-core Intel Atom processor (4x 1.91 GHz / 10 W TDP) to an Intel Celeron processor (4x 2.42 GHz / 10 W TDP) for high-end applications.

 

ADL Embedded Solutions has announced its ADLE3800HD PCI Express 3.5-inch compact single board computer (SBC) based on the Intel Atom processor E3800, featuring multiple GbE, SATA, and display ports, PCIe expansion, and extended-temperature operation. The ADLE3800HD graphics engine is capable of decoding 10 or more streams of 1080p video and has integrated hardware acceleration for video decode of the H.264, MVC, VP8, VC1/WMV9, and other standards. It also supports DirectX 11, OpenGL 4.0, full HD video playback, and a maximum resolution of 2560 x 1600 @ 60 Hz with dual-display support. The ADLE3800HD touts a wide thermal junction temperature range of -40°C to +85°C.

 

 


 

Kontron is a Premier Member of the Intel® Intelligent Systems Alliance

ADL Embedded Solutions is a General Member of the Intel® Intelligent Systems Alliance


Murray Slovick

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

In recent years the pace of innovation in video surveillance technology has increased rapidly, thanks to improved compression methods, more intelligent video analysis techniques, higher performance processors, 180/360-degree panoramic network cameras, and what has been termed the “crowdsourcing” of video data: footage collected from citizens with camera-equipped smartphones, such as those who stepped forward to share their video with law enforcement following the Boston Marathon bombing. Why have video surveillance providers invested ever-increasing sums in these technical developments? Look no further than financial incentives: video surveillance has become the fastest growing sector of the security industry over the last decade. For 2014, the market research firm IHS forecasts that the global market for video surveillance will grow by more than 12 percent.

 

But no matter how good the IP cameras, NVRs, and analysis PCs are, there are two other core values, often overlooked, that create the vendor/user trust on which long-term success in the digital security surveillance (DSS) business is built. These factors are system sustainability and continuity, or, put another way, the supplier’s awareness of the need to provide maximum protection for its users' prior investments.

 

As an example, consider Geutebrueck, an independent family enterprise based in Windhagen, Germany, that has provided video security solutions for more than 40 years. Over this period video surveillance has undergone significant changes, first shifting from the earliest analog systems to digital surveillance, which gives the user entirely different flexibility and scalability compared to what was possible before, and now to burgeoning IP cloud-based video surveillance solutions, where a user can gain access to real-time images even if cameras are located on different continents.

 

For a leading manufacturer of multi-camera video surveillance solutions, technological advances such as fast graphics performance and the ability to integrate a large number of real-time video streams remain key requirements for all components used in Geutebrueck systems. And yet the company’s innovative ideas, from the perspective of the customer, can be characterized as evolutionary in nature, with an emphasis on guaranteeing not only the reliability, continuity, and high quality of video surveillance but also that inter-generational compatibility is maintained.

 

As such, the company’s product generations, as well as its in-house developed software, are all completely compatible with one another. For example, three generations of Geutebrueck’s digital video systems (MultiScope, GeViScope, and re_porter) are controlled via the same viewer interface, GSCView, for operation, administration, and investigation, independently of whether the camera source is analog, hybrid, or pure IP.

 

This priority on sustainability and continuity, the company says, ensures maximum protection of its users' investment, not just with regard to cost optimization but also by assuring that systems are scalable, flexible, expandable, upgradable, and compatible with future technologies. Even the firm’s maintenance strategy guarantees the availability of replacement parts, or functionally compatible replacements, as long as five years after a product has been discontinued.

 

Against this backdrop, Geutebrueck engineers recently approached the task of selecting boards and components for their next-generation product families. Up to now, in its GeViScope and re_porter product families, Geutebrueck has been using Fujitsu D3062-B Micro-ATX mainboards based on the Intel® Q67 Express Chipset. When the board was first presented to Geutebrueck by Fujitsu, its Intel® HD Graphics and PCI Express Gen 2 capabilities, along with the board’s overall performance, convinced Geutebrueck designers that it could meet the challenge. This led to a far-reaching integration of the Intel-based board in the company’s products.

 

For instance consider the Geutebrueck GSCSpeedView-AP high speed evaluation and playback station for GeViScope, re_porter and MultiScope III. The unit employs a Fujitsu D3062-B mainboard featuring an Intel® Core™ i7-2600 processor running at 3.4 GHz or better, 2 x 2GB DDR3 RAM, Dual DVI (DVI-I, DVI-D) output, a 40 GB SSD system hard disk, 1 Gb Ethernet interface, a RS-232 serial interface, 8 x USB 2.0 (2 in front), a DVD/RW drive and the Microsoft Windows 7 embedded operating system.

GSCSpeedView-ST_0.35002_AP_0.35003_d.jpg

Fig. 1 Geutebrueck’s GSCSpeedView-AP


The 19" rack-mountable unit comes preinstalled with GSCView software, which forms the central monitoring interface through which the entire system, with all its connected components, can be controlled. With the installed quad PCI Express graphics card it is possible to simultaneously view live streams, alarms, or recorded pictures from various Geutebrueck digital recording systems. The built-in graphics card allows connection of up to four DVI-D or VGA monitors, which can display pictures using freely defined templates and the dynamic livestream function (DLS) for optimized utilization of the available bandwidth.

 

Given its history with Intel-based Fujitsu boards, it should not be surprising that when Fujitsu Technology Solutions announced a new family of industrial-grade mainboards with LGA1150 sockets for 4th generation Intel® Core™ processors (Haswell microarchitecture), Geutebrueck was interested. The new Fujitsu Industrial Series product family includes one board each in the ATX, μATX, and, for the first time, Mini-ITX form factors, all with an Intel® Q87 Chipset, as well as a cost-optimized ATX board with an Intel® H81 Chipset.

 

All Fujitsu mainboards follow a family concept with the same drivers. This provides integrators with flexibility in meeting the requirements and budgets of their customers. The basic layouts of the new ATX and μATX boards are largely identical. Connectors, such as RAM, SATA, and internal USB, are situated in the same places to facilitate interchange within the family. Furthermore, the design ensures a high degree of layout consistency with the supplier's preceding D307x-S platform. This allows users to change platforms while retaining the chassis.

 

The Fujitsu “Extended Lifecycle” Mainboard D3222-B is a Micro ATX board featuring the Intel® Q87 Chipset supporting DDR3 1600 SDRAM memory and Intel® Rapid Start Technology 2 for increased system responsiveness and fast recovery from sleep states. The board is available with a range of 4th generation Intel® Core™ i3 / i5 / i7 processors (socket LGA1150) and supports faster connectivity and flexibility with integrated next-generation I/O technologies such as PCI Express Gen 2.0, SATA 6.0 Gbps, and USB 3.0 with Intel® Flex I/O.


fujitsu.png

Fig. 2 Fujitsu’s Mainboard D3222-B


To fulfill the requirement for stable and manageable network connections, the Fujitsu D3222-B is equipped with an Intel® Ethernet Connection I217-LM, which supports the latest Intel® Active Management Technology (Intel® AMT 9.0). Intel AMT is part of the Intel® vPro™ technology offering. Platforms equipped with Intel AMT can be managed remotely, regardless of whether they are powered up or have a functioning OS.

 

Other security features, such as TPM v1.2 and Secure Boot options, are also important to assure a high level of confidence. By way of review, a Trusted Platform Module (TPM) is a hardware-based security device that addresses growing concern about boot process integrity and offers improved data protection. The TPM protects the system start-up process by ensuring it is tamper-free before releasing system control to the operating system. A TPM device provides secured storage for data such as security keys and passwords, and it also offers encryption functionality.

 

A TPM device affixed to the motherboard is secured against external software attacks and physical theft. A pre-boot environment, such as the BIOS and operating system loader, uses the TPM to collect and store unique measurements from multiple stages of the boot process to create a system fingerprint. This fingerprint remains the same unless the pre-boot environment is tampered with, and it can be compared against future measurements to verify the integrity of the boot process.
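The measurement chain behind that fingerprint follows the TPM's PCR-extend pattern: each component's hash is folded into a running digest, so tampering with any stage changes the final value. The sketch below imitates that pattern in plain Python with SHA-256 (a real TPM does this in hardware, and the component names here are illustrative).

```python
# Minimal model of TPM-style measured boot: extend a PCR-like register with
# the hash of each boot component, producing a fingerprint of the whole chain.

import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR extend: new = SHA-256(old_value || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # registers start zeroed at platform reset
for component in [b"bios-code", b"option-roms", b"os-loader"]:
    pcr = extend(pcr, component)
baseline = pcr.hex()

# the same chain with a tampered loader yields a different fingerprint
pcr2 = bytes(32)
for component in [b"bios-code", b"option-roms", b"evil-loader"]:
    pcr2 = extend(pcr2, component)

print("fingerprint:", baseline[:16], "...")
print("tampered chain detected:", pcr2.hex() != baseline)
```

Because each step hashes the previous register value, the order of measurements matters too: swapping two stages also produces a different fingerprint.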

 

Secure Boot options protect the boot process by preventing the loading of drivers or OS loaders that are not signed with an acceptable digital signature. The BIOS maintains a list of platform keys that are used to verify that the OS loader and drivers are trusted.
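The verification step can be sketched as follows. Real UEFI Secure Boot uses X.509 certificates and Authenticode signatures; to keep the sketch self-contained, an HMAC stands in for the signature, and the key and file names are illustrative.

```python
# Simplified model of the Secure Boot check described above: the firmware
# keeps a list of enrolled platform keys and refuses an OS loader whose
# signature does not verify against any of them.

import hashlib, hmac

platform_keys = [b"oem-key", b"os-vendor-key"]   # enrolled keys (illustrative)

def sign(key: bytes, image: bytes) -> bytes:
    """HMAC-SHA256 stand-in for a real digital signature."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_loader(image: bytes, signature: bytes) -> bool:
    """Accept the loader only if some enrolled key verifies its signature."""
    return any(hmac.compare_digest(sign(k, image), signature)
               for k in platform_keys)

loader = b"grubx64.efi contents"
good = sign(b"os-vendor-key", loader)            # signed with an enrolled key
bad = sign(b"unknown-key", loader)               # signed with a rogue key

print(verify_loader(loader, good))               # accepted
print(verify_loader(loader, bad))                # rejected
```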

 

What’s more, and for reasons that should be obvious in DSS solutions requiring highly reliable 24/7 operation, long-life components and the robust tin plating of the printed circuit board make the Fujitsu D3222-B particularly rugged and resistant. EMC and burst resistance, tested at the Fujitsu Product Compliance Center in Augsburg, Germany, are said by the supplier to meet all industrial standards.

 

Thanks to its excellent in-the-field experience with the performance of the Fujitsu D3062-B board and the design-in support of Fujitsu, Geutebrueck made an early decision to integrate Intel’s innovations into its next product generation. Fujitsu further facilitated Geutebrueck’s decision in favor of the Intel-based system by guaranteeing an extended lifecycle for its boards. Fujitsu Extended Lifecycle Mainboards are optimized for 24/7 continuous operation in an extended temperature range of 0 to 60 degrees Celsius and offer five-year part availability. This is an important economic factor in the video surveillance market, where long product life is crucial to recovering high R&D and certification costs.



 

Fujitsu is a General member of the Intel® Intelligent Systems Alliance.

Microsoft Corp. is an Associate Member of the Intel® Intelligent Systems Alliance.

Murray Slovick

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Providing warfighters with accurate situational awareness on the battlefield requires computing platforms with high levels of graphics performance and video processing. The ultimate goal is for every soldier, sailor, and airman to be able to convey real-world information to their comrades, as well as to all levels of the DoD command structure, and to carry this task out in a harsh environment.

 

The vastly improved integrated GPU on 4th generation Intel® Core™ processors (codenamed Haswell) offers powerful features that enable a dramatic 3D visual experience via faster 3D rendering and more complex shading, creating more realistic 3D graphics, which is vitally important in terrain visualization and other imagery presented on military displays. The graphics processing unit (GPU) has been enhanced with additional execution units for 3D/media graphics and L3 cache, resulting in up to a 25% increase in raw 3D performance. In addition, Intel has increased the bandwidth of the display ports, enabling higher resolutions and the ability to connect up to three screens as a Collage Display.

 

Image quality is crucial for target recognition. The integrated processor graphics (IPG) in the 4th generation Intel Core includes the latest version of Intel® Quick Sync hardware-accelerated transcoding technology in the HD graphics portion of the chip. Intel® Quick Sync Video 2.0 technology almost doubles the H.264 transcoding speed of its predecessors and now supports up to 4K display resolutions. By way of review, video transcoding involves converting one compressed video format to another. The process can apply changes to the format, such as moving from MPEG-2 to H.264, or changes in the properties of a given format, such as bit rate or resolution.
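The practical effect of a bit-rate change during transcoding is easy to quantify: stream size is just bit rate times duration. The sketch below uses illustrative bit rates (8 Mbit/s MPEG-2 re-encoded to 4 Mbit/s H.264, roughly the "similar quality at half the rate" rule of thumb); these are assumptions, not codec guarantees.

```python
# Rough arithmetic for the format conversion described above: one hour of
# MPEG-2 transcoded to H.264 at half the bit rate for comparable quality.

def size_gb(bitrate_mbps, minutes):
    """Approximate stream size in gigabytes: bit rate x duration / 8 bits."""
    return bitrate_mbps * 1e6 * minutes * 60 / 8 / 1e9

mpeg2 = size_gb(8.0, 60)   # one hour of MPEG-2 at 8 Mbit/s
h264  = size_gb(4.0, 60)   # the same hour as H.264 at 4 Mbit/s
print(f"MPEG-2: {mpeg2:.1f} GB, H.264: {h264:.1f} GB")
```

Halving the bit rate halves storage and transmission cost per stream, which is why hardware-accelerated transcoding speed matters as much as decode speed.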

 

4th generation Intel Core architecture brings with it other improvements when compared with predecessor generations. These include Intel® Advanced Vector Extensions (Intel® AVX 2.0) enhancements which provide a significant performance improvement in floating-point-intensive computations, which are a key part of digital signal and image processing applications.

 

The integrated GPU on 4th generation Intel Core processors offers additional SWaP (size, weight, and power) benefits, too. Since the graphics processor is one of the most power-intensive processor blocks, the on-chip graphics engine can represent a savings of 50 W to 75 W compared to solutions that require separate graphics co-processors. Cost and board space are saved by eliminating the need for these expensive external graphics chips. A further bonus is that designers can use the processing capability of the integrated GPU for other aspects of the application at no additional hardware cost.

 

As an example, consider the Curtiss-Wright VPX6-1958, a rugged 6U OpenVPX single board computer (SBC) based on the new 4th generation Intel® Core™ i7 processor. Each of the Intel Core i7 processor's four cores delivers 2.4 GHz of performance. The SBC is available with up to 32 GB of high-bandwidth DDR3 SDRAM (1600 MHz) and supports dual 40 GbE, PCI Express (PCIe) Gen3, quad GbE, and PMC/XMC expansion. The board's integral high-speed Ethernet, PCIe fabric, and XMC/PMC mezzanine module connectivity enable high-bandwidth data flows; data can also flow from the VPX backplane to the XMC (switched mezzanine card) site to support demanding applications such as video acquisition, processing, and distribution.


VPX6-1958-mock_up-web.gif

 

 

The Intel® Core™ i7-4700EQ processor powering the VPX6-1958 features Intel® HD Graphics 4600, an advancement over the previous-generation HD Graphics technologies first introduced in 2010. It offers an integrated ring bus that connects all CPU components (computational cores, L3 cache, graphics, and system agent) with the memory controller, enabling an optimized way of communicating with system memory via the fast L3 cache. Intel HD Graphics 4600 supports the Microsoft DirectX 11.1, OpenGL 4.0, OpenCL 1.2, and DirectCompute 5.0 standards. OpenGL is the most widely adopted 2D and 3D graphics API in the industry; it is window-system and operating-system independent as well as network-transparent. Support for OpenCL 1.2 also enables the chips to perform as general-purpose graphics processing units (GPGPUs).


The XVR16 6U VME SBC from GE Intelligent Platforms is also based on the 4th generation Intel Core i7 processor architecture, which offers an integrated graphics and memory controller plus quad-core processing at up to 2.4 GHz, all in one device. Capable of delivering high-quality 2D/3D graphics for a compelling visual experience, the XVR16 features graphics support for Microsoft DirectX 11.1, OpenCL 1.2, and OpenGL 3.2. It can drive up to three independent displays, with one DisplayPort routed to the front panel, two VGA ports routed to P0/P2, and two DVI ports routed to P0/P2.

GE.png

The devices offer support of PCI Express Gen3 technology and USB3.0, providing even greater bandwidth for on-board and off-board connectivity. In addition to its range of onboard I/O features, the XVR16 also offers two on-board mezzanine expansion sites for enhanced system flexibility. Memory resources include up to 16 GB DDR3 SDRAM, up to 64 GB NAND Flash, optional SATA hard drive, BIOS Flash and BIOS backup Flash. The XVR16 is available in five build versions from air-cooled to fully rugged with extended temperature capability, providing solutions in environments from benign to extremely harsh.

 

There are many excellent COTS hardware vendors providing latest-generation embedded computing solutions. However, the hardware is only as good as the graphics driver powering its associated graphics processing unit (GPU). Creating and maintaining a software driver for COTS hardware can be a time-consuming and expensive task. The certification process for software integral to airborne equipment, for example, is subject to very stringent standards, and this certification adds substantially to the cost and time involved in building an aircraft, making it a significant risk factor. Presagis, a provider of commercial-off-the-shelf (COTS) modeling, simulation, and embedded display graphics software for the aerospace and defense industry, offers production-quality source and object code for an OpenGL API-capable environment that is compatible with real-time system constraints.

 

Designed from inception to support the delivery of future cockpits, Presagis VAPS XT provides a rapid prototyping, design, and deployment environment for the creation of 2D and 3D photo-realistic graphical displays, including instrumentation and equipment models, specifically for real-time 3D simulation and training applications. Using VAPS XT, engineers can develop virtual flight displays, including primary flight displays (PFDs).

 

The company also offers its VAPS XT ARINC 661 solution for those wishing to develop their displays in conformance with the ARINC 661 standard. Originating directly from the aviation industry, ARINC 661 was created to initiate a fundamental shift away from having to integrate monolithic systems where each supplier provided its own HMI to a position where different systems share a common HMI. It addresses a need to allow multiple systems to interact with a display system in a safety critical environment in such a way that system changes do not require a re-certification of the display system.

 

 


 

GE Intelligent Platforms and Microsoft are Associate Members of the Intel® Intelligent Systems Alliance.

Curtiss-Wright Controls Defense Solutions is a General Member of the Intel® Intelligent Systems Alliance.

Murray Slovick 

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Safety concerns have car makers and technology suppliers continuing to search for ways that drivers can avoid fishing for switches, buttons, or icons on a touchscreen as a car zooms along the highway at 65 miles an hour. In my last Roving Reporter entry on IVI I focused on new voice-assisted technologies that are giving people the freedom to interact with the center console display in a more natural, intuitive way. This time, I want to turn to other "Perceptual Computing" technologies, in which devices take on human-like senses to perceive the user's intentions, employing the context of an event or action to facilitate and even anticipate what the driver wants to do. Among the contextually aware elements that will be used as IVI system input controls are gesture interaction, facial tracking and attribution (such as a smile or nod), and eye tracking.

 

Gesture-recognition technology is widely expected to be the next-generation in-car user interface. In gesture recognition, the idea is to determine whether the driver has performed a recognizable hand or finger gesture within the interaction space, without the “middleman” of touchscreen controls. In operation, a camera or infrared sensor recognizes and interprets hand movements as in-car commands. Perceptual Computing systems can take into account not just that you made a hand gesture, but how far or fast you made it, in order to, for example, open a car window by the amount you desire.

 

Examples of gestures that can be used include the driver touching the gear-shift knob, then raising or lowering a hand to change the temperature via the climate control system, or the driver tilting his head left or right to turn the volume of the stereo up or down. As another example, using facial analysis a camera can “see” where you are looking while talking to determine whether it should interpret your voice as a command or determine that it is simply a conversation with a passenger.

 

Omek_edited-1.png

 

 

A camera placed in the steering wheel or on the dashboard is programmed to "watch" for certain gestures. When it sees them, it sends a signal to the head unit processor (or an in-vehicle computer) that handles the connected IVI hardware. The data is analyzed to determine what the driver is doing and ascertain which central information display controls the driver wants to adjust and then activate the appropriate features.
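The last step of that flow, mapping a recognized gesture and its measured extent to an IVI control adjustment, can be sketched as a small dispatch table. The gesture names, controls, and scaling below are all illustrative assumptions; a production system would consume events from the gesture-recognition middleware rather than this stub.

```python
# Toy dispatcher for the flow described above: a recognized gesture plus its
# measured magnitude adjusts one IVI control, clamped to a 0.0-1.0 scale.

def handle_gesture(name: str, magnitude: float, state: dict) -> dict:
    """Apply one recognized gesture to the IVI control state."""
    actions = {
        "raise_hand": ("temperature", +0.1),   # hand up near gear shift
        "lower_hand": ("temperature", -0.1),
        "tilt_right": ("volume",      +0.1),   # head tilt right/left
        "tilt_left":  ("volume",      -0.1),
    }
    control, step = actions[name]
    # magnitude scales the step, so a bigger gesture makes a bigger change
    state[control] = min(1.0, max(0.0, state[control] + step * magnitude))
    return state

state = {"temperature": 0.5, "volume": 0.5}
state = handle_gesture("tilt_right", 2.0, state)   # a large tilt: bigger step
print(state)
```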

 

This key role of processing elements is where advances in Intel® processor architecture come in, delivering at low power levels the computational muscle to process inputs from multiple sensor technologies to track gestures, objects, and faces, as well as recognize voice commands, all of which is coming to IVI units enabled by Intel® Core™ and Intel® Atom™ processor-based devices.

 

Gesture control in automobiles is not a futuristic fantasy. Earlier this year Hyundai unveiled its HCD-14, a luxury four-door concept sedan featuring gesture controls for its navigation, infotainment, audio, HVAC, and smartphone connectivity functions.

 

To bring gesture-based interaction via camera monitoring to Intel platforms, SoftKinetic, a Belgium-based developer focused on gesture-recognition technology, has developed close-range hand and finger tracking capabilities via its iisu (“The Interface Is You”) middleware. As a result, for engineers who want to create close-range applications using hand, finger, and face tracking and voice recognition on Intel platforms, these functions are included in the Intel® Perceptual Computing SDK, available to developers free of charge.

 

To handle predefined facial analysis and recognition, the Intel SDK includes seven-point landmark detection and “attribution” detection, including smiles, winks and blinks. What is more, the SDK’s speech-recognition capabilities permit voice command and control, as well as dictation and text-to-speech analysis. The SDK includes manuals, code samples, algorithms, example applications, and tutorials to help developers integrate perceptual computing interfaces as simply as possible.

 

When paired with the Intel Perceptual Computing SDK, the Creative Senz3D camera enables developers to create the next generation of natural, immersive, innovative software applications that incorporate close-range hand tracking, face analysis and 2D/3D object tracking on Intel platforms.

 


 

 

Designed with ease of setup and portability in mind, the Creative package combines advanced QVGA depth sensor technology with an HD (720p) camera and a dual-array microphone for capturing and recognizing gestures, voice control and face detection. Measuring just 4.27” x 2.03” x 2.11” and weighing only 9.56 oz., the camera is small enough to integrate with either fixed or mobile devices.

 

The camera lets users manipulate objects on the screen using gestures and is able to completely eliminate the background. It is a USB-powered (power <2.5W) camera and is optimized for close-range (6 inches to 3 feet) tracking of fingers, static hand poses, moving hand gestures, as well as facial detection/analysis. The Senz3D includes drivers that are compatible with Microsoft Windows 7 and 8.
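Given that close-range specification, an application would typically discard detections outside the camera's working band. A minimal sketch of such range gating, with an illustrative detection list (the band is roughly 6 inches to 3 feet, converted to meters):

```python
# Sketch: gating depth-camera detections to a close-range band of
# roughly 6 inches to 3 feet. Detection depths below are made up.

CLOSE_RANGE_M = (0.15, 0.91)  # ~6 in to ~3 ft, in meters

def in_close_range(depth_m, band=CLOSE_RANGE_M):
    lo, hi = band
    return lo <= depth_m <= hi

detections = [0.05, 0.30, 0.80, 1.50]  # depths of candidate hands, meters
tracked = [d for d in detections if in_close_range(d)]
print(tracked)  # [0.3, 0.8]
```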

 

Speaking of Microsoft, if you are a game player some of this may sound familiar. That’s because it is not unlike Microsoft’s Kinect system for the Xbox game console, which detects motion from distances of up to about 10 feet. With the Creative camera and Perceptual Computing SDK, however, rather than tracking a user’s full body as Kinect does, only the user’s hand and forearm gestures are analyzed. According to numerous industry reports, Microsoft is itself looking to adapt Kinect’s gesture-recognition technology into future Windows-driven connected car platforms.

 

Advantech’s ARK-DS762 integrates Microsoft® Kinect™ technology that allows a screen to become a virtual interactive mirror that can be controlled with gestures or spoken commands. It employs a 3rd generation Intel® Core™i7/i5/i3 processor (up to 45 watts TDP) with a powerful graphics engine that supports three independent HDMI displays and features rich I/O (including USB 3.0) and extra flexibility via optional expansion modules. The unit also supports wireless IP connection for remote communications and incorporates Advantech’s remote control hardware monitoring technology (SUSI Access), which provides computers with off-site system diagnosis and self-recovery capabilities.

 

Eye tracking systems utilize gaze technology, which determines what the subject is looking at, to warn drivers if they are not looking at an oncoming pedestrian or nearby vehicular traffic and there is, as a consequence, an immediate danger. Cameras mounted in the cabin can recognize when the driver takes his or her eyes off the road and sound a warning. Similarly, eye tracking would allow your car to know you’ve been staring at something too long and alert you to watch the road instead.
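The warning logic amounts to timing how long the gaze has been off the road. A rough sketch, assuming the camera delivers timestamped on-road/off-road classifications (the two-second threshold and the sample format are illustrative):

```python
# Sketch: alert if the driver's gaze has been off the road longer than a
# threshold. Timestamps are in seconds; the limit is an assumption.

OFF_ROAD_LIMIT_S = 2.0

def needs_alert(samples, limit=OFF_ROAD_LIMIT_S):
    """samples: list of (timestamp, on_road: bool), in time order."""
    off_since = None
    for t, on_road in samples:
        if on_road:
            off_since = None          # gaze returned to the road; reset timer
        elif off_since is None:
            off_since = t             # gaze just left the road
        elif t - off_since >= limit:
            return True               # off the road too long
    return False

samples = [(0.0, True), (0.5, False), (1.0, False), (3.0, False)]
print(needs_alert(samples))  # True: gaze off road from t=0.5 through t=3.0
```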

 

A gaze technology demonstration was conducted at IDF 2013 in San Francisco. Called "Tobii Eyetracker by Dell OEM," it worked with technology provided by Tobii, a company based in Sweden that specializes in eye tracking and gaze interaction, enabling users to accomplish a specific set of tasks with their eyes, or, more specifically, by allowing the system to track eye movements. For more information see the Roving Reporter blog “Computer Control is in the Eye of the Beholder.”

 

 

Contact Featured Alliance Members:

Contact Advantech>>

Contact Microsoft>>

Contact Dell>>

 

Solutions in this blog:

Advantech’s ARK-DS762

Intel® Perceptual Computing SDK
Creative Senz3D Camera

Softkinetic

Tobii gaze

 

Related topics:

Sensing and Analytics (Top Picks blogs, white papers, and more)

 

 

Advantech is a Premier Member of the Intel® Intelligent Systems Alliance

 

Microsoft is an Associate member of the Intel® Intelligent Systems Alliance

 

Dell is a Premier member of the Intel® Intelligent Systems Alliance

 

Murray Slovick

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Except in cases where they are being flambéed by unscrupulous financial types, numbers can give you a pretty accurate assessment of the health of a business or market sector. Consider In-Vehicle Infotainment (IVI) systems, for example. Earlier this year the technology market intelligence company ABI Research provided the best evidence yet that multimedia information consoles are now the predominant feature of dashboards in new cars; the firm predicted that shipments of connected automotive infotainment systems would grow from 9 million in 2013 to more than 62 million in 2018, a nearly sevenfold increase, with connected navigation, multimedia streaming, social media, high-resolution displays and in-car Wi-Fi hotspots among the key reasons why.


But while there’s no questioning the attractiveness of seamlessly keeping up to date with your life while motoring down the highway, there is a potential fly in the ointment: some 90% of accidents are said to be caused by driver error, and even without knobs (today’s cars come with touch screens and flat-panel buttons), using the display can take a driver’s eyes off the road for up to two seconds. Driver distraction is thus becoming a real concern.


One potential answer is speech recognition. Until recently car-based, voice-response systems were used mainly for tasks such as automated dialing of hands-free Bluetooth devices or to operate the basic functions of radio/music head units. Going forward, however, voice based control is poised to make the leap from purely task-oriented command and control to a more sophisticated, user-centric interface designed to negate distraction issues at a time when the social media interactivity phenomenon is quickly making its way into the automotive cockpit.


Speech recognition in the form of hands-free text-to-voice and voice-to-text operations is delivering greater accuracy and employing more flexible grammar libraries, which allow users to interact in a more conversational way instead of being restricted to fixed, predefined menu phrases. For example, Ford's SYNC, based on Microsoft's Windows CE operating system, supports up to 10,000 voice commands with no training required for the system to recognize them.


Speech-based IVI applications require more computing performance to increase accuracy and improve UIs. Developers can meet these challenges using the Intel® Atom™ processor and Intel software development products. Improved microphone performance is one method to enhance voice recognition systems. Today’s microphones offer 61 dB SNR, the industry is quickly adopting 63 dB as its standard (employing three MEMS microphones), and 70 dB is not far off. MEMS microphones are behind much of this advance; they produce an electrical signal from a change in capacitance caused by the movement of a membrane relative to a stationary plate. A MEMS acoustic sensor uses a vibrating membrane and a fixed plate to convert acoustic pressure waves into this change in capacitance. A CMOS interface IC then converts the change in capacitance into a digital PDM-modulated output (Fig. 1).


Figure 1: Basic operation of a MEMS mic (Source: IDF 2013).
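Conceptually, recovering PCM audio from the mic’s 1-bit PDM stream is a matter of low-pass filtering and decimation. The toy averaging filter below only illustrates the idea; production designs use proper multi-stage decimation filters:

```python
# Sketch: a PDM bitstream encodes the signal in its density of ones.
# Averaging each block of bits and decimating is a crude low-pass filter
# that recovers PCM samples. Decimation ratio and bits are illustrative.

def pdm_to_pcm(bits, decimation=8):
    """Average each block of `decimation` PDM bits into one PCM sample in [-1, 1]."""
    pcm = []
    for i in range(0, len(bits) - decimation + 1, decimation):
        block = bits[i:i + decimation]
        density = sum(block) / decimation   # fraction of ones in the block
        pcm.append(2.0 * density - 1.0)     # map [0, 1] -> [-1, 1]
    return pcm

# A stream that is mostly ones encodes a positive-going signal.
bits = [1, 1, 1, 1, 1, 1, 0, 0,  0, 0, 0, 0, 1, 1, 0, 0]
print(pdm_to_pcm(bits))  # [0.5, -0.5]
```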


From an engineering perspective, the different microphone types and placements pose design challenges for enabling natural speech, because microphone configurations can vary dramatically in both type and number. It has been demonstrated that three- or four-mic arrays used in similar environments can improve speech recognition performance by 10% or more compared with one- or two-mic setups.
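One reason extra microphones help is beamforming: delay-and-sum processing aligns each channel on the talker and averages them, reinforcing the speech while averaging down uncorrelated noise. A whole-sample sketch with made-up signals (real systems use fractional delays and calibrated geometry):

```python
# Sketch: delay-and-sum beamforming with whole-sample lags.
# The signals and lag values below are invented for illustration.

def delay_and_sum(channels, lags):
    """channels: equal-length sample lists; lags[i]: samples channel i lags the source by."""
    n = len(channels[0])
    out = []
    for i in range(n):
        acc, cnt = 0.0, 0
        for ch, lag in zip(channels, lags):
            j = i + lag          # read the sample corresponding to source time i
            if 0 <= j < n:
                acc += ch[j]
                cnt += 1
        out.append(acc / cnt if cnt else 0.0)
    return out

# Two mics hear the same ramp; the second mic hears it one sample later.
mic1 = [0.0, 1.0, 2.0, 3.0]
mic2 = [0.0, 0.0, 1.0, 2.0]
print(delay_and_sum([mic1, mic2], lags=[0, 1]))  # [0.0, 1.0, 2.0, 3.0]
```

With the lags compensated, the two channels add coherently and the ramp is recovered; noise that differs between mics would instead be averaged down.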


When more microphones are used, the signal processing workload on the IVI system increases, with the average normalized latency growing roughly linearly with the number of microphone channels. Happily, the Intel® Atom™ processor E3800 family (formerly code-named Bay Trail) is capable of handling inputs from multiple sources simultaneously, with low CPU overhead and without using a DSP, which can lower system cost. The processor also is industrial grade (-40°C to +85°C), satisfying automotive requirements. No fan is needed, since the processors in the family deliver their performance in a sub-10-watt system on a chip.


If more DSP functionality is needed for audio apps, the Intel Haswell microarchitecture is the first Intel product with an integrated DSP core dedicated to audio (a 180 MHz core, 200 KB of SRAM, and I2C/I2S audio codecs are used). In this way developers can deliver a differentiated user experience and still have computing headroom on board for future acoustic improvements or to run other applications concurrently.


On the visual side the Intel Atom processor E3800 product family delivers enhanced media and graphics capabilities at low power. These SoCs are based on the Silvermont microarchitecture, utilizing Intel’s industry-leading 22nm process technology with 3-D Tri-Gate transistors. This new microarchitecture delivers significant improvements in computational performance and energy efficiency over previous-generation CPU microarchitectures. It includes a new out-of-order execution engine for superior compute performance, outstanding power management capabilities, and enhanced security. Intel® Virtualization Technology increases virtualization performance by allowing the operating system more direct access to the hardware.


But getting back to the Intel Atom processor: the new Bay Trail product can enhance animated text and graphics, facilitate camera use (increasingly popular in vehicles) and improve the visual textures, transitions and other effects that are glued together to achieve a truly immersive in-car experience. Other benefits include faster media conversions, stereoscopic 3D capabilities, highly efficient image processing, and enhanced HD video transcoding over HDMI with Intel® Gen 7 graphics.


Other innovative and original automotive solutions to control in-car infotainment consoles are being made possible through the Tizen OS development platform for IVI embedded computing systems. Intel is a key contributor to Tizen and part of its Technical Steering Group. Last week at CEATEC, the IT and electronics exhibition and conference held at the Makuhari Messe complex just north of Tokyo, the Intel booth drew attention with a demo in which Systena (Tokyo, Japan; http://www.systena.co.jp/eng/) showed off a unique software-based Human Machine Interface (HMI) for Tizen IVI that can change the look of the HMI display to suit the user’s taste and personality (Fig. 2). The solution utilizes information provided by the driver’s smartphone or other handheld device, which is recognized through Near Field Communication (NFC).

 

CEATECFig2.jpg

Figure 2: Systena’s Tizen-based HMI can vary according to the user’s preference.
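The personalization step can be pictured as a lookup keyed by the NFC tag ID read from the driver's handset. All tag IDs and profile fields below are invented for illustration:

```python
# Sketch: selecting an HMI profile from an NFC tag ID.
# Tag IDs and profile contents are hypothetical examples.

HMI_PROFILES = {
    "04:A2:19:B3": {"theme": "minimal", "language": "ja", "favorites": ["nav", "news"]},
    "04:77:C1:0F": {"theme": "sport",   "language": "en", "favorites": ["media"]},
}

DEFAULT_PROFILE = {"theme": "default", "language": "en", "favorites": []}

def profile_for_tag(tag_id):
    """Fall back to a neutral HMI when the handset is unknown."""
    return HMI_PROFILES.get(tag_id, DEFAULT_PROFILE)

print(profile_for_tag("04:A2:19:B3")["theme"])  # minimal
print(profile_for_tag("FF:FF:FF:FF")["theme"])  # default
```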


 

Also at the Intel booth, OBIGO of Korea displayed an HTML5 browser for automotive use employing Tizen IVI that not only showed infotainment features on screen, such as the ability to access e-mail, but also provided vehicle data such as air conditioning settings.


An IVI Telematics computer using Intel® Atom™ processor E3827


Figure 3: The Nexcom VTC 1010


Nexcom describes its VTC Series as “the Intelligent Vehicle Telematics Computer for a Mobile World.” A quick look at its features and specs shows why. The Nexcom VTC 1010 fanless in-vehicle computer (Fig. 3 above) features the next-generation Intel Atom processor E3827 at 1.75 GHz, with powerful graphics and multimedia enhancements. The unit comes in a compact 1-DIN enclosure with a built-in CAN bus 2.0B interface and an optional OBD-II (SAE J1939/J1708) port to monitor the vehicle’s operating status in real time. The VTC 1010 can be configured to work with two independent WWAN connections, effectively increasing bandwidth for faster over-the-air data transfer, and it also supports two-way voice communication. Equipped with intelligent vehicle power management, the VTC 1010 can be woken up by ignition, timer, or remote dial-up for flexible operation.


 

For in-vehicle infotainment applications the VTC 1010 boasts built-in G-sensor, gyroscope, and e-compass sensors. A built-in u-blox 6 GPS comes with optional dead reckoning or GLONASS support. Among the computer’s multitude of interface connections are a VGA/DP output, 1 LAN, 2 x RS-232, 1 x RS-422/485, 3 x DI, 3 x DO, and 3 x USB.

 


Nexcom is an Associate Member of the Intel® Intelligent Systems Alliance

 

Murray Slovick

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance


At first glance the Dell Latitude 6530 doesn’t look like anything special, until you notice the black module marked “Tobii” placed just beneath the screen and connected via USB. Tobii, based in Sweden, is a company specializing in eye tracking and gaze interaction technology. Their module is the principal element of a demonstration at IDF called "Tobii Eyetracker by Dell OEM" that enables users to accomplish a specific set of tasks, including navigating, scrolling, zooming and selecting items on a laptop screen, without keystrokes, mouse clicks or touchpad intervention. The goal is for the person interacting with the demo to see a few of the potential applications for operating computer systems with one’s eyes, or, more specifically, by allowing the system to track your eye movements.

 

Before you can get started you need to complete a one-time calibration process, which involves following a circle moving around the screen. It takes about a minute or so: the user stands about an arm’s length from the PC or tablet, and an icon indicates when the system has located one and then both eyes (more on how this is done shortly).

 

You then simply look at what you want to launch with your eyes and that initiates the equivalent of a mouse or keypad click.

interaction.jpg

 

 

While the Dell Latitude 6530 was running an Intel® Core™ i5-3230M processor (@2.6 GHz), that much computing power is not needed to run the eye tracking app. A Dell Latitude 10 tablet also was available to be connected to the Tobii module; this is a 10” tablet powered by the Intel® Atom™ processor Z2760 (with processor frequency of up to 1.8 GHz with Intel® Burst Technology). The computing devices can run either Microsoft Windows 7 or Windows 8.

 

While the demo offered several games that can be played using the system, such as blowing up Asteroids by simply looking at your target, there are more serious applications intended for Tobii’s eyetracker:

 

  • It is intended as an assistive communication device for those with disabilities. Someone without the ability to speak, for example, could communicate by looking at sounds or words on a screen. It is also envisioned as part of a healthcare product that bolts to a wheelchair, enabling users such as quadriplegics to both operate their computers and physically move about. Eye-tracking control also can reduce the strain on users with carpal tunnel syndrome.
  • In education, children with dyslexia have non-traditional screen viewing techniques. By tracking their eye movements and giving the data to their reading teachers, the children can be taught a better viewing pattern instead of jumping around on the page trying to gather information.
  • The system can determine retail shopping habits: where the consumer is looking when he or she approaches a retail display.
  • Eyetracking data can be used to review what a vehicle operator was looking at prior to an accident, or prevent an accident by sounding an alarm if the driver takes his eyes off of the road.
  • It can improve website usability; the eye tracker captures where on a screen your eyes look.
  • It can facilitate other computing tasks, such as reading e-mail aloud or using a calculator.

  

How it works

 

It turns out that the light-sensitive surface of the eye, the retina, is not equally sensitive everywhere. Of its three main regions (foveal, parafoveal and peripheral) we see details most clearly in the area called the fovea. Researchers tell us the information registered through the foveal region constitutes 50% of what is sent to the brain through the optic nerve. To maximize our visual processing resources, when we move our eyes to focus on a particular visual area of interest, we are subconsciously placing the foveal region of the eye on the area currently within main focus. To do this we use what is called a fixation: a pause of the eye movement on a specific area of the visual field. By letting the foveal region register the image, the brain gets the highest resolution possible for the image as well as the greatest amount of data registered by the eye about that area. Research also has shown that in normal light conditions the retina needs to rest on a new image for about 80 msec before that image is registered.
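Fixations can be detected from raw gaze samples with a simple spatial-dispersion threshold plus a minimum duration on the order of that 80 msec figure. A minimal sketch (the thresholds and sample data are illustrative, not Tobii's algorithm):

```python
# Sketch: a toy dispersion-based fixation detector. A fixation is a pause
# of gaze within a small spatial window lasting at least ~80 ms.
# max_disp (pixels) and min_dur (seconds) are illustrative thresholds.

def detect_fixations(samples, max_disp=30.0, min_dur=0.08):
    """samples: list of (t_seconds, x_px, y_px). Returns (start, end) times."""
    fixations, start = [], 0
    for i in range(1, len(samples) + 1):
        window = samples[start:i]
        xs = [s[1] for s in window]
        ys = [s[2] for s in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
            # dispersion exceeded: close the previous window if long enough
            if window[-2][0] - window[0][0] >= min_dur:
                fixations.append((window[0][0], window[-2][0]))
            start = i - 1
    window = samples[start:]
    if window and window[-1][0] - window[0][0] >= min_dur:
        fixations.append((window[0][0], window[-1][0]))
    return fixations

# Gaze holds near (100, 100) for ~90 ms, then jumps away.
samples = [(0.00, 100, 100), (0.03, 102, 101), (0.06, 101, 99),
           (0.09, 100, 100), (0.12, 400, 300)]
print(detect_fixations(samples))  # [(0.0, 0.09)]
```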

 

These movements are the enablers of eye tracking technology. According to Tobii, the most common non-invasive eye tracking technique is Pupil Center Corneal Reflection (PCCR). The basic concept is to use infrared light to illuminate the eye, causing highly visible reflections, and a camera to capture an image of the eye showing these reflections. It is then possible to calculate a vector formed by the angle between the cornea and pupil reflections; the direction of this vector, combined with other geometrical features of the reflections, is used to calculate the gaze direction. Using two cameras provides a robust calculation of the position of the eye in space and the point of gaze, even if the position of the head changes.
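At its core the PCCR computation reduces to a glint-to-pupil vector plus a per-user calibration that maps the vector to screen coordinates. The linear mapping below is a deliberate simplification; the gain and offset values are invented and would in practice come from the calibration step described earlier:

```python
# Sketch of the PCCR idea: the vector from the corneal glint to the pupil
# center moves with gaze, while the glint stays roughly fixed for a fixed
# IR source. Gains/offsets in to_screen() are made-up calibration values.

def gaze_vector(pupil_center, glint_center):
    """Vector from corneal reflection (glint) to pupil center, in image pixels."""
    return (pupil_center[0] - glint_center[0],
            pupil_center[1] - glint_center[1])

def to_screen(vec, gain=(120.0, 120.0), offset=(640.0, 360.0)):
    """Linear map to screen coordinates, calibrated per user (values hypothetical)."""
    return (offset[0] + gain[0] * vec[0],
            offset[1] + gain[1] * vec[1])

v = gaze_vector(pupil_center=(310.0, 242.0), glint_center=(308.0, 245.0))
print(v)             # (2.0, -3.0)
print(to_screen(v))  # (880.0, 0.0)
```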

 

I found that the eye-tracker technology works very well and each movement that was attempted during the demo was accurately performed. If you’re at IDF the Dell OEM eye tracker demo can be found in the Intelligent Systems Zone on Level 2.

 

Learn More

 

 

Solutions in this blog:

 

 

•  Tobii Eyetracker

Medical - Top Picks (blogs, white papers, and more)

Sensing and Analytics - Top Picks (blogs, white papers, and more)

 

Dell is a Premier Member of the Intel Intelligent Systems Alliance

 

Murray Slovick

 

Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance

Connected automotive infotainment systems are taking off with all the subtlety of a rocket launching. According to ABI Research, driven by the integration of smartphones and applications into the vehicular environment, global shipments of connected In-Vehicle Infotainment (IVI) systems will exceed 62 million by 2018. These next-generation systems will be able to push e-mails from a smartphone to a vehicle’s multimedia screen and read them aloud to the driver, offer voice-controlled instant messaging or social networking posts, and transfer video content from the home TV to the car for passenger entertainment.

 

But given that smartphones and smartphone apps were not made to be used while driving, automakers are understandably reluctant to fully integrate a smartphone interface into the car’s dashboard: they cannot control how it will be used and could face liability issues if a driver got into an accident while, for example, using an entertainment app.

 

The problem is that the car IVI system must, at once: 1) function as a master controller for communication with portable consumer devices; 2) have the speed and reliability of an embedded device; and 3) interact with automotive systems to keep the driver informed about the status of brakes, engine performance, tires and other key vehicular systems.  And all of this will be done via the increasingly common touchscreen that manufacturers are building into new cars' center consoles.

 

At the heart of the solution are operating systems and middleware. Automotive infotainment software must effectively merge the feature-rich aspects of consumer products while preserving the enhanced quality and robustness requirements of an automotive system. Ideally, this would be done through standardized automotive Web APIs. Unfortunately there is no standardized Web API with which an IVI system can access automotive data.

So a solution must be created that meets these automotive and consumer device requirements and at the same time provides a consistent target for a developer community. Among Intel’s® automotive activities are its involvement with two groups trying to make this happen: GENIVI® and Tizen.

 

GENIVI® is an industry alliance committed to driving broad adoption of a common open source automotive infotainment reference platform and standardizing the platform so it can be upgraded over the life of the vehicle. Comprising more than 165 member companies, the GENIVI® Alliance will deliver reference implementations and offer certification programs to an open source IVI community, resulting in shortened development cycles, quicker time-to-market, and reduced cost for companies developing IVI equipment and software (Fig. 1).

 

 


 

Figure 1: IVI systems must handle a wide range of vehicle and consumer product data.

 

GENIVI®’s version 3.0 refines the infotainment platform by updating some existing component versions, removing some that are not used, and promoting some optional items in GENIVI® 2.0 to mandatory in 3.0. One of the most important updates is the requirement to use a Linux Kernel version 3.0 (or greater). New mandatory components in 3.0 include a Bluetooth Hands-Free capability, and a Bluetooth Phone Book. These are declared as Abstract Components by GENIVI®, which means that the implementer has freedom as to how the capability is implemented, provided it meets the functionality and interface requirements.

 

The Tizen Project can be looked at as a fully open-sourced counterpart of GENIVI® that, like GENIVI®, is closely aligned with the Linux Foundation. But unlike GENIVI®, Tizen is an embedded Linux that also targets smartphones and other embedded devices. Tizen is a standards-based software platform supported by leading mobile operators, device manufacturers, and silicon suppliers for multiple device categories, including smartphones, tablets, netbooks, IVI devices, smart TVs, and more.

 

Based on HTML5 and designed for automotive OEMs, Tizen IVI allows developers to use web technologies to write applications that run across multiple car units. Tizen IVI is GENIVI® compliant and includes such core services as an Automotive Network Manager that provides APIs to access vehicle sensors and diagnostics data. Tizen includes an Automotive Message Broker, which abstracts CAN messages and other low-level vehicle information and exposes them to developers as a high-level API. The Automotive Message Broker can be used to translate the action of a steering wheel, pedals, gear shift or any other vehicle information into data that can be displayed in a web application.
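The kind of abstraction such a message broker performs can be sketched as decoding raw CAN frames into named, app-friendly properties. The CAN IDs, scaling factors and field layout below are invented for illustration, not taken from any real vehicle:

```python
# Sketch: raw CAN frames in, named high-level properties out, in the spirit
# of an automotive message broker. IDs, scaling and layout are hypothetical.

def decode_frame(can_id, data):
    """Map a raw CAN frame (id + byte payload) to a (property, value) pair."""
    if can_id == 0x101:                       # hypothetical: vehicle speed
        raw = (data[0] << 8) | data[1]        # big-endian 16-bit field
        return ("VehicleSpeed", raw / 100.0)  # assumed 0.01 km/h per LSB
    if can_id == 0x1A3:                       # hypothetical: gear position
        gears = {0: "P", 1: "R", 2: "N", 3: "D"}
        return ("Transmission.Gear", gears.get(data[0], "unknown"))
    return None                               # frame not exposed to apps

print(decode_frame(0x101, bytes([0x13, 0x88])))  # ('VehicleSpeed', 50.0)
print(decode_frame(0x1A3, bytes([3])))           # ('Transmission.Gear', 'D')
```

A web application would then subscribe to property names like `VehicleSpeed` without ever touching CAN IDs or byte layouts.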

 

Other Tizen IVI characteristics include fast boot-up (less than 7 seconds to home screen) and a small footprint. Intel® Atom™ IVI processors for Tizen include Tunnel Creek and Valley View.

 

Last month Tizen released version 2.0, code-named Magnolia. This release includes many new features and improvements over Tizen 1.0, including strong HTML5/W3C APIs and a new Web UI framework with full-screen and multi-window support. Additional Tizen device APIs, such as Bluetooth and NFC support, are provided, as well as access to the device’s calendar, call history, media gallery, settings panel and video player.

 

Intel® Intelligent System Alliance Member Solutions


Green Hills Software has shown how automotive OEMs and Tier 1 suppliers can reliably combine GENIVI®-compliant operating systems with real-time safety functions on Intel® Atom™ processors. The INTEGRITY® RTOS is a royalty-free, modern partition-architecture real-time operating system for use in embedded systems that require maximum safety, reliability or security. Based on INTEGRITY Multivisor™ secure virtualization, this platform enables the safe, seamless combination of any GENIVI®-compliant operating system with time-critical safety applications on a single processor, delivering significant per-unit cost savings, future-proofing and maximum automotive robustness in the head unit.

 

The Wind River Platform for Infotainment is a Linux-based, GENIVI® 3.0-compliant run-time platform specifically tuned for the IVI market. The platform takes full advantage of open source community innovations while providing a foundation on top of which Tier 1 and original equipment manufacturers (OEMs) can add functional differentiation and value-add applications. IVI system developers can use Wind River’s commercial-grade platform to reduce overall development cost, minimize project risk, and shorten project timelines. Wind River is a founding member of the GENIVI® Alliance, and its Platform for Infotainment supports the Intel® Atom™ Crossville architecture.

 

Mentor Graphics has made available a GENIVI® 3.0 specification-compliant, Linux®-based infotainment (IVI) product. The Mentor Embedded IVI Base Platform is a GENIVI®-compliant foundation for IVI software development. The platform integrates graphics, communication and multimedia middleware with libraries, system infrastructure, and management components on top of Linux and relevant drivers. When combined with Mentor Embedded development tools and services, automotive OEMs and their suppliers have a comprehensive solution for building the next generation of IVI systems on open source software. The Mentor Embedded IVI Base Platform is available for Intel® Atom™ architectures.

 


 

Wind River is an Associate Member of the Intel Intelligent Systems Alliance

Green Hills and Mentor Graphics are Affiliate Members of the Intel Intelligent Systems Alliance


Murray Slovick

Roving Reporter (Intel Contractor) Intel Intelligent Systems Alliance


Related post: IVI Integration of Consumer Device and Automotive Systems Data (http://embedded.communities.intel.com/community/en/applications/blog/2013/03/22/roving-reporter-)
