Safety concerns have car makers and technology suppliers searching for ways to keep drivers from fishing for switches, buttons or icons on a touchscreen as a car zooms along the highway at 65 miles an hour. In my last Roving Reporter entry on IVI I focused on new voice-assisted technologies that give people the freedom to interact with the center console display in a more natural, intuitive way. This time, I want to turn to other "Perceptual Computing" technologies, in which devices take on human-like senses to perceive the user's intentions, employing the context of an event or action to facilitate and even anticipate what the driver wants to do. Among the contextually aware elements that will be used as IVI system input controls are gesture interaction, facial tracking and attribution (such as a smile or nod), and eye tracking.
Gesture-recognition technology is widely expected to be the next-generation in-car user interface. The idea is to determine whether the driver has performed a recognizable hand or finger gesture within the interaction space, without the “middleman” of touchscreen controls. In operation, a camera or infrared sensor recognizes and interprets hand movements as in-car commands. Perceptual Computing systems can take into account not just that you made a hand gesture, but how far or how fast you made it in order to, for example, open a car window exactly the amount you desire.
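As a rough sketch of how gesture magnitude could map onto a continuous control, consider a hand sweep whose travel distance sets the window position. The function name, the coordinate convention and the 30 cm maximum travel are illustrative assumptions, not taken from any real SDK:

```python
# Hypothetical sketch: mapping the vertical travel of a tracked hand
# sweep (in meters) onto a window-opening fraction. The 0.30 m full-sweep
# distance is an assumed calibration value.

def window_target(start_y: float, end_y: float, max_travel: float = 0.30) -> float:
    """Return a window position between 0.0 (closed) and 1.0 (fully open)
    proportional to how far the hand moved; clamp out-of-range sweeps."""
    travel = end_y - start_y              # downward sweep -> positive travel
    return max(0.0, min(1.0, travel / max_travel))

# A 15 cm downward sweep opens the window roughly halfway:
print(round(window_target(0.00, 0.15), 2))  # 0.5
```

The same proportional mapping could be driven by sweep speed instead of distance, which is the "how far or fast" distinction the paragraph above describes.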
Examples of gestures that can be used include the driver touching the gear-shift knob, then raising or lowering a hand to change the temperature via the climate control system, or the driver tilting his head left or right to turn the volume of the stereo up or down. As another example, using facial analysis a camera can “see” where you are looking while talking to determine whether it should interpret your voice as a command or determine that it is simply a conversation with a passenger.
A camera placed in the steering wheel or on the dashboard is programmed to "watch" for certain gestures. When it sees one, it sends a signal to the head unit processor (or an in-vehicle computer) that handles the connected IVI hardware. The data is analyzed to determine what the driver is doing, ascertain which central information display controls the driver wants to adjust, and then activate the appropriate features.
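The recognition-to-dispatch step described above can be sketched as a simple lookup from a recognized gesture to a subsystem command. The gesture labels and subsystem names here are assumptions for illustration only:

```python
# Minimal sketch of the head unit's dispatch step: a recognized gesture
# label is translated into a (subsystem, action) command. Unknown
# gestures are ignored. All names are illustrative assumptions.

COMMANDS = {
    "hand_raise":      ("climate", "temperature_up"),
    "hand_lower":      ("climate", "temperature_down"),
    "head_tilt_right": ("audio",   "volume_up"),
    "head_tilt_left":  ("audio",   "volume_down"),
}

def dispatch(gesture: str):
    """Return the (subsystem, action) pair for a recognized gesture,
    or None when the gesture is not a known command."""
    return COMMANDS.get(gesture)

print(dispatch("head_tilt_right"))  # ('audio', 'volume_up')
```

In a real IVI stack this table would sit behind the sensor-fusion and tracking layers, but the principle is the same: recognition produces a discrete event, and the head unit maps that event to the feature the driver wants to adjust.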
This key processing role is where advances in Intel® processor architecture come in, delivering at low power levels the computational muscle to process inputs from multiple sensor technologies to track gestures, objects and faces as well as recognize voice commands -- all of which is coming to IVI units enabled by Intel® Core™ and Intel® Atom™ processor-based devices.
Gesture control in automobiles is not a futuristic fantasy. Earlier this year Hyundai unveiled its HCD-14, a luxury four-door concept sedan featuring gesture controls for its navigation, infotainment, audio, HVAC and smartphone connectivity functions.
To bring gesture-based interaction via camera monitoring to Intel platforms, SoftKinetic, a Belgium-based developer focused on gesture-recognition technology, has developed close-range hand and finger tracking capabilities via its iisu (‘The Interface is You’) middleware. For engineers who want to create close-range applications using hand, finger and face tracking and voice recognition on Intel platforms, these functions are included in the Intel® Perceptual Computing SDK, available to developers free of charge.
To handle predefined facial analysis and recognition, the Intel SDK includes seven-point landmark detection and “attribution” detection, including smiles, winks and blinks. What is more, the SDK’s speech-recognition capabilities permit voice command and control, as well as dictation and text-to-speech. The SDK includes manuals, code samples, algorithms, example applications, and tutorials to help developers integrate perceptual computing interfaces as simply as possible.
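One way such attribution events could feed a user interface is as a confirmation gate for a pending command. The sketch below is not the Intel SDK API; the attribution labels and the confirm/cancel convention are assumptions chosen for illustration:

```python
# Illustrative sketch (not the actual Intel SDK API): using detected
# facial "attributions" such as a smile, nod or wink to confirm or
# cancel a pending voice or gesture command. Label names are assumed.

CONFIRM = {"smile", "nod"}
CANCEL = {"wink"}

def command_confirmed(attributions: set) -> bool:
    """Return True when the driver's detected attributions confirm the
    pending command; any detected wink cancels it outright."""
    if attributions & CANCEL:
        return False
    return bool(attributions & CONFIRM)

print(command_confirmed({"smile"}))          # True
print(command_confirmed({"smile", "wink"}))  # False
```

Gating actions on an explicit confirmation signal like this is one way to reduce false positives, a central problem for any always-watching camera interface.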
When paired with the Intel Perceptual Computing SDK, the Creative Senz3D camera enables developers to create the next generation of natural, immersive, innovative software applications that incorporate close-range hand tracking, face analysis and 2D/3D object tracking on Intel platforms.
Designed with ease of setup and portability in mind, the Creative package combines advanced QVGA depth sensor technology with an HD (720p) camera and a dual-array microphone for capturing and recognizing gestures, voice control and face detection. Measuring just 4.27” x 2.03” x 2.11” and weighing only 9.56 oz., the camera is small enough to integrate with either fixed or mobile devices.
The camera lets users manipulate objects on the screen using gestures and is able to completely eliminate the background. It is a USB-powered (<2.5 W) camera optimized for close-range (6 inches to 3 feet) tracking of fingers, static hand poses and moving hand gestures, as well as facial detection and analysis. The Senz3D includes drivers compatible with Microsoft Windows 7 and 8.
Speaking of Microsoft, if you are a game player some of this may sound familiar. That’s because it is not unlike Microsoft’s Kinect system for the Xbox game console, which detects motion from distances of up to about 10 feet. With the Creative camera and Perceptual Computing SDK, however, rather than tracking a user’s full body as a Kinect does, only the user’s hand and forearm gestures are analyzed. According to numerous industry reports, Microsoft is itself looking to adapt Kinect’s gesture-recognition technology to future Windows-driven connected car platforms.
Advantech’s ARK-DS762 integrates Microsoft® Kinect™ technology that allows a screen to become a virtual interactive mirror controlled with gestures or spoken commands. It employs a 3rd generation Intel® Core™ i7/i5/i3 processor (up to 45 watts TDP) with a powerful graphics engine that supports three independent HDMI displays, and it features rich I/O (including USB 3.0) and extra flexibility via optional expansion modules. The unit also supports wireless IP connection for remote communications and incorporates Advantech’s remote control hardware monitoring technology (SUSI Access), which provides computers with off-site system diagnosis and self-recovery capabilities.
Eye tracking systems utilize gaze technology, which determines what the subject is looking at, to warn drivers when they are not looking at an oncoming pedestrian or nearby vehicular traffic and are consequently in immediate danger. Cameras mounted in the cabin can recognize when the driver takes his or her eyes off the road and sound a warning. Similarly, eye tracking would allow your car to know you’ve been staring at something too long and alert you to watch the road instead.
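The "staring too long" check above reduces to timing how long the gaze has been continuously off the road. A minimal sketch, assuming a labeled gaze sample arrives every 100 ms and a 2-second warning threshold (both values are assumptions, not from any cited system):

```python
# Hedged sketch of eyes-off-road warning logic: alert when the trailing
# run of non-road gaze samples exceeds a time threshold. The 0.1 s
# sample interval and 2.0 s threshold are assumed tuning values.

def should_warn(gaze_samples, threshold_s: float = 2.0, sample_dt: float = 0.1) -> bool:
    """gaze_samples: recent gaze labels, newest last, one per sample_dt
    seconds. Warn if the driver has looked away longer than threshold_s."""
    off_road = 0
    for label in reversed(gaze_samples):   # walk back from the newest sample
        if label == "road":
            break                          # gaze returned to the road
        off_road += 1
    return off_road * sample_dt > threshold_s

print(should_warn(["road"] + ["console"] * 25))  # True (2.5 s off road)
```

A production system would add hysteresis and blink filtering so momentary glances or blinks do not trigger the alert, but the core logic is this trailing-run timer.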
A gaze technology demonstration, called "Tobii Eyetracker by Dell OEM," was conducted at IDF 2013 in San Francisco. It worked with technology provided by Tobii, a Sweden-based company specializing in eye tracking and gaze interaction, which enables users to accomplish a specific set of tasks with their eyes, or, more specifically, by allowing the system to track their eye movements. For more information see the Roving Reporter blog “Computer Control is in the Eye of the Beholder.”
Contact Featured Alliance Members:
Solutions in this blog:
Sensing and Analytics (Top Picks blogs, white papers, and more)
Advantech is a Premier member of the Intel® Intelligent Systems Alliance
Microsoft is an Associate member of the Intel® Intelligent Systems Alliance
Dell is a Premier member of the Intel® Intelligent Systems Alliance
Roving Reporter (Intel Contractor), Intel® Intelligent Systems Alliance