A Taiwan-based nonprofit research and development organization last week announced a virtual display that lets users control virtual keyboards and touchscreens that appear to float in front of them.
The Industrial Technology Research Institute (ITRI) said its new technology uses special glasses and DDDR (defined distance with defined range) camera technology to allow users to see and interact with virtual data, images, and devices with finger strokes.
The new i-Air Touch technology is being developed for an array of devices, including PCs, laptops, wearable computers, and mobile devices, and frees the user's hands from any physical touch-input device such as a touchpad or keyboard.
i-Air Touch's see-through capability enables a user wearing a pair of special eyeglasses to see and interact with a virtual input device, such as a touchscreen or mouse that appears to be floating in the air, while still being able to see and interact with the real world.
"i-Air Touch creates new possibilities for wearable and mobile computing by freeing users from the distraction of locating and touching keys on a physical input device for hands-free computing and improving security over voice commands," Golden Tiao, deputy general director of ITRI's Electronics and Optoelectronics Research Laboratories, said in a statement.
ITRI plans to license the patented technology to manufacturers. The institute sees the heads-up display technology being used not only in consumer arenas, but also in medical applications such as endoscopic surgery and in industrial applications that benefit from hands-free input.
The DDDR camera is the key functional component of i-Air Touch, ITRI said. The camera detects finger interaction with the virtual images while conserving battery power, a major issue facing manufacturers of many wearable computers, the institute stated.
How it works
The camera uses a phase- and color-coded lens to discern an object at a predetermined distance of 11 inches to 12.5 inches away from the eyeglasses. The camera detects and activates only in the presence of a fingertip within that input range. The virtual images shut off if a user's fingertip isn't present, allowing a clear field of view.
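The activation logic described above can be sketched in a few lines. This is an illustrative model, not ITRI's code: the function names and the idea of a per-frame depth estimate are assumptions; only the 11-inch to 12.5-inch range comes from the article.

```python
# Illustrative sketch (assumed names, not ITRI code): gate all virtual-input
# processing on a fingertip detected within the camera's activation range.

NEAR_IN, FAR_IN = 11.0, 12.5  # activation range in inches, per ITRI

def fingertip_in_range(depth_in):
    """Return True if a fingertip depth estimate falls inside the input range."""
    return depth_in is not None and NEAR_IN <= depth_in <= FAR_IN

def process_frame(depth_in):
    """Activate virtual-input capture only when a fingertip is in range."""
    if not fingertip_in_range(depth_in):
        return "idle"        # virtual images off, clear field of view
    return "capture-input"   # fingertip present: treat the touch as input

print(process_frame(11.8))   # in range -> capture-input
print(process_frame(18.0))   # out of range -> idle
print(process_frame(None))   # no fingertip detected -> idle
```

The point of the gate is twofold: the pipeline does no work (and draws no extra power) when nothing is in range, and the virtual overlay disappears so the wearer keeps an unobstructed view.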
The DDDR camera essentially captures the image of a user's fingertip out in front of it and splits the image into green and red color codes to provide segmentation in image processing, while phase coding provides distance and depth perception of the fingertip.
The DDDR camera lens focuses the green light component at 11 inches and the red at 12.5 inches. The combined green and red components resolve to the strongest image signal at the midpoint between the two light components (about 11 13/16 inches). The camera then captures the image signal at that midpoint as "input."
Because the camera does not register signals outside of the 11-inch to 12.5-inch virtual target plane, it draws power only when a fingertip is present.
Additionally, by detecting when a user's fingertip is in input range, the camera ensures that the user is intentionally trying to air-touch the virtual input device and that the camera does not mistake other user movements for input.
"A successful virtual touch triggers i-Air Touch to send a signal to the host device (a computer, laptop, smartphone, etc.) signifying that a key has been pressed or a touchscreen function has been touched," ITRI stated in a news release.
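The signal to the host described in the release can be sketched as a simple event message. The format below is hypothetical; ITRI has not published its protocol, and the field names and JSON encoding here are assumptions for illustration.

```python
# Illustrative sketch (hypothetical event format, not ITRI's protocol): after a
# confirmed virtual touch, build a key-press event for the host device.

import json

def make_touch_event(key):
    """Build a hypothetical host-bound event for a virtual key press."""
    return json.dumps({"source": "i-air-touch",
                       "event": "key_press",
                       "key": key})

# A host (computer, laptop, smartphone) would decode and apply the event.
event = make_touch_event("A")
decoded = json.loads(event)
print(decoded["event"], decoded["key"])  # key_press A
```

A touchscreen gesture would simply carry a different event type and payload; the host-side handling is the same as for a physical keyboard or touchscreen driver.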
While the glasses cannot take photos as Google Glass can, they can be used with cameras in other kinds of wearable computers, the company said.
This story, "Goggles interact with floating keyboards and virtual displays" was originally published by Computerworld.