New Interfaces Challenge Touch

Touchscreens could be extinct if researchers pioneering new human-computer interfaces have anything to say about it. From brain-controlled machines to gesture-driven devices, there's a range of technologies in development that may find their way into everyday electronic devices.

Several conferences this year have offered a close look at innovative interfaces and what the future may hold.

Touchscreens are limited in the feedback they can give a user. The screen may vibrate when tapped, but that's about all it can do. At this year's Computer Human Interaction (CHI) conference in Vancouver in May, a researcher from the University of British Columbia showed a way to change the feel of a screen entirely, making it slippery at some moments and sticky at others.

The prototype screen has four actuators that vibrate the glass.

"This is actually the same technology used in many cell phones or other devices, but it runs at a higher frequency so you don't feel the vibration itself," said Vincent Levesque, who is a post-doctoral fellow. "It pushes your finger away from the piece of glass, a bit like an air hockey table."

Levesque's team set up a demonstration with basic file folders on screen. When a folder was selected, the screen became slippery; when the folder was dragged over another folder or the trash, the screen became sticky.
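In rough terms, the effect comes down to switching the actuators' output with the state of the drag. The sketch below is purely illustrative and is not the UBC team's code; the frequency, amplitude values and state names are assumptions made for the example.

```python
# Illustrative sketch only, not the UBC prototype's code: it shows how a
# variable-friction driver might map UI drag state to actuator amplitude.
# The frequency, amplitudes and state names are assumptions.

ULTRASONIC_FREQ_HZ = 30_000  # high enough that the vibration itself is not felt

def friction_amplitude(drag_state: str) -> float:
    """Map UI state to actuator amplitude: 1.0 = slippery, 0.0 = sticky."""
    if drag_state == "dragging_folder":
        return 1.0   # strong ultrasonic vibration lifts the finger slightly: feels slippery
    if drag_state == "over_drop_target":
        return 0.0   # actuators off: ordinary glass friction feels sticky by contrast
    return 0.4       # neutral feel elsewhere

def drive_actuators(drag_state: str) -> None:
    amp = friction_amplitude(drag_state)
    # A real device would command the piezo actuators here; this sketch just logs.
    print(f"actuators: {ULTRASONIC_FREQ_HZ} Hz at amplitude {amp:.1f} ({drag_state})")

if __name__ == "__main__":
    for state in ("idle", "dragging_folder", "over_drop_target"):
        drive_actuators(state)
```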

The prototype occupied a sizeable section of the table on which it sat. Wires protruded and circuit boards were visible, making it too bulky to integrate into any mobile devices. The system uses lasers to determine the position of the finger. As the team continues work on the project, it hopes to reduce the system's size and replace the lasers with a capacitive touchscreen.

At the CHI conference, university students and research groups dreamt up most of the projects on display and shared them with potential employers who could license the technology and invest in developing it.

Texas A&M University's Interface Ecology Lab favored gestures over touch, creating a gesture-controlled system called ZeroTouch. It looks like an empty picture frame and the edges are lined with a total of 256 infrared sensors pointing toward the center. The frame is connected to a computer and the computer to a digital projector.

"I like to consider it an optical force field," said Jonathan Moeller, a research assistant in the lab.

When the spiderweb of light created by the sensors is broken, the computer interprets the size and depth of the break and displays it as a brushstroke. If just a pencil breaks the beams, the brushstroke will be thin; if an entire arm or head breaks them, the stroke will be thick.
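As a rough illustration of that mapping, here is a sketch, not the lab's actual code, of how a single frame of beam readings could be turned into a brush width. The sensor count comes from the article; everything else, including the pixel scaling, is assumed.

```python
# Hypothetical sketch: map the number of blocked infrared beams to a stroke width.
NUM_SENSORS = 256  # IR sensors lining the frame, per the article

def brush_width(beam_blocked: list, max_width: float = 120.0) -> float:
    """Wider occlusions (an arm) give thick strokes; narrow ones (a pencil) give thin strokes."""
    blocked = sum(beam_blocked)
    return max_width * blocked / len(beam_blocked)

# Example: a pencil might block ~3 beams, a forearm ~80.
pencil = [True] * 3 + [False] * (NUM_SENSORS - 3)
arm = [True] * 80 + [False] * (NUM_SENSORS - 80)
print(f"pencil stroke width: {brush_width(pencil):.1f} px")
print(f"arm stroke width: {brush_width(arm):.1f} px")
```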

While painting on the digital canvas, users hold an iPhone on which they can select the color of the brush.

Drawing in the air is just a proof of concept. When ZeroTouch is placed over a traditional computer screen it becomes a touchscreen. Instead of creating brushstrokes, the system moves a cursor.
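A similarly hypothetical sketch of the touchscreen mode: a cursor position could be estimated from the centroid of the broken horizontal and vertical beams. The sensor layout and screen resolution here are assumptions, not details of the ZeroTouch hardware.

```python
from typing import Optional, Tuple

def cursor_position(blocked_x: list, blocked_y: list,
                    screen_w: int = 1920, screen_h: int = 1080) -> Optional[Tuple[int, int]]:
    """Return the screen pixel under the centroid of the broken beams, or None if the frame is clear."""
    xs = [i for i, b in enumerate(blocked_x) if b]
    ys = [i for i, b in enumerate(blocked_y) if b]
    if not xs or not ys:
        return None
    x = int(sum(xs) / len(xs) / len(blocked_x) * screen_w)
    y = int(sum(ys) / len(ys) / len(blocked_y) * screen_h)
    return (x, y)

# A fingertip breaking beams near the middle of a 128-by-128 beam grid:
blocked_x = [60 <= i <= 63 for i in range(128)]
blocked_y = [64 <= i <= 67 for i in range(128)]
print(cursor_position(blocked_x, blocked_y))
```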

Moeller started working on the project in 2009. It grew out of earlier research that used a projection screen and a camera, a system he thought was too bulky and wanted to shrink.

He considers two-dimensional interaction just the beginning.

"You can stack layers [of ZeroTouch] together to get depth sensing," he said.

The system could then not only sense objects in 3D space but also let users hover over on-screen objects. Hovering typically isn't possible with touch systems because a finger would occlude whatever it hovers over, he said.

If ZeroTouch becomes the new technology to create 3D objects, the Snowglobe project could provide a way to view and interact with them.

Snowglobe is a large acrylic ball with an image projected onto its inner walls through a hole in the bottom. Two Microsoft Kinect sensors track users; as they approach and move around the ball, the object inside rotates to follow them. If they stretch out their hands, their gestures control the orientation and size of the object inside the globe. The image is cast by a 3D projector, so wearing 3D glasses adds another dimension to the experience.
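To illustrate how the view can follow the viewer, here is a hypothetical sketch, not the Human Media Lab's code, that rotates the displayed object toward a tracked user's position. The Kinect tracking is replaced by a plain (x, z) coordinate for simplicity.

```python
import math

def view_angle_deg(user_x: float, user_z: float,
                   globe_x: float = 0.0, globe_z: float = 0.0) -> float:
    """Angle (degrees) from the globe's center to the user, used to turn the projected object."""
    return math.degrees(math.atan2(user_z - globe_z, user_x - globe_x))

def render_frame(user_x: float, user_z: float) -> None:
    yaw = view_angle_deg(user_x, user_z)
    # A real system would feed this yaw to the renderer driving the 3D projector.
    print(f"rotate object to yaw {yaw:6.1f} degrees")

# A user walking a quarter circle around the globe sees the object turn with them.
for step in range(5):
    angle = math.radians(step * 22.5)
    render_frame(math.cos(angle), math.sin(angle))
```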

John Bolton, of the Human Media Lab at Queen's University, came up with the idea, shown at CHI 2011, and had been working on it for two years.

"If we nest an object inside we can present all 360 degrees of that object if somebody walks around the display," Bolton explained. "So opposed to just sitting there with a mouse you can walk around and you're presented with the correct view as your position changes."

Bolton said that, as is true for many of the projects at CHI, there were no immediate plans for commercialization.
