“We’re wondering if the main ideas of multitouch as we know from Microsoft Surface and a couple of tables can be expanded into rooms,” said Patrick Baudisch, a professor at the Institute and chair of its Human Computer Interaction group. “We’re building a floor that is based on the same concept as multitouch tables.”
The group’s prototype is a proof of concept: relatively small, it can accommodate only one user at a time. But Baudisch said he hopes the system could one day scale much larger, perhaps to the size of a whole warehouse.
“Direct touch is typically considered a very desirable and natural way of interaction,” Baudisch said. “On the other hand, direct touch is kind of limited to arm’s length right now. If I have a table and the table is substantially larger than my arm, then I can walk around the table, but at some point it becomes impossible.”
The system sits flush with the floor, and when someone stands on it, the floor lights up. According to Baudisch, the system can store user profiles based on each user’s shoe sole. He said that each sole is slightly different; even different sizes of the same shoe model appear different to the camera, and the software can tell them apart. Once the profiles are stored, the interface can identify users.
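The article doesn’t describe how Multitoe actually matches a sole to a stored profile, but the idea can be illustrated with a toy sketch: treat a sole imprint as a set of bright pixel coordinates and pick the stored profile with the highest overlap. The imprint format, the Jaccard similarity rule, and the 0.6 acceptance threshold are all assumptions for illustration, not the system’s real method.

```python
# Hypothetical sketch of sole-based user identification: compare the camera's
# binary sole imprint (a set of lit pixel coordinates) against stored profiles
# using Jaccard (overlap) similarity. All details here are illustrative.

def jaccard(a, b):
    """Overlap between two sets of lit pixel coordinates, in [0, 1]."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def identify(imprint, profiles, threshold=0.6):
    """Return the best-matching profile name, or None if no match is close enough."""
    best_name, best_score = None, 0.0
    for name, stored in profiles.items():
        score = jaccard(imprint, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Two stored profiles; the observed imprint is a partial view of Alice's sole.
profiles = {
    "alice": {(1, 1), (1, 2), (2, 1), (2, 2), (5, 1), (5, 2)},
    "bob":   {(0, 0), (0, 1), (1, 0), (4, 4), (4, 5), (5, 5)},
}
print(identify({(1, 1), (1, 2), (2, 1), (2, 2), (5, 1)}, profiles))  # alice
```

A real system would work on high-resolution grayscale imprints and be robust to rotation and translation of the foot, which this toy deliberately ignores.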
In order to enable direct manipulation on floors, the group uses a technique called frustrated total internal reflection (FTIR) combined with a high-resolution camera. The concept is complex, but in essence, light is injected from below into the pane of glass on which a user stands.
“When something touches the glass (sole and glass have a similar index of refraction) the light is not reflected anymore,” Baudisch explained in a follow-up email exchange. “It escapes and illuminates the sole instead, but only those parts that touch (the reflection is ‘frustrated’).” The camera below the glass can see this and “all is black except the light spots where the contact is taking place.”
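With FTIR, the vision problem reduces to finding bright patches in an otherwise dark camera frame. Here is a minimal, self-contained sketch of that step, using a tiny 2-D list of pixel values in place of the real high-resolution video; the threshold value and 4-connected flood fill are generic computer-vision choices, not details of Multitoe’s software.

```python
# Minimal sketch of the vision step behind FTIR: threshold the camera frame,
# then group bright pixels into connected contact "blobs" via flood fill.
# A toy 2-D list of brightness values stands in for the real camera image.

def find_contacts(frame, threshold=128):
    """Return a list of blobs, each a set of (row, col) bright-pixel coordinates."""
    rows, cols = len(frame), len(frame[0])
    seen = set()
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill the 4-connected bright region starting here.
                blob, stack = set(), [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if frame[y][x] < threshold:
                        continue
                    seen.add((y, x))
                    blob.add((y, x))
                    stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
                blobs.append(blob)
    return blobs

# Two bright contact spots on an otherwise dark frame.
frame = [
    [0,   0,   0,   0,   0],
    [0, 200, 210,   0,   0],
    [0,   0,   0,   0, 180],
    [0,   0,   0,   0, 190],
]
print(len(find_contacts(frame)))  # 2
```

Each blob’s pixels can then feed the sole-matching and input-recognition stages described in the article.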
Baudisch said that one of the challenges was trying to tell whether or not a user wanted to interact with the system. “How not to interact with a table is you take your hands off,” he said. “How not to interact with the floor is kind of hard because you basically have to levitate,” he joked.
With pressure sensors and gait detection, the software can tell when someone is merely walking and ignores that input, focusing only on users who want to interact with the system.
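One simple way to picture such a filter, sketched below under assumptions of my own: brief footfalls look like walking and are discarded, while contacts held in place longer than a dwell time count as deliberate input. The 0.5-second dwell threshold and the event format are made up for illustration; the real system also uses pressure and gait features the article doesn’t detail.

```python
# Hypothetical filter in the spirit of Multitoe's walking-vs-input distinction:
# keep only contacts held longer than a dwell threshold. The 0.5 s value and
# the (user, duration) event format are illustrative assumptions.

def deliberate_contacts(contacts, dwell=0.5):
    """contacts: list of (user_id, duration_seconds). Keep only long presses."""
    return [(user, dur) for user, dur in contacts if dur >= dwell]

# Short steps from walking are dropped; sustained contacts pass through.
events = [("alice", 0.2), ("alice", 0.25), ("bob", 1.4), ("alice", 0.9)]
print(deliberate_contacts(events))  # [('bob', 1.4), ('alice', 0.9)]
```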
The current prototype allows users to draw, control a game and use a keyboard. Even though each “key” of the keyboard is smaller than someone’s foot, the group found that error rates per letter were relatively low when large (5.3cm by 5.8cm) or medium (3.1cm by 3.5cm) keys were used. The error rates were 3 percent and 9.5 percent, respectively, compared to 28.6 percent for small (1.1cm by 1.7cm) keys.
The prototype Multitoe system measures less than one square meter, but the team plans to install a much larger unit when a new research building opens at the Hasso Plattner Institute in July. The new floor will measure 3 meters by 2.15 meters and weigh 1.2 metric tons.
Since it’s still a research project there’s no word on when or if Multitoe will be commercialized, but Baudisch hopes to present more research on it at the Symposium on User Interface Software and Technology, or UIST, this October in New York.
At CHI 2009, Baudisch showed a project he worked on with Microsoft Research called Nanotouch, which allowed users to interact with a touch screen through the back side of the device. That method gave users fine control of a small touchscreen without occluding any of the images on it. He worked with Microsoft Research for six years but recently left his post to focus on projects at the Institute. He said he doesn’t know the status of Nanotouch and would not be able to comment on it if he did.