Another week, another hacked Kinect story. At this rate you’d think Microsoft actually intended the device for homebrew. It’s like watching America’s Got Talent for geeks. Or voyeurs.
The latest hack’s a head-spinner in both the literal and figurative senses.
Imagine running a realtime 3D feed of yourself plunked in front of your computer, a spatially precise video stream you could freeze and spin around at your leisure.
Drawing on Kinect’s dual cameras, a UC Davis researcher managed to tap the motion sensor’s depth-tracking prowess with software that scans in objects and reconstructs them in realtime. One camera grabs live video, the other gauges depth, and presto, a camera that captures your good and bad sides simultaneously.
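The core trick is simple in principle: every depth pixel can be back-projected into a 3D point and painted with the matching color from the video stream. Here's a minimal C++ sketch of that idea; the focal lengths, principal point, and raw-depth conversion are illustrative assumptions, not Kreylos's actual code or calibration.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct Point3D {
    float x, y, z;    // meters, camera-centered coordinates
    uint8_t r, g, b;  // color sampled from the RGB stream
};

// Convert a raw Kinect-style depth reading to meters.
// (Hypothetical linear mapping, for illustration only.)
static float rawDepthToMeters(uint16_t raw) {
    return raw * 0.001f;  // pretend raw units are millimeters
}

// Reproject one depth frame into a colored point cloud using a pinhole
// camera model. fx/fy/cx/cy are assumed intrinsics in pixels.
std::vector<Point3D> reproject(const std::vector<uint16_t>& depth,
                               const std::vector<uint8_t>& rgb,  // 3 bytes/pixel
                               int width, int height,
                               float fx = 594.0f, float fy = 591.0f,
                               float cx = 320.0f, float cy = 240.0f) {
    std::vector<Point3D> cloud;
    cloud.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            int i = v * width + u;
            uint16_t raw = depth[i];
            if (raw == 0) continue;      // no depth reading at this pixel
            float z = rawDepthToMeters(raw);
            Point3D p;
            p.x = (u - cx) * z / fx;     // pinhole back-projection
            p.y = (v - cy) * z / fy;
            p.z = z;
            p.r = rgb[3 * i + 0];
            p.g = rgb[3 * i + 1];
            p.b = rgb[3 * i + 2];
            cloud.push_back(p);
        }
    }
    return cloud;
}

int main() {
    // Synthetic 2x2 frame just to show the call; a real pipeline would feed
    // live Kinect frames through here every tick and hand the cloud to a renderer.
    std::vector<uint16_t> depth = {1000, 1200, 0, 900};
    std::vector<uint8_t> rgb(4 * 3, 128);
    auto cloud = reproject(depth, rgb, 2, 2);
    std::printf("reconstructed %zu points\n", cloud.size());
}
```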
Oliver Kreylos pulled it off just three days after news broke that Hector Martin had uncorked Kinect’s mojo using Linux and a custom USB driver.
“I didn’t use any of [Martin’s] code, [except for] the ‘magic incantations’ that need to be sent to the Kinect to enable the cameras and start streaming,” wrote Kreylos. “Those incantations were essential, because I don’t own an Xbox myself, so I couldn’t snoop its USB protocol.”
Kreylos says the code that lets him reconstruct 3D objects in realtime is written “from scratch” in C++, using his own “virtual reality” software. He had previously created the Vrui VR toolkit to support “3D rendering management and interaction.”
In a second video, Kreylos demonstrates the tool’s ability to accurately measure reconstructed 3D objects.
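Once the scene exists as metric 3D points, measurement is plain geometry: pick two points in the reconstructed cloud and take their Euclidean distance. This is a generic illustration of that step, not Kreylos's measurement tool; the sample coordinates are made up.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };  // meters, as produced by a reprojection step

// Straight-line distance between two reconstructed points.
float distanceMeters(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

int main() {
    // e.g. two points picked on opposite edges of a reconstructed object
    Vec3 p{-0.15f, 0.02f, 1.10f}, q{0.16f, 0.03f, 1.12f};
    std::printf("measured span: %.2f m\n", distanceMeters(p, q));
}
```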
Next up for Kreylos? Try “augmented reality.”
“There’s more to come,” said Kreylos. “I’ll try next to see if I can use the 3D views and put them into another 3D environment, in order to mix realistic people captured with the Kinect with computer generated imagery.”
Machinima meets Kinect? Kinectima?