Real-Time 3D Models Created Using Only a DSLR and Kinect
By Kevin Lee
Some of the biggest uncanny-valley problems in 3D modeling (e.g., for video games and CGI movies) involve capturing fine detail and making animated characters not move like marionettes. In most cases, animators have to create extremely detailed models and map them onto a skeletal frame derived from motion-capture dots.
Filmmaker Jonathan Minard and artist/programmer James George have created a new imaging system, called “virtual cinematography,” that tackles both problems at once using a stock Kinect and a DSLR camera. The duo used the DSLR to capture a high-definition image of Carnegie Mellon University’s Golan Levin and grafted his face onto the Kinect’s depth model.
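The article doesn’t publish the project’s actual pipeline, but the core idea of grafting a color image onto a depth model is standard RGB-D texture mapping: back-project each Kinect depth pixel into 3D, transform those points into the color camera’s coordinate frame, and sample the HD image for a per-point color. A minimal NumPy sketch, with all camera parameters (intrinsics, depth-to-color transform) chosen hypothetically for illustration:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into 3D points in the depth camera's frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

def texture_points(points, rgb_image, K_rgb, T_depth_to_rgb):
    """Project 3D points into the color camera and sample a color for each point.

    K_rgb:          3x3 intrinsics of the color camera (hypothetical values)
    T_depth_to_rgb: 4x4 rigid transform from depth-camera to color-camera frame
    Returns (N, 3) points and (N, 3) sampled colors.
    """
    pts = points.reshape(-1, 3)
    # Homogeneous coordinates, then rigid transform into the color camera frame.
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    pts_rgb = (T_depth_to_rgb @ pts_h.T).T[:, :3]
    # Pinhole projection: divide by depth to get pixel coordinates.
    uv = (K_rgb @ pts_rgb.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    # Nearest-neighbor sampling, clamped to the image bounds.
    u = np.clip(uv[:, 0].astype(int), 0, rgb_image.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, rgb_image.shape[0] - 1)
    return pts, rgb_image[v, u]
```

In a real system the depth-to-color transform and both sets of intrinsics would come from a calibration step; here they are stand-ins to show the shape of the computation.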
The end result is a real-time 3D model that captures Levin’s motions on a 1:1 scale, as well as all his facial expressions. In the video, Levin is filmed by the imaging system while he answers Reddit AMA questions.
The system is not perfect, though: in the video you can see parts of Levin’s model flicker and drop out into blank patches as he moves around. Still, it is pretty incredible that the model holds up as the virtual camera pans around and zooms in and out on Levin.
The 3D models created by virtual cinematography somewhat resemble those captured by Team Bondi for the video game L.A. Noire, but that rig used 32 cameras surrounding a stationary actor. A system using only one camera and one Kinect to capture motion and faces simultaneously is far more accessible. This technology could completely revolutionize video games, virtual filmmaking, computer avatars…and non-Wolf Blitzer holograms?