Depth your pixels!

DepthKit is a new tool for filming 3D video. It lets you record pixels with depth by relating two sources: the depth data from an Xbox Kinect and the image from a DSLR camera.

And it’s all open source!
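To get a feel for what "pixels with depth" means, here is a rough conceptual sketch in Python (not DepthKit's actual code): given a depth map and a colour image that has already been registered to it, every pixel can be back-projected into a coloured 3D point. The intrinsics fx, fy, cx and cy are placeholders that calibration would supply.

import numpy as np

# Conceptual sketch, not DepthKit code: turn a depth map plus an aligned
# colour image into a coloured point cloud ("pixels with depth").
# fx, fy, cx, cy are placeholder camera intrinsics obtained from calibration.
def depth_to_colored_points(depth_m, rgb, fx, fy, cx, cy):
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid coordinates
    z = depth_m.astype(np.float32)                  # depth in metres (assumed)
    x = (u - cx) * z / fx                           # back-project each pixel
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0                        # drop pixels with no depth reading
    return points[valid], colors[valid]

In DepthKit the colour actually comes from the DSLR rather than the Kinect's own camera, which is why the two devices have to be calibrated against each other first (more on that below).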

Maybe you are familiar with this documentary project that appeared two years ago in a Kickstarter campaign:

DepthKit, which many call "the future of filmmaking," is being developed by James George, Alexander Porter, Jonathan Minard, and Mike Heavers.

DepthKit has been used in many projects. For example, Katerina Cizek used it for "The Universe Within," the final part of her multi-year interactive piece Highrise.

Working with DepthKit is a fairly complicated process (don't confuse it with the technique Radiohead used for their "House of Cards" video in 2008).

Luckily for me, this semester I met Sisa Bueno, a graduate student at ITP-NYU who had taken an elective class with two of DepthKit's developers and was looking for a partner in crime to shoot in this format. Sisa and I spent two nights at the Tisch Film School at NYU, filming for a story she is preparing.

Me, trying to get my head around shooting for a depth sensor.

To create a 3D image of the video you are filming, you first need to tell the DepthKit software the physical relationship between the two cameras. You mount the Kinect and the DSLR together and move the rig around the room in a 180-degree arc, so the software can calibrate the distance between the cameras and a black-and-white checkerboard.
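DepthKit walks you through this calibration with its own interface, but under the hood it is the classic checkerboard calibration from computer vision. For the curious, here is a hedged sketch of the same idea using OpenCV (this is not DepthKit's code, and the file paths and board size are made-up placeholders): detect the board corners in paired frames from both cameras, then solve for the rotation and translation that relate one camera to the other.

import glob
import cv2
import numpy as np

# Checkerboard geometry: inner-corner count and square size (placeholders).
PATTERN = (9, 6)
SQUARE_M = 0.025

# 3D coordinates of the board corners in the board's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

obj_pts, kinect_pts, dslr_pts = [], [], []

# Paired frames of the same board as seen by both cameras (hypothetical paths).
for k_path, d_path in zip(sorted(glob.glob("kinect/*.png")),
                          sorted(glob.glob("dslr/*.png"))):
    k_gray = cv2.imread(k_path, cv2.IMREAD_GRAYSCALE)
    d_gray = cv2.imread(d_path, cv2.IMREAD_GRAYSCALE)
    ok_k, corners_k = cv2.findChessboardCorners(k_gray, PATTERN)
    ok_d, corners_d = cv2.findChessboardCorners(d_gray, PATTERN)
    if ok_k and ok_d:  # keep only frames where both cameras see the board
        obj_pts.append(objp)
        kinect_pts.append(corners_k)
        dslr_pts.append(corners_d)

# Intrinsics for each camera, then the rotation R and translation T relating
# the Kinect to the DSLR (the "physical distance" the software needs).
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, kinect_pts, k_gray.shape[::-1], None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, dslr_pts, d_gray.shape[::-1], None, None)
_, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, kinect_pts, dslr_pts, K1, d1, K2, d2,
    k_gray.shape[::-1], flags=cv2.CALIB_FIX_INTRINSIC)

print("Baseline between the two cameras (metres):", np.linalg.norm(T))

Roughly speaking, sweeping the rig through that 180-degree arc just gives the solver many different views of the board, which is what makes the estimate stable.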

This took us around six hours. Normally it doesn't take that long, but everything around it did: first understanding how the studio worked, then the lights, getting the passwords for the computer, figuring out what we were doing…

And after all that, we were ready to start.

In this picture you can see Sisa Bueno, the DSLR, the Kinect, and even me.
Sisa, ready to be filmed with DepthKit

Sisa directed the whole operation very well, working from her class notes and the explanations on the DepthKit website. She is also a documentary filmmaker, and it was a pleasure to be introduced to the world of computational video by her.

A few days after recording, Sisa showed me the results of our work.

And here is a screenshot of our first 3D image!!

Using this technology made me realize that the future of filmmaking will be shaped by cameras that can position pixels in space and by software that allows these computational images to be edited together in a time sequence.

Right now, computer-generated content and 360-degree video are processed in different software (Unity or Cinema 4D for the former, Autopano and Adobe Premiere for the latter). But these technologies will probably merge soon, and the possibilities of virtual reality and 360-degree documentary will explode!!!

Last but not least, thanks to Nicholas Hubbard, another ITP grad student, whom I met at Tribeca Storyscapes and who connected me with Sisa Bueno!!
