Rover at Black Box 2.0 Festival

Rover was presented at the Black Box 2.0 Festival, May 28 – June 7, 2015.

Description

ROVER, made in collaboration with Robert Twomey, is a mechatronic imaging device inserted into quotidian space, transforming the sights and sounds of the everyday into a dreamlike cinematic experience. A kind of machine observer or probe, it knows very little of what it sees. It records sequential images to document where and when it is; later, through algorithmic manipulation of those images, the scenes it previously inhabited can be seen again. It also listens, recording audio and retrieving sounds that would otherwise be dismissed; these sounds are then reshaped until they are no longer commonplace.

Rover (detail)

The result is a kind of cinema that follows the logic of dreams: suspended but still mobile, familiar yet infinitely variable in detail.

Rover (film still)

Rover installed in shipping container in Seattle Center

Installation view

Development

The imagery gathered for this iteration of Rover was captured with a custom mechatronic light field capture system designed to be portable and scalable according to the framing and depth required for each scene.

The light field capture rig on site

Camera positions plotted for each image in the scene, as determined by VSFM software.

By gathering hundreds of images in a structured way, we are able to create a synthetic camera “aperture” that allows us to resynthesize a scene after the fact, re-focusing, obscuring, and revealing points of interest in real time. The result is a non-linear hybrid of photography and video.
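
Synthetic-aperture refocusing of this kind is commonly implemented as a shift-and-add over the rectified image stack. The sketch below is a minimal Python illustration of that idea, not the project's actual software; the array shapes, the `refocus` name, and the `alpha` focal parameter are assumptions for the example.

```python
import numpy as np

def refocus(images, offsets, alpha):
    """Shift-and-add refocusing over a rectified light field.

    images  : (N, H, W, 3) array of views rectified to a common plane
    offsets : (N, 2) per-camera (dx, dy) baselines in pixels
    alpha   : focal parameter; scaling the per-view shift moves the
              plane of focus through the scene
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        # Shift each view against its baseline so rays from the chosen
        # depth plane line up before averaging. np.roll wraps at the
        # borders; production code would pad or crop instead.
        shifted = np.roll(img, (int(round(alpha * dy)), int(round(alpha * dx))), axis=(0, 1))
        acc += shifted
    return acc / len(images)
```

Sweeping `alpha` moves the focal plane: views reinforce one another at the chosen depth and blur out elsewhere, which is how a point of interest can be revealed while its surroundings are obscured.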

A snapshot of experimental composition views

In a somewhat analogous process, audio is recorded at the site of each light field capture and analyzed using Music Information Retrieval (MIR) techniques to find events and textures of interest. Based on the features discovered in the recordings, sonic moments or textures that might otherwise go unnoticed are exposed and recomposed in concert with the visual system.
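
The project's analysis ran in SuperCollider via SCMIR; as a rough Python analogue, the sketch below extracts frame-wise MFCC features and onset candidates with librosa. The filename is illustrative.

```python
import librosa

# Load a field recording made at the capture site (path illustrative).
y, sr = librosa.load("site_recording.wav", sr=None)

# Frame-wise timbral features: MFCCs are a standard MIR descriptor
# for similarity and clustering (SCMIR computes comparable features).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)

# Candidate sonic "events": onset times, in seconds.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
print(f"{mfcc.shape[1]} feature frames, {len(onsets)} onset candidates")
```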

Some of the techniques and technologies used include:

  • Music Information Retrieval for audio classification (using SCMIR by Nick Collins)
  • K-means clustering for ordering sound according to self-similarity (see the sketch after this list)
  • Visual Structure From Motion for recovering camera positions for each image and rectifying all images to a common image plane
  • Custom software driving the resynthesis of the light field scenes (via OSC from SuperCollider)
  • Real-time audio granulation software written in SuperCollider
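
The k-means ordering mentioned above can be sketched as follows. This is an illustration using scikit-learn, not the project's code; the per-segment feature vectors (e.g. mean MFCCs per sound) and the cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def order_by_similarity(features, n_clusters=8, seed=0):
    """Order sound segments so that similar ones sit next to each other.

    features : (n_segments, n_features) array, e.g. mean MFCCs per segment
    Returns an index array: segments grouped by k-means cluster and,
    within each cluster, sorted nearest-to-centroid first.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(features)
    dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
    # lexsort uses the last key as primary: cluster label first, distance second.
    return np.lexsort((dists, km.labels_))
```
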
Early experiments in depth focusing. A coat draped over a chair in the lab is selected while the surrounding scene is obscured.