Immersed in Restoration

res·to·ra·tion [noun]

– the return of something to a former or original state. (Canadian Oxford Dictionary, Second Edition)

– the process of restoring a building, work of art, etc. to its original condition. (Oxford Dictionary of English, Third Edition)

The Legacy Project

A few years ago, I was hired as a VR consultant, initially to restore the audio for Char Davies’ two pioneering immersive artworks, Osmose (1995) and Ephémère (1998). These works used the latest immersive technology available at the time, including a Silicon Graphics (SGI) Onyx computer for the main program and visuals, a Division head-mounted display and Polhemus 3D trackers, along with a bespoke navigation system based on breath and balance. The audio alone depended on an entire rack of hardware, including a PC and a Mac, two Kurzweil K2000 samplers, a mixer, effects units and a Crystal River Engineering (later AuSIM) Acoustetron for binaural spatialization.

This is the original equipment required to run the “legacy” versions of Osmose and Ephémère (pre-2013). [Photo: Dorota Blaszczak]

The works had already evolved several times in their past lives. Some of the audio gear had been upgraded during the first half-decade. The last major porting effort was in 2002, when John Harrison updated the graphics code to run on a high-end Linux machine (instead of the SGI) — but otherwise using the same hardware and software. This configuration was last shown here in Spain, in 2007. The hefty and expensive Onyx was gone, but installing the works in a gallery still required shipping several crates, including two large racks of gear.

Immersence is Char’s research company, founded in 1998 to continue the work she’d started at Softimage. Shortly after its founding, I believe Immersence purchased the last four dVisor headsets produced, just before Division got out of the HMD business entirely. This turned out to be a smart move: a couple of those headsets continue to function — sort of — though neither of them does so gladly. And until the Vive and Rift headsets came along this year, the pickings were slim for a good replacement headset.

The Immersence team did a good job preserving Char’s works in (more or less) their original state; a number of replacement parts, such as those HMDs, samplers, memory chips and disk drives, had been acquired over the years, and failing pieces were swapped out with components scavenged from these reserves. But public exhibitions, and the passage of time, had taken their toll…and it was becoming infeasible to present the works in their original forms. This degradation doesn’t just affect hardware devices and physical media; it also happens to software: programming languages, libraries, operating systems and other dependencies evolve, diverge and eventually disappear altogether. Old computers fail, and over time it becomes complicated (and eventually impossible) to build or even run an old program on a modern-day computer. Sometimes you can’t teach a new dog old tricks…

Audio Work

The plan in 2013 was for me to replace the refrigerator-sized rack of physical gear with a single recent-model computer, performing all audio processing in software and removing the dependency on failing, two-decades-old hardware. I had access to some of the original hardware for some of the time I was working on this conversion, but never all of it, which complicated my task. The work entailed a kind of archaeology, digging deep into murky layers and shining a (virtual) light into black boxes — studying old user manuals, doing some reverse engineering, running a lot of A/B listening tests, and often taking a best guess at how some of those boxes made their sounds.

Osmose (1995) image used with permission of the artist.

I started with Osmose, knowing I’d be able to reuse much of the work when I got to Ephémère (a more complex work, but one using the same audio setup). In the SuperCollider audio programming language, I coded software synths; recreated hundreds of preset programs; built virtual busses, mixers and effects; and wrote all the control code to glue it together, along with binaural spatialization using the Ambisonic Toolkit (ATK) library. Osmose had used Opcode (now Cycling ’74) Max for most of the audio “intelligence” in the work, and fortunately the original patches loaded and worked (with minimal changes) in the latest version of Max. It’s something of a miracle when anything in the domain of electronic art can load and run on contemporary hardware without changes, twenty years later!

I was very fortunate to be in contact with Dorota Blaszczak (in Warsaw), who originally developed the works’ sonic architecture (along with the composer Rick Bidlack). Her help was invaluable, because Dorota knows these works better than anyone, and could listen to my work critically and give guidance on whether something was “good enough”. We were able to meet and work together in person on several occasions during this process, so she could answer detailed questions, help identify problems and validate my ongoing work.

As the Osmose audio port wrapped up, I began work on Ephémère, a more complex task. Although much of the groundwork was already done — it shares its basic sound framework with Osmose — the synth instruments were considerably more varied and complex. It also used a custom Windows program (Windows 95, no less!), written by Rick Bidlack, as the controlling sonic “brain”. This program had to be ported to run on a Mac (and on other platforms, for future flexibility). As with the Max patches for Osmose, these audio programs now communicate with the main “graphics” programs via OSC messages — previously, a mix of physical MIDI and serial cables sent messages back and forth between programs and devices.
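
To give a flavour of this kind of messaging, here is a minimal C++ sketch using the oscpack library (the address, port and arguments are invented for illustration, not taken from the actual works):

```cpp
// Minimal sketch: a graphics process sends an OSC message to an audio
// process. Uses the oscpack library; the address, port and arguments
// are hypothetical, for illustration only.
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

int main() {
    // Suppose the audio "brain" listens on this port on the same machine.
    UdpTransmitSocket socket(IpEndpointName("127.0.0.1", 57120));

    char buffer[512];
    osc::OutboundPacketStream packet(buffer, sizeof(buffer));

    // e.g. report that the user has entered zone 3, at normalized depth
    // 0.42, the sort of event that previously travelled over MIDI or
    // serial cables between physical devices.
    packet << osc::BeginMessage("/world/zone")
           << 3 << 0.42f
           << osc::EndMessage;

    socket.Send(packet.Data(), packet.Size());
    return 0;
}
```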

For Ephémère, I obtained the most recent copy of the source code from Rick (in Seattle; he dug it up on an old floppy disk) and set to work. Since all the audio gear has now been transformed into source code, future conservation/restoration should be much easier: everything is contained in a Git repository that includes all required code and data. Using a version control system not only keeps everything consistent, organized and backed up, it also provides a historical record of the work as it continues to evolve.

For both legacy projects, much of the audio work was meticulous “detail work”: going through hundreds of synth programs, parameter by parameter, seeing what functionality was needed for each layer and patch, what range of MIDI notes and controllers it needed to respond to, and trying to reproduce the correct sound for all possible inputs. Adding to the difficulty is that these are interactive works, not simple “playbacks”. On one run-through, certain events and parameters may be encountered that won’t appear in a subsequent run; everything depends on the user’s specific interactions, as well as some degree of predefined behaviour and randomness. I obviously did not try to recreate a K2000 sampler with its vast (pun intended) range of functionality — only the specifics needed for these works. That alone was a big enough job. However, having everything explicitly written in code now means it should be easier to port the audio to any other software implementation in the future. The work was not just about making things run in a specific hardware and software environment today, but also about preparing things to be more flexible and portable for the future.
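
To show what “recreating a preset” means in practice, here is a hypothetical C++ sketch of the kind of data that such detail work produces: which sample layer a patch plays, over which note and velocity ranges. All names and values are invented; the real presets were far more elaborate.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of per-preset data recovered by going through a
// hardware sampler's programs parameter by parameter. All names and
// values here are invented for illustration.
struct SampleLayer {
    std::string sampleName;  // source sample for this layer
    int loNote, hiNote;      // MIDI note range the layer responds to
    int loVel, hiVel;        // MIDI velocity range
    float detuneCents;       // per-layer tuning offset
    float pan;               // -1 (left) .. +1 (right)
};

struct Preset {
    std::string name;
    std::vector<SampleLayer> layers;  // layers may overlap and crossfade
};

// A two-layer preset: a soft layer for low velocities and a brighter
// layer that takes over on harder hits.
Preset forestTone{
    "forest-tone",
    {
        {"forest_soft.wav", 36, 72,  1,  80, -3.0f, -0.2f},
        {"forest_hard.wav", 36, 72, 81, 127,  0.0f,  0.2f},
    },
};
```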

Ephémère (1998) image used with permission of the artist.

Graphics Work

I was also in close contact with John Harrison (in Montreal) during this time; he had programmed the original VR software for both Osmose and Ephémère. The plan had been for him to port the graphical parts of the two legacy works in parallel with my audio restoration work. However, John was busy developing software for a new work with Char, so when my work eventually became blocked (the graphics and audio programs needed to interoperate), I offered to do the graphics port as well. The offer was enthusiastically received.

The original works ran on the IRIX operating system and relied on SGI’s Performer high-performance visual simulation library. Performer is no longer available and, although there is an open-source library offering similar functionality (OpenSceneGraph), there was a strong desire to minimize dependencies on external software (within reason). At the time I began working on the project, the goal was to run the two works on a Mac Pro computer, using OpenGL. So, much as with the audio work, I began studying and dissecting the ways these programs used Performer. I concluded that it was not necessary to rely on a third-party toolkit, and set about creating a layer of pure OpenGL functionality that reproduced the required parts previously provided by Performer (no, I didn’t write a full Performer implementation!).
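
As a rough illustration of the approach (and emphatically not the project’s actual code), replacing a scene-graph library can boil down to writing small classes like this deliberately old-school, fixed-function C++ sketch: a node that applies its transform, draws itself and recurses into its children.

```cpp
#include <vector>
#include <OpenGL/gl.h>  // macOS; on Linux/Windows use <GL/gl.h>

// Simplified stand-in for the small subset of scene-graph functionality
// such works might use; an illustration, not the project's real code.
struct Node {
    // local transform, column-major, initialized to identity
    float matrix[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    std::vector<Node*> children;

    virtual ~Node() = default;
    virtual void drawSelf() {}  // leaf geometry overrides this

    void draw() {
        glPushMatrix();          // fixed-function matrix stack,
        glMultMatrixf(matrix);   // as in 1990s-era OpenGL code
        drawSelf();
        for (Node* child : children)
            child->draw();
        glPopMatrix();
    }
};
```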

There was a large amount of cleanup and refactoring to do in the code, especially to bring it from 1990s C to modern C++11, with an eye towards future cross-platform compatibility. GLFW was used for windowing and event management, and the Oculus (Mac beta) SDK and eventually Valve’s OpenVR were used to interface with HMDs; otherwise, external dependencies were minimized. Inter-process communication and device-driver code was ported and made cross-platform using Boost.
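
For a sense of the windowing side, a bare-bones GLFW main loop looks something like this. It is a generic sketch, not the restored works’ actual render loop (the HMD path adds per-eye rendering and a submit call to the VR runtime):

```cpp
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit())
        return 1;

    GLFWwindow* window =
        glfwCreateWindow(1280, 720, "Restored work", nullptr, nullptr);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // ... render the world here; with an HMD attached, each eye's
        // view would also be rendered and handed to the HMD runtime ...

        glfwSwapBuffers(window);
        glfwPollEvents();  // keyboard, mouse and window events
    }

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```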

The complete porting/restoration of Osmose and Ephémère took place (on and off, i.e. part-time) from 2013 to 2016, and included a number of in-person sessions (trips to Canada) to work with Char, John, Dorota and Daniel Chudak — who managed the restoration work — as well as a session with Georges Mauro (the animator who had worked under Char’s direction to create the original textures and models). The vast majority of the work, however, was done remotely (here, in Barcelona).

One factor that complicated the port was the desire to keep the works “exactly as they were” — as much as possible, at least. I’d understood that to be my original mandate: the graphics and sound should be the same as in the original “legacy” works. Later in the project, when we began meeting with Char to present the new versions, this mandate was relaxed somewhat, because in some cases it was sufficient (or better) to aim instead for the same “sensibility” as the original works. Modern HMDs (such as the Vive and Rift) provided a somewhat different immersive experience, and not just because the displays and optics were different. The old works tended to run at around 20 fps — even on that SGI graphics supercomputer! — whereas with some optimization, the new works were able to run at 90 fps on modern hardware (while rendering more than seven times as many pixels).

The look in these new headsets is quite different, not just in terms of resolution, but also colour, distortion and “softness” (or rather, a distinct lack thereof, compared to the old Division headsets). The low resolution of the original HMD (and its lack of screen-door effect) contributed to Char’s desired soft aesthetic; however, with the new high-resolution headsets, we decided that the low polygon counts of a few of the original models looked too harsh and hard-edged. In fact, we chose to deliberately add blur back in some places, to get closer to the original softness. In general, though, because the works rely largely on semi-transparency, softness and layering, they stand up quite well, even by “today’s standards” (whatever those may be).

To a large degree, this restoration was an objective, methodical, almost scientific process: exploration, experimentation, implementation…and repetition. However, these are artworks, and “success” is therefore more subjective than the meticulous process might suggest; a pixel-for-pixel match may not be the best criterion after all, and some aesthetic decisions needed to be re-evaluated, negotiated and decided case by case.

This is the equipment required to run the “new” restored/remastered versions of Osmose and Ephémère (2016). Just a high-end PC, HTC Vive, audio interface with headphones, and the redesigned breathing/leaning navigation vest.

Ultimately, Char’s vision for these works — then and now — is what guided the work. We were not out to remake them, creating a v2.0 or a director’s cut “with a different ending”. The goal was to bring the works back to life, preserving their place in the history of interactive, immersive artworks. As part of this process, it was necessary to be true to the works, but also flexible, allowing ourselves some small “improvements”, in the same way that an old tape recording might be remastered, or the colour subtly restored to a fading painting. The restored/remastered versions of Osmose and Ephémère are not identical to the originals, but they are very close, and hopefully are the same in spirit. And…now the graphics and audio all run on a single PC!

Building M.U.R.S.

I was fortunate to be able to work for a few days last fall with Pelayo Méndez. His company was contracted by the Catalan theatre troupe La Fura dels Baus to create interactive visuals for their new “smartshow”, titled M.U.R.S. (murs means “walls” in Catalan). Pelayo and his team created software for animated visuals, interactive “games” that turned spectators into participants, and a networked mobile application that made audience members part of the show.

M.U.R.S. (Barcelona, 2014) from Pelayo Méndez.

Pelayo hired me to help write and tune some OpenCV code, using optical flow to allow Tetris-like blocks to respond to audience interaction (based on video captured by a pair of stage-mounted security cameras).
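
The core of that kind of interaction can be sketched in a few lines of OpenCV: compute dense optical flow between consecutive camera frames, then average the flow inside a block’s bounding box to estimate how hard (and in which direction) the crowd is “pushing” it. This is a generic sketch under those assumptions, not Pelayo’s production code.

```cpp
#include <opencv2/opencv.hpp>

// Sketch: average the optical flow inside a block's screen rectangle,
// as a measure of how the audience's motion should push that block.
// A generic illustration, not the production code.
cv::Point2f blockPush(const cv::Mat& prevGray, const cv::Mat& gray,
                      const cv::Rect& blockRect) {
    cv::Mat flow;  // per-pixel (dx, dy) motion vectors, CV_32FC2
    cv::calcOpticalFlowFarneback(prevGray, gray, flow,
                                 0.5,     // pyramid scale
                                 3,       // pyramid levels
                                 15,      // averaging window size
                                 3,       // iterations per level
                                 5, 1.2,  // polynomial expansion params
                                 0);      // flags

    // Mean motion vector over the block's region of the frame
    // (blockRect is assumed to lie within the frame bounds).
    cv::Scalar mean = cv::mean(flow(blockRect));
    return cv::Point2f(static_cast<float>(mean[0]),
                       static_cast<float>(mean[1]));
}
```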

The best part of the job was spending a day near Manlleu at the troupe’s rehearsal space. I had a chance to test my block-busting code using the real setup of two cameras mixed into a live feed, with the visuals projected onto a big screen. It was a privilege to join Pelayo and Rafael and be a (very small) part of this, to witness the tension and excitement of last-minute rehearsals — it was just a week prior to the premiere in Murcia — with all the crew and actors extremely focused, doing their thing…

Sónar, Take 2

Last year, as a member of the Barcelona Laptop Orchestra, I worked on the programming for various pieces and helped prepare for our performance at Sónar (Barcelona’s annual International Festival of Advanced Music and New Media Art). In the end, however, a wedding in Canada (and “best man” responsibilities) forced me to miss taking part in the show.

Sónar 2014 - thumbs up!

But this year, fate came calling. More specifically, Sam at l’ull cec came calling, asking for help setting up Daito Manabe’s Sónar show on June 12. Daito is a renowned artist/programmer who also runs the Rhizomatiks design studio in Tokyo. He was featured in Apple’s “Thirty Years of Mac” web pages, and has done all kinds of crazy and cool projects.

Sónar 12.13.14 June 2014 :: Daito Manabe

Daito Manabe setup @ Sónar 2014.
The performance featured three dancers, three remotely controlled flying drones, a wide-angle projector with depth sensor (for projection mapping onto the dancers), ten infrared tracking cameras, and a bunch of computers and other gear. Our contributions (as last-minute helpers) were limited: mounting IR cameras, wiring them to routers, taping down cables — whatever we could do to get things done in the tight schedule between other sound checks and performances. Meanwhile, Daito and Motoi worked like crazy to fine-tune their software and fix a wonky drone, and choreographer Mikiko and the three dancers of the Eleven Play dance troupe went through last-minute rehearsals.

To give an idea: the performance was (approximately) a mixture of this one — with three dancers rather than five:

…and this, with dancing drones — although because of technical issues, sadly at Sónar the drones danced alone:

I didn’t contribute much to the whole affair, but it was inspiring and a privilege to be able to take part and help out, even in a small way.

Rehearsal/testing, Daito Manabe @ Sónar 2014.

Of gigs and sound bites

After a long stretch of hard work (more about that in the coming month) but no performances, I had not one but two gigs this week, performing with my co-conspirators from the Wú:: Collective, Alex and Roger.

Announcement of the WeArt SubverJam.

First, on Wednesday, we took part in the SubverJam session (in polite company, referred to as a New Media Art event), part of the closing of the 2013 WeArt Festival. It involved a multitude of groups (at least six or seven), all jamming together, firing on all cylinders with audio and video “injections”. Barcelona’s newly opened El Born Centre Cultural proved to be a fantastic venue.

El Born CC is an impressive new art and culture space, located in the historic El Born market. The market closed (as a market) in 1971, was saved from destruction by neighbourhood protest, and was renovated and used for various events before being slated to become a new library in the late ’90s. As work got underway on the library, workers unearthed an important Catalan archaeological site that needed preserving (though there was debate about that, too). The library plan was eventually scrapped, and in September 2013 the building finally opened as a beautiful new cultural centre, designed around the archaeological site, which occupies most of the interior space.

SubverJam (WeArt 2013)
Part of the elaborate setup from the WeArt SubverJam (low-quality photo, but it’s the best I’ve got).

The WeArt event was held in the centre’s multi-purpose space (“espai polivalent”), the Sala Moragues. In this large space there were six smaller projections (one per group: three on each of the opposing long walls), plus a big (6 m wide) projection at the far end of the room. The folks from Telenoika were doing video mixing and manipulation on the large screen. On our Wú:: screen, I projected images from an openFrameworks application I had created, taking input from a webcam and pre-recorded video and manipulating it with GLSL shaders and live audio input (as well as my own live inputs and coding).
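
The wiring for that kind of audio-reactive visual app is pleasantly small in openFrameworks. Here is a hypothetical sketch of the general shape (a webcam texture drawn through a shader, with an amplitude uniform driving the effects); it is not my actual performance patch, and the shader and uniform names are invented:

```cpp
#include "ofMain.h"

// Hypothetical openFrameworks sketch: draw a live webcam image through
// a GLSL shader whose parameters are driven by an audio level. Not the
// actual performance code; names and values are invented.
class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofShader shader;
    float amplitude = 0.0f;  // e.g. updated from SuperCollider via OSC

    void setup() override {
        cam.setup(640, 480);
        shader.load("shaders/mix");  // loads mix.vert and mix.frag
    }

    void update() override {
        cam.update();
        // here, amplitude would be refreshed from incoming control data
    }

    void draw() override {
        shader.begin();
        shader.setUniform1f("amplitude", amplitude);  // audio level
        shader.setUniform1f("time", ofGetElapsedTimef());
        cam.draw(0, 0, ofGetWidth(), ofGetHeight());  // sampled by shader
        shader.end();
    }
};
```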

Audio came from re-jigged turntables and diverse analog gadgets on which Alex and Roger were performing, as well as a SuperCollider program I’d prepared for the occasion. The only problem was that, with so many groups, it ended up being…quite loud. It was difficult to hear your own contributions (hard even to think!), so mostly we just played and experimented with audio through our own headphones, while I also manipulated the video projection in response to the ambient noise of the room. I got a few nice comments about my low-key visual effects. The event was open to the public for a couple of hours, during which we all “did our thing”. The public was free to wander around, look at what we were doing, interact and ask questions. At the peak, the room was fairly full (a hundred people? a few hundred?). For my taste, it was a bit too loud and unstructured, but most spectators I asked told me they were enjoying it. I must be getting old.

Our main focus this week, however, was a performance on Saturday (November 9) with New York-based sonic artist Thessia Machado, at Homesession, a small art loft in the Poble Sec neighbourhood. Thessia had been there for a couple of months on a residency, during which she built some new instruments that amplify and manipulate the sound of simple bumping/scraping/vibrating/clicking objects. The objects are a mix of repurposed electrical mechanisms and hand-made paper sculptures. She was asked to perform three sessions at the conclusion of her residency, and invited Wú:: to collaborate with her on one of them.

Thessia Machado and the Wú:: Collective (Glen, Roger and Alex) perform at Thessia’s “end of residency” concert.

We used a setup similar to the WeArt show. For our half-hour set, Alex and Roger played modified turntables and various analog effects and filters. Thessia performed with her new instruments, and although I was prepared to contribute some SuperCollider audio, in the end I mostly focused on visuals, which were projected onto a wall of the gallery. In the days after the WeArt gig, I was able to refine my GLSL shader programs further and to get live input from two webcams, which I could trigger based on audio input (for example, a camera would fade in more as one performer or another played sound snippets).

A different angle, showing some of Thessia's instruments, while Glen gets an aerial view with a camera.
A different angle, showing some of Thessia’s instruments, while Glen goes for an aerial view and Roger and Alex deconstruct the wheels of steel.

I started with a base of procedural noise and added in the camera images, some soft glitchy effects that deliberately misused the webcam data, kaleidoscope-y effects and a few other manipulations I’d written in OpenGL’s shading language. The images were also distorted and pulsed using audio control data piped in from SuperCollider. Mostly, I spent the time finding interesting things to look at with the webcams.
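
In the same spirit, here is a tiny, invented fragment shader (embedded as a C++ raw string) showing how audio data can pulse and distort a camera image over a noise base. It is a sketch of the general idea, not one of the shaders I actually used:

```cpp
// A hypothetical GLSL fragment shader in the spirit of the mix described
// above: procedural noise base, camera image, audio-driven pulse.
const char* fragSrc = R"(
    #version 120
    uniform sampler2D cam;    // live webcam texture
    uniform float amplitude;  // audio level (e.g. from SuperCollider)
    uniform float time;
    varying vec2 texCoord;

    // cheap hash-based noise
    float noise(vec2 p) {
        return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
    }

    void main() {
        vec2 uv = texCoord;
        // pulse the sampling coordinates with the audio level
        uv.x += amplitude * 0.05 * sin(time * 3.0 + uv.y * 20.0);
        vec3 camColor = texture2D(cam, uv).rgb;
        float n = noise(uv * 100.0 + vec2(time));
        // blend a dim noise field under the camera image
        gl_FragColor = vec4(mix(vec3(n) * 0.3, camColor, 0.8), 1.0);
    }
)";
```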

After several changes of plan (on our side) in the preceding week, and much patience from Thessia, I think we can safely call the Homesession performance a success. An “intimate” crowd (aka one or two dozen people) witnessed our Saturday evening playtime.

Wú projection
If you see Thessia Machado’s wires and gadgets in this “Rorschach test”, you’re probably on the right track. Photo from one of my projections during Saturday’s performance.