Immersed in Restoration

res·tor·a·tion [noun]

– the return of something to a former or original state. (Canadian Oxford Dictionary, Second Edition)

– the process of restoring a building, work of art, etc. to its original condition. (Oxford Dictionary of English, Third Edition)

The Legacy Project

A few years ago, I was hired as a VR consultant, initially to restore the audio for Char Davies’ two pioneering immersive artworks, Osmose (1995) and Ephémère (1998). These works used the latest immersive technology available at the time: a Silicon Graphics (SGI) Onyx computer for the main program and visuals, a Division head-mounted display and Polhemus 3D trackers, along with a bespoke navigation system based on breath and balance. The audio alone depended on an entire rack of hardware, including a PC and a Mac, two Kurzweil K2000 samplers, a mixer, effects units and a Crystal River Engineering (later AuSIM) Acoustetron for binaural spatialization.

This is the original equipment required to run the “legacy” versions of Osmose and Ephémère (pre-2013). [Photo: Dorota Blaszczak]

The works had already evolved several times in their past lives: some of the audio gear had been upgraded during their first half-decade. The last major porting effort was in 2002, when John Harrison updated the graphics code to run on a high-end Linux machine instead of the SGI, but otherwise on the same hardware and software. This most recent configuration was last shown here in Spain, in 2007. The hefty and expensive Onyx was gone, but installing the works in a gallery still required shipping several crates, including two large racks of gear.

Immersence is Char’s research company, founded in 1998 to continue the work she’d started at Softimage. Shortly after its founding, I believe Immersence purchased the last four dVisor headsets produced, just before Division got out of the HMD business entirely. This turned out to be a smart move: a couple of those headsets continue to function — sort of — though neither does so gladly. And until the Vive and Rift headsets came along this year, the pickings were slim for a good replacement.

The Immersence team did a good job preserving Char’s works in (more or less) their original state; a number of replacement parts, such as those HMDs, samplers, memory chips and disk drives, had been acquired over the years, and failing pieces were swapped out with components scavenged from these reserves. Still, public exhibitions and the passage of time had taken their toll, and it was becoming infeasible to present the works in their original forms. This degradation doesn’t just affect hardware devices and physical media; it also happens to software. Programming languages, libraries, operating systems and other dependencies evolve, diverge and eventually disappear altogether. Old computers fail, and over time it becomes complicated (and eventually impossible) to build or even run an old program on a modern-day computer. Sometimes you can’t teach a new dog old tricks…

Audio Work

The plan in 2013 was for me to replace the refrigerator-sized rack of physical gear with a single recent-model computer, performing all audio processing in software and removing the dependency on failing, two-decades-old hardware. I had access to some of the original hardware for some of the time I was working on this conversion, but never all of it, which complicated my task. The job entailed a kind of archaeology, digging deep into murky layers and shining a (virtual) light into black boxes — studying old user manuals, doing some reverse engineering, running a lot of A/B listening tests, and often taking a best guess at how some of those boxes made their sounds.

Osmose (1995) image used with permission of the artist.

I started working on Osmose, knowing I’d be able to reuse much of the work when I got to Ephémère (a more complex work, but one using the same audio setup). In the SuperCollider audio programming language, I coded software synths, recreated hundreds of preset programs, built virtual busses, mixers and effects, and wrote all the control code to glue it together, along with binaural spatialization using the Ambisonic Toolkit (ATK) library. Osmose had used Opcode (now Cycling ’74) Max for most of the audio “intelligence” in the work, and fortunately the original patches loaded and worked (with minimal changes) in the latest version of Max. It’s something of a miracle when anything in the domain of electronic art can load and run on contemporary hardware without changes, twenty years later!

I was very fortunate to be in contact with Dorota Blaszczak (in Warsaw), who originally developed the works’ sonic architecture (along with the composer Rick Bidlack). Her help was invaluable, because Dorota knows these works better than anyone, and could listen to my work critically and give guidance on whether something was “good enough”. We were able to meet and work together in person on several occasions during this process, so she could answer detailed questions, help identify problems and validate my ongoing work.

As the Osmose audio port wrapped up, I began to work on Ephémère, which was a more complex task. Although much of the groundwork was already done — it shares the basic sound framework with Osmose — the synth instruments were considerably more varied and complex. Also, Ephémère used a custom Windows program (Windows 95, no less!), written by Rick Bidlack, as its controlling sonic “brain”. This program had to be ported to run on a Mac (and, for future flexibility, other platforms). As with the Max patches for Osmose, these audio programs now communicate with the main “graphics” programs via OSC messages; previously, a mix of physical MIDI and serial cables carried messages back and forth between programs and devices.
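
To give a flavour of that plumbing, here is a minimal sketch of the kind of OSC handling involved, on the SuperCollider side (the message names, host and port are invented for illustration; they are not the works’ actual protocol):

    (
    // listen for a hypothetical "/world/zone" message from the graphics program
    OSCdef(\zoneChange, { |msg|
        var zone = msg[1];
        ("entering zone: " ++ zone).postln;
        // ...cue the appropriate sound layers here...
    }, '/world/zone');

    // and report a hypothetical status message back to the graphics program
    ~gfx = NetAddr("127.0.0.1", 9000);   // host and port are placeholders
    ~gfx.sendMsg('/audio/ready', 1);
    )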

For Ephémère, I obtained the most recent copy of the source code from Rick (in Seattle; he dug it up on an old floppy disk), and set to work. Since all the audio gear has now been transformed into source code, future conservation/restoration should be much easier: everything is contained in a Git repository that includes all required code and data. With a version control system, everything is consistent, organized and backed up, and the repository also provides a historical record of the work as it continues to evolve.

For both legacy projects, much of the audio work was meticulous detail work: going through hundreds of synth programs, parameter by parameter; seeing what functionality was needed for each layer and patch, and what range of MIDI notes and controllers it needed to respond to; and trying to reproduce the correct sound for all possible inputs. Adding to the difficulty, these are interactive works, not simple “playbacks”. On one run-through, certain events and parameters may be encountered that won’t appear in a subsequent run; everything depends on the user’s specific interactions, along with some degree of predefined behaviour and randomness. I obviously did not try to recreate a K2000 sampler with its vast (pun intended) range of functionality — only the specifics needed for these works. That alone was a big enough job. However, having everything explicitly written in code means it should be easier to port the audio to any other software implementation in future. The work was not just about making things run in a specific hardware and software environment today, but also about preparing things to be more flexible and portable for the future.
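
As a rough illustration of what recreating one such layer involves (this is a simplified sketch, not one of the actual Osmose or Ephémère instruments), a SuperCollider synth can be restricted to a key range and driven by MIDI events, much as a K2000 keymap layer would be:

    (
    MIDIClient.init;
    MIDIIn.connectAll;

    SynthDef(\layerPad, { |out = 0, freq = 440, amp = 0.2, gate = 1, cutoff = 2000|
        var env = EnvGen.kr(Env.adsr(0.05, 0.2, 0.8, 0.6), gate, doneAction: 2);
        var sig = RLPF.ar(Saw.ar(freq * [1, 1.003]).sum * 0.5, cutoff, 0.3);
        Out.ar(out, Pan2.ar(sig * env * amp));
    }).add;

    ~held = ();
    // respond only to notes 48..72, the way a keymap layer spans a key range
    MIDIdef.noteOn(\padOn, { |vel, note|
        if(note.inclusivelyBetween(48, 72)) {
            ~held[note] = Synth(\layerPad, [\freq, note.midicps, \amp, vel / 127 * 0.3]);
        };
    });
    MIDIdef.noteOff(\padOff, { |vel, note| ~held[note] !? (_.set(\gate, 0)) });
    )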

Ephémère (1998) image used with permission of the artist.

Graphics Work

I was also in close contact with John Harrison (in Montreal) during this time. He was the programmer of the original VR software for both Osmose and Ephémère. The plan had been that he would port the graphical parts of the two legacy works, in parallel with my audio restoration work. However, John was busy developing software for a new work with Char, so when my work eventually became blocked (the graphics and audio programs need to interoperate), I offered to do the graphics port as well. This offer was enthusiastically received.

The original works ran on the IRIX operating system, and relied on SGI’s Performer high-performance visual simulation library. Performer is no longer available and, although there is an open-source library that offers similar functionality (OpenSceneGraph), there was a strong desire to minimize dependencies on external software (within reason). At the time I began working on the project, the goal was to run the two works on a Mac Pro computer, using OpenGL. So, in a similar way to the audio work, I began studying and dissecting the ways these programs used Performer. I concluded it was not necessary to rely on a third-party toolkit, and set about creating a layer of pure OpenGL functionality that reproduced the required parts previously provided by Performer (no, I didn’t write a full Performer implementation!).

There was a large amount of cleanup and refactoring to do in the code, especially to bring it from 1990s C to modern C++11, with an eye towards future cross-platform compatibility. GLFW was used for windowing and event management, and the Oculus (Mac beta) SDK and eventually Valve’s OpenVR were used to interface with HMDs, but otherwise external dependencies were minimized. Inter-process communication and device-driver code were ported and made cross-platform using Boost.

The complete porting/restoration of Osmose and Ephémère took place (on and off, part-time) from 2013 to 2016, and included a number of in-person sessions (trips to Canada) to work with Char, John, Dorota and Daniel Chudak — who managed the restoration work — as well as a session with Georges Mauro (the animator who had worked under Char’s direction to create the original textures and models). The vast majority of the work, however, was done remotely (here, in Barcelona).

One factor that complicated the port was the desire to keep the works “exactly as they were” — as much as possible, at least. I’d understood that to be my original mandate: the graphics and sound should be the same as in the original “legacy” works. Later in the project, when we began meeting with Char to present the new versions, this mandate was relaxed somewhat, because in some cases it was sufficient (or better) to aim instead for the same “sensibility” as the original works. Modern HMDs (such as the Vive and Rift) provide a somewhat different immersive experience, and not just because the displays and optics are different. The old works tended to run at around 20 fps — even on that SGI graphics supercomputer! — whereas, with some optimization, the new works run at 90 fps on modern hardware (while rendering over seven times as many pixels).

The look in these new headsets is quite different, not just in terms of resolution, but also colour, distortion and “softness” (or rather, a distinct lack thereof, when compared to the old Division headsets). The low resolution of the original HMD (and its lack of screen-door effect) contributed to Char’s desired soft aesthetic; with the new high-resolution headsets, we decided that the low polygon counts of a few of the original models looked too harsh and hard-edged. In fact, we deliberately added blur back in some places, to get closer to the original softness. In general, though, because the works largely rely on semi-transparency, softness and layering, they stand up quite well, even by “today’s standards” (whatever those may be).

To a large degree, this restoration was an objective, methodical, almost scientific process: exploration, experimentation, implementation…and repetition. However, these are artworks, and therefore “success” is more subjective than the meticulous process might suggest; a pixel-for-pixel match may not be the best criterion after all, and some aesthetic decisions needed to be re-evaluated, negotiated and settled case by case.

This is the equipment required to run the “new” restored/remastered versions of Osmose and Ephémère (2016). Just a high-end PC, HTC Vive, audio interface with headphones, and the redesigned breathing/leaning navigation vest.

Ultimately, Char’s vision for these works — then and now — is what guided the work. We were not out to remake them, creating a v2.0 or a director’s cut “with a different ending”. The goal was to bring the works back to life, preserving their place in the history of interactive, immersive artworks. As part of this process, it was necessary to be true to the works, but also flexible, allowing ourselves some small “improvements”, in the same way that an old tape recording might be remastered, or the colour subtly restored to a fading painting. The restored/remastered versions of Osmose and Ephémère are not identical to the originals, but they are very close, and hopefully are the same in spirit. And…now the graphics and audio all run on a single PC!

Binaural blast (from the past?)

Recently I was doing more experiments in lightweight binaural spatialization for VR. Using the Ambisonic Toolkit (ATK) in SuperCollider, I implemented a binaural mixer for an arbitrary number of 3D sources, with Doppler shift and a faked reverb cue that grows as sources move farther away.
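
Reduced to a minimal sketch, the shape of that mixer looks something like this (the source sound, bus layout and distance constants are simplifications, not my actual implementation): each source encodes itself onto a shared first-order B-format bus, and a single synth decodes the whole mix binaurally.

    (
    s.waitForBoot {
        ~bfBus = Bus.audio(s, 4);               // shared first-order B-format mix bus
        ~decoder = FoaDecoderKernel.newListen;  // binaural HRTF kernel (Listen database)
        s.sync;

        // one spatialized source: Doppler, distance attenuation and a faked
        // reverb cue that grows as the source recedes
        SynthDef(\bfSource, { |out, dist = 2, azim = 0, elev = 0|
            var dry, wet, sig;
            dry = PinkNoise.ar(0.2);
            dry = DelayC.ar(dry, 1.0, dist.lag(0.1) / 343);  // delay ~ distance / speed of sound
            wet = FreeVerb.ar(dry, mix: (dist / 40).clip(0, 0.7));
            sig = wet * (1 / dist.max(1));
            Out.ar(out, FoaPanB.ar(sig, azim, elev));        // encode to B-format
        }).add;

        // decode the whole mix binaurally, once, no matter how many sources
        SynthDef(\bfDecode, { |in|
            Out.ar(0, FoaDecode.ar(In.ar(in, 4), ~decoder));
        }).add;
        s.sync;

        ~decodeSynth = Synth.tail(s, \bfDecode, [\in, ~bfBus]);
        Synth(\bfSource, [\out, ~bfBus, \dist, 5, \azim, 0.5pi]);
    };
    )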

Here’s a quick test, with some spaceship-like sounds whizzing past (mostly from side to side, because the effect is more obvious). Listen with headphones if possible:

When running this with head tracking on the Oculus display, it’s quite convincing as the sources fly toward and past you in all directions and at different elevations. You can easily guess the direction to an approaching source, even with closed eyes. The graphics part is done in openFrameworks, with that app sending binaural source and listener information to SuperCollider via OSC messages. The graphics are very minimalistic — spheres — enough to ensure the sounds match their source positions as you look around in the Oculus DK2.
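
With everything summed into one B-format field, head tracking reduces to rotating that field once before the binaural decode. Here is a sketch, reusing ~bfBus and ~decoder from above (the OSC address and single-axis yaw are invented stand-ins for real tracker data):

    (
    // a variant of the decoder synth above, with a 'rotate' transform driven by yaw
    SynthDef(\bfDecodeTracked, { |in, yaw = 0|
        var bf = In.ar(in, 4);
        bf = FoaTransform.ar(bf, 'rotate', yaw.lag(0.05));   // re-orient the whole field
        Out.ar(0, FoaDecode.ar(bf, ~decoder));
    }).add;

    // hypothetical tracker message: ['/listener/yaw', angle in radians];
    // ~decodeSynth is assumed to be a running \bfDecodeTracked instance
    OSCdef(\headYaw, { |msg| ~decodeSynth.set(\yaw, msg[1]) }, '/listener/yaw');
    )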

I also tried some other approaches, doing my own convolution (in SuperCollider) with HRIR data directly (e.g. taken from the CIPIC or Listen datasets). I was able to support hundreds of simultaneous sources (more than 250 on my 2012 MacBook Air without dropping audio frames), but it’s lighter and simpler to use the ATK, because the decoding (HRTF convolution) only needs to happen once.
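
For comparison, the direct-HRIR approach looks roughly like this per source. Choosing and crossfading between measured impulse-response pairs as sources move (most of the real work) is elided, and ~hrirL/~hrirR are assumed to be buffers already loaded from one of those datasets:

    (
    // per-source rendering by direct convolution with one measured HRIR pair;
    // with N sources this costs 2N convolutions, versus a single decode when
    // everything is mixed through B-format as above
    SynthDef(\hrirSource, { |out = 0, hrirL, hrirR, amp = 0.1|
        var src = PinkNoise.ar(amp);
        Out.ar(out, [
            Convolution2.ar(src, hrirL, framesize: 512),
            Convolution2.ar(src, hrirR, framesize: 512)
        ]);
    }).add;
    )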

I also tried a few experiments with the Oculus Audio SDK, but just in a DAW, not with programmatic control. I prefer something that runs in SuperCollider, where I like to generate my sounds, and this unfortunately rules out VST or AU solutions (at least, without bending over backwards to make a plugin-host UGen, or piping audio in and out using Jack or Soundflower).

Over 20 years ago, we did binaural audio for VR using Crystal River Engineering’s Convolvotron (or Acoustetron), running on a dedicated PC and usually providing eight spatialized sources. Those turnkey systems were great (and still exist in some form), but it’s nice to be able to support hundreds of simple sources — plus the graphic rendering — on a single machine! And be able to check my mail at the same time. (-;

Building M.U.R.S.

I was fortunate to be able to work a few days last fall with Pelayo Méndez. His company was contracted by the Catalan theatre troupe La Fura dels Baus to create interactive visuals for their new “smartshow”, titled M.U.R.S. (murs means “walls” in Catalan). Pelayo and his team created software for animated visuals, interactive “games” that turned spectators into participants, and a networked mobile application that made audience members part of the show.

M.U.R.S. (Barcelona, 2014) from Pelayo Méndez.

Pelayo hired me to help write and tune some OpenCV code, using optical flow to allow Tetris-like blocks to respond to audience interaction (based on video captured by a pair of stage-mounted security cameras).

The best part of the job was spending a day near Manlleu at the troupe’s rehearsal space. I had a chance to test my block-busting code using the real setup of two cameras mixed into a live feed, with the visuals projected onto a big screen. It was a privilege to join Pelayo and Rafael and be a (very small) part of this, to witness the tension and excitement of last-minute rehearsals — it was just a week prior to the premiere in Murcia — with all the crew and actors extremely focused, doing their thing…

Sónar, Take 2

Last year, as a member of the Barcelona Laptop Orchestra, I did programming for various pieces and helped prepare for our performance at Sónar (Barcelona’s annual International Festival of Advanced Music and New Media Art). In the end, however, a wedding in Canada (and “best man” responsibilities) forced me to miss the show.

Sónar 2014 - thumbs up!

But this year, fate came calling. More specifically, Sam at l’ull cec came calling, asking for help setting up Daito Manabe‘s Sónar show on June 12. Daito is a renowned artist/programmer who also runs the Rhizomatiks design studio in Tokyo. He was featured in Apple’s “Thirty Years of Mac” web pages, and has done all kinds of crazy and cool projects.

Sónar 12.13.14 June 2014 :: Daito Manabe

Daito Manabe setup @ Sónar 2014.
The performance featured three dancers, three remotely-controlled flying drones, a wide-angle projector with depth sensor (for projection mapping onto the dancers), ten infrared tracking cameras, and a bunch of computers and other gear. Our contributions (as last-minute helpers) were limited: mounting IR cameras, wiring them to routers, taping down cables — whatever we could do to get things done in the tight schedule between other sound checks and performances. Meanwhile, Daito and Motoi worked like crazy to fine-tune their software and fix a wonky drone. And choreographer Mikiko and the three dancers from the Eleven Play dance troupe went through last-minute rehearsals.

To give an idea: the performance was (approximately) a mixture of this one — with three dancers rather than five:

…and this, with dancing drones — although because of technical issues, sadly at Sónar the drones danced alone:

I didn’t contribute much to the whole affair, but it was inspiring and a privilege to be able to take part and help out, even in a small way.

Rehearsal/testing, Daito Manabe @ Sónar 2014.

Teatrillu goes to Hell

Circle “0” – Charon and the ferry across the river Acheron.

Eighth Circle (Bolgia 3) – Simony

On May 17, we paid a (surprisingly pleasant and handbasket-free) visit to Hell — more specifically, to Dante’s Inferno, as one of the Insectotròpics‘ invited guests. Between May and September of this year, the “Insectos” (a Barcelona-based theatre troupe) are organizing a series of collaborative theatrical/performance events at the old Fabra i Coats textile factory (now an art centre), one for each cantica of Dante’s Divina Commedia, in which they invite other artists to participate. This first voyage, to Hell (Un viatge a l’Infern in Catalan), included more than a dozen artistic groups (musicians, sculptors, video artists, painters, dancers, actors and more!), and lasted five hours on a Saturday evening.

Alex “painting” the Teatrillu world with an infrared flashlight.

Second Circle – Lust. Souls eternally blown by the winds of a storm.

Riki and Alex getting things set up.

We (the Wú Collective) contributed live imagery using two different versions of our Teatrillu software. For the event, we were fortunate to be joined by illustrator Riki Blanco, who provided graphical designs (drawings and cutouts) for us to animate.

Ninth and final circle – Treachery. Satan, at the centre of the Earth. (Technically he should be surrounded by ice; we opted for more stereotypical fire.)

One of our setups consisted of a “traditional” Teatrillu: creating live stop-motion and other animated effects under a webcam, based on hand-made drawings and cutouts.

Fifth Circle – Anger. The wrathful wrestle with one another while the sullen flounder underwater.

The output of these minimalist animations was fed to a TV on the Insectos’ video wall, as well as to a makeshift viewer we made out of an old wooden drawer, a tablet, a macro lens, and some cardboard and aluminum foil.

One Teatrillu “scene” appearing inside another, within the loop made by Alex’s fingers.

A second Teatrillu program received input from the first (over the local network, using TCPSyphon), and then manipulated it with further effects. Alex experimented with projecting “my” world onto the pages of a book, at other times masking it with hand-drawn (or infrared-projected) shapes on a whiteboard, and at others still adding little flames to all its shapes. It’s a little hard to describe — basically we played and explored for five hours, adding our few small drops of Wú flavour into the overall cauldron of chaos.

Fourth Circle – Greed. Fortune carries “those empty goods from nation unto nation, clan to clan…”

Fourth Circle – Greed. Figures hoard, carry and push great weights.

One thing we missed was interaction with the other video groups and painters — we’d hoped to send our outputs to others for further manipulation, as well as to receive their feeds (and hand-made imagery, or even photo print-outs) to use as source material. In the end we mostly kept to our own little corner (of hell); as often happens, everyone was really busy getting their own things ready until the last moment, and there wasn’t time to plan for the more dynamic interaction between groups that everyone had hoped for. Hopefully this can happen at subsequent events.

Third Circle – Gluttony (Gula in Spanish and Catalan). Multi-headed Cerberus guards the gluttons.

Sixth Circle – Heresy. Heretics (among them, those who say “the soul dies with the body”) are trapped in flaming tombs. The irony is killing me.

I made a compilation of various short movie clips I recorded as we worked our way through the nine circles of hell. Sorry about the audio and video quality; the clips were recorded with a little compact camera, but they may give a vague idea of what we were up to that evening…

http://vimeo.com/98163506

…then I jumped…

I’ve recently taken on a few more of the Disquiet Junto weekly musical/audio challenges.


Last week’s project used a very technical Oulipian-style constraint.

Disquiet Junto Project 0097: Ford Madox Ford Page 99 Remix
This week’s project takes as its source a comment attributed to the author Ford Madox Ford: “Open the book to page ninety-nine and read, and the quality of the whole will be revealed to you.” We will convert text from page 99 of various books into music.

  • Step 1: Pick up the book you are currently reading, or otherwise the first book you see nearby.
  • Step 2: Turn to page 99. Confirm that the page has enough consecutive text in it to add up to 80 characters.
    Step 2a: If the page is blank or otherwise has no text, turn to page 98. Continue this process of moving backward through the book until you find an appropriate page.
    Step 2b: If you are reading an ebook that lacks page numbers, or a book that happens to lack page numbers, then use the first page of the main body of the book (i.e., not the Library of Congress information or the table of contents) or flip to a random spot/page in the book.
  • Step 3: When you have located 80 consecutive characters, type them into a document on your computer or write them down on a piece of paper.
  • Step 4: You will turn these characters into music by following these rules:
    Step 4a: The letters A through L will correspond with the notes along the chromatic scale from A to G#. To convert a letter higher than L, simply cycle through the scale again (i.e., L = G#, M = A, etc.). Capital letters should be played slightly louder than lowercase letters.
    Step 4b: Any spaces and any dashes/hyphens will be treated as blank, as a silent moment.
    Step 4c: A comma or semicolon will signify a note one step below the preceding note.
    Step 4d: A period, question mark, or exclamation point will signify a note one step above the preceding note.
    Step 4e: All other punctuation (colon, ampersand, etc.) will be heard as a percussive beat.
  • Step 5: Record the piece of music using a digital or analog instrument.
  • Step 6: Set the pace for the recording to between 160 and 80 beats per minute (BPM). In other words, the track should be between 30 and 60 seconds in length.

In my case, the text was from “If On A Winter’s Night A Traveler” by Italo Calvino (English translation by William Weaver; published by Alfred A. Knopf), which probably fits a bit too literally into the Oulipo theme for this week. My segment of 80 characters from the top of p. 99 reads:

“is an important document; it can’t leave these offices, it’s the corpus delicti,”

I converted the characters to notes in SuperCollider, according to the project rules for this week. I played various versions of the note stream through different instruments (using NI Kontakt), and layered on some psychedelic effects, to give an oneiric, vaguely jazzy quality to the whole thing.
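
The conversion can be sketched in a few lines of SuperCollider. This is a simplified reconstruction of my approach, following the project rules (the octave placement and the handling of “percussive” punctuation are my own arbitrary choices):

    (
    var text = "is an important document; it can't leave these offices, it's the corpus delicti,";
    var prev = 57, notes;   // 57 = A3; 'prev' tracks the last pitch, for the step rules
    notes = text.as(Array).collect { |ch|
        case
        { ch.isAlpha } {
            // A..L map onto the chromatic scale A..G#; higher letters cycle around
            prev = 57 + (ch.toLower.ascii - $a.ascii % 12);
            [prev, if(ch.isUpper) { 0.5 } { 0.3 }]      // capitals slightly louder
        }
        { (ch == $ ) or: { ch == $- } } { [\rest, 0] }   // spaces/hyphens are silent
        { (ch == $,) or: { ch == $; } } { prev = prev - 1; [prev, 0.3] }  // step down
        { ".?!".includes(ch) } { prev = prev + 1; [prev, 0.3] }           // step up
        { true } { [prev, 0.4] };   // other punctuation: a louder "percussive" hit
    };
    Pbind(
        \midinote, Pseq(notes.collect(_[0])),
        \amp, Pseq(notes.collect(_[1])),
        \dur, 0.25
    ).play;
    )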


I found this week’s project (#98) particularly interesting. It used a similar idea of text and constraints, and the cacophonous layering of voices is really compelling.


In this project, we were asked to create an “audio biography” of sorts. In particular, we had to write three short texts, each beginning with the same words (the starting phrase was chosen randomly from a list of six options). In my case, the texts begin with: “This morning I had a sense that…” The first text contains 100 words, the second 90, the third 80. They were to be played simultaneously, such that the first (identical) words lined up before the three diverged.

I recorded myself reading my texts, then did some light editing and added various effects in Reaper.

Disquiet Junto Project 0098: Woven Audiobiography
The steps for this week’s project are as follows:

  • Step A: Choose a number from 1 through 6. You can roll a die or use an online number generator, or come to a decision on your own.
  • Step B: Write a 100-word text beginning with one of the following phrases, depending on the number you selected. Where there are brackets, fill them in with the appropriate information.
    “I was born in [ ] and I like …”
    “My name is [ ] and I was thinking …”
    “This morning I had a sense that …”
    “Try as I might, the same thing …”
    “The last book I read was [ ] and …”
    “On a Sunday morning I usually …”
  • Step C: Write a 90-word text beginning with the same phrase.
  • Step D: Write an 80-word text beginning with the same phrase.
  • Step E: Record yourself reading the three texts as three separate tracks. Record each at the same pace. Speak slowly and take an extended pause after any period.
  • Step F: Layer the three tracks into one track. They should all begin at the same point and the first few words should, more or less, overlap to the point of being indistinguishable.

Of gigs and sound bites

After a long stretch of hard work (more about that in the coming month) but no performances, I had not one but two gigs this week, performing with my co-conspirators from the Wú:: Collective, Alex and Roger.

Announcement of the WeArt SubverJam.

First, on Wednesday, we took part in the SubverJam session (in polite company, referred to as a New Media Art event), as part of the closing of the 2013 WeArt Festival. This involved a multitude of groups (at least six or seven), all jamming together, firing on all cylinders with audio and video “injections”. Barcelona’s newly-opened El Born Centre Cultural proved to be a fantastic venue.

The El Born CC is an impressive new art and culture space, located in the historic el Born market. The market closed (as a market) in 1971, was saved from destruction by neighbourhood protest, and was renovated and used for various events before being slated to become a new library in the late ’90s. As work got underway on the library, an important Catalan archaeological site was unearthed and needed preserving (though there was debate about that, too). The library plan was eventually scrapped, and in September 2013 the building opened as a beautiful new cultural centre, designed around the archaeological site, which occupies most of the interior space.

Part of the elaborate setup from the WeArt SubverJam (low-quality photo, but it’s the best I’ve got).

The WeArt event was in the centre’s multi-purpose space (espai polivalent), the Sala Moragues. In this large space there were six smaller projections (one for each group: three each on opposing long walls), plus a big (6m-wide) projection at the far end of the room. The folks from Telenoika were doing video mixing and manipulations on the large screen. On our Wú:: screen, I projected images from an openFrameworks application I created, taking input from a webcam and pre-recorded video, and manipulating it with GLSL shaders and live audio input (as well as my own live inputs and coding).

Audio came from re-jigged turntables and diverse analog gadgets played by Alex and Roger, as well as a SuperCollider program I’d prepared for the occasion. The only problem was that, with so many groups, it ended up being…quite loud. It was difficult to hear your own contributions (hard even to think!), so mostly we played and experimented with audio through our own headphones, while I also manipulated the video projection in response to the room’s ambient noise. I got a few nice comments about my low-key visual effects. The event was open to the public for a couple of hours, during which we all “did our thing”; people were free to wander around, look at what we were doing, interact and ask questions. At the peak, the room was fairly full (one or a few hundred people?). For my taste, it was a bit too loud and unstructured, but most spectators I asked told me they were enjoying it. I must be getting old.

Our main focus this week, however, was a performance on Saturday (November 9), with New York-based sonic artist Thessia Machado. This was at Homesession, a small art loft in the Poble Sec neighbourhood. Thessia had been there for a couple of months on a residency, during which she built some new instruments that amplify and manipulate the sound of simple bumping/scraping/vibrating/clicking objects. The objects are a mix of repurposed electrical mechanisms and hand-made paper sculptures. She was asked to perform three sessions at the conclusion of her residency, and invited Wú:: to collaborate with her on one of these events.

Thessia Machado and the Wú:: Collective (Glen, Roger and Alex) perform at one of Thessia’s “end of residency” concerts.

We used a similar setup to the WeArt show. For our half-hour set, Alex and Roger played modified turntables and various analog effects and filters. Thessia performed with her new instruments, and although I was prepared to contribute some SuperCollider audio, in the end I mostly focused on visuals, which were projected on a wall of the gallery. In the days after the WeArt gig, I was able to refine my GLSL shader programs further, and also get live input from two webcams. I could trigger them based on audio input (for example, a camera would fade in more as one performer or another played sound snippets).

A different angle, showing some of Thessia’s instruments, while Glen goes for an aerial view and Roger and Alex deconstruct the wheels of steel.

I started with a base of procedural noise and added in the camera images, some soft glitchy effects that deliberately misused the webcam data, kaleidoscope-y effects and a few other manipulations I’d written in OpenGL’s shading language. The images were also distorted and pulsed using audio control data piped in from SuperCollider. Mostly, I spent the time finding interesting things to look at with the webcams.
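
The control link itself is simple; on the SuperCollider side it amounts to something like this (the OSC address, port and update rate are placeholders, not the values I actually used):

    (
    ~visuals = NetAddr("127.0.0.1", 12000);   // the openFrameworks app (placeholder port)

    // follow the live input's amplitude and forward it ~30 times per second
    SynthDef(\ampTrack, { |in = 0|
        var amp = Amplitude.kr(SoundIn.ar(in), 0.05, 0.3);
        SendReply.kr(Impulse.kr(30), '/amp', amp);
    }).add;

    // SendReply messages arrive as [cmdName, nodeID, replyID, value];
    // start tracking with Synth(\ampTrack) once the def has been sent
    OSCdef(\ampFwd, { |msg| ~visuals.sendMsg('/amp', msg[3]) }, '/amp');
    )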

After several changes of plans (on our side) in the preceding week, and much patience from Thessia, I think we can safely call the Homesession performance a success. An “intimate” crowd (aka one or two dozen people) witnessed our Saturday evening playtime.

If you see Thessia Machado’s wires and gadgets in this “Rorschach test”, you’re probably on the right track. Photo from one of my projections during Saturday’s performance.

Step on a Crack…

…but try to avoid doing any harm to your mother’s back.

For this piece, I took a map showing a small portion of the San Andreas fault, and mapped the fault lines into melodic and harmonic lines. The map was randomly assigned to me (see details of this 73rd Disquiet Junto project below). I programmed the score and instruments in SuperCollider, recorded three complete takes in real time, and finally mixed them together. Each part is different, because there is some random variation in the patterns. However, they are similar enough that they blend together well, like different musicians improvising to the same piece.

Map showing a small segment of the San Andreas fault, used as basis for “Step on a Crack”. Obtained from USGS via Disquiet Junto.

I started by importing a (hand-processed) map image containing only the relevant black lines, as a Portable Greymap (PGM) text file. Then, I created a series of SuperCollider patterns that read and indexed into this data, using it to pick degrees from different scales. The musical score moves from left to right through the image, taking the horizontal axis as time.

The solid and dashed black (fault-line) pixels are taken to represent “potential” eighth notes. There were 1500 columns across the original image, so there were about 188 4-beat bars in the piece. The tempo varies, though, between one bar per second and four seconds per bar. There can be more than one black line at any given time — since the faults bifurcate and merge — and these correspond to different voices. The lead voice comes from the “strongest” line, and is quite a simple tone with a percussive envelope. Beneath that are several analog-esque monophonic voices, plus extra hits at “geographically busy” places, using a synthesized plucked-string sound.
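
In outline, the column-scanning idea can be sketched like this in SuperCollider (a simplified reconstruction covering only the lead voice, at a fixed tempo):

    (
    // ~pixels is assumed to hold the map's greyscale values as rows of integers
    // (e.g. parsed from the P2/ASCII PGM); the parsing itself is elided here.
    var height = ~pixels.size, width = ~pixels[0].size;
    var lead = width.collect { |x|
        var dark = (0..height - 1).select { |y| ~pixels[y][x] < 128 };
        // take the topmost fault pixel in each column as the lead voice,
        // mapping vertical position into a two-octave range of scale degrees
        if(dark.isEmpty) { \rest } {
            (height - dark.first).linlin(0, height, 0, 14).round
        };
    };
    Pbind(
        \scale, Scale.dorian,   // modes and pauses were changed by hand in the piece
        \degree, Pseq(lead),
        \dur, 0.5               // one image column per eighth note
    ).play;
    )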

Pauses and modal changes were chosen manually, at points that seemed musically interesting.

Produced for Disquiet Junto (Project 0073: Faulty Notation).

Instructions: This week’s project is about earthquakes. Each participant will receive a distinct section of a map of the San Andreas Fault. The section will be interpreted as a graphic notation score. The resulting music will, in the words of Geoff Manaugh of BLDG BLOG, “explore the sonic properties of the San Andreas Fault.”

There are 4 steps to this project:

Step 1: To be assigned a segment of the map, go to the following URL. You will be asked to enter your SoundCloud user name, and then to enter your email address. You will receive via that email address a file, approximately 1MB in size, containing your map segment.

Step 2: Study the map segment closely. Develop an approach by which you interpret the map segment as a graphic notation score. The goal is for you to “read” the image as if it were presented as a piece of notated music. Read the image from left to right. Pay particular attention to solid black lines, which represent fault lines. For additional guidance and inspiration, you may refer to the map legend at the following URL. The extent to which you take the legend into consideration is entirely up to you.

Step 3: Record an original piece of music based on Step 2. It should be between two and six minutes in length. You can use any instrumentation you choose, except the human voice. (Note: Do not use any source material to which you do not yourself outright possess the copyright. This is highly important, because we may look into developing a free iOS app of the resulting recordings.)

Step 4: When posting your track, include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto.

An unusual Mixtur

Josep, Orm and Glen talk during rehearsal, prior to Barcelona Laptop Orchestra’s performance at Festival Mixtur 2013.

Last Friday (April 26), the Barcelona Laptop Orchestra performed at the Mixtur Festival, an event featuring musical and sonic art, research and experimentation. The venue was beautiful — inside the renewed industrial space of Fabra i Coats. The space was previously a textile factory, created following the 1903 merger of Catalan textile producer Fabra y Portabella with the venerable J & P Coats company, whose roots lie in Paisley, Scotland. The old FiC factory had been closed for decades, but recently took on new life as a “creative factory”, and is now home to artists, studios and creative events. It is well known to me as the home of l’Ull Cec (for courses and workshops, SuperCollider meetings and music events), and it is also where the Insectotròpics theatre group is currently rehearsing their next piece.

For Mixtur, we performed only one piece: Quo-tr, created especially for us by German composer Orm Finnendahl, with support from the Goethe Institut. Four performers and speakers were located around the audience, and we played different “instruments” consisting of — almost anything. I played Tibetan bowls and bells, a comb, a lens blower, a pair of metal Korean chopsticks, a “bird chirper”, some marbles… you get the idea. The aim was to make unusual and distinct sounds to play with Orm’s piece, which relies on live sound mixed with sound recorded and played back by his elaborate software. Besides my contributions, John played a “prepared” electric ukulele, Álvaro played bottles, whistles, a balloon and other squeaky/scratchy things, while Victor used sampled source sounds, triggered by an iPad and keyboard.

Putting the final touches on our setup for the Mixtur performance at Fabra i Coats.
Mixtur attendees were the right crowd for this kind of music, and it was rewarding to perform here — plenty of people in the audience, and a curious and enthusiastic response (several people commented that we should have played longer). The venue itself was another major attraction: we had a beautiful, big space to perform in, moodily lit with teardrop-shaped lamps. For Orm, it is important that people see the relation between what the performers are doing and the sound being produced. His piece is not about random things happening; there is an order, and although there is an element of improvisation, we had to learn to play the scores we were using, to anticipate and respond. I think the fact that attendees were free to roam around (if they didn’t want to lounge on a big pile of comfy cushions) also helped the experience.

Here’s a raw video of the event (some parts are missing video, replaced by soothing darkness).

An Auditori experience

Auditori show announcement.

Last week was a busy one! After Tuesday’s Moritz/Insectotròpics event, on Thursday I had another concert with the Barcelona Laptop Orchestra — this time at l’Auditori, in Sala 2 (Oriol Martorell). There wasn’t enough advance publicity, and we ended up very (very!) far from filling the 600-seat venue, but it was a great experience nonetheless. It was fun to be behind the scenes at such a large and professionally-run venue. It reminded me of my Banff Centre days.

Rehearsing Quo-tr with composer Orm Finnendahl.

We performed three pieces. First up was a revival of one of the BLO’s classics from previous years: la Roda (“the wheel”), in which fragments of audio pass successively through the hands of each player, each of whom can modify them before passing them on — much like the children’s game “telephone”.

After that, we performed a new work in progress, Quo-tr, which was specially created for us by composer Orm Finnendahl, with support from the Barcelona branch of the Goethe Institut (we will perform this piece again this Friday, April 26, as part of the Mixtur festival). In it, five performers make real-world sounds (in my case, using Tibetan bowls, marbles, velcro, paper, a bird chirper, and more), which are incorporated and “quoted back” by a graphical score, controlled by elaborate Pd patches created by Orm.

Our final piece of the evening was our popular CliX ReduX, which I revisited for this latest performance. Since we were outputting to a stereo PA rather than one speaker per player, I created a centralized client/server version in SuperCollider, in which each player sends instructions over the network. The new version also makes it easy to build looping fragments of letters/notes, which means that interesting and complex rhythms can be improvised. As usual, a large video projection showed our faces and other snippets from my “videoSampler” in the background, synchronized with the audio playback.
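
The client/server split can be sketched roughly as follows (the message layout, addresses and letter-to-pitch mapping are placeholders for illustration; the real piece does considerably more):

    (
    // server side (the machine connected to the stereo PA)
    SynthDef(\clix, { |freq = 800, amp = 0.3|
        var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.001, 0.08), doneAction: 2);
        Out.ar(0, Pan2.ar(sig * amp));
    }).add;

    OSCdef(\clixNote, { |msg|
        // msg: [address, playerID, letter (ASCII code), amplitude]
        Synth(\clix, [\freq, msg[2].linexp(97, 122, 300, 3000), \amp, msg[3]]);
    }, '/clix/note');
    )

    (
    // client side (each player's laptop): loop a fragment of letters
    ~server = NetAddr("192.168.1.10", 57120);   // placeholder address
    Routine {
        loop {
            "clix".do { |ch|
                ~server.sendMsg('/clix/note', 1, ch.ascii, 0.3);
                0.25.wait;
            };
        };
    }.play;
    )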

Barcelona Laptop Orchestra members set up for a show at l’Auditori, April 18, 2013.

The ESMUC building, where we normally rehearse, is physically connected to these big concert halls, so we hauled all our gear down the back corridors to Sala 2 on three trolleys, starting around 15h00. The show was at 19h00, and we made full use of those hours to get ready. We had a total of nine performers, spread over the three pieces. Another group, the Unmapped collective (from Paris), shared the stage with us, performing an interesting mix of live instruments (bassoon and flute) with laptop sound manipulations.

Barcelona Laptop Orchestra members packing up (or looking for contact lenses?) in Auditori’s Sala 2, after the show on April 18, 2013.

It was an interesting experience to be on-stage at such a large venue. We had to clear the auditorium before the public entered, and it was odd to be outside, sitting and having a “relaxed” tea at the café, watching people go into the theatre just minutes before our show, then running back in the stage door and through the labyrinthine underground halls to magically appear on stage, just in the nick of time. I’m getting more used to it, but these things are always somewhat nerve-wracking (especially the setups, rushed rehearsals and last-minute substitutions of faulty WiFi routers), but ultimately rewarding. Thankfully, everyone is very focused and competent when it comes to “crunch time”. These shows really are a team effort.