Of gigs and sound bites

After a long stretch of hard work (more about that in the coming month) but no performances, I had not one but two gigs this week, performing with my co-conspirators from the Wú:: Collective, Alex and Roger.

Announcement of the WeArt SubverJam.

First, on Wednesday, we took part in the SubverJam session (in polite company, referred to as a New Media Art event), as part of the closing of the 2013 WeArt Festival. This involved a multitude of groups (at least six or seven), all jamming together, firing on all cylinders with audio and video “injections”. Barcelona’s newly-opened El Born Centre Cultural proved to be a fantastic venue.

The El Born CC is an impressive new art and culture space, located in the historic el Born market. The market was closed (as a market) in 1971, saved from destruction by neighbourhood protest, renovated and used for various events before being slated to become a new library in the late ’90s. As work got underway on the library, an important Catalan archaeological site was unearthed that needed preserving (though there was debate about that, too). The library plan was eventually scrapped, and finally, in September 2013, it opened as a beautiful new cultural centre, designed around the archaeological site, which occupies most of the interior space.

SubverJam (WeArt 2013)
Part of the elaborate setup from the WeArt SubverJam (low-quality photo, but it’s the best I’ve got).

The WeArt event was in the centre’s “espai polivalent” (multi-purpose space), Sala Moragues. In this large space, there were six smaller projections (one for each group: three each on opposing long walls), plus a big (6m-wide) projection at the far end of the room. The folks from Telenoika were doing video mixing and manipulations on the large screen. On our Wú:: screen, I was projecting images from an openFrameworks application I created, taking input from a webcam and pre-recorded video, and manipulating it with GLSL shaders and live audio input (as well as my own live inputs and coding).

Audio came from re-jigged turntables and diverse analog gadgets on which Alex and Roger were performing, as well as a SuperCollider program I’d prepared for the occasion. The only problem was that, with so many groups, it ended up being…quite loud. It was difficult to hear your own contributions (hard even to think!), so mostly we just played and experimented with audio through our own headphones, while I also manipulated the video projection, responding to the room’s ambient noise. I got a few nice comments about my low-key visual effects. The event was open to the public for a couple of hours, during which we all “did our thing”. The public was free to wander around, look at what we were doing, interact and ask questions. At the peak, the room was fairly full (one or a few hundred people?). For my taste, it was a bit too loud and unstructured, but most spectators I asked told me they were enjoying it. I must be getting old.

Our main focus this week, however, was a performance on Saturday (November 9), with New York-based sonic artist Thessia Machado. This was at Homesession, a small art loft in the Poble Sec neighbourhood. Thessia has been there for a couple of months on a residency, and during that time built some new instruments that amplify and manipulate the sound from simple bumping/scraping/vibrating/clicking objects. The objects are a mix of repurposed electrical mechanisms and hand-made paper sculptures. She was asked to perform three sessions at the conclusion of her residency, and invited Wú:: to collaborate with her for one of these events.

Thessia Machado and the Wú:: Collective (Glen, Roger and Alex) perform at one of Thessia’s “end of residency” concerts.

We used a similar setup to the WeArt show. For our half-hour set, Alex and Roger played modified turntables and various analog effects and filters. Thessia performed with her new instruments, and although I was prepared to contribute some SuperCollider audio, in the end I mostly focused on visuals, which were projected on a wall of the gallery. In the days after the WeArt gig, I was able to refine my GLSL shader programs further, and also get live input from two webcams. I could trigger them based on audio input (for example, a camera would fade in more as one performer or another played sound snippets).
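For the technically curious, the audio-triggered fades boil down to an envelope follower smoothing each performer’s level into a crossfade amount. Here’s a rough Python sketch of the idea — the real thing lived in openFrameworks/GLSL, and every name and constant below is made up for illustration:

```python
# Toy sketch of audio-driven camera fading (hypothetical names; the
# actual version ran in openFrameworks with live audio analysis).

class EnvelopeFollower:
    """Smooths a raw audio amplitude into a slowly-varying control value."""
    def __init__(self, attack=0.3, release=0.05):
        self.attack = attack    # how quickly the level rises
        self.release = release  # how quickly it falls back
        self.level = 0.0

    def update(self, amplitude):
        coeff = self.attack if amplitude > self.level else self.release
        self.level += coeff * (amplitude - self.level)
        return self.level

def mix_pixels(cam_a, cam_b, alpha):
    """Crossfade two greyscale 'frames' (lists of floats) by alpha in [0,1]."""
    return [a * (1.0 - alpha) + b * alpha for a, b in zip(cam_a, cam_b)]

follower = EnvelopeFollower()
# Performer B plays a loud burst: camera B fades in over a few frames.
for amp in [0.0, 0.9, 0.9, 0.9]:
    alpha = follower.update(amp)
frame = mix_pixels([1.0, 1.0], [0.0, 0.0], alpha)
```

The asymmetric attack/release means a camera fades in quickly when its performer plays, but lingers for a moment after they stop, which looks much less jumpy on screen.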

A different angle, showing some of Thessia’s instruments, while Glen goes for an aerial view and Roger and Alex deconstruct the wheels of steel.

I started with a base of procedural noise and added in the camera images, some soft glitchy effects that deliberately misused the webcam data, kaleidoscope-y effects and a few other manipulations I’d written in OpenGL’s shading language. The images were also distorted and pulsed using audio control data piped in from SuperCollider. Mostly, I spent the time finding interesting things to look at with the webcams.

After several changes of plans (on our side) in the preceding week, and much patience from Thessia, I think we can safely call the Homesession performance a success. An “intimate” crowd (aka one or two dozen people) witnessed our Saturday evening playtime.

Wú projection
If you see Thessia Machado’s wires and gadgets in this “Rorschach test”, you’re probably on the right track. Photo from one of my projections during Saturday’s performance.

Step on a Crack…

…but try to avoid doing any harm to your mother’s back.

For this piece, I took a map showing a small portion of the San Andreas fault, and mapped the fault lines into melodic and harmonic lines. The map was randomly assigned to me (see details of this 73rd Disquiet Junto project below). I programmed the score and instruments in SuperCollider, recorded three complete takes in real time, and finally mixed them together. Each part is different, because there is some random variation in the patterns. However, they are similar enough that they blend together well, like different musicians improvising to the same piece.

Map showing a small segment of the San Andreas fault, used as basis for “Step on a Crack”. Obtained from USGS via Disquiet Junto.

I started by importing a (hand-processed) map image containing only the relevant black lines, as a Portable Greymap (PGM) text file. Then, I created a series of SuperCollider patterns that read and indexed into this data, using it to pick degrees from different scales. The musical score moves from left to right through the image, taking the horizontal axis as time.

The solid and dashed black (fault-line) pixels are taken to represent “potential” eighth notes. There were 1500 columns across the original image, so there were about 188 4-beat bars in the piece. The tempo varies, though, between one bar per second and four seconds per bar. There can be more than one black line at any given time — since the faults bifurcate and merge — and these correspond to different voices. The lead voice comes from the “strongest” line, and is quite a simple tone with a percussive envelope. Beneath that are several analog-esque monophonic voices, plus extra hits at “geographically busy” places, using a synthesized plucked-string sound.

Pauses and modal changes were chosen manually, at points that seemed musically interesting.
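If you’re curious how the map becomes notes, here’s a rough Python sketch of the idea. The actual piece uses SuperCollider patterns; the threshold, scale length and helper names here are illustrative only:

```python
# Sketch of the map-to-score mapping (illustrative; the real piece
# reads the PGM data into SuperCollider patterns).

def parse_pgm_ascii(text):
    """Parse a plain (P2) ASCII Portable Greymap into rows of ints."""
    tokens = [t for line in text.splitlines()
              for t in line.split('#')[0].split()]  # strip PGM comments
    assert tokens[0] == 'P2'
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    values = [int(t) for t in tokens[4:4 + width * height]]
    return [values[r * width:(r + 1) * width] for r in range(height)]

def column_to_degrees(rows, col, threshold=128, scale_len=7):
    """Each dark (fault-line) pixel in a column becomes a scale degree:
    the horizontal axis is time, the vertical position picks the pitch."""
    degrees = []
    for row_index, row in enumerate(rows):
        if row[col] < threshold:  # dark pixel = part of a fault line
            degrees.append((len(rows) - 1 - row_index) % scale_len)
    return degrees

# A tiny 4x3 "map" with one diagonal fault line.
pgm = "P2\n4 3\n255\n255 0 255 255\n255 255 0 255\n0 255 255 255\n"
rows = parse_pgm_ascii(pgm)
```

Stepping `col` from 0 to width−1 then yields the sequence of “potential eighth notes”, with simultaneous dark pixels in one column becoming the separate voices.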

Produced for Disquiet Junto (Project 0073: Faulty Notation).

Instructions: This week’s project is about earthquakes. Each participant will receive a distinct section of a map of the San Andreas Fault. The section will be interpreted as a graphic notation score. The resulting music will, in the words of Geoff Manaugh of BLDG BLOG, “explore the sonic properties of the San Andreas Fault.”

There are 4 steps to this project:

Step 1: To be assigned a segment of the map, go to the following URL. You will be asked to enter your SoundCloud user name, and then to enter your email address. You will receive via that email address a file, approximately 1MB in size, containing your map segment.

Step 2: Study the map segment closely. Develop an approach by which you interpret the map segment as a graphic notation score. The goal is for you to “read” the image as if it were presented as a piece of notated music. Read the image from left to right. Pay particular attention to solid black lines, which represent fault lines. For additional guidance and inspiration, you may refer to the map legend at the following URL. The extent to which you take the legend into consideration is entirely up to you.

Step 3: Record an original piece of music based on Step 2. It should be between two and six minutes in length. You can use any instrumentation you choose, except the human voice. (Note: Do not use any source material to which you do not yourself outright possess the copyright. This is highly important, because we may look into developing a free iOS app of the resulting recordings.)

Step 4: When posting your track, include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto.

Brand Niu Video

I finally managed to put together a video of my live coding performance at Niu. Enjoy! (that is, if you have 23 minutes to kill)

2nd annual “Live Coding Sessions” evening at Niu, on March 22, 2013

Coding was done in SuperCollider 3.6, and I set myself the constraint of only using the most basic building blocks of sound: sine waves (as oscillators, LFOs, envelopes, even as arpeggiators driving patterns of scale degrees). I also tried to create everything pretty much from scratch, although I did “cheat” a few times (cheating, perhaps, according to live coding purists). As I’m still wishing for macro functionality in SC’s new IDE (given some time, I really ought to contribute it myself), I decided to use an external macro program to speed up the coding in a few cases (just a marginally sexier version of cut & paste). It’s too bad you have to strain a bit to read the projected text in the video — that’s the most interesting part of live coding: seeing the code that goes along with the sounds you’re hearing! Otherwise, it can be a bit long… (-;
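To give a flavour of the “sines everywhere” constraint, here’s a Python sketch of a sine wave acting as an arpeggiator: sample a slow sine and quantize each sample to a scale degree. The performance itself was pure SuperCollider; the scale and numbers below are just for illustration:

```python
import math

# Sketch of a sine LFO used as an arpeggiator (illustrative only).

MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets from the root

def sine_arpeggio(n_steps, cycles=1.0, root=60):
    """Sample `cycles` of a sine at n_steps points and quantize each
    sample to a scale degree, yielding MIDI note numbers."""
    notes = []
    for i in range(n_steps):
        phase = 2 * math.pi * cycles * i / n_steps
        s = (math.sin(phase) + 1) / 2  # map -1..1 into 0..1
        degree = min(int(s * len(MINOR_PENTATONIC)),
                     len(MINOR_PENTATONIC) - 1)
        notes.append(root + MINOR_PENTATONIC[degree])
    return notes

notes = sine_arpeggio(8)  # one rise-and-fall arc through the scale
```

The same trick scales up: a faster sine gives a busier arpeggio, and summing two sines at different rates gives those pleasantly lopsided patterns.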

Audio was captured on a Zoom H1 recorder, including room ambience (not to mention me pounding the keyboard). I also taped a contact mic to my laptop, giving an enhanced sense of “liveness” to the coding. The sound is probably best appreciated with headphones (for the lower sine wave frequencies).

(Spot the glitch! From about 4:10 to 5:10, I make a mistake and have to work my way through it. In trying to pack a lot into 20 minutes, it takes me a moment to figure out where I’ve gone wrong… I should have scrolled back to read the errors in the post window right away, but my pride was trying to avoid doing that, to make it seem as if all were under control. Live coding is an interesting mental challenge — it’s easy to code from the comfort of home or office, but your brain works differently (or doesn’t ;-)) under “live” (i.e. people watching you) conditions! It’s not unlike meditation, in that it needs extreme focus and concentration. Turns out I just needed to keep calm and carry on…and add that missing variable name!)

Merci beaucoup à Anna Duriez — for bringing her SLR camera and recording the video (and letting me use it)! It was quite flickery (a strobing effect from the shutter speed interfering with the lights and projector), but I managed to even it out by blending frames together — luckily there’s not a lot of action or camera moves in live coding, so I could get away with it.
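The frame-blending trick is nothing fancier than a sliding average over neighbouring frames. A toy Python sketch (one brightness value per “frame”, purely illustrative — the actual fix was done in a video editor):

```python
# Sketch of deflickering by temporal blending: average each frame with
# its neighbours so strobing brightness evens out. This only works
# because there's little motion between frames.

def blend_frames(frames, window=3):
    """Replace each frame by the mean of a sliding window of frames."""
    half = window // 2
    out = []
    for i in range(len(frames)):
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        out.append(sum(frames[lo:hi]) / (hi - lo))
    return out

flickery = [1.0, 0.2, 1.0, 0.2, 1.0]  # alternating bright/dark frames
smooth = blend_frames(flickery)       # much narrower brightness swing
```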

Live concerts and live coding

A nice new video of our January Phonos concert is available now on Youtube (thanks, Sònia!).

Since then, we had a more intimate and playful performance (February 8) at a small art space called Niu — it went down really well (maybe drinks helped — audience’s and/or performers’ ;-)). We performed three pieces, including extended and more improvisatory versions of CliX ReduX and Six Pianos (which didn’t use pianos at all this time, but rather Hammond organs, electric guitar/bass and a few other funky things).

I updated CliX to use a synchronized clock (MandelClock) from BenoitLib. Proper synchronization between machines helped the piece a great deal, allowing us to get into some really interesting grooves, especially with the sampled sounds and projected video snippets. Caballé is always a hit… However, we did have a few glitches (still not 100% sure why), where the tempo would occasionally change without warning. It corrected itself within a few seconds, but was quite disconcerting (although several people in the audience claim not to have noticed anything wrong). In subsequent rehearsals the problem wasn’t as severe (I made some changes to reduce network traffic), but it did still occur from time to time. I suspect it’s to do with lost or out-of-order OSC messages, which happen regularly on busy WiFi networks.
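One generic defence against lost or late sync messages — and to be clear, I don’t know that this is what MandelClock/BenoitLib actually does — is to tag each tempo message with a sequence number and simply ignore stale ones. A Python sketch:

```python
# Sketch of a generic guard against lost/out-of-order sync messages
# (not necessarily MandelClock's approach): tag each tempo update with
# a sequence number and drop anything older than what we've seen.

class TempoReceiver:
    def __init__(self, tempo=2.0):
        self.tempo = tempo   # beats per second
        self.last_seq = -1

    def on_message(self, seq, tempo):
        """Apply a tempo update only if it's newer than the last one seen."""
        if seq <= self.last_seq:
            return False     # stale or duplicate message: ignore it
        self.last_seq = seq
        self.tempo = tempo
        return True

rx = TempoReceiver()
rx.on_message(1, 2.0)
rx.on_message(3, 2.5)             # message 2 was lost; 3 is newer, so fine
accepted = rx.on_message(2, 4.0)  # old message arriving late: rejected
```

Without a check like this, a delayed message can “rewind” the tempo for everyone who receives it, which matches the symptom we saw.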

Most recently, the Barcelona Laptop Orchestra performed (March 8) as “pre-dinner entertainment” at the Polifonia conference (a mostly-European grouping of music conservatories), held in Barcelona. It was located in the restaurant area of the Museu Marítim, in a beautiful stone building that used to be part of the old shipyards. We were only performing CliX ReduX, and had managed to build to a nice “welcome” point after five or ten minutes, when suddenly — BOOM! — our power went out. (Everyone applauded; I assume because it was a particularly dramatic stop, but perhaps they were simply glad they could start eating.) Somewhere, we had tripped a circuit-breaker.

It took at least 15 minutes until we found a functional plug (downstairs, using an extremely long extension cord) and got the projector working. Our laptops waited patiently, chugging along on battery power. But by then, we’d lost some of our vibe, and the audience had moved on to chit-chat, toasting and appetizers. We performed the last section of our show, but I wouldn’t claim it was a huge hit. We did get free dinner out of it, though. I’m sure many of the classical music professors were thinking: “Hah – all this new-fangled technology, what a disaster! That’s why violins, pianos and oboes are better!”


In other news:

Live Coding Sessions II @ NiuBcn

I’ve agreed to perform at a live coding event at Niu, on March 22. Yikes, my first time flying solo. I’ve been spending the last few weeks trying things out in SuperCollider, but still (with only a week and a half left to go) haven’t found a good flow. I decided to set myself a constraint — skipping fancier synthesis techniques and only working with sine curves. Well, that’s the plan…

If you can’t read the Catalan on the Niu upcoming activities page or the Spanish on Arte Sonoro, here’s an English translation of my bit:

Glen Fraser (Canada) has always preferred “live coding” to dead coding. Although he’s programmed interactive graphics and sound for fun and profit for a quarter century, it’s always been from the relative safety of his home or office. This will be his first time doing it for an audience. In this performance, Glen will use SuperCollider to explore what he calls “Sines and Symbols”. He is currently a member of the Barcelona Laptop Orchestra and of the Wú:: Collective, where he develops technology for the performing arts.

The concert is also mentioned on Modisti (though I prefer my own English translation…)

The Post-Phonos Post

Barcelona Laptop Orchestra performing Cage’s ‘Variations II’, with remote contributions (and Skype presence, on an iPad on the pedestal) by developer William Brent. (Photo: Álvaro Sarasúa)

We had our Phonos concert last Thursday (January 31), and in spite of not being 100% prepared — in spite of a big final-week push — I think it was a success. Probably 30-40 people attended in the “sala polivalent” at the UPF, and they were treated to a very complex setup and six very different pieces. The setup took most of the day; some of us were there from 10h to 22h, and the concert was at 19h30. Eight speakers formed a ring around the audience, with eight tables and one or two performers seated at each table. In addition, there were eight channels of video, some fed directly and some (for lack of hardware) reshot by cameras pointed at external monitors (video quality was not great in those cases). All those feeds were sent through video mixers to create two projections, each with a 2×2 grid of videos. In some pieces (e.g. Light Scratch), these showed the faces of the performers; in most others they showed the contents of our screens (e.g. Six Pianos). Laptop music can be rough going if the audience has no sense of the interaction between what the people sitting behind the computers are doing and what they’re hearing. Hopefully the video feeds helped a bit with that.

Roger, Tim, Alex and Nadine of Barcelona Laptop Orchestra perform at Phonos, showing our complex setup! (Photo: Álvaro Sarasúa)

But really, it’s all about the music. Minimalist classics, in most cases, reworked to give them our own touch. We played our reinterpretations of six pieces:

  • In C (Terry Riley) — based on a Pure Data patch by our director Josep, I reworked this piece to run in SuperCollider. We played it as people came in, to create a mesmerizing ambience to open the concert. Audio is a simple synthesizer, to allow people to focus on the interesting phasing effects produced by each performer’s place in the score.
  • Rimandi (Ivano Morrone) — a piece that uses contact microphones stuck to the laptop, and makes ring-modulated noisy goodness from internal computer noises, fingers tapping, rubbing and scratching the laptop itself.
  • CliX ReduX (Ge Wang / BLO) — the original piece, which makes rhythmic clicks based on networked typing, was created in ChucK (Ge Wang, Princeton Laptop Orchestra), but I totally redid it in SuperCollider. Our version sounds similar when in “clix mode”, but beyond that it’s barely the same piece. Now you work with a longer buffer of characters, can change the rhythm, and can also change the sound of the “clicks” to be sample-based. We created audio and video samples with fragments of “interesting things” scavenged from various sources (for ASCII characters which are not letters), and for the letters of the alphabet, we use each performer’s voice (and face) making the letter’s sound. We created a video player, written in openFrameworks, that plays back “video samples” that complement the sound. You can get an idea of what this can look like (for one performer, at least) from the videos in a previous post. This latest version of the piece was very well received.
  • Light Scratch (BLO) — created by one of our members, Nadine Kroher, it uses the webcam to look for bright light spots, and does funky things with audio samples based on the user moving a light source, jamming their face up close to the camera, waving hands, etc. It can be quite entertaining (or frightening) to see a macro view of Enric’s nostrils or Jan’s eye…
  • Variations II live (John Cage / William Brent) — this one is a live reworking of an installation piece, created for us by William Brent. It involves each player making a series of very simple sketches (each with six lines and five points), which are treated as mini-scores, sent to a central server, and used to create the audio of the piece. There is also visual feedback of the scores as they are played. William was located across the Atlantic in Washington, contributing sketches remotely along with the rest of us. We also had him on a Skype connection, and placed him on a pedestal (literally) as the piece was being performed. This was the premiere of his piece.
  • Six Pianos cover (Steve Reich / BLO) — this is another favourite, as it is very obvious that we are actively doing something. Each player has a webcam pointing down at their workspace (playspace?), with a small light illuminating the space. An openFrameworks application uses OpenCV and Gaussian classifiers to detect blobs of colour, with the colour indicating scale degree and size indicating octave (big = low, small = high). The playspace acts like a step sequencer, with time steps along the horizontal axis, and the vertical axis used to control volume. It is called Six Pianos because that’s the piece that inspired it. In this concert, we performed an excerpt of Steve Reich’s piece using this new visual instrument. Each player’s notes are sent to a SuperCollider program that is responsible for playing the synchronized audio. The instruments are high-quality sampled pianos, using NI’s Kontakt, output via a six-channel audio interface, with each output going to the speaker of its corresponding performer.
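As a footnote, the Six Pianos blob-to-note mapping can be sketched in a few lines of Python. The real instrument uses OpenCV blob detection in openFrameworks; the frame size, step count and area thresholds here are all hypothetical:

```python
# Sketch of the Six Pianos blob-to-note mapping (the real instrument
# detects blobs with OpenCV; these blobs are hand-made tuples, and the
# constants are illustrative).

SCALE = [0, 2, 4, 5, 7, 9, 11]  # major scale, in semitones

def blob_to_note(blob, n_steps=16, frame_w=640, frame_h=480):
    """blob = (x, y, area, colour_index).
    Returns (step, midi_note, volume)."""
    x, y, area, colour = blob
    step = int(x / frame_w * n_steps)     # horizontal position = time step
    octave = 5 - min(area // 2000, 3)     # bigger blob = lower octave
    note = 12 * octave + SCALE[colour % len(SCALE)]
    volume = 1.0 - y / frame_h            # higher up in frame = louder
    return step, note, volume

# A small blob of colour 2, mid-frame horizontally, upper-middle vertically.
step, note, vol = blob_to_note((320, 120, 1000, 2))
```

Each video frame, every detected blob is run through a mapping like this and the resulting (step, note, volume) triples are handed to the sequencer.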

Tonight we have another gig, at Niu (an art centre in the Poblenou neighbourhood of Barcelona). I’m not sure what kind of audience we’ll have, since our concert listing is included in such websites as ClubbingSpain and Le Cool. Ah, if only we truly were. (Cool, I mean.)

Tonight we’ll just perform a few pieces: CliX, Six Pianos and Light Scratch. Even though only a week has passed, two of these pieces have already evolved (software-wise or performance-wise). Tonight we only have two loudspeakers, and a much more intimate space, so we decided not to use pianos but rather six distinct instruments (to distinguish individual players a bit). Also, we’ll jam with these pieces for a bit longer, improvising as we go. We had a good rehearsal last night, where we tried this more “free form” Six Pianos. Take a look (note that the audio level is quite low, so best to listen amplified, or with headphones to get the full effect).

Time War P

This project started from the recording of a clock. I recorded our kitchen clock, but unfortunately it ended up a bit noisier than I’d have liked. There’s some hiss in there (more obvious once you layer dozens of versions of it!), and there might be the odd muffled street noise. Okay, so I don’t have a silent recording studio. Texture, yeah, that’s what I call it. (-;

I took that single monophonic sample (about 23 seconds long) and then used it as a buffer in SuperCollider, making various drone-like synths, plenty of funky ticking patterns and some weird warping transitions and granular stuff. The opening seconds are pretty much the original sound, albeit layered several times.
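For anyone curious about the granular side, the core trick is cutting short windowed grains from the buffer and overlap-adding them. A pure-Python toy sketch — the piece itself did this in SuperCollider, and all the numbers here are illustrative:

```python
import math

# Toy granular playback from a single buffer (illustrative only).

def make_grain(buffer, start, length):
    """Cut a grain from the buffer and shape it with a Hann window,
    so its edges fade smoothly to zero (avoiding clicks)."""
    grain = []
    for i in range(length):
        window = 0.5 - 0.5 * math.cos(2 * math.pi * i / (length - 1))
        grain.append(buffer[(start + i) % len(buffer)] * window)
    return grain

def granulate(buffer, grain_len=64, hop=32, n_grains=8):
    """Overlap-add grains taken at successive positions in the buffer."""
    out = [0.0] * (hop * (n_grains - 1) + grain_len)
    for g in range(n_grains):
        grain = make_grain(buffer, g * hop, grain_len)
        for i, s in enumerate(grain):
            out[g * hop + i] += s
    return out

# Stand-in for the 23-second clock sample: a short sine "tick".
tick = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(256)]
cloud = granulate(tick)
```

Varying the hop, grain length and read position over time is what produces the warping transitions: big hops sound like the original, tiny overlapping grains smear the ticks into a drone.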

I wanted to go for a hypnotic, dream-like effect (yes, you are getting very, very sleeeeeepy) where I could move smoothly between different phases. In some, you’re aware of time being slowed down, in others it flies by, and at other moments you don’t even notice it’s there.

Produced for Disquiet Junto (Project 0056: Matter of Time).

Instructions: This week’s project requires you to make a field recording to serve as the source audio. These are the steps:

Step 1: Locate a clock that has an audible, even if very quiet, tick to its second hand. A watch or other timepiece is also appropriate to the task.

Step 2: Record the sound of the clock for at least 30 seconds, and do so in a manner that captures the sound in the greatest detail. A contact mic is highly recommended.

Step 3: Adjust and otherwise filter the recording to reveal the various noises that make up its tick. The goal is to get at the nuance of its internal mechanism.

Step 4: Create an original piece of music employing only layered loops of that sound. These layered loops can individually be transformed in any manner you choose, but at least one unaltered version of the original recording should be included in your piece.

Phonos Frenzy

Have been working slavishly on several pieces for the Barcelona Laptop Orchestra. Among them, an optical-recognition piece that took Steve Reich’s Six Pianos as its starting point (or — more accurately — its ultimate goal, and we’re not quite there yet!), and a piece we call CliX ReduX, inspired by Ge Wang and the Princeton Laptop Orchestra’s original CliX.

We have our Phonos concert coming up next Thursday (January 31, 2013), at the Universitat Pompeu Fabra. Full details of the repertoire we’ll be performing are available on another page (only in Catalan, sorry). You can also read a description of the pieces, by director Josep Comajuncosas (also in Catalan), here.

First up, a sample of CliX ReduX, showing several enhancements, such as the audio and video snippets seen in this video. In the video, I just run through the alphabet a few times, giving a taste of how it looks and sounds (when in “Vox” mode). The audio and UI components are written in SuperCollider, the video sampler program in openFrameworks.

Next, I show the more “classic” version of CliX ReduX. This one has sound that’s more in keeping with Ge Wang’s original piece, but adds visual display of “flying letters”, and also the possibility of multiple syncopated character streams per player.  The text here is from Hamlet’s famous soliloquy, and runs from: “To be, or not to be, that is the question” through to: “Tis a consummation / Devoutly to be wished.” At first there is only one stream, so it’s relatively easy to follow the letters (if you know what to expect!), but after a few lines I put it into “syncopated” mode, where more than one letter can play simultaneously.  It’s like a spelling bee on steroids…

Finally, here’s another example of the CliX ReduX piece (this one featuring another BLO member, Andrés, “speaking” the first part of the famous Hamlet soliloquy — “Whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them.“):

Tintineo refrescante

I recorded myself dropping and shaking ice in a pint glass. Then I used this single sample (of about 7 seconds) to produce all the sounds (percussion, drones, semi-pitched) in this track. Produced entirely in SuperCollider.

Produced for Disquiet Junto project 0053.

Instructions: Please record the sound of an ice cube rattling in a glass, and make something of it.

Background: Longtime participants in, and observers of, the Disquiet Junto series will recognize this single sentence as the very first Disquiet Junto project, the same one that launched the series on the first Thursday of 2012. Revisiting it a year later provides a fitting way to begin the new year. A weekly project series can come to overemphasize novelty, and it’s helpful to revisit old projects as much as it is to engage with new ones. Also, by its very nature, the Disquiet Junto sets a fast pace: a four-day production window, a weekly habit. It’s beneficial to step back and see things from a longer perspective.