But this year, fate came calling. More specifically, Sam at l’ull cec came calling, asking for help setting up Daito Manabe’s Sónar show on June 12. Daito is a renowned artist/programmer who also runs the Rhizomatiks design studio in Tokyo. He was featured in Apple’s “Thirty Years of Mac” web pages, and has done all kinds of crazy and cool projects.
The performance featured three dancers, three remotely-controlled flying drones, a wide-angle projector with depth sensor (for projection mapping onto the dancers), ten infrared tracking cameras, and a bunch of computers and other gear. Our contributions (as last-minute helpers) were limited: mounting IR cameras, wiring them to routers, taping down cables — whatever we could do to get things done in the tight schedule between other sound checks and performances. Meanwhile, Daito and Motoi worked like crazy to fine-tune their software and fix a wonky drone. And choreographer Mikiko and the three dancers from the Eleven Play dance troupe went through last-minute rehearsals.
To give an idea: the performance was (approximately) a mixture of this one — with three dancers rather than five:
…and this, with dancing drones — although because of technical issues, sadly at Sónar the drones danced alone:
I didn’t contribute much to the whole affair, but it was inspiring and a privilege to be able to take part and help out, even in a small way.
On May 17, we paid a (surprisingly pleasant and handbasket-free) visit to Hell — more specifically, to Dante’s Inferno, as one of the Insectotròpics’ invited guests. Between May and September of this year, the “Insectos” (a Barcelona-based theatre troupe) are organizing a series of collaborative theatrical/performance events at the old Fabra i Coats textile factory (now art centre), one for each cantica of Dante’s Divina Commedia, in which they invite other artists to participate. This first voyage, to Hell (Un viatge a l’Infern in Catalan), included more than a dozen artistic groups (musicians, sculptors, video artists, painters, dancers, actors and more!), and lasted five hours on a Saturday evening.
We (the Wú Collective) contributed live imagery using two different versions of our Teatrillu software. For the event, we were fortunate to be joined by illustrator Riki Blanco, who provided graphical designs (drawings and cutouts) for us to animate.
One of our setups consisted of a “traditional” Teatrillu: creating live stop-motion animation and other effects under a webcam, based on hand-made drawings and cutouts. The output of these minimalist animations was fed to a TV on the Insectos’ video wall, as well as to a makeshift viewer we made out of an old wooden drawer, a tablet, a macro lens, and some cardboard and aluminum foil.
A second Teatrillu program received input from the first (over the local network, using TCPSyphon), and then manipulated it with further effects. Alex experimented with projecting “my” world onto the pages of a book, at other times masking it with hand-drawn (or infrared-projected) shapes on a whiteboard, at others still adding little flames to all its shapes. It’s a little hard to describe — basically we played and explored for five hours, adding our few small drops of Wú flavour into the overall cauldron of chaos.
One thing we missed was interaction with the other video groups and painters — we’d hoped to send our outputs to others for further manipulation, as well as receiving their feeds (and hand-made imagery or even photo print-outs) to use as source material. Hopefully in subsequent events this can happen — in the end we mostly kept to our own little corner (of hell). As often happens, everyone was really busy getting their own things ready until the last moment, and there wasn’t time to plan for more dynamic interaction between groups, as everyone had hoped.
I made a compilation of various short movie clips I recorded, as we worked our way through the nine circles of hell. Sorry about the audio and video quality — it was all recorded with a little compact camera — but it may give a vague idea of what we were up to that evening…
I’ve recently taken on a few more of the Disquiet Junto weekly musical/audio challenges.
Last week’s project used a very technical Oulipian-style constraint.
Disquiet Junto Project 0097: Ford Madox Ford Page 99 Remix
This week’s project takes as its source a comment attributed to the author Ford Madox Ford: “Open the book to page ninety-nine and read, and the quality of the whole will be revealed to you.” We will convert text from page 99 of various books into music.
Step 1: Pick up the book you are currently reading, or otherwise the first book you see nearby.
Step 2: Turn to page 99. Confirm that the page has enough consecutive text in it to add up to 80 characters.
Step 2a: If the page is blank or otherwise has no text, turn to page 98. Continue this process of moving backward through the book until you find an appropriate page.
Step 2b: If you are reading an ebook that lacks page numbers, or a book that happens to lack page numbers, then use the first page of the main body of the book (i.e., not the Library of Congress information or the table of contents) or flip to a random spot/page in the book.
Step 3: When you have located 80 consecutive characters, type them into a document on your computer or write them down on a piece of paper.
Step 4: You will turn these characters into music by following these rules:
Step 4a: The letters A through L will correspond with the notes along the chromatic scale from A to G#. To convert a letter higher than L, simply cycle through the scale again (i.e., L = G#, M = A, etc.). Capital letters should be played slightly louder than lowercase letters.
Step 4b: Any spaces and any dashes/hyphens will be treated as blank, as a silent moment.
Step 4c: A comma or semicolon will signify a note one step below the preceding note.
Step 4d: A period, question mark, or exclamation point will signify a note one step above the preceding note.
Step 4e: All other punctuation (colon, ampersand, etc.) will be heard as a percussive beat.
Step 5: Record the piece of music using a digital or analog instrument.
Step 6: Set the pace for the recording to between 160 and 80 beats per minute (BPM). In other words, the track should be between 30 and 60 seconds in length.
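The letter-to-note rules in Step 4 are easy to mechanize. Here is a rough Python sketch of the conversion (the author actually did this in SuperCollider; the function name and event format here are my own invention, and I treat a “step” as a chromatic semitone):

```python
# Chromatic scale starting from A, per Step 4a: A..L map to A..G#.
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def text_to_events(text):
    """Convert text to (kind, pitch, loud) events following Steps 4a-4e."""
    events = []
    prev_pitch = None
    for ch in text:
        if ch.isalpha():
            # 4a: letters past L cycle through the scale again (M = A, etc.);
            # capitals are flagged to be played slightly louder.
            pitch = (ord(ch.upper()) - ord("A")) % 12
            events.append(("note", pitch, ch.isupper()))
            prev_pitch = pitch
        elif ch in " -":
            events.append(("rest", None, False))        # 4b: silence
        elif ch in ",;":
            if prev_pitch is not None:                  # 4c: step below previous
                prev_pitch = (prev_pitch - 1) % 12
                events.append(("note", prev_pitch, False))
        elif ch in ".?!":
            if prev_pitch is not None:                  # 4d: step above previous
                prev_pitch = (prev_pitch + 1) % 12
                events.append(("note", prev_pitch, False))
        else:
            events.append(("perc", None, False))        # 4e: percussive beat

    return events

# The events list can then be fed to any synth or sampler as a score.
events = text_to_events("is an important document;")
```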
In my case, the text was from “If On A Winter’s Night A Traveler” by Italo Calvino (English translation by William Weaver; published by Alfred A. Knopf), which probably fits a bit too literally into the Oulipo theme for this week. My segment of 80 characters from the top of p. 99 reads:
“is an important document; it can’t leave these offices, it’s the corpus delicti,”
I converted the characters to notes in SuperCollider, according to the project rules for this week. I played various versions of the note stream to different instruments (using NI Kontakt), and layered on some psychedelic effects, to give an oneiric, vaguely jazzy quality to the whole thing.
I found this week’s project (#98) particularly interesting. It also used a similar idea of text and constraints. The cacophonous layering of voices is really compelling.
In this project, we were asked to do an “audio biography” of sorts. In particular, we had to write three short texts, each beginning with the same words (the starting phrase was chosen randomly from a list of six options). In my case, the texts begin with: “This morning I had a sense that…” The first text contains 100 words, the second 90, the third 80. They were to be played simultaneously, such that the first (identical) words lined up, after which the three readings diverge.
I recorded myself reading my texts, then did some light editing and added various effects in Reaper.
Step A: Choose a number from 1 through 6. You can roll a die or use an online number generator, or come to a decision on your own.
Step B: Write a 100-word text beginning with one of the following phrases, depending on the number you selected. Where there are brackets fill them in with the appropriate information.
“I was born in [ ] and I like …”
“My name is [ ] and I was thinking …”
“This morning I had a sense that …”
“Try as I might, the same thing …”
“The last book I read was [ ] and …”
“On a Sunday morning I usually …”
Step C: Write a 90-word text beginning with the same phrase.
Step D: Write an 80-word text beginning with the same phrase.
Step E: Record yourself reading the three texts as three separate tracks. Record each at the same pace. Speak slowly and take an extended pause after any period.
Step F: Layer the three tracks into one track. They should all begin at the same point and the first few words should, more or less, overlap to the point of being indistinguishable.
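Step F amounts to sample-wise mixing with all tracks aligned at sample zero. A minimal sketch of the idea, assuming tracks as plain lists of float samples (the author did this editing in Reaper, not in code):

```python
def layer_tracks(*tracks):
    """Mix several tracks starting at the same point; shorter tracks are
    padded with silence, and the sum is scaled down to avoid clipping."""
    length = max(len(t) for t in tracks)
    return [
        sum(t[i] if i < len(t) else 0.0 for t in tracks) / len(tracks)
        for i in range(length)
    ]

# Three "readings" of different lengths, mixed into one track:
mixed = layer_tracks([1.0, 1.0], [1.0], [1.0])
```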
After a long span of lots of hard work (more about that in the coming month) but no performances, I had not one but two gigs this week, performing with my co-conspirators from the Wú:: Collective, Alex and Roger.
First, on Wednesday, we took part in the SubverJam session (in polite company, referred to as a New Media Art event), as part of the closing of the 2013 WeArt Festival. This involved a multitude of groups (at least six or seven), all jamming together, firing on all cylinders with audio and video “injections”. Barcelona’s newly-opened El Born Centre Cultural proved to be a fantastic venue.
The El Born CC is an impressive new art and culture space, located in the historic el Born market. The market was closed (as a market) in 1971, saved from destruction by neighbourhood protest, renovated and used for various events before being slated as a new library in the late ’90s. As work got underway on the library, they unearthed an important Catalan archaeological site that needed preserving (though there was debate about that, too). The library plan was eventually scrapped, and finally in September 2013, it opened as a beautiful new cultural centre, designed around the archaeological site, which occupies most of the interior space.
The WeArt event was in the centre’s multi-purpose hall (“espai polivalent”), Sala Moragues. In this large space, there were six smaller projections (one for each group: three each on opposing long walls), plus a big (6m-wide) projection at the far end of the room. The folks from Telenoika were doing video mixing and manipulations on the large screen. On our Wú:: screen, I projected images from an openFrameworks application I created, taking input from a webcam and pre-recorded video, manipulating it with GLSL shaders and live audio input (as well as my own live inputs and coding).
Audio came from re-jigged turntables and diverse analog gadgets on which Alex and Roger were performing, as well as a SuperCollider program I’d prepared for the occasion. The only problem was that, with so many groups, it ended up being… quite loud. It was difficult to hear your own contributions (hard even to think!), so mostly we played and experimented with audio through our own headphones, while I also manipulated the video projection in response to the ambient room noise. I got a few nice comments about my low-key visual effects. The event was open to the public for a couple of hours, during which we all “did our thing”. The public was free to wander around, look at what we were doing, interact and ask questions. At the peak, the room was fairly full (one or a few hundred people?). For my taste, it was a bit too loud and unstructured, but most spectators I asked told me they were enjoying it. I must be getting old.
Our main focus this week, however, was a performance on Saturday (November 9), with New York-based sonic artist Thessia Machado. This was at Homesession, a small art loft in the Poble Sec neighbourhood. Thessia has been there for a couple of months on a residency, and during that time built some new instruments that amplify and manipulate the sound from simple bumping/scraping/vibrating/clicking objects. The objects are a mix of repurposed electrical mechanisms and hand-made paper sculptures. She was asked to perform three sessions at the conclusion of her residency, and invited Wú:: to collaborate with her for one of these events.
We used a similar setup to the WeArt show. For our half-hour set, Alex and Roger played modified turntables and various analog effects and filters. Thessia performed with her new instruments, and although I was prepared to contribute some SuperCollider audio, in the end I mostly focused on visuals, which were projected on a wall of the gallery. In the days after the WeArt gig, I was able to refine my GLSL shader programs further, and also get live input from two webcams. I could trigger them based on audio input (for example, a camera would fade in more as one performer or another played sound snippets).
I started with a base of procedural noise and added in the camera images, some soft glitchy effects that deliberately misused the webcam data, kaleidoscope-y effects and a few other manipulations I’d written in OpenGL’s shading language. The images were also distorted and pulsed using audio control data piped in from SuperCollider. Mostly, I spent the time finding interesting things to look at with the webcams.
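The audio-triggered camera fading can be sketched as an envelope follower driving a crossfade. This is only a guess at the mechanics, in Python rather than the GLSL/openFrameworks code actually used, and the coefficient values are arbitrary:

```python
def envelope(samples, attack=0.2, release=0.01):
    """One-pole envelope follower: the level rises quickly when the input
    is loud and decays slowly when it falls -- a smoothed amplitude."""
    level, out = 0.0, []
    for s in samples:
        x = abs(s)
        coeff = attack if x > level else release
        level += coeff * (x - level)
        out.append(level)
    return out

def crossfade(a, b, mix):
    """Blend two pixel/sample values: mix=0 gives a, mix=1 gives b."""
    return a + (b - a) * max(0.0, min(1.0, mix))

# The envelope of one performer's audio could then drive `mix`,
# fading that performer's camera in as they play.
```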
After several changes of plans (on our side) in the preceding week, and much patience from Thessia, I think we can safely call the Homesession performance a success. An “intimate” crowd (aka one or two dozen people) witnessed our Saturday evening playtime.
For this piece, I took a map showing a small portion of the San Andreas fault, and mapped the fault lines into melodic and harmonic lines. The map was randomly assigned to me (see details of this 73rd Disquiet Junto project below). I programmed the score and instruments in SuperCollider, recorded three complete takes in real time, and finally mixed them together. Each part is different, because there is some random variation in the patterns. However, they are similar enough that they blend together well, like different musicians improvising to the same piece.
I started by importing a (hand-processed) map image containing only the relevant black lines, as a Portable Greymap (PGM) text file. Then, I created a series of SuperCollider patterns that read and indexed into this data, using it to pick degrees from different scales. The musical score moves from left to right through the image, taking the horizontal axis as time.
The solid and dashed black (fault-line) pixels are taken to represent “potential” eighth notes. There were 1500 columns across the original image, so there were about 188 4-beat bars in the piece. The tempo varies, though, between one bar per second and four seconds per bar. There can be more than one black line at any given time — since the faults bifurcate and merge — and these correspond to different voices. The lead voice comes from the “strongest” line, and is quite a simple tone with a percussive envelope. Beneath that are several analog-esque monophonic voices, plus extra hits at “geographically busy” places, using a synthesized plucked-string sound.
Pauses and modal changes were chosen manually, at points that seemed musically interesting.
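The PGM-reading step can be sketched like this (the author’s implementation was in SuperCollider; this Python version, with invented function names, just shows the idea of scanning columns as time and dark rows as active voices):

```python
def parse_pgm_ascii(text):
    """Parse a plain (P2) ASCII PGM into (width, height, pixels)."""
    tokens = [t for line in text.splitlines()
              for t in line.split("#")[0].split()]   # strip '#' comments
    assert tokens[0] == "P2", "expected plain-text PGM"
    w, h = int(tokens[1]), int(tokens[2])
    # tokens[3] is maxval; the pixel values follow, row by row.
    pixels = [int(t) for t in tokens[4:4 + w * h]]
    return w, h, pixels

def column_events(w, h, pixels, threshold=128):
    """For each column (time step), list the rows where a fault line is dark.
    Each dark row can then be mapped to a scale degree for one voice."""
    return [
        [y for y in range(h) if pixels[y * w + x] < threshold]
        for x in range(w)
    ]
```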
Instructions: This week’s project is about earthquakes. Each participant will receive a distinct section of a map of the San Andreas Fault. The section will be interpreted as a graphic notation score. The resulting music will, in the words of Geoff Manaugh of BLDG BLOG, “explore the sonic properties of the San Andreas Fault.”
There are 4 steps to this project:
Step 1: To be assigned a segment of the map, go to the following URL. You will be asked to enter your SoundCloud user name, and then to enter your email address. You will receive via that email address a file, approximately 1MB in size, containing your map segment.
Step 2: Study the map segment closely. Develop an approach by which you interpret the map segment as a graphic notation score. The goal is for you to “read” the image as if it were presented as a piece of notated music. Read the image from left to right. Pay particular attention to solid black lines, which represent fault lines. For additional guidance and inspiration, you may refer to the map legend at the following URL. The extent to which you take the legend into consideration is entirely up to you.
Step 3: Record an original piece of music based on Step 2. It should be between two and six minutes in length. You can use any instrumentation you choose, except the human voice. (Note: Do not use any source material to which you do not yourself outright possess the copyright. This is highly important, because we may look into developing a free iOS app of the resulting recordings.)
Step 4: When posting your track, include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto.
Last Friday (April 26), the Barcelona Laptop Orchestra performed at the Mixtur Festival, an event featuring musical and sonic art, research and experimentation. The venue was beautiful — inside the renovated industrial space of Fabra i Coats. The space was formerly a textile factory, created following the 1903 fusion of Catalan textile producer Fabra y Portabella with the venerable J & P Coats company, with roots in Paisley, Scotland. The old FiC factory had been closed for decades, but recently took on new life as a “creative factory”, and is now home to artists, studios and creative events. It is well known to me as the home of l’Ull Cec (for courses and workshops, SuperCollider meetings, music events), as well as the place where the Insectotròpics theatre group is currently rehearsing their next piece.
For Mixtur, we performed only one piece: Quo-tr, specially created for us by German composer Orm Finnendahl, with support from the Goethe Institut. Four performers and speakers were located around the audience, and we played different “instruments” consisting of — almost anything. I played Tibetan bowls and bells, a comb, a lens blower, a pair of metal Korean chopsticks, a “bird chirper”, some marbles… you get the idea. The aim was to make unusual and distinct sounds for Orm’s piece, which relies on live sound mixed with sound recorded and played back by his elaborate software. Besides my contributions, John played a “prepared” electric ukulele, Álvaro played bottles, whistles, a balloon and other squeaky/scratchy things, while Victor used sampled source sounds, triggered by an iPad and keyboard.
Mixtur attendees were the right crowd for this kind of music, and it was rewarding to perform here — plenty of people in the audience, a curious and enthusiastic response (several people commented that we should have played longer). The venue itself was another major attraction. We had a beautiful, big space to perform in, moodily lit with teardrop-shaped lamps. For Orm, it is important that people see the relation between what the performers are doing and the sound being produced. His piece is not about random things happening; there is an order, and although there is an element of improvisation, we had to learn to play the scores we were using, to anticipate and respond. Also, I think the fact that attendees were free to roam around (if they didn’t want to lounge on a big pile of comfy cushions) helped the experience.
Here’s a raw video of the event (missing video of some parts, replaced by soothing darkness).
Last week was a busy one! After Tuesday’s Moritz/Insectotròpics event, on Thursday I had another concert with the Barcelona Laptop Orchestra — this time at l’Auditori, in Sala 2 (Oriol Martorell). There wasn’t enough advance publicity, and we ended up very (very!) far from filling the 600-seat venue, but it was a great experience nonetheless. It was fun to be behind the scenes at such a large and professionally-run venue. It reminded me of my Banff Centre days.
We performed three pieces. First up was a revival of one of the BLO’s classics from previous years: la Roda (“the wheel”), in which fragments of audio pass through the hands of each player successively, allowing them to modify and then pass them on — much like the children’s game “telephone”.
After that, we performed a new work in progress, Quo-tr, which was specially created for us by composer Orm Finnendahl, with support from the Barcelona branch of the Goethe Institut (we will perform this piece again this Friday, April 26, as part of the Mixtur festival). In it, five performers make real-world sounds (in my case, using Tibetan bowls, marbles, velcro, paper, a bird chirper, and more), which are incorporated and “quoted back” by elaborate Pd patches created by Orm, following a graphical score.
Our final piece of the evening was our popular CliX ReduX, which I revisited for this latest performance. Since we were outputting to stereo PA and not one speaker per player, I created a centralized client/server version in SuperCollider, which sends instructions from each player over the network. The new version also makes it easy to build looping fragments of letters/notes, which means that interesting and complex rhythms can be improvised. As usual, a large video projection showed our faces and other snippets from my “videoSampler” in the background, synchronized with the audio playback.
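The looping letters/notes idea maps naturally onto cycling short patterns of different lengths: when two loops have co-prime lengths, they drift against each other and realign only after many steps, producing the kind of shifting composite rhythms described. A toy sketch (the pattern format and names are invented for illustration; the real version ran in SuperCollider over the network):

```python
def loop_pattern(fragment, steps):
    """Cycle a short fragment ('x' = hit, '.' = rest) out to `steps` steps."""
    return [fragment[i % len(fragment)] for i in range(steps)]

# Loops of length 3 and 4 realign only every 12 steps, so their
# combination forms a longer, shifting composite rhythm.
a = loop_pattern("x..", 12)
b = loop_pattern("x...", 12)
composite = ["X" if ca == "x" or cb == "x" else "." for ca, cb in zip(a, b)]
```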
The ESMUC building, where we normally rehearse, is physically connected to these big concert halls, so we hauled all our gear down the back corridors to Sala 2 on three trolleys, starting around 15h00. The show was at 19h00, and we made full use of those hours to get ready. We had a total of nine performers, spread over the three pieces. Another group, the Unmapped collective (from Paris), shared the stage with us, performing an interesting mix of live instruments (bassoon and flute) with laptop sound manipulations.
It was an interesting experience to be on-stage at such a large venue. We had to clear the auditorium before the public entered, and it was odd to be outside, sitting and having a “relaxed” tea at the café, watching people go into the theatre just minutes before our show, then running back in the stage door and through the labyrinthine underground halls to magically appear on stage, just in the nick of time. I’m getting more used to it, but these things are always somewhat nerve-wracking (especially the setups, rushed rehearsals, last-minute substitutions of faulty WiFi routers, etc.) — and ultimately rewarding. Thankfully, everyone is very focused and competent when it comes to “crunch time”. These shows really are a team effort.
This past Tuesday we were invited, by the Insectotròpics theatre troupe, to participate in an event presenting the Programa Suport a la Creació 2013 (production grants) from FiraTàrrega (an international performing and street arts festival in Tàrrega, Catalunya). The event was held at Barcelona’s beautiful Fàbrica Moritz (historical brewery), recently redesigned by architect Jean Nouvel.
Our friends from Insectotròpics are incorporating our Teatrillu software into their upcoming theatre production, BZZ, and we were delighted to be asked to be there with them at this event. And not only because of the free Moritz beer…
It was great to be part of this event, which gave the public a preview of some early work on the Insects’ next piece, to be premiered at FiraTàrrega this September. Since January 2013, we’ve been invited to some of their rehearsals and production sessions, helping them incorporate our Teatrillu into their show. Among other things, our software will allow them to do live stop-motion animation and interactive visual trickery — allowing bits of paper, blobs of paint and other objects to take on a life of their own.
Here is some more (raw) footage from the Insectotròpics’ Moritz event, some of which shows the Teatrillu in action (thanks to Vicenç):
I finally managed to put together a video of my live coding performance at Niu. Enjoy! (that is, if you have 23 minutes to kill)
2nd annual “Live Coding Sessions” evening at Niu, on March 22, 2013
Coding was done in SuperCollider 3.6, and I set myself the constraint of using only the most basic building block of sound: sine waves (as oscillators, LFOs, envelopes, even as arpeggiators driving patterns of scale degrees). I also tried to create everything pretty much from scratch, although I did “cheat” a few times (at least by the standards of live coding purists). Since I’m still wishing for macro functionality in SC’s new IDE (if I had some time, I ought to just contribute it myself), I used an external macro program to speed up the coding in a few cases (just a marginally sexier version of cut & paste). It’s too bad you have to strain a bit to read the projected text in the video — seeing the code that goes along with the sounds you’re hearing is the most interesting part of live coding! Otherwise, it can be a bit long… (-;
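To give a flavour of the “sines only” constraint, here is a tiny Python sketch (not the actual SuperCollider code) of a sine oscillator whose pitch is swept by a sine LFO and whose amplitude is shaped by a single sine arch — every moving part is a sine:

```python
import math

SR = 44100  # sample rate in Hz

def render(seconds=1.0, base=220.0):
    """Sine oscillator, pitch swept +/- half an octave by a 0.5 Hz sine LFO,
    amplitude shaped by one arch of a sine -- sines all the way down."""
    out, phase = [], 0.0
    n_samples = int(SR * seconds)
    for n in range(n_samples):
        t = n / SR
        lfo = math.sin(2 * math.pi * 0.5 * t)       # slow sine LFO
        freq = base * 2 ** (0.5 * lfo)              # exponential pitch sweep
        phase += 2 * math.pi * freq / SR            # accumulate phase smoothly
        env = math.sin(math.pi * n / n_samples)     # sine-arch envelope
        out.append(env * math.sin(phase))
    return out
```

Accumulating the phase (rather than computing sin(2πft) directly) keeps the waveform continuous while the frequency moves — the same reason SC’s SinOsc takes a frequency input rather than an absolute time.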
Audio was captured on a Zoom H1 recorder, including room ambience (not to mention me pounding the keyboard). I also taped a contact mic to my laptop, giving an enhanced sense of “liveness” to the coding. The sound is probably best appreciated with headphones (for the lower sine wave frequencies).
(Spot the glitch! From about 4:10 to 5:10, I make a mistake and have to work my way through it. In trying to pack a lot into 20 minutes, I slip up, and it takes me a moment to figure out where I’ve gone wrong… I should have scrolled back to read the errors in the post window right away, but my pride kept me from doing that, to make it seem as if all were under control. Live coding is an interesting mental challenge — it’s easy to code from the comfort of home or office, but your brain works differently (or doesn’t ;-)) under “live” (i.e. people watching you) conditions! It’s not unlike meditation, in that it demands extreme focus and concentration. Turns out I just needed to keep calm and carry on… and add that missing variable name!)
Merci beaucoup à Anna Duriez — for bringing her SLR camera and recording the video (and letting me use it)! It was quite flickery (a strobing effect from the shutter speed interfering with the lights and projector), but I managed to even it out by blending frames together — luckily there’s not a lot of action or camera moves in live coding, so I could get away with it.
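For the curious, that flicker fix amounts to temporal frame blending — replacing each frame with an average of itself and its neighbours, which smooths strobing at the cost of motion blur (which is exactly why it only worked because nothing moves much in live coding footage). A minimal sketch of the idea, treating frames as flat lists of pixel intensities:

```python
def blend_frames(frames):
    """Average each frame with its immediate neighbours (edges clamp),
    smoothing frame-to-frame brightness flicker at the cost of motion blur."""
    blended = []
    for i, frame in enumerate(frames):
        prev = frames[max(i - 1, 0)]
        nxt = frames[min(i + 1, len(frames) - 1)]
        blended.append([(a + b + c) / 3.0
                        for a, b, c in zip(prev, frame, nxt)])
    return blended
```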
El Teatrillu (a Catalan diminutive of teatre, meaning: “little theatre”) is software for the performing arts. It’s an application I’ve been working on with the other members of the Wú:: collective. Alex and Roger had already created the first version of this software — a mix of interactive theatre, live stop-motion animation, puppeteering and digital sleight of hand — when I came along last summer and asked if I could join the party. Once indoctrinated into their “collective”, I helped organize and tidy the code, added some new features, and started thinking about how to take the ideas from their prototype and develop them more completely in a rewrite. Then, in November 2012, we won a grant from Telenoika, a Barcelona-based “creative audiovisual community”, to continue this work and ultimately release a more refined second version to the public as open source software.
Since the start of this year, we’ve been working with the folks from Insectotròpics, with the idea that they use our software in their upcoming theatre production. We’ve found (not surprisingly) that speaking with real users has helped us to discover the possibilities and limitations of our own program, to figure out what works and what doesn’t, and to add to our never-ending list of “cool ideas” we’d like to implement. (Unfortunately, each of us likes to keep many plates spinning at the same time, so work on Teatrillu has lagged recently.)
In order to get more feedback, and to re-energize us, tomorrow we have an “evening of open experimentation” with our current Teatrillu software, to let the public play with it, give us their thoughts, ask us questions. It’s not a workshop — hopefully that will come in the future — but more of an open (play)house. Thanks to Telenoika for offering us their space in el Raval (c/ Sant Pau, 58) to hold this event, Thursday April 4, from 18h to 21h.
Live coding: I know, I haven’t yet posted any comment on my live coding performance of March 22. It went well; a small but enthusiastic crowd of maybe 25-30 people(?) came out. After weeks of trying all kinds of experiments, fretting and rehearsing, I was glad to get on with it, and ended up quite happy with my performance. The reaction from the crowd and comments afterwards were very favourable (Josep, the Laptop Orchestra’s director, even said that it “…reminded [him] of the image of Bach improvising a fugue” — then again, he’s known for being extremely generous with his praise!). For me, it was a nice, relatively stress-free introduction to this new kind of performance. Thanks to Gerard and Graham for letting me in on this, their 2nd annual event!
I recorded ambient audio in the room, but it’s not too exciting without also seeing what’s going on on the screen at the same time (and even then…). I’m still hoping to get ahold of some video footage that was shot at the event and that shows the screen and code clearly enough. If I get some, I’ll put something up on Youtube or Vimeo, and link to it here.