But this year, fate came calling. More specifically, Sam at l’ull cec came calling, asking for help setting up Daito Manabe‘s Sónar show on June 12. Daito is a renowned artist/programmer who also runs the Rhizomatiks design studio in Tokyo. He was featured in Apple’s “Thirty Years of Mac” web pages, and has done all kinds of crazy and cool projects.
The performance featured three dancers, three remotely-controlled flying drones, a wide-angle projector with depth sensor (for projection mapping onto the dancers), ten infrared tracking cameras, and a bunch of computers and other gear. Our contributions (as last-minute helpers) were limited: mounting IR cameras, wiring them to routers, taping down cables — whatever we could do to get things done in the tight schedule between other sound checks and performances. Meanwhile, Daito and Motoi worked like crazy to fine-tune their software and fix a wonky drone. And choreographer Mikiko and the three dancers from the Eleven Play dance troupe went through last-minute rehearsals.
To give an idea: the performance was (approximately) a mixture of this one — with three dancers rather than five:
…and this, with dancing drones — although because of technical issues, sadly at Sónar the drones danced alone:
I didn’t contribute much to the whole affair, but it was inspiring and a privilege to be able to take part and help out, even in a small way.
I’ve recently taken on a few more of the Disquiet Junto weekly musical/audio challenges.
Last week’s project used a very technical Oulipian-style constraint.
Disquiet Junto Project 0097: Ford Madox Ford Page 99 Remix
This week’s project takes as its source a comment attributed to the author Ford Madox Ford: “Open the book to page ninety-nine and read, and the quality of the whole will be revealed to you.” We will convert text from page 99 of various books into music.
Step 1: Pick up the book you are currently reading, or otherwise the first book you see nearby.
Step 2: Turn to page 99. Confirm that the page has enough consecutive text in it to add up to 80 characters.
Step 2a: If the page is blank or otherwise has no text, turn to page 98. Continue this process of moving backward through the book until you find an appropriate page.
Step 2b: If you are reading an ebook that lacks page numbers, or a book that happens to lack page numbers, then use the first page of the main body of the book (i.e., not the Library of Congress information or the table of contents) or flip to a random spot/page in the book.
Step 3: When you have located 80 consecutive characters, type them into a document on your computer or write them down on a piece of paper.
Step 4: You will turn these characters into music according to the following rules:
Step 4a: The letters A through L will correspond with the notes along the chromatic scale from A to G#. To convert a letter higher than L, simply cycle through the scale again (i.e., L = G#, M = A, etc.). Capital letters should be played slightly louder than lowercase letters.
Step 4b: Any spaces and any dashes/hyphens will be treated as blank, as a silent moment.
Step 4c: A comma or semicolon will signify a note one step below the preceding note.
Step 4d: A period, question mark, or exclamation point will signify a note one step above the preceding note.
Step 4e: All other punctuation (colon, ampersand, etc.) will be heard as a percussive beat.
Step 5: Record the piece of music using a digital or analog instrument.
Step 6: Set the pace for the recording to between 160 and 80 beats per minute (BPM). In other words, the track should be between 30 and 60 seconds in length.
In my case, the text was from “If On A Winter’s Night A Traveler” by Italo Calvino (English translation by William Weaver; published by Alfred A. Knopf), which probably fits a bit too literally into the Oulipo theme for this week. My segment of 80 characters from the top of p. 99 reads:
“is an important document; it can’t leave these offices, it’s the corpus delicti,”
I converted the characters to notes in SuperCollider, according to the project rules for this week. I played various versions of the note stream to different instruments (using NI Kontakt), and layered on some psychedelic effects, to give an oneiric, vaguely jazzy quality to the whole thing.
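For the curious, the mapping boils down to something like this minimal SuperCollider sketch — not my actual patch, and the tempo, base octave and amplitude values are just illustrative choices:

(
var text = "is an important document; it can't leave these offices, it's the corpus delicti,";
var last = 0;  // semitone offset above A, updated as we go
var notes = text.collectAs({ |ch|
    case
    { ch.isSpace or: { ch == $- } } { \rest }                            // spaces and hyphens are silent
    { ch.isAlpha } { last = (ch.toLower.ascii - $a.ascii) % 12; last }   // A..L -> A..G#, then cycle
    { (ch == $,) or: { ch == $; } } { last = last - 1; last }            // one step below the previous note
    { "?!.".includes(ch) } { last = last + 1; last }                     // one step above the previous note
    { true } { -12 }                                                     // other punctuation: a low "percussive" hit
}, Array);
var amps = text.collectAs({ |ch| if(ch.isUpper, 0.5, 0.25) }, Array);    // capitals slightly louder
Pbind(
    \midinote, Pseq(notes.collect { |n| if(n == \rest, \rest, { 57 + n }) }),  // 57 = A3
    \amp, Pseq(amps),
    \dur, 1   // one character per beat: 80 characters take 40 seconds at 120 BPM
).play(TempoClock(120/60));
)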
I found this week’s project (#98) particularly interesting. It also used a similar idea of text and constraints. The cacophonous layering of voices is really compelling.
In this project, we were asked to do an “audio biography” of sorts. In particular, we had to write three short texts, each beginning with the same words (the starting phrase was chosen randomly from a list of six options). In my case, the texts begin with: “This morning I had a sense that…” The first text contains 100 words, the second 90, the third 80. They were to be played simultaneously, such that the first (identical) words line up and the texts then diverge.
I recorded myself reading my texts, then did some light editing and added various effects in Reaper.
Step A: Choose a number from 1 through 6. You can roll a die or use an online number generator, or come to a decision on your own.
Step B: Write a 100-word text beginning with one of the following phrases, depending on the number you selected. Where there are brackets, fill them in with the appropriate information.
“I was born in [ ] and I like …”
“My name is [ ] and I was thinking …”
“This morning I had a sense that …”
“Try as I might, the same thing …”
“The last book I read was [ ] and …”
“On a Sunday morning I usually …”
Step C: Write a 90-word text beginning with the same phrase.
Step D: Write an 80-word text beginning with the same phrase.
Step E: Record yourself reading the three texts as three separate tracks. Record each at the same pace. Speak slowly and take an extended pause after any period.
Step F: Layer the three tracks into one track. They should all begin at the same point and the first few words should, more or less, overlap to the point of being indistinguishable.
After a long stretch of hard work (more about that in the coming month) but no performances, I had not one but two gigs this week, performing with my co-conspirators from the Wú:: Collective, Alex and Roger.
First, on Wednesday, we took part in the SubverJam session (in polite company, referred to as a New Media Art event), as part of the closing of the 2013 WeArt Festival. This involved a multitude of groups (at least six or seven), all jamming together, firing on all cylinders with audio and video “injections”. Barcelona’s newly-opened El Born Centre Cultural proved to be a fantastic venue.
The El Born CC is an impressive new art and culture space, located in the historic el Born market. The market was closed (as a market) in 1971, saved from destruction by neighbourhood protest, then renovated and used for various events before being slated to become a new library in the late ’90s. As work got underway on the library, they unearthed an important Catalan archaeological site that needed preserving (though there was debate about that, too). The library plan was eventually scrapped, and finally in September 2013 it opened as a beautiful new cultural centre, designed around the archaeological site, which occupies most of the interior space.
The WeArt event was in the centre’s multi-purpose space (“espai polivalent”), the Sala Moragues. In this large space there were six smaller projections (one per group: three on each of the opposing long walls), plus a big (6m-wide) projection at the far end of the room. The folks from Telenoika were doing video mixing and manipulations on the large screen. On our Wú:: screen, I was projecting images from an openFrameworks application I created, taking input from a webcam and pre-recorded video and manipulating it with GLSL shaders and live audio input (as well as my own live inputs and coding).
Audio came from re-jigged turntables and diverse analog gadgets played by Alex and Roger, as well as a SuperCollider program I’d prepared for the occasion. The only problem was that, with so many groups, it ended up being… quite loud. It was difficult to hear your own contributions (hard even to think!), so mostly we just played and experimented with audio through our own headphones, while I also manipulated the video projection, responding to the ambient noise in the room. I got a few nice comments about my low-key visual effects. The event was open to the public for a couple of hours, during which we all “did our thing”. The public was free to wander around, look at what we were doing, interact and ask questions. At the peak, the room was fairly full (somewhere between one and a few hundred people?). For my taste, it was a bit too loud and unstructured, but most spectators I asked told me they were enjoying it. I must be getting old.
Our main focus this week, however, was a performance on Saturday (November 9), with New York-based sonic artist Thessia Machado. This was at Homesession, a small art loft in the Poble Sec neighbourhood. Thessia has been there for a couple of months on a residency, and during that time built some new instruments that amplify and manipulate the sound from simple bumping/scraping/vibrating/clicking objects. The objects are a mix of repurposed electrical mechanisms and hand-made paper sculptures. She was asked to perform three sessions at the conclusion of her residency, and invited Wú:: to collaborate with her for one of these events.
We used a similar setup to the WeArt show. For our half-hour set, Alex and Roger played modified turntables and various analog effects and filters. Thessia performed with her new instruments, and although I was prepared to contribute some SuperCollider audio, in the end I mostly focused on visuals, which were projected on a wall of the gallery. In the days after the WeArt gig, I was able to refine my GLSL shader programs further, and also get live input from two webcams. I could trigger them based on audio input (for example, a camera would fade in more as one performer or another played sound snippets).
I started with a base of procedural noise and added in the camera images, some soft glitchy effects that deliberately misused the webcam data, kaleidoscope-y effects and a few other manipulations I’d written in OpenGL’s shading language. The images were also distorted and pulsed using audio control data piped in from SuperCollider. Mostly, I spent the time finding interesting things to look at with the webcams.
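The “piping” itself is plain OSC. As a rough illustration (the address, port and message names here are made up, not the real setup), the SuperCollider side can track the level of the live audio and forward it to the visuals application:

(
~visuals = NetAddr("127.0.0.1", 12345);   // address/port of the openFrameworks app (illustrative)
{
    var amp = Amplitude.kr(SoundIn.ar(0), 0.05, 0.3);   // follow the level of audio input 0
    SendReply.kr(Impulse.kr(30), '/amp', amp);          // 30 control messages per second
    Silent.ar
}.play;
OSCdef(\ampToVisuals, { |msg|
    ~visuals.sendMsg("/audio/amp", msg[3]);             // msg[3] carries the amplitude value
}, '/amp');
)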
After several changes of plans (on our side) in the preceding week, and much patience from Thessia, I think we can safely call the Homesession performance a success. An “intimate” crowd (aka one or two dozen people) was witness to our Saturday evening playtime.
…but try to avoid doing any harm to your mother’s back.
For this piece, I took a map showing a small portion of the San Andreas fault, and mapped the fault lines into melodic and harmonic lines. The map was randomly assigned to me (see details of this 73rd Disquiet Junto project below). I programmed the score and instruments in SuperCollider, recorded three complete takes in real time, and finally mixed them together. Each part is different, because there is some random variation in the patterns. However, they are similar enough that they blend together well, like different musicians improvising to the same piece.
I started by importing a (hand-processed) map image containing only the relevant black lines, as a Portable Greymap (PGM) text file. Then, I created a series of SuperCollider patterns that read and indexed into this data, using it to pick degrees from different scales. The musical score moves from left to right through the image, taking the horizontal axis as time.
The solid and dashed black (fault-line) pixels are taken to represent “potential” eighth notes. There were 1500 columns across the original image, so there were about 188 4-beat bars in the piece. The tempo varies, though, between one bar per second and four seconds per bar. There can be more than one black line at any given time — since the faults bifurcate and merge — and these correspond to different voices. The lead voice comes from the “strongest” line, and is quite a simple tone with a percussive envelope. Beneath that are several analog-esque monophonic voices, plus extra hits at “geographically busy” places, using a synthesized plucked-string sound.
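To give a flavour of the approach, here is a much-simplified sketch — the file name, darkness threshold and degree mapping are placeholders, and the real patch used several voices and changing scales:

(
var lines = File.readAllString("~/sanandreas.pgm".standardizePath).split($\n);
var tokens = lines.reject { |l| l.beginsWith("#") }.join(" ").split($ ).reject(_.isEmpty);
var width = tokens[1].asInteger, height = tokens[2].asInteger, maxval = tokens[3].asInteger;
var pixels = tokens[4..].collect(_.asInteger);   // row-major greyscale values (ASCII "P2" PGM)
// for each column, find the rows dark enough to count as fault-line pixels
var columns = width.collect { |x|
    height.collect { |y| if(pixels[(y * width) + x] < (maxval div: 2), y, nil) }.reject(_.isNil)
};
// lead voice: the topmost dark pixel in each column becomes a scale degree, one eighth note per column
Pbind(
    \scale, Scale.dorian,
    \degree, Pseq(columns.collect { |rows|
        if(rows.isEmpty, \rest, { rows.minItem.linlin(0, height, 14, 0).round })
    }),
    \dur, 0.5
).play;
)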
Pauses and modal changes were chosen manually, at points that seemed musically interesting.
Instructions: This week’s project is about earthquakes. Each participant will receive a distinct section of a map of the San Andreas Fault. The section will be interpreted as a graphic notation score. The resulting music will, in the words of Geoff Manaugh of BLDG BLOG, “explore the sonic properties of the San Andreas Fault.”
There are 4 steps to this project:
Step 1: To be assigned a segment of the map, go to the following URL. You will be asked to enter your SoundCloud user name, and then to enter your email address. You will receive via that email address a file, approximately 1MB in size, containing your map segment.
Step 2: Study the map segment closely. Develop an approach by which you interpret the map segment as a graphic notation score. The goal is for you to “read” the image as if it were presented as a piece of notated music. Read the image from left to right. Pay particular attention to solid black lines, which represent fault lines. For additional guidance and inspiration, you may refer to the map legend at the following URL. The extent to which you take the legend into consideration is entirely up to you.
Step 3: Record an original piece of music based on Step 2. It should be between two and six minutes in length. You can use any instrumentation you choose, except the human voice. (Note: Do not use any source material to which you do not yourself outright possess the copyright. This is highly important, because we may look into developing a free iOS app of the resulting recordings.)
Step 4: When posting your track, include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto.
Last Friday (April 26), the Barcelona Laptop Orchestra performed at the Mixtur Festival, an event featuring musical and sonic art, research and experimentation. The venue was beautiful — inside the renovated industrial space of Fabra i Coats. The space was previously a textile factory, created following the 1903 merger of Catalan textile producer Fabra y Portabella with the long-established J & P Coats company, with roots in Paisley, Scotland. The old FiC factory had been closed for decades, but recently took on new life as a “creative factory”, and is now home to artists, studios and creative events. It is well known to me as the home of l’Ull Cec (courses and workshops, SuperCollider meetings, music events), as well as the place where the Insectotròpics theatre group is currently rehearsing their next piece.
For Mixtur, we performed only one piece: Quo-tr, a work specially created for us by German composer Orm Finnendahl, with support from the Goethe Institut. Four performers and speakers were located around the audience, and we played different “instruments” consisting of — almost anything. I played Tibetan bowls and bells, a comb, a lens blower, a pair of metal Korean chopsticks, a “bird chirper”, some marbles… you get the idea. The aim was to make unusual and distinct sounds to feed Orm’s piece, which relies on live sound mixed with sound recorded and played back by his elaborate software. Besides my contributions, John played a “prepared” electric ukulele, Álvaro played bottles, whistles, a balloon and other squeaky/scratchy things, while Victor used sampled source sounds, triggered by an iPad and keyboard.
Mixtur attendees were the right crowd for this kind of music, and it was rewarding to perform there — plenty of people in the audience, and a curious and enthusiastic response (several commented that we should have played longer). The venue itself was another major attraction: we had a beautiful, big space to perform in, moodily lit with teardrop-shaped lamps. For Orm, it is important that people see the relation between what we are doing and the sound being produced. His piece is not about random things happening; there is an order, and although there is an element of improvisation, we had to learn to play the scores we were using, to anticipate and respond. I also think the fact that attendees were free to roam around (if they didn’t want to lounge on a big pile of comfy cushions) helped the experience.
Here’s a raw video of the event (missing video of some parts, replaced by soothing darkness).
Last week was a busy one! After Tuesday’s Moritz/Insectotròpics event, on Thursday I had another concert with the Barcelona Laptop Orchestra — this time at l’Auditori, in Sala 2 (Oriol Martorell). There wasn’t enough advance publicity, and we ended up very (very!) far from filling the 600-seat venue, but it was a great experience nonetheless. It was fun to be behind the scenes at such a large and professionally-run venue. It reminded me of my Banff Centre days.
We performed three pieces. First up was a revival of one of the BLO’s classics from previous years: la Roda (“the wheel”), in which fragments of audio pass successively through the hands of each player, who can modify them before passing them on — much like the children’s game “telephone”.
After that, we performed a new work in progress, Quo-tr, which was specially created for us by composer Orm Finnendahl, with support from the Barcelona branch of the Goethe Institut (we will perform this piece again this Friday, April 26, as part of the Mixtur festival). In it, five performers make real-world sounds (in my case, using Tibetan bowls, marbles, velcro, paper, a bird chirper, and more), which are incorporated and “quoted back” by a graphical score, controlled by elaborate Pd patches created by Orm.
Our final piece of the evening was our popular CliX ReduX, which I revisited for this latest performance. Since we were outputting to stereo PA and not one speaker per player, I created a centralized client/server version in SuperCollider, which sends instructions from each player over the network. The new version also makes it easy to build looping fragments of letters/notes, which means that interesting and complex rhythms can be improvised. As usual, a large video projection showed our faces and other snippets from my “videoSampler” in the background, synchronized with the audio playback.
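For the curious, the client/server idea boils down to something like the sketch below — the IP address, OSC path and click synth are placeholders rather than the actual CliX ReduX code. Each player’s machine sends characters to the server, which quantizes them onto a shared clock:

// on the server machine
(
SynthDef(\clik, { |freq = 2000, amp = 0.2|
    var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.001, 0.03), doneAction: 2);
    Out.ar(0, sig ! 2 * amp)
}).add;
OSCdef(\clixServer, { |msg|
    var char = msg[2].asString.first;   // msg[1] is the player number, msg[2] the typed character
    // quantize each received character to the next 16th note of the shared clock
    TempoClock.default.play({
        Synth(\clik, [\freq, 400 + (char.ascii * 20)]);
        nil   // play once, don't reschedule
    }, quant: 0.25);
}, '/clix/char');
)

// on each player's machine
~server = NetAddr("192.168.1.10", 57120);   // server IP and sclang port (assumed)
~server.sendMsg('/clix/char', 3, "k");      // "player 3 typed the letter k"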
The ESMUC building, where we normally rehearse, is physically connected to these big concert halls, so we hauled all our gear down the back corridors to Sala 2 on three trolleys, starting around 15h00. The show was at 19h00, and we made full use of those hours to get ready. We had a total of nine performers, spread over the three pieces. Another group, the Unmapped collective (from Paris), shared the stage with us, performing an interesting mix of live instruments (bassoon and flute) with laptop sound manipulations.
It was an interesting experience to be on-stage at such a large venue. We had to clear the auditorium before the public entered, and it was odd to be outside, sitting and having a “relaxed” tea at the café, watching people go into the theatre just minutes before our show, then running back in the stage door and through the labyrinthine underground halls to magically appear on stage, just in the nick of time. I’m getting more used to it, but these things are always somewhat nerve-wracking (especially the setups, rushed rehearsals and last-minute substitutions of faulty WiFi routers) — though ultimately rewarding. Thankfully, everyone is very focused and competent when it comes to “crunch time”. These shows really are a team effort.
I finally managed to put together a video of my live coding performance at Niu. Enjoy! (that is, if you have 23 minutes to kill)
2nd annual “Live Coding Sessions” evening at Niu, on March 22, 2013
Coding was done in SuperCollider 3.6, and I set myself the constraint of only using the most basic building blocks of sound: sine waves (as oscillators, LFOs, envelopes, even as arpeggiators driving patterns of scale degrees). I also tried to create everything pretty much from scratch, although I did “cheat” a few times (cheating, perhaps, according to live coding purists). As I’m still wishing for macro functionality in SC’s new IDE (if I had some time I ought to just contribute it myself), I decided to use an external macro program to speed the coding in a few cases (just a marginally sexier version of cut & paste). It’s too bad you have to strain a bit to read the projected text in the video — that’s the most interesting part of live coding: seeing the code that goes along with the sounds you’re hearing! Otherwise, it can be a bit long… (-;
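To give a flavour of the constraint, here’s a toy example in that spirit (nothing from the actual set — just sine waves doing all the jobs):

(
{
    var lfo = SinOsc.kr(0.25).range(200, 800);        // a sine as slow LFO, sweeping the pitch
    var env = SinOsc.kr(0.5, 1.5pi).range(0, 0.3);    // another sine standing in for an envelope
    SinOsc.ar(lfo) * env ! 2                          // and a sine as the audible oscillator
}.play;
)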
Audio was captured on a Zoom H1 recorder, including room ambience (not to mention me pounding the keyboard). I also taped a contact mic to my laptop, giving an enhanced sense of “liveness” to the coding. The sound is probably best appreciated with headphones (for the lower sine wave frequencies).
Spot the glitch! From about 4:10 to 5:10, I make a mistake and have to work my way through it. In trying to pack a lot into 20 minutes, it takes me a moment to figure out where I’ve gone wrong… I should have scrolled back to read the errors in the post window right away, but my pride kept me from doing that, to make it seem as if all were under control. Live coding is an interesting mental challenge — it’s easy to code from the comfort of home or office, but your brain works differently (or doesn’t ;-)) under “live” (i.e. people watching you) conditions! It’s not unlike meditation, in that it needs extreme focus and concentration. Turns out I just needed to keep calm and carry on… and add that missing variable name!
Merci beaucoup à Anna Duriez — for bringing her SLR camera and recording the video (and letting me use it)! It was quite flickery (a strobing effect from the shutter speed interfering with the lights and projector), but I managed to even it out by blending frames together — luckily there’s not a lot of action or camera moves in live coding, so I could get away with it.
A nice new video of our January Phonos concert is available now on Youtube (thanks, Sònia!).
Since then, we had a more intimate and playful performance (February 8) at a small art space called Niu — it went down really well (maybe drinks helped — audience and/or performers ;-)). We performed three pieces, including extended and more improvisatory versions of CliX ReduX and Six Pianos (which this time didn’t use pianos at all, but rather Hammond organs, electric guitar/bass and a few other funky things).
I updated CliX to use a synchronized clock (MandelClock) from BenoitLib. Proper synchronization between machines helped the piece a great deal, allowing us to get into some really interesting grooves, especially with the sampled sounds and projected video snippets. Caballé is always a hit… However, we did have a few glitches (still not 100% sure why), where the tempo would occasionally change without warning. It corrected itself within a few seconds, but was quite disconcerting (although several people in the audience claim not to have noticed anything wrong). In subsequent rehearsals the problem wasn’t as severe (I made some changes to reduce network traffic), but it did still occur from time to time. I suspect it’s to do with lost or out-of-order OSC messages, which happen regularly on busy WiFi networks.
Most recently, the Barcelona Laptop Orchestra performed (March 8) as “pre-dinner entertainment” at the Polifonia conference (a mostly-European grouping of music conservatories), held in Barcelona. It was located in the restaurant area of the Museu Marítim, in a beautiful stone building that used to be part of the old shipyards. We were only performing CliX ReduX, and had managed to build to a nice “welcome” point after five or ten minutes, when suddenly — BOOM! — our power went out. (Everyone applauded; I assume because it was a particularly dramatic stop, but perhaps they were simply glad they could start eating.) Somewhere, we had tripped a circuit-breaker.
It took at least 15 minutes until we found a functional plug (downstairs, using an extremely long extension cord) and got the projector working. Our laptops waited patiently, chugging along on battery power. But by then, we’d lost some of our vibe, and the audience had moved on to chit-chat, toasting and appetizers. We performed the last section of our show, but I wouldn’t claim it was a huge hit. We did get free dinner out of it, though. I’m sure many of the classical music professors were thinking: “Hah – all this new-fangled technology, what a disaster! That’s why violins, pianos and oboes are better!”
In other news:
I’ve agreed to perform at a live coding event at Niu, on March 22. Yikes, my first time flying solo. I’ve been spending the last few weeks trying things out in SuperCollider, but still (with only a week and a half left to go) haven’t found a good flow. I decided to set myself a constraint — skipping fancier synthesis techniques and only working with sine curves. Well, that’s the plan…
Glen Fraser (Canada) has always preferred “live coding” to dead coding. Although he’s programmed interactive graphics and sound for fun and profit for a quarter century, it’s always been from the relative safety of his home or office. This will be his first time doing it for an audience. In this performance, Glen will use SuperCollider to explore what he calls “Sines and Symbols”. He is currently a member of the Barcelona Laptop Orchestra and of the Wú:: Collective, where he develops technology for the performing arts.
The concert is also mentioned on Modisti (though I prefer my own English translation…)
We had our Phonos concert last Thursday (January 31), and in spite of not being 100% prepared — even after a big final-week push — I think it was a success. Probably 30-40 people attended in the multi-purpose hall (“sala polivalent”) at the UPF, and they were treated to a very complex setup and six very different pieces. The setup took most of the day; some of us were there from 10h to 22h, and the concert was at 19h30. Eight speakers were arranged in a ring around the audience, with eight tables and one or two performers seated at each table. In addition, there were eight channels of video — some fed directly, others (for lack of hardware) captured by pointing cameras at external monitors (the video quality suffered in those cases). All those feeds were sent through video mixers to create two projections, each with a 2×2 grid of videos. In some pieces (e.g. Light Scratch) these showed the faces of the performers; in most others they showed the contents of our screens (e.g. Six Pianos). Laptop music can be rough going if the audience has no sense of the interaction between what the people sitting behind the computers are doing and what they’re hearing. Hopefully the video feeds helped a bit with that.
But really, it’s all about the music. Minimalist classics, in most cases, reworked to give them our own touch. We played our reinterpretations of six pieces:
In C (Terry Riley) — starting from a Pure Data patch by our director Josep, I reworked this piece to run in SuperCollider. We played it as people came in, to create a mesmerizing ambience to open the concert. The audio is a simple synthesizer, to let people focus on the interesting phasing effects produced by each performer’s place in the score.
Rimandi (Ivano Morrone) — a piece that uses contact microphones stuck to the laptop, and makes ring-modulated noisy goodness from internal computer noises, fingers tapping, rubbing and scratching the laptop itself.
CliX ReduX (Ge Wang / BLO) — the original piece, which makes rhythmic clicks based on networked typing, was created in ChucK (Ge Wang, Princeton Laptop Orchestra), but I totally redid it in SuperCollider. Our version sounds similar when in “clix mode”, but beyond that it’s barely the same thing. Now you work with a longer buffer of characters, can change the rhythm, and can switch the sound of the “clicks” to be sample-based. We created audio and video samples from fragments of “interesting things” scavenged from various sources (for the ASCII characters that are not letters), and for the letters of the alphabet we use each performer’s voice (and face) making the letter’s sound. We also created a video player, written in openFrameworks, that plays back “video samples” to complement the sound. You can get an idea of what this can look like (for one performer, at least) from the videos in a previous post. This latest version of the piece was very well received.
Light Scratch (BLO) — created by one of our members, Nadine Kroher, it uses the webcam to look for bright spots of light and does funky things with audio samples as the user moves a light source, jams their face up close to the camera, waves their hands, etc. It can be quite entertaining (or frightening) to see a macro view of Enric’s nostrils or Jan’s eye…
Variations II live (John Cage / William Brent) — this one is a live reworking of an installation piece, created for us by William Brent. It involves each player making a series of very simple sketches (each with six lines and five points), which are treated as mini-scores, sent to a central server and used to create the audio of the piece. There is also visual feedback of the scores as they are played. William was located across the Atlantic in Washington, contributing sketches remotely along with the rest of us. We also had him on a Skype connection, and placed him on a pedestal (literally) as the piece was being performed. This was the premiere of his piece.
Six Pianos cover (Steve Reich / BLO) — this is another favourite, as it is very obvious that we are actively doing something. Each player has a webcam pointing down at their workspace (playspace?), with a small light illuminating the space. An openFrameworks application uses OpenCV and Gaussian classifiers to detect blobs of colour, with the colour indicating scale degree and the size indicating octave (big = low, small = high). The playspace acts like a step sequencer, with time steps along the horizontal axis and the vertical axis controlling volume. It is called Six Pianos because that’s the piece that inspired it. In this concert, we performed an excerpt of Steve Reich’s piece using this new visual instrument. Each player’s notes are sent over the network to a SuperCollider program that is responsible for playing the synchronized audio (a rough sketch of that link follows below). The instruments are high-quality sampled pianos, using NI’s Kontakt, output via a six-channel audio interface, with each output going to the speaker of its corresponding performer.
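As a rough illustration of that last link in the chain (the OSC path, argument order and playback here are my simplifications — in the concert the notes drove sampled pianos in Kontakt rather than the default synth), the receiving side in SuperCollider might look like:

(
OSCdef(\sixPianosNote, { |msg|
    var degree = msg[1], octave = msg[2], amp = msg[3];
    // play the detected blob as a note; in the real piece this was routed on to Kontakt
    (degree: degree, octave: octave, amp: amp, dur: 0.25).play;
}, '/sixpianos/note');

// quick local test without the blob tracker:
NetAddr.localAddr.sendMsg('/sixpianos/note', 4, 5, 0.3);
)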
Tonight we have another gig, at Niu (an art centre in the Poblenou neighbourhood of Barcelona). I’m not sure what kind of audience we’ll have, since our concert listing is included in such websites as ClubbingSpain and Le Cool. Ah, if only we truly were. (Cool, I mean.)
Tonight we’ll just perform a few pieces: CliX, Six Pianos and Light Scratch. Even though only a week has passed, two of these pieces have already evolved (software-wise or performance-wise). Tonight we only have two loudspeakers, and a much more intimate space, so we decided not to use pianos but rather six distinct instruments (to distinguish individual players a bit). Also, we’ll jam with these pieces for a bit longer, improvising as we go. We had a good rehearsal last night, where we tried this more “free form” Six Pianos. Take a look (note that the audio level is quite low, so best to listen amplified, or with headphones to get the full effect).
This project started from the recording of a clock. I recorded our kitchen clock, but unfortunately it ended up a bit noisier than I’d have liked. There’s some hiss in there (more obvious once you layer dozens of versions of it!), and there might be the odd muffled street noise. Okay, so I don’t have a silent recording studio. Texture, yeah, that’s what I call it. (-;
I took that single monophonic sample (about 23 seconds long) and then used it as a buffer in SuperCollider, making various drone-like synths, plenty of funky ticking patterns and some weird warping transitions and granular stuff. The opening seconds are pretty much the original sound, albeit layered several times.
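Something along these lines, much simplified (the file name and grain settings are placeholders for the kind of thing I was doing):

(
b = Buffer.read(s, "~/clock_tick.wav".standardizePath);   // the 23-second clock recording
SynthDef(\clockDrone, { |out = 0, buf, rate = 0.2, amp = 0.3|
    var pos = LFNoise1.kr(0.1).range(0, 1);               // wander slowly through the file
    var sig = GrainBuf.ar(2, Impulse.kr(20), 0.4, buf, rate, pos);
    Out.ar(out, sig * amp)
}).add;
)
x = Synth(\clockDrone, [\buf, b]);   // a drone-like smear of the ticking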
I wanted to go for a hypnotic, dream-like effect (yes, you are getting very, very sleeeeeepy) where I could move smoothly between different phases. In some, you’re aware of time being slowed down, in others it flies by, and at other moments you don’t even notice it’s there.
Instructions: This week’s project requires you to make a field recording to serve as the source audio. These are the steps:
Step 1: Locate a clock that has an audible, even if very quiet, tick to its second hand. A watch or other timepiece is also appropriate to the task.
Step 2: Record the sound of the clock for at least 30 seconds, and do so in a manner that captures the sound in the greatest detail. A contact mic is highly recommended.
Step 3: Adjust and otherwise filter the recording to reveal the various noises that make up its tick. The goal is to get at the nuance of its internal mechanism.
Step 4: Create an original piece of music employing only layered loops of that sound. These layered loops can individually be transformed in any manner you choose, but at least one unaltered version of the original recording should be included in your piece.