Artefacts: A Self-Reflexive Essay For a Digital Future
Living as we are in a possible post-singularity age, well into the digital frontier, I’ve been thinking a lot about what digital audio means for music creation. On the assumption that the two greatest factors at play in the development of contemporary music are the spread of personal digital technology and the somewhat related slow death of the pop music industry as we know it, this is an unprecedented time for musical creation. Optimistically, this should lead to the death of criticism, of the established rules of musical tradition and of commercially led developments in music. Realistically, we don’t seem to have achieved the anarcho-digital utopia hinted at by Stockhausen et al., but in this first part of an ongoing series on the state of digital music, I’m just going to look at how all this inspires me personally.
Production as Performance
I recently discovered an interview with now-celebrated producer/songwriter Dan Snaith (of Caribou), given back when his first record was released in 2007. He talks quite refreshingly about taking a slapdash, iconoclastic approach to production, one that throws the established rules out the window and that is particularly representative of the current state of digital music.
In particular, I strongly agree with his views on compression: “Some people say that when compression is used right, the listener shouldn’t be able to discern it. I beg to differ. […] I just don’t know much about recording in the traditional sense, and any gear I’ve had I’ve always just messed around with.”
As a side note, it’s this ability to hear traditionally hidden production techniques that seems to be such a hallmark of contemporary digital music. As the world becomes dominated by bedroom musicians hooked up with pirated DAWs, this push towards creative naïveté has become more prominent. Consider recent bedroom-DJ-turned-blubcore supremo James Blake. Probably the best track on his album, The Wilhelm Scream, is a fierce slow build of strange reverberated tones and compression artefacts that gradually overpower the vocal hook, blurring the line between auxiliary sound effect and musical instrument. This right here is the excitement of the brave new digital world: that (without reducing to vagaries) any object, device or technique can influence any other object, device or technique. We live in a world of endless possibility, and we’re only just beginning to realise the ways in which this can influence our art.
This is nothing particularly new. In the world of pop, all the recording/production greats have had their own distinctive sounds that add to the sum total of the song – consider the video below, the Joe Meek-produced I Hear A New World, in which each melodic section is repeated twice: first with added reverb, and then pitch-shifted up (presumably a very laborious process that required recording a version at half the tempo and then playing the tape back at twice the speed). It hardly needs pointing out that this call-and-response of production techniques neatly echoes the lyrical theme of an otherwise simple song.
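The tape trick is worth unpacking: playing a recording back at twice its speed halves every wavelength, so every frequency doubles and the whole thing jumps up an octave. A minimal sketch of the principle (my illustration, not anything to do with Meek’s actual rig – the sample rate, test tone and zero-crossing check are all assumptions for demonstration):

```python
import math

SR = 8000  # assumed sample rate (Hz), purely for illustration

def sine(freq, seconds, sr=SR, phase=0.1):
    """Generate a sine tone as a list of float samples.
    The small phase offset keeps samples off exact zeros, which would
    otherwise confuse the zero-crossing count below."""
    return [math.sin(2 * math.pi * freq * n / sr + phase)
            for n in range(int(seconds * sr))]

def play_double_speed(samples):
    """Simulate playing a tape at twice its speed by taking every other
    sample: duration halves and every frequency doubles (up one octave)."""
    return samples[::2]

def zero_crossings(samples):
    """Count sign changes -- roughly 2 * frequency * duration for a sine."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)

tone = sine(220, 1.0)                 # one second of A3
octave_up = play_double_speed(tone)   # half a second, sounding at 440 Hz (A4)

# Both have ~440 zero crossings: the same count squeezed into half the
# time, i.e. double the frequency.
```

Meek, of course, had to do this the hard way, performing the part at half tempo onto tape; the digital version is a one-line slice.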
But how does this influence the music I write? I’ve been drawn to digital lo-fi for quite some time now. Rather than using the term as a defence against a lack of high-end recording facilities or technical know-how, I find it to be a creative driving force in quite a bit of the music I write.
Bits and Bites
I’m going to look back at three or four pieces I’ve made at various points to illustrate this. Bear with me, it won’t be too painful.
Firstly, there was this short piece (initially called Screams of the Past but later at some point retitled to simply Screams) I made late one night after I came in from (I seem to recall) the pub. Rather pissed and feeling creative, I decided to multitrack my voice 16 times with the crappy Dell microphone I had at the time. Fairly free improvisation led to me whistling, humming, shouting and eventually attempting to swallow the microphone.
I like the piece because it is entirely apparent that the thing was recorded with a crappy tiny-diaphragm thing going right into the audio jack. There’s absolutely no pretence at fidelity, or any semblance of actual meaning. The sound spectrum is fairly limited, but the density of tracks (and subsequent processing) creates quite a rewarding “full” texture. Obviously it could be bassier, and there’s room for more dynamic variation, but I was 18 at the time. It’s precisely this – being able to hear the recording and creative process – that pleases me. Lo-fi traditionally means an adherence to the warmth of tape and analogue infidelity, but I think it can be turned into something quite dangerous.
Coming far closer to the present day is a song I wrote over the summer, Aftermath, for my largely aborted 60 Second Songs project. Using a chromatic harmonica and a randomly sampling Max/MSP patch, I recorded a single live performance of the song (the lyrics to which I cannot for the life of me remember) and did a moderate amount of mixing afterwards.
The patch simply records very short snippets of audio at random into one of eight channels, and plays them back. It’s entirely unpredictable, but an interesting side effect of running it on my laptop’s built-in sound setup is that it feeds back slowly, creating a dense sonic texture that can be manipulated somewhat intuitively. In short, I would play a phrase on the harmonica, find portions of it looped back at me, play off these, and sing the verses as they fitted into the overall structure. Thus the sound moves slowly between being recognisably a harmonica, recognisably my voice, and an unrecognisable wash of noisy feedback: a trio for one performer.
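For the curious, the logic of the patch can be sketched outside Max/MSP. This is a loose Python reconstruction of the idea only – the snippet length, record probability and the `RandomSampler` structure are my assumptions, not the actual patch:

```python
import math
import random

SR = 8000            # assumed sample rate (Hz)
SNIPPET = SR // 10   # assumed snippet length: 100 ms
CHANNELS = 8         # eight loop channels, as in the patch

class RandomSampler:
    """Sketch of the patch's idea: at random moments, capture a short
    snippet of the incoming audio into one of eight loop buffers, and mix
    the looping buffers back into the output."""

    def __init__(self, record_prob=0.002, seed=0):
        self.buffers = [[0.0] * SNIPPET for _ in range(CHANNELS)]
        self.positions = [0] * CHANNELS        # per-channel playback heads
        self.record_prob = record_prob
        self.rng = random.Random(seed)
        self.recording = None                  # (channel, samples_left)

    def tick(self, sample):
        """Process one input sample; return one output sample."""
        # Occasionally start capturing into a randomly chosen channel.
        if self.recording is None and self.rng.random() < self.record_prob:
            self.recording = (self.rng.randrange(CHANNELS), SNIPPET)
        if self.recording is not None:
            ch, left = self.recording
            self.buffers[ch][SNIPPET - left] = sample
            self.recording = (ch, left - 1) if left > 1 else None
        # Mix all looping buffers into the output.
        out = 0.0
        for ch in range(CHANNELS):
            out += self.buffers[ch][self.positions[ch]]
            self.positions[ch] = (self.positions[ch] + 1) % SNIPPET
        return out / CHANNELS

sampler = RandomSampler(record_prob=0.01, seed=1)
out = [sampler.tick(math.sin(0.05 * n)) for n in range(20000)]
```

The slow feedback I describe would amount to routing `out` (via the room, speaker to microphone) back into `tick` – which is exactly what made the texture playable rather than merely random.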
It’s a technique I initially employed for a live performance – which I slightly facetiously entitled Playing – I gave while at Goldsmiths. Enjoy the ukulele.
Again, I’m drawn to the ability to hear the edges of the digital audio – the clicks and pops of aleatoric sampling and the roughness of low-fidelity equipment. To what extent is the sampling an enabling technical process, and to what extent is it a musical instrument in its own right?
Lastly, a piece I created more recently, entitled Call of the Ouroboros. By manually writing into four audio buffers in Max/MSP that modulated themselves by various changing factors (creating waveforms that defined both the spectral timbre and the rhythm of a sound), I was able to create a shifting mass of digital noise that obeys a strict, if inaudible, internal structure. The spur for the piece was the realisation that a buffer of audio data is simply a set of mechanical instructions given to a loudspeaker, and thus one can affect the physical movement of a loudspeaker cone with a mere flick of the mouse. Going back to my initial point, this is a direct response to the personal revelation that anything and everything can be connected musically.
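The core idea – that the same buffer can act as timbre when looped fast and as rhythm when looped slowly, and that buffers can drive one another – can be sketched in a few lines. To be clear, the buffer lengths, harmonics and the chain-of-modulation scheme below are my own illustrative choices, not a transcription of the actual patch:

```python
import math

SR = 8000  # assumed sample rate (Hz)

def make_buffer(length, harmonic):
    """A loop buffer is just a list of speaker-cone positions. Looped at
    audio rate it reads as timbre; at a long loop length, its large-scale
    shape reads as rhythm."""
    return [math.sin(2 * math.pi * harmonic * n / length)
            for n in range(length)]

# Four buffers of different lengths: the short loops are heard as
# pitch/timbre, the long ones as rhythmic pulsation.
buffers = [make_buffer(length, h)
           for length, h in [(50, 1), (173, 3), (2000, 5), (16000, 1)]]

def render(seconds, depth=0.5):
    """Each buffer's current value scales the read rate of the next,
    so one structure controls both spectrum and rhythm at once."""
    phases = [0.0] * len(buffers)
    out = []
    for _ in range(int(seconds * SR)):
        mod = 1.0
        sample = 0.0
        for i, buf in enumerate(buffers):
            value = buf[int(phases[i]) % len(buf)]
            phases[i] += mod              # read rate set by previous buffer
            mod = 1.0 + depth * value     # the "changing factor"
            sample += value
        out.append(sample / len(buffers))
    return out

noise = render(1.0)
```

Every sample in `noise` is, quite literally, an instruction for where the speaker cone should be at that instant – which is all any audio buffer ever is.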
Of course, this is all well and good. But what about the wider world of today’s digitalism? I’ll address that (and hopefully more!) next time. Until then, I’ve got some music to listen to.