Last week I spent two days at Leeds University, attending the CePRA (Centre for Practice-Led Research in the Arts) Study Day on Tuesday and the twelfth Sonic Arts Forum on Saturday. Below are a few notes on each day, along with a couple of thoughts on a presentation closer to home by Kingsley Ash (a fellow academic at the School of Film, Music and Performing Arts at Leeds Met).

The CePRA Study Day was organised by postgraduates Lauren Redhead and Marcello Messina and divided into four sessions – composition and technology; aural and visual; lecture recitals; and a keynote speech by Bryan Matthews, senior research fellow at Leeds’ Institute for Transport Studies. Unfortunately I had to leave before the keynote speech, but I enjoyed the rest of the sessions, which are summarised below.

Scott McLaughlin (University of Leeds) started the day by talking about Resonant Systems – or to give the work its full title, Resonant Systems: multiphonic resonance complexes in sine-wave excited cymbal clusters. Scott’s system was based on one or more cymbals mounted on a transducer, driven by sinusoidal waves generated in a Max/MSP patch. It looked something like this:

Two cymbals and transducer (photo by Scott McLaughlin)

. . . and it sounded LOUD! I found myself reminded of Maryanne Amacher’s drone work – subtly fluctuating, entrancing and utterly inescapable.

I’d love to see this as a durational installation, although Scott pointed out that one problem was overloading the transducer… Scott’s session also discussed the relation of his work to acoustics and systems music, and pointed me toward some great quotes from Rick Bidlack (“How much music inherently resides in this system?”) and Tom Johnson (“I want to find the music, not compose it.”).

Scott also gave a practical demonstration of how the acoustic properties of cymbals relate to the excitation frequency of the signal being fed into them, in three stages: periodic, quasi-periodic and chaotic.
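I don’t have Scott’s Max/MSP patch to hand, but the basic idea of sine-wave excitation can be sketched in a few lines of Python – a slowly sweeping sine of the kind you might send to a transducer. The sweep range and libraries (numpy, sounddevice) are my own assumptions:

```python
import numpy as np
import sounddevice as sd  # pip install sounddevice

SR = 44100                     # sample rate (Hz)
DURATION = 30.0                # seconds
F_START, F_END = 80.0, 400.0   # hypothetical sweep range (Hz)

n = int(SR * DURATION)
freq = np.linspace(F_START, F_END, n)   # linear frequency sweep
# Integrate frequency to get phase so the sweep has no discontinuities.
phase = 2 * np.pi * np.cumsum(freq) / SR
signal = 0.5 * np.sin(phase)

sd.play(signal, SR)   # send to the output feeding the transducer
sd.wait()
```

Of course, in the real system the interesting behaviour comes from the cymbal’s nonlinear response to the excitation frequency, not from the signal itself, which stays a plain sine throughout.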

Following Scott was Nektarios Rodosthenous (University of York), presenting his work L’I Meant – “a study of hypothermia for actress, countertenor, saxophone and tape.” Nektarios describes his work as being in the ‘3rd channel, meaning that the acoustic and the electroacoustic work together to create an environment for the music to unfold.’ A music-theatre piece in which – for example – the countertenor was called upon to cough and shiver as well as sing, it reminded me of the notion of metapraxis coined by Jani Christou, in which performers go beyond their usual praxis as part of the demands of a composition:

For instance, a conductor conducting during a concert is a praxis, but if he is also required to walk about, speak, scream, gesticulate, or perform any other action not strictly connected to conducting, that could be a metapraxis…

Andrey Pissantchev (independent) was the sole presenter in the Aural and Visual panel, sharing some work on visualisation with Pure Data and Arduino. Still in its developmental stages, the project had PD hooked up to a set of LEDs driven by an Arduino to create varying light shows. The idea is to extend this in the form of a Light Night performance in which a real-time system will control a lighting rig and accompany improvisers.
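As a rough illustration – not Andrey’s patch, which lives in Pure Data – the audio-to-LED idea might look like this in Python, assuming pyserial and an Arduino listening for brightness bytes on a serial port (the port name and one-byte protocol here are hypothetical):

```python
import math
import time

import serial  # pip install pyserial

PORT = "/dev/ttyACM0"   # hypothetical port; adjust for your system

with serial.Serial(PORT, 9600, timeout=1) as arduino:
    t = 0.0
    while True:
        # Placeholder 'envelope': a slow sine wave, where a real patch
        # would track the amplitude of live or generated audio.
        level = 0.5 * (1 + math.sin(2 * math.pi * 0.25 * t))
        # One brightness byte per update; the (assumed) Arduino sketch
        # maps it to an LED's PWM duty cycle.
        arduino.write(bytes([int(level * 255)]))
        time.sleep(0.02)   # roughly 50 updates per second
        t += 0.02
```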

The two lecture recitals came from Will Baldry (University of Leeds) and Alistair Zaluda (Goldsmiths). Will demonstrated his digital vinyl system – a pair of decks with time-coded LPs acting as a control surface for a more experimental approach to mixing and sampling. Alistair demonstrated his score Contrejours for piano and machine-listening system, literally interpreting Helmut Lachenmann’s definition of composition as ‘building an instrument’. I thought this was a very successful experiment in extending the piano with live electronics: the sounds used were all recognisably derived from the piano itself (although sampled and manipulated), which gave the piece more consistency than many similar experiments I’ve heard, which often fall into what I call the ‘Spooky Tooth’ trap of hopelessly mismatched electroacoustic and acoustic elements.

It was then that I had to make a hasty exit to pick up the children! On to a quick summary of the Sonic Arts Forum on Saturday… As I mentioned in a previous post, I began going to these in around 2004/05 while doing my first MSc, on the topic of genetic algorithms for real-time music generation. They have always had a relaxed and constructive atmosphere and are a pleasure to attend. A few personal highlights from the day:

David Hindmarch and his acousmatic piece Chain of Spectres. David is a blind composer, and the first time I saw him he was – if I remember correctly – working on interfaces for visually impaired composers with BEAST. Chain of Spectres explored the idea of acoustic shadows: the way in which physical structures change the sonic environment, an awareness of which is often used by blind people to navigate. David mentioned having an awareness of ‘the sound of a lamp-post’ – or perhaps that should be ‘the silence of a lamp-post’? An interesting piece that used field recordings to lead the listener through diverse sonic environments.

Simon and I gave our presentation (in mono, unfortunately, due to an issue with the lecture theatre sound) and had some good questions at the end. I hope to put the slides up on my Academia.edu page in the near future.

Photo by Coryn Smethurst.

I also very much enjoyed Patrick McGlynn’s presentation on a gestural control surface for iPad, built with TUI Pad, OSC and Ableton. Patrick’s approach does away with the familiar metaphor of the GUI and allows for something more expressive and intuitive. He’d made some lovely drones out of singing bowls for his demonstrations too!

Photo by Coryn Smethurst.
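I don’t know the internals of Patrick’s system, but the general shape – touch positions arriving as OSC messages and being forwarded to Ableton as MIDI control changes – might be sketched like this in Python. The /touch/xy address, port and CC numbers are my own assumptions, and it leans on the python-osc and mido libraries (with python-rtmidi as the MIDI backend):

```python
import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

midi_out = mido.open_output()   # route this port to Ableton

def on_touch(address, x, y):
    # Map the two touch axes (assumed 0..1) to two MIDI controllers.
    midi_out.send(mido.Message("control_change", control=1,
                               value=int(x * 127)))
    midi_out.send(mido.Message("control_change", control=2,
                               value=int(y * 127)))

dispatcher = Dispatcher()
dispatcher.map("/touch/xy", on_touch)   # hypothetical OSC address

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()
```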

Francesco Sani’s Birth of a Star is a semi-improvised piece for two violins, the first part recorded to a click-track and the second overdubbed. Apparently it was composed for the opening of a planetarium, but context aside it’s a beautiful piece.

Finally, I’d like to share some thoughts on a presentation by Kingsley Ash, a colleague at the School of Film, Music and Performing Arts, which was given at the Faraday Centre for Retail Excellence! Kingsley talked about his data sonification practice in Affected States (a finalist – and winner, I believe – of the ICAD 2012 sonification competition). While I’ve not studied sonification in depth, it is a peripheral interest of mine. Affected States is a sonification of Twitter music trends. What I found interesting was the use of a valence-arousal space to plot different reactions to music on Twitter and translate them into sonic characteristics.

Each entry on the Twitter music trend leaderboard had its own sonic fingerprint, made up of words from Twitter posts associated with positions on the valence-arousal graph. Although the sounds were quite short, I thought there would be – time allowing – some good opportunities to make this piece of sonification more like an installation: for example, searching for extracts from songs related to the artists and using the sonic valence-arousal fingerprints as carrier waves for modulating the sound, and so on.
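To make that idea concrete – and this is purely my own toy interpretation, not Kingsley’s implementation – a coordinate in valence-arousal space might be translated into sonic characteristics along these lines:

```python
# A toy mapping from a valence-arousal coordinate to sonic
# characteristics: valence nudges the harmony between minor and
# major, arousal drives tempo and brightness. All ranges are
# illustrative assumptions.
def sonic_fingerprint(valence, arousal):
    """valence and arousal are assumed to lie in -1..1."""
    return {
        # Low valence -> minor third (3 semitones), high -> major (4).
        "third_semitones": 3 if valence < 0 else 4,
        # Arousal scales tempo between 60 and 180 BPM.
        "tempo_bpm": 120 + 60 * arousal,
        # Arousal also opens a hypothetical low-pass filter.
        "cutoff_hz": 500 + 4000 * (arousal + 1) / 2,
        # Valence transposes the base pitch slightly (in semitones).
        "transpose": round(valence * 5),
    }

# e.g. an excited, positive reaction to an artist:
print(sonic_fingerprint(valence=0.8, arousal=0.9))
```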

Kingsley also demonstrated some of his work sonifying data from a study of British bird populations across the country over a decade. In its early stages, this seems to me to be concerned more with creating meaningful sonic information from data for the purposes of aural scanning than with creating ‘music’, but it did prompt me to think about the relationship of music to sonification.

Many elements of popular (i.e. not avant-garde or overtly challenging) music have their foundations in chords, progressions and harmonic theory going back at least to the baroque period, and beyond this the musical aesthetics of diverse cultures are drawn from systems of harmonic proportion derived from string lengths, the masses of resonating bodies or the size of reverberant spaces. In other words, what we commonly refer to as ‘music’ has its deep structure in the harmonic system and the inherent mathematics of that system. It does not necessarily follow that this musical system can be any more than arbitrarily mapped onto other systems.

So, to go back to the Rick Bidlack quote – “How much music inherently resides in this system?” – how do we find and express it? What elements does a given system comprise? What are its inherent aesthetics? In which circumstances might chromatic or diatonic mappings be preferable to an untuned pitch continuum? And, with Affected States in mind, what models from other disciplines (such as the valence-arousal diagram often used in psychological and music perception studies) might provide complementary and meaningful modes of data processing?
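As a postscript to those last questions, here is my own small sketch of the difference between an untuned pitch continuum and a diatonic mapping, with made-up values standing in for something like yearly bird counts:

```python
# Two mapping strategies for the same data series: an untuned pitch
# continuum versus quantisation onto a diatonic scale. The data
# values are fabricated placeholders for illustration only.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of a major scale

def continuous_map(value, lo, hi, f_min=110.0, f_max=880.0):
    """Untuned continuum: value maps linearly onto frequency (Hz)."""
    norm = (value - lo) / (hi - lo)
    return f_min + norm * (f_max - f_min)

def diatonic_map(value, lo, hi, base_midi=45, octaves=3):
    """Quantise the same value onto a diatonic scale (MIDI note)."""
    norm = (value - lo) / (hi - lo)
    degree = int(norm * (len(MAJOR_SCALE) * octaves - 1))
    octave, step = divmod(degree, len(MAJOR_SCALE))
    return base_midi + 12 * octave + MAJOR_SCALE[step]

data = [52, 61, 58, 70, 66, 80]   # hypothetical yearly counts
lo, hi = min(data), max(data)
print([round(continuous_map(v, lo, hi)) for v in data])   # frequencies
print([diatonic_map(v, lo, hi) for v in data])            # MIDI notes
```

The continuum preserves every distinction in the data; the diatonic version discards some resolution but immediately sounds ‘musical’ – which is, I suppose, exactly the trade-off at stake.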