Except where otherwise noted, this work is licensed under a Creative Commons License.
Washington, D.C.
I got back a little late from today's lunch and missed most of the first couple of papers in the next session here at Creativity & Cognition 2007. The paper by Kirsty Beilharz and Sam Ferguson is already in progress: they enhanced the shakuhachi, a Japanese flute, with a variety of extra-instrumental sensors which drive a generative music system, creating a hyper-instrument, or a creative environment for the instrument. The environment senses the player's physical gestures while playing the instrument; some such gestures already exist as part of the normal process of playing the shakuhachi, so the environment enhances and builds on the often unconscious movements of the player, enabling them to exploit techniques they already have. Additionally, qualities of the instrument's tone itself (breathiness, noisiness, and so on) are also monitored and harnessed.
The generative system builds on a neural oscillator network: through granular synthesis, it reuses the sound of the instrument based on the pitches chosen by the network. The performer's data is captured; amplitude and pitch are mapped to the neural oscillators, which are also triggered by crescendos, and these in turn drive granular synthesis that reuses the original instrumental sound. The visual performance is also captured, and its representation is altered according to the information emerging from the system. Overall, this creates a complex performer-driven, gesture-triggered response system, which is further adjusted according to the outcomes of the exploratory process.
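To make the pipeline a little more concrete, here is a rough sketch (not the authors' actual system) of how performer amplitude might drive a small bank of coupled phase oscillators whose states are then quantised to pitches for a granular-synthesis layer. The oscillator count, coupling scheme, and pentatonic pitch set are all invented for the example:

```python
import math

# Toy sketch: performer amplitude scales the coupling of a small
# Kuramoto-style oscillator bank; pitches for the granular layer are
# then read off the oscillator phases. All parameters are illustrative.

PITCH_SET = [0, 2, 5, 7, 9]  # pentatonic scale degrees (assumption)

class OscillatorBank:
    def __init__(self, n=4, coupling=0.5):
        self.phases = [i * 2 * math.pi / n for i in range(n)]
        self.freqs = [1.0 + 0.1 * i for i in range(n)]  # natural freqs (Hz)
        self.coupling = coupling

    def step(self, amplitude, dt=0.01):
        """Advance all oscillators; louder playing (e.g. a crescendo)
        strengthens the coupling and pulls the bank towards sync."""
        n = len(self.phases)
        new = []
        for phase, freq in zip(self.phases, self.freqs):
            pull = sum(math.sin(p - phase) for p in self.phases) / n
            new.append(phase + dt * (2 * math.pi * freq
                                     + amplitude * self.coupling * pull))
        self.phases = new

    def pitches(self, base=60):
        """Quantise each oscillator's phase to a scale degree (MIDI notes)."""
        return [base + PITCH_SET[int((p % (2 * math.pi))
                                     / (2 * math.pi) * len(PITCH_SET))]
                for p in self.phases]

bank = OscillatorBank()
for _ in range(100):           # simulate a loud passage
    bank.step(amplitude=0.9)
print(bank.pitches())          # MIDI pitches to hand to the granular layer
```

The point of the sketch is simply the shape of the mapping: continuous performance features modulate an autonomous dynamical system, and the system's state, not the performer directly, selects the material that is resynthesised.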
Nick Bryan-Kinns, Patrick Healey and Joe Leach are up next. Their interest is in mutual engagement in creative collaborations; such engagement is especially important where the focus is on sociable and enjoyable activities. The group began by looking at interactions in free improvisation processes and examined the communicative gestures which occurred in such situations, as well as in other collaborative situations using conventional and new musical instruments; however, similar questions also emerge when the collaborators are not co-present in the same space.
Indicators of mutual engagement emerging from this are proximal interaction in the same space, mutual modification, contribution to joint production, and attunement to other contributions; the team created a technology called Daisyphone to provide such features in a collaborative environment. The Daisyphone is represented as a clock-like environment in which multiple users can place musical notes at various spots around a circle, with distance from the centre indicating pitch, and location around the circle indicating the time in the sequence. Different users were represented by different colours, and they could also use different sounds in the system. Versions of this software exist on computers as well as mobile phones. The system was tested with a number of users working in pairs from separate rooms to develop a composition (there was no communication other than the collaboration itself). The results showed that some users felt a lack of control over their own notes, and a clash of ideas, but many also worked well to extend and mirror one another's contributions. Interestingly, the more cues to each other's identity participants were given, the less mutual engagement was evident; such cues might therefore militate against working with each other's material.
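The clock-face mapping is straightforward to sketch: a note's angle around the circle gives its position in the loop, and its distance from the centre gives its pitch. The step count, pitch range, and the choice of putting higher pitches nearer the centre are all assumptions for illustration, not Daisyphone's actual implementation:

```python
import math

STEPS = 16          # positions around the circle (assumption)
LOW, HIGH = 48, 72  # MIDI pitch range, low at the rim (assumption)

def place_note(x, y, max_radius=1.0):
    """Convert a click at (x, y) relative to the circle's centre into
    (step, pitch): angle maps to the time step in the loop, radius
    maps to pitch (here, centre = high, rim = low)."""
    angle = math.atan2(y, x) % (2 * math.pi)
    step = int(angle / (2 * math.pi) * STEPS) % STEPS
    radius = min(math.hypot(x, y), max_radius) / max_radius
    pitch = round(HIGH - radius * (HIGH - LOW))
    return step, pitch

# A click straight up from the centre, halfway out:
print(place_note(0.0, 0.5))  # → (4, 60)
```

Because the loop is circular, the playhead simply sweeps around the clock face and retriggers each user's notes, which is what lets distributed collaborators hear their contributions interleaved in a shared, repeating structure.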
Finally we move on to Olav W. Bertelsen, Morten Breinbjerg and Søren Pold. This project explored instrument interaction in musical creative contexts as a model for software interaction in general. The group worked with two composers working with digital sequencing systems and music programming software, respectively; for both, software was an integral part of their creative processes, and influenced those processes and their musical outcomes. The software acted as a musical instrument in its own right, and thereby became both means and end; the constraints of the software had a direct bearing on the musical outcome itself, and the composers explored and strained against the materiality of the software (and in particular, its interfaces).
Chance and unpredictability are important aspects of the creative process in this context. The musicians both play the software as instrument and make use of automated processes, thus using the software as machine. This is an attempt to transcend the basic metaphors of the software and employ a metonymic understanding of the representation of music in the software, reaching beyond the initial materiality of the basic metaphor built into the software interface. So, software is both means and end, integrating transparency and reflection; chains of mediation are both synchronic and diachronic; it is playable in various modes of interaction, with virtuosity and/or for pleasure; and it gains material aspects, and therefore a sense of 'instrumentness'.
This means that such software must be designed with creativity in mind: it should employ open designs which do not constrain the user, allow for metonymic representations, and leave room for the accidental, for subjectivity, and for virtuosity. It may well be possible to translate such observations beyond the realm of music software itself.