
Music 2.0 (or 3.0?)

Copenhagen.
We move on at COST298 to Stijn Bannier, who focusses on the musical network in the context of Web 2.0 (or 3.0, as the case may be). By 'musical network', Stijn means the network of artists, producers, labels, distributors, and other music industry institutions, which together constitute the industry itself. These are affected by the rise of Web 2.0, not least as it enables users to create, consume, share and remix music; this is potentially exacerbated by further developments towards Web 3.0.

Stijn points, by way of example, to artist self-promotion and self-distribution on MySpace and elsewhere; to musical reproduction, tagging, and metadata sharing (e.g. on last.fm), which may also be analysed quantitatively; to distribution networks built on social networks, peer-to-peer filesharing, and other Web 2.0 media; and to the abundance of content which this creates. This is where Web 3.0 may come in, with its increased emphasis on metadata generation and evaluation.

How Managing Collaboration in Social Media Is Like Conducting

Hamburg.
The final presenter at next09 is conductor (as in, music) Itay Talgam. He begins by describing the way an orchestra tunes up - everyone doing their own thing; this is noise, not yet music. But when the conductor steps up to the podium, attention is focussed, and music begins. Whose creation is this music? Who is responsible? Who contributed to the success of the performance?

This is a question of ownership, of course - and it applies just as much to collaborative online environments as it does to an orchestra. The conductor of the orchestra provides the leadership, controls the process, and the musicians follow - but in the process, the musicians also lose some of their independence, their ability to introduce their own personality into the performance.

Webcasting Royalties: Plus Ça Change...

Following up on a previous post on this subject: Tony Walker over at ABC Digital Futures notes the likely impending demise of one of the most innovative Webcasting projects of recent years: Pandora, the online radio station of the Music Genome Project. For the uninitiated: the MGP is a database of the specific traits of thousands of songs by a wide variety of artists, which enables it to suggest to users that if they like a specific song, they're also likely to enjoy a variety of songs from other albums and by other artists. On that basis, Pandora offers personalised Webcasting of tracks which the MGP identifies as similar to those tracks that a user has already said they like.
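The Music Genome Project's actual attribute set and matching logic are proprietary, so the details below are assumptions rather than a description of Pandora's system; but as a rough sketch of the general approach - representing each song as a vector of trait scores and recommending the nearest matches - something like the following (in Python, with made-up trait names and values) captures the idea:

```python
import math

# Hypothetical trait vectors (0-1 scores); the MGP's real attributes
# and weightings are proprietary and far more fine-grained.
songs = {
    "Song A": {"acoustic": 0.9, "minor_key": 0.2, "syncopation": 0.1, "vocal_harmony": 0.8},
    "Song B": {"acoustic": 0.8, "minor_key": 0.3, "syncopation": 0.2, "vocal_harmony": 0.7},
    "Song C": {"acoustic": 0.1, "minor_key": 0.9, "syncopation": 0.8, "vocal_harmony": 0.2},
}

def cosine_similarity(a, b):
    """Cosine similarity between two trait dictionaries with the same keys."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(liked_song, catalogue, top_n=2):
    """Rank the rest of the catalogue by trait similarity to a song the user likes."""
    ranked = sorted(
        ((title, cosine_similarity(catalogue[liked_song], traits))
         for title, traits in catalogue.items() if title != liked_song),
        key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

print(recommend("Song A", songs))  # Song B should rank well above Song C
```

The real service relies on a much larger set of expertly annotated attributes per track, but the nearest-neighbour principle - "people who like this trait profile will probably like similar profiles" - is the same.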

No News from the Webcast Front (But Sonic Synergies Now Published)

Sonic Synergies: Music, Identity, Technology and Community (Ashgate Popular and Folk Music Series)

Yay - Sonic Synergies: Music, Identity, Technology and Community, a book collecting the best papers from the eponymous 2003 conference in Adelaide, is finally out (if apparently only in hardcover, for almost US$100)...

My chapter in the book deals at its core with the 2002 Webcasting wars in the United States - a protracted and complex conflict between the recording industry and various groupings of large, medium, and small Webcasters each pursuing their own agendas, which was not so much resolved as put on hold by the eventual intervention of a few members of Congress concerned about the deleterious effects of the 1998 Digital Millennium Copyright Act (DMCA). The DMCA had put in place new approaches for digital royalty arbitration which posed serious problems for the long-term viability of small Webcasters (a fact which was bemoaned rather vocally by the leaders of that market), and the ensuing negotiations finally hit the wall in 2002, after much toing and froing.

New Musical Instruments and Tools for Collaboration

Washington, D.C.
I got back a little late from today's lunch, and missed most of the first couple of papers in the next session here at Creativity & Cognition 2007. The paper by Kirsty Beilharz and Sam Ferguson is already in progress; they enhanced a Japanese flute, the shakuhachi, with a variety of extra-instrumental sensors which drive a generative music system, creating a hyper-instrument, or a creative environment for the instrument. The environment senses the player's physical gestures while playing the instrument; some such gestures already exist as part of the normal process of playing the shakuhachi, and the environment therefore enhances and builds on the often unconscious movements of the player, enabling them to exploit techniques they already have. Additionally, qualities of the instrument tone itself (breathiness, noisiness, and other qualities) are also monitored and harnessed.
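The paper's actual implementation isn't reproduced here, so treat the following as a loose illustration of the mapping idea rather than Beilharz and Ferguson's design: a minimal Python sketch in which hypothetical gesture readings (tilt, sway) and analysed tone qualities (breathiness, noisiness) are mapped onto parameters of a simple generative layer. All of the names and mappings are invented for the example.

```python
from dataclasses import dataclass
import random

@dataclass
class SensorFrame:
    # Hypothetical inputs; none of these names come from the paper itself.
    tilt: float         # instrument tilt from a motion sensor, 0.0-1.0
    sway: float         # side-to-side movement of the player, 0.0-1.0
    breathiness: float  # analysed from the audio signal, 0.0-1.0
    noisiness: float    # analysed from the audio signal, 0.0-1.0

def map_to_generative_parameters(frame: SensorFrame) -> dict:
    """Map raw gesture and tone readings onto parameters of a generative layer.
    The mappings are arbitrary placeholders for whatever the real system uses."""
    return {
        "density": 0.2 + 0.8 * frame.sway,      # more movement, busier texture
        "register": int(48 + frame.tilt * 24),  # MIDI-style pitch centre
        "timbre_roughness": frame.noisiness,    # follow the tone quality
        "reverb_mix": frame.breathiness,        # breathier tone, wetter sound
    }

def generate_event(params: dict) -> dict:
    """Produce one note event from the current parameters (toy generative step)."""
    return {
        "pitch": params["register"] + random.choice([-2, 0, 3, 5]),
        "velocity": int(40 + 60 * params["density"]),
        "reverb": params["reverb_mix"],
    }

frame = SensorFrame(tilt=0.6, sway=0.3, breathiness=0.7, noisiness=0.2)
print(generate_event(map_to_generative_parameters(frame)))
```

The point of such a mapping layer is that the player's existing, often unconscious technique becomes a control surface: the gestures drive the generative system without requiring the performer to learn a separate interface.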

Whoa.

Boston.
OK. 11.5 days of writing (I started on 23 May), for 14 hours straight on some days - all up I've been writing for about 143 hours so far, Word tells me (that's 12.5 hours per day, on average). 363 pages. 156,000 words. That's 1090 words per hour, but includes quotes, of course. 12 chapters written so far, and four more to go. If I haven't blogged for a while, it's because I've used up my allocation of words for the day.

So, writing the produsage book is going OK, but it will need some editing - the final book is supposed to be only 300 pages, or 135,000 words. (Hey, I could stop right now...) Just as well, though, because it's not quite right in a few places yet, and I'm throwing in altogether too many quotes at times. That's always been an issue for me - lots of research, lots of interesting quotes from the research, and I'd love to use them all, but I can't let them overwhelm what I'm actually trying to say. So, I'm learning to throw out more than I'm using. Slowly.

Settling In in Boston

Boston.
Other than during the MiT5 conference, I realise I haven't really blogged that much from Boston yet - I think I'm still getting over the jetlag from the flight here... It's certainly not as if there wasn't plenty to talk about. This is my third time in Boston, although the last couple of times I was here only for a few days and a few hours, respectively - but at least I already have something of a general idea where things are and how I get there. It will still take me a while to find my way around MIT, though - if QUT's campuses occasionally seem maze-like, they've got nothing on MIT's sprawling expanse, which would be bewildering even if some of the architecture here hadn't been built in deliberate, flagrant disregard for architectural orthodoxy.

Web2.0 Critiques

Boston.
(I'm afraid I accidentally deleted a couple of comments here last night - please repost them if you can!)

It's the last day of MiT5, and we're in the first session of the day. Mary Madden from the Pew Center is the first speaker, on Socially-Driven Music Sharing and the Adoption of Participatory Media Applications. She notes that the term Web2.0 is imperfect but convenient for summarising many of the current developments in the online world. Tim O'Reilly defines Web2.0 as harnessing social effects; it may not be a revolution, but there have been important changes. We now need to think critically about how and why it emerged as a major force in the first place.

Tools for New Media Literacies

Boston.
The last MiT5 plenary session for today is on Learning through Remixing, and Henry Jenkins introduces it through examples of remixing as pedagogical practice in earlier times. This can perhaps be described as a process of taking culture apart and putting it together again, in order to better understand how it works.

The first speaker on the panel is Erik Blankinship, of Media Modifications, which builds tools for exposing and enhancing the structure of media in order to make them more understandable to all (and he demonstrates this now using a few redacted clips from Star Trek: TNG). Some of these will also be online soon at adapt.tv. Another example shows clips from The Fellowship of the Ring (the movie) next to the text of The Fellowship of the Ring (the book), and there is even a comparison of the Zeffirelli and Luhrmann versions of Romeo & Juliet with the original Shakespeare text, which allows the viewer to see how differently the two directors interpreted the text, and even to create hybrid versions with the 1996 Juliet and the 1968 Romeo interacting with one another. Fascinating stuff!

Six Degrees of Musical Separation, Quantified

I was interviewed for an ABC Online science story the other day, about an article published by a number of physicists recently. Not the most likely story to comment on for an Internet researcher, you might think (even if, as it turns out, my first degree was in physics) - but what's happened here is that the researchers in question have applied complex network theory to the musicians' database of the All Music Guide (AMG), which both tracks collaborations between musicians and provides recommendations of musical similarity made by its panel of expert contributors. What's come out of this are two datasets, one indicating the network of collaborations across the 30,000-odd musicians tracked by AMG, and one showing the similarities between these artists as AMG's pundits see them.
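For readers curious what 'quantified' means in practice: measures such as the average shortest path length (the 'degrees of separation' figure) and the degree distribution can be computed directly from a collaboration graph. A minimal sketch using the networkx library and a made-up handful of artists - not the AMG data itself - might look like this:

```python
import networkx as nx

# Toy collaboration network: nodes are musicians, edges mean "played on a
# record together". The actual study worked with some 30,000 AMG artists.
G = nx.Graph()
G.add_edges_from([
    ("Artist A", "Artist B"),
    ("Artist B", "Artist C"),
    ("Artist C", "Artist D"),
    ("Artist B", "Artist E"),
    ("Artist E", "Artist F"),
])

# Average shortest path length - the "degrees of separation" figure -
# computed over the largest connected component of the graph.
largest_cc = max(nx.connected_components(G), key=len)
subgraph = G.subgraph(largest_cc)
print("Average separation:", nx.average_shortest_path_length(subgraph))

# Degree distribution, often inspected for small-world or scale-free structure.
print("Degrees:", dict(G.degree()))
```

Run over the full AMG collaboration dataset rather than this toy graph, the same calculations produce the kind of separation and connectivity figures that the physicists report.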
