Toronto Electroacoustic Symposium 2009 Programme

August 6-8, 2009

A co-production of: The Canadian Electroacoustic Community (CEC) and New Adventures in Sound Art, Toronto

All TES events take place at the Artscape Wychwood Barns, 601 Christie Street, Toronto

Events will be webcast at: http://webcast.naisa.ca

2009 Registration

Register at: https://www.naisa.ca/eshops/esymposium.php

Directions

If walking:

From St. Clair West subway station, walk west to Christie Street and then south to Benson Street. Once you pass Benson, you can enter through one of the side doors off either Christie or Wychwood.

If coming by bus:

Take the Christie bus either north from Christie Station or south from St. Clair West Station and get off at Benson Street. Alternatively, you can take the 512 St. Clair streetcar, get off at Christie, and walk south to Benson.

If driving:

From the 401, take the Allen Expressway south to Eglinton. Go west (right) on Eglinton, then south (left) on Oakwood, and take another quick left onto Vaughan Street. Follow Vaughan to St. Clair West and turn west (right) onto St. Clair. From there, turn south (left) onto either Wychwood or Christie. There is street parking on the Christie and Wychwood sides of the building.

Symposium Schedule

Opening Concert: Thursday 6th August 2009, 8 PM. Loop Centre for Lively Arts and Learning, #176 Artscape Wychwood Barns, 601 Christie Street

Symposium Dinner: Thursday 6th August 2009, 10 PM. Reservations have been made at Mezzetta's, 681 St. Clair Avenue West

Mezzetta's is situated at the corner of Christie and St. Clair Avenue West, a five-minute walk from the Artscape Wychwood Barns.

Note: Questions about the schedule or any other aspect of the symposium can be directed to David Ogborn, chair of the symposium committee, by email david.ogborn@utoronto.ca or (especially during the symposium) by telephone 289.439.3252.

Symposium Sessions (Day 1): Friday 7th August 8:30 AM – 5 PM

Loop Studio Centre for Lively Arts, #170 Artscape Wychwood Barns, 601 Christie Street

8:30-9:00 AM Introductions and Coffee
9:00-10:30 AM Keynote Lecture by Annette Vande Gorne
Electroacoustique (de type acousmatique) et passé musical: rupture ou continuité? (Acousmatic) electroacoustics and the musical past: rupture or continuity?
10:30-11:00 AM Coffee
11-12 noon Paper Session 1: History and Communities — chair: Kevin Austin (Concordia University)
Building a Trans-local Community — Darren Copeland and Nadene Thériault-Copeland
Elements of Identity in Latin American Electroacoustic Music — Leonardo Secco
12-1:30 PM Lunch
1:30-3 PM Paper Session 2: Sound and Space — chair: Fiona Ryan (University of Toronto)
Live Ambisonics: a design for quick and evocative live spatialization of multiple inputs — Carey Dodge
New Tools for Sound Spatialization developed at the Université de Montréal — Robert Normandeau
Compositional Approaches to Multi-channel Space: Spatialization Without Panning — Benjamin Thigpen
3-3:15 PM Coffee
3:15-3:30 PM Listening Session: Strike! by Stephen Kilpatrick
3:30-5 PM Paper Session 3: Sound and Image — chair: Jean Piché (Université de Montréal)
A phenomenological time-based approach to videomusic composition — Freida Abtan
Creating integrated music and video for dance: Lessons learned and lessons ignored — Jeffrey Hass
Collaborative Interpretation: Interactive Video in Performance — Krista Martynes and Julien-Robert Legault Salvail

Ecology: Water, Air, Sound Concert, 8 PM

Loop Centre for Lively Arts and Learning, 601 Christie St #176

Symposium Sessions (Day 2): Saturday 8th August 8:30 AM – 5 PM

Loop Studio Centre for Lively Arts, #170 Artscape Wychwood Barns, 601 Christie Street

8:30-9 AM Coffee
9-10:30 AM Paper Session 4: EA Education and Pedagogy — chair: Emilie LeBel (University of Toronto)
Musically Enhanced Narrative Inquiry: A New Research Methodology — Benjamin Bolden
Considerations for designing and delivering Electro Acoustic awards online, in a “virtual conservatoire” — Simon Kilshaw
The Aural Skills of Undergraduate Electroacoustic (EA) Music Majors in the Context of a New Aural Training Method Designed for EA — Eldad Tsabary
10:30-10:45 AM Coffee
10:45-11 AM Listening Session: Santa Barbara Soundscape by Salman Bakht
11-12 noon Paper Session 5: (Acoustic) Ecology: Water, Air, Sound — chair: Nadene Thériault-Copeland (New Adventures in Sound Art)
Noise, Nonsense, and the New Media Soundscape — Salman Bakht
Wind Coil Sound Flow — Ken Gregory
12-1 PM Lunch
1-2:30 PM Paper Session 6: Interpretation, Criticism and Analysis — chair: Hilary Martin (York University)
Shadow-walks in Toronto — Viv Corringham
Composition as “machine to think with”: Aspects of narrative within electroacoustic music — Stephen Kilpatrick
Referential Sounds, Metaphors, and Compositional Strategies in EA — Jason Stanford
2:30-2:45 PM Coffee
2:45-3 PM Listening Session: Slumber by John Gibson, For Tape by Adam Scott Neal
3-4:30 PM Paper Session 7: Live Electronics and Interaction — chair: David Ogborn (McMaster University)
Spectral Delay Processing as a Compositional Resource — John Gibson
A Continuum of Indeterminacy in Laptop Music — Adam Scott Neal
Multi-Agency and Realtime Composition: In Equilibrio — Arne Eigenfeldt
4:30-4:45 PM Listening Session: In Equilibrio by Arne Eigenfeldt
4:45-5 PM Closing Discussion

Two Portraits Concert: Benjamin Thigpen and Annette Vande Gorne, 8 PM

Loop Centre for Lively Arts and Learning, 601 Christie St #176

Review Committee

Organizing Committee

Keynote Lecture by Annette Vande Gorne

Electroacoustique (de type acousmatique) et passé musical: rupture ou continuité? (Acousmatic) electroacoustics and the musical past: rupture or continuity?

XXème siècle: En France, Pierre Schaeffer bouleverse les habitudes perceptives (les 4 écoutes, le son comme forme/matière) et descriptives (le total sonore dans la typo-morphologie du traité des objets musicaux). François Bayle développe les concepts théoriques relatifs à la musique acousmatique produite et entendue par haut-parleurs (image de son, conduite d'écoute, relation archétypale à l'imaginaire, relation à la mémoire, à l'espace, modalité acousmatique de la création en studio…). Il y a aussi, évidemment, la révolution technologique qui l'accompagne (enregistrement sonore, traitements temps réel, synthèse…). Il y a donc rupture avec le passé.

Cependant le lien historique peut être maintenu. L'écoute morpho-dynamique appliquée aux écritures mélodico-harmoniques, et donc une analyse perceptive renouvelée du style de certains compositeurs sont possibles. À partir de ce point d'écoute, l'étude de Debussy m'a permis d'enrichir ma palette d'écriture sonore, comme ces peintres qui posent leur chevalet au Louvre pour étudier, en les copiant, les styles anciens. Explications et exemples sonores: "Ce qu'a vu le vent d'Est". D'autre part, l'habitude de penser la musique comme une conduite de l'imaginaire de l'auditeur grâce, entre autres, à des degrés hiérarchisés de signes (Ch. Peirce) archétypaux engendre une interaction avec le texte inédite. Dans "Yawar Fiesta", opéra électroacoustique en cours de composition, je reviens à la relation madrigalesque de Monteverdi ou Gesualdo, entre autres par l'utilisation figurative de mouvements dans l'espace, à partir des improvisations des chanteurs en studio. Explications et exemples sonores. ...L'histoire continue...

Twentieth century: In France, Pierre Schaeffer upsets perceptual habits (the 4 modes of listening, sound as form/matter) and descriptive habits (the totality of sound in the typo-morphology of the Traité des objets musicaux). François Bayle develops theoretical concepts relating to acousmatic music, produced and heard by loudspeakers (image of sound, controlled listening, archetypal relationship to the imaginary, relationship to memory, to space, the acousmatic method of creation in the studio…). There is, of course, also the accompanying technological revolution (sound recording, real-time processing, synthesis...). It is therefore a break with the past.

However, the historical link can be maintained. Morpho-dynamic listening can be applied to melodico-harmonic writing, and thus a renewed perceptive analysis of the style of certain composers is possible. From this point of listening, studying Debussy allowed me to expand my palette for composing sound, like those painters who set up their easels in the Louvre to study and copy the old styles. Explanations and sound clips: Ce qu'a vu le vent d'Est. On the other hand, the habit of thinking of music as a conduit for the imagination of the listener through, in part, a hierarchy of archetypal signs (Ch. Peirce), creates a new kind of interaction with the text. In Yawar Fiesta, an electroacoustic opera and work in progress, I return to the madrigalesque relationship of Gesualdo or Monteverdi, by the use of figurative movements in space, starting from the improvisations of the singers in the studio, among other things. Explanations and sound examples... The story continues...

Following her classical studies in Belgium, Annette Vande Gorne chanced upon acousmatics during a training course in France. Instantly convinced, by the works of François Bayle and Pierre Henry, of the revolutionary nature of this art form (disruption of perception, renewal of composition through spectromorphological writing and listening conduction, historical importance of the movement), she took a few training courses to grasp its basics, then studied musicology (ULB, Brussels) and electroacoustic composition with Guy Reibel and Pierre Schaeffer at the Conservatoire national supérieur in Paris. She founded and managed Musiques & Recherches and the Métamorphoses d'Orphée studios (Ohain, 1982). She also launched a series of concerts and an acousmatics festival called L'Espace du son (Brussels, 1984; annual since 1994), after assembling a 60-loudspeaker system, an acousmonium, derived from the sound projection system designed by François Bayle. She is the editor of the musical aesthetics review Lien and of Répertoire électrO-CD (1993, '97, '98), a directory of electroacoustic works. She also founded the composition competition Métamorphoses and the spatialized performance competition Espace du son. She gradually put together Belgium's only documentation centre on this art form, available online at www.musiques-recherches.org. She gives numerous spatialized acousmatic music performances, both of her own works and of works by international composers. Professor of electroacoustic composition at the Royal Conservatory in Liège (1986), then Brussels ('87) and Mons ('93), she founded an autonomous Electroacoustic Music section at the latter, later (2002) integrated into the European graduate studies framework. Since 1999 she has managed an international summer training session on spatialization, and since 1987 one on electroacoustic composition. Her works can be heard at every festival and on every radio program presenting media-based (previously 'tape') music. Her current work focuses on various energetic and kinesthetic archetypes. Nature and the physical world are models for an abstract and expressive musical language. She is passionate about two other fields of research: the various relationships to word, sound, and meaning provided by electroacoustic technology, and the composition of space seen as the fifth musical parameter, in its relationship to the other four parameters and the archetypes being used. Her work falls essentially in the acousmatic category, including the Tao suite and Ce qu'a vu le vent d'Est, which renews electroacoustic music's ties with the past, with a few incursions into other art forms, including theatre, dance, sculpture, etc. [English translation: François Couture, ix-07]

Paper Session 1: History and Communities (Chair: Kevin Austin) Darren Copeland and Nadene Thériault-Copeland

Building a Trans-local Community

This joint presentation will offer an alternative outlook on community building, one that recognizes individuality within the various niche groups of sound artists and offers opportunities for these artists to connect with each other and with the broader community. In this line of thinking it does not matter so much that there are several small pockets of artists in one geographic area, each creating works within a specific genre of sound art. It matters more that there is cross-pollination between these various niche groups and that there are opportunities for them to come together on an international level with other niche groups creating sound art. Why not help connect these 'pockets' or niche groups municipally, provincially, nationally and internationally by offering them opportunities to have their works presented and to learn from each other through inclusiveness rather than demanding conformity to a specific ideology? In this way one serves the needs of both the individual and the community as a whole — i.e., building a trans-local community.

In pursuit of the mandate of New Adventures in Sound Art (NAISA), we have realized that supporting diversity goes beyond choosing a diverse group of artists for our presentations; rather, it is about creating a structure that allows artists creating work within different genres of sound art to interact and learn from each other on a local, national and international level. It is important to seek out opportunities to present works in various formats (concert, broadcast, webcast) by and about the community in which a presenting organization is located, as well as to present sound pieces from a broad range of sound art genres (experimental sound art performances, sound-rich documentaries, radio art, electroacoustic music, sound from installations, etc.) by artists ranging from beginners to established artists, from youth to seniors, and from local, provincial, national and international geographic areas. It is also important to continually provide opportunities for artists to learn from each other, as well as to enjoy the exposure offered by placing their works within contexts that are not bound by the conventions of established genres.

Nadene Thériault-Copeland is a composer and pianist who promotes the dissemination of new and experimental sound art through her work as Managing Director of New Adventures in Sound Art (NAISA), which has included editing three educational booklets (The Radio Art Companion, 2002; The Sign Waves Companion, 2002; and Sound in Space, 2003) and spearheading the NAISA Youth Initiative. Nadene received her B.F.A. in Music from York University in 1991, where she studied composition with James Tenney. Nadene is also the current chair of the board of directors of the Canadian Association for Sound Ecology.

Darren Copeland is a Canadian Sound Artist creating work for radio, performance, and installation with a focus on soundscape composition and multichannel spatialization. His electroacoustic concert and radio works have been commissioned and presented worldwide (ZKM, Kunstradio, Engine 27, La Muse en Circuit), have received mentions at a diverse range of international competitions (New York Festivals Awards for Radio and Television, Phonurgia Nova, Vancouver New Music, Luigi Russolo, etc), and are published on the internationally recognized empreintes DIGITALes label. His writing has appeared in Musicworks, Circuit, Canadian Theatre Review, and the Soundscape Journal. He has studied electroacoustic composition under Barry Truax (Simon Fraser University) and Dr. Jonty Harrison (University of Birmingham). He is the founder and Artistic Director of New Adventures in Sound Art.

Leonardo Secco

Elements of identity in Latin American Electroacoustic Music

This work is an exploration of the different ways by which Latin American composers have integrated elements of identity in their electroacoustic works. The idea came to mind while exploring the Latin American Electroacoustic Music Collection created by Ricardo Dal Farra and developed at the Daniel Langlois Foundation for Art, Science and Technology, which contains more than 1700 compositions created by Latin American composers during the last 50 years; we realized that many of the compositions included references related to the history, language, culture and territory of a particular nation.

In this research we have selected and analyzed a number of electroacoustic pieces in which, in our view, composers from Latin America have included some type of identity message in their music. We observed two basic ways of achieving this: firstly, by using what we call Explicit Sound Elements (referential real-world sounds more or less transformed by different processes), the sources being anything from radiophonic material to field recordings of any type or music excerpts from recorded media; secondly, through Identity Concepts expressed in texts included in CD booklets or program notes. These two methods often complement each other.

An identity message can be more or less recognizable by listeners depending on whether or not they belong to the nation or community referred to by the composer. In this sense, written concepts in program notes can be very important in helping the listener understand an identity concept a composer wishes to transmit, especially when the sound material used is mainly abstract. The different approaches observed illustrate these differences well:

Tramos (1975) by Eduardo Bértola uses radiophonic sound material carrying strong identity messages (related to local dialect, historical events and public figures), understandable by most Argentinean residents. Gonzalo Biffarella's Mestizaje (1993-94) uses abstract sounds mixed with recorded voice sounds (ritual songs from an existing aboriginal nation) in a more conceptual approach; in this case program notes are important to grasp the message: the composer wishes to express the contrast between aboriginal and occidental cultures. A different approach is that of J. M. Candela in his piece Bajan gritando ellos (2000); here the composer uses a poem read in Mapudungun (the language of the Mapuche nation) about the tragic destiny of these people, and the identity of this nation is explicitly shown by two elements: their language and the specific historical events described in the poem.

Leonardo Secco was born in Montevideo (Uruguay) in 1966, where he studied composition, violin and sound synthesis. He has lived in Montreal since 2005. In 2007 he obtained a degree (Bac) in mixed composition at the Université de Montréal. In 2008 he won a grant from the Fonds de Bourses de l'Université de Montréal. Presently he is completing a master's degree in electroacoustic composition with Robert Normandeau, and teaching electroacoustic composition within the Musiques numériques program at the Université de Montréal.

Paper Session 2: Sound and Space (Chair: Fiona Ryan)

Carey Dodge

Live Ambisonics: a design for quick and evocative live spatialization of multiple inputs

This paper presents the software Live Ambisonics, created by Carey Dodge. The beta version is to be released on the date of presentation in August 2009. This software allows for quick and evocative live spatialization of multiple inputs, whether recorded, live, or otherwise. The thrust in the creation of this software is interaction design; with it, the author wishes to push forward the discussion of interaction design for the live spatialization of multiple sounds. It uses a single-monitor interface, mouse, keyboard and an optional Wii remote. The software was created using MaxMSP 4.6.3 and uses third-party externals listed in the reference section. The paper will discuss the technical and theoretical concepts used to create the application, followed by some practical examples and possibilities. With the presentation of this free software, the author hopes to make complex spatialization techniques more accessible to electroacousticians, musicians and anyone else with an interest in sound.
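To give a concrete sense of the kind of math such a tool typically builds on, here is a minimal first-order Ambisonics sketch in Python/NumPy. It is illustrative only, not Dodge's MaxMSP implementation: the FuMa-style encoder, the simple projection decoder and the function names are all assumptions.

```python
# Minimal first-order (horizontal) Ambisonics sketch: encode a mono source at an
# azimuth, then decode to a ring of speakers. Illustrative only, not Live Ambisonics.
import numpy as np

def encode_fuma(mono, azimuth):
    """Encode a mono signal into horizontal first-order B-format (FuMa weighting)."""
    w = mono / np.sqrt(2.0)        # omnidirectional component
    x = mono * np.cos(azimuth)     # front-back figure-of-eight
    y = mono * np.sin(azimuth)     # left-right figure-of-eight
    return w, x, y

def decode_ring(w, x, y, n_speakers=8):
    """Basic projection decode to n equally spaced speakers on a horizontal ring."""
    angles = 2.0 * np.pi * np.arange(n_speakers) / n_speakers
    feeds = [(np.sqrt(2.0) * w + 2.0 * (x * np.cos(a) + y * np.sin(a))) / n_speakers
             for a in angles]
    return np.stack(feeds)         # shape: (n_speakers, n_samples)

# Example: pan a 1 kHz tone once around the listener over four seconds.
sr, dur = 44100, 4.0
t = np.arange(int(sr * dur)) / sr
tone = 0.5 * np.sin(2.0 * np.pi * 1000.0 * t)
azimuth = 2.0 * np.pi * t / dur    # source azimuth sweeps through a full circle
speaker_feeds = decode_ring(*encode_fuma(tone, azimuth))
```

In practice a performance system would also add distance cues and a graphical or gestural control layer, which is where the interaction design discussed in the paper comes in.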

Carey Dodge is a freelance sound designer and programmer for new media. He has a passion for multi-channel sound and for how technology can create new and exciting experiences for people. He has studied and worked across Canada and in Europe. His most recent body of work was created during his master's studies at the Sonic Arts Research Centre in Belfast. His dissertation project was a reactive sound art installation in the Tropical Ravine of the Belfast Botanic Gardens, in which he used algorithms and interviews with park dwellers to create an immersive and contemplative environment for park visitors. He is currently residing in Vancouver, where he is enjoying the mountains and has joined forces with his brother's theatre company, Boca Del Lupo. Boca Del Lupo's next show will be premiered at the Harbourfront Centre in the fall of 2010. Find out more and feel free to contact Carey at www.careydodge.ca.

Robert Normandeau

New Tools for Sound Spatialization developed at the Université de Montréal

At the beginning of the 21st century, we, like many other electroacoustic music studios around the world, installed an octophonic system. Today, as most of our students are using audio sequencers (Digital Performer, Logic, etc.) to compose their works, we have to recognize that, on the Mac platform, there are no tools to compose in octophony. Moving a sound around listeners through 8 speakers is not possible within any of the audio sequencers available (only Nuendo 3 possessed this feature). The main reason for this predicament is purely commercial: the companies that design audio sequencers are linked to the film industry and thus to its spatialization formats. This means that one will find only the traditional 5.1, 6.1 and 7.1 formats and, less often, the 10.2 format (Digital Performer only) – no 8-channel panning format.

One solution to this problem is to use a spatializer of some sort: IRCAM's Spatialisateur or GMEM's HoloSpat (Groupe de musique expérimentale de Marseille), for instance. Many others have been developed on different platforms and within software such as MaxMSP, SuperCollider and Csound. The problem with these tools is that they are incorporated at the end of the compositional process. Thus, composers have to first compose the timeline of the music, then bounce all of the tracks, and only then can they work on spatialization. This is nonsense. Space is not just a flavour added at the end of the composition. It should be intimately integrated into the daily practice of the composer.

With these known limitations in commercial software, we, at the Faculté de musique, are developing our own solutions. However, before this process began, we looked around and discovered two projects that provided potential solutions to our problem. The first was developed by the French composer Jean-Marc Duchenne, who has been involved in multichannel composition for more than 20 years and who created a web site dedicated to the subject. Within his series of plugins called Acousmodules, he has dedicated some to sound spatialisation. Originally developed for the Windows platform, some have recently been transferred to the Mac with the help of software designed in Montréal by Antoine Massout. This software, called SonicBirth, is a toolbox made to design AU plugins. Unfortunately, since SonicBirth’s creation, Apple has made some basic changes to Leopard’s Core Audio, causing the Acousmodules to function erratically.

With the help of a research grant from Hexagram (a multi-university research group based in Montréal), I have put in place a research group, the Groupe de recherche en immersion spatiale (GRIS), whose goal is to develop tools for multichannel composition. Antoine Massout is part of the group and, with the permission of Jean-Marc Duchenne, we have designed an autonomous AU plugin that allows the user to work in a limited multi-speaker environment. The plugin will be presented at the conference. The second project that the GRIS is involved in is the Zirkonium project. Originally developed at the ZKM, Zirkonium is software intended for sound spatialization within a dome of speakers. In 2008, thanks to a subsidy from the Canada Foundation for Innovation, we built a dome of 36 speakers at the Faculté de musique. Through our collaboration with the ZKM, we continue the development of the Zirkonium software. This year we are further developing the AU plugin that allows composers to integrate an immersive space into their music at the same time and on the same level as any other parameter. The Zirkonium AU will be presented at the conference.

Robert Normandeau: Born March 11, 1955 in Québec City (Canada). MMus (1988) and DMus (1992) in Composition from the Université de Montréal. Founding member of the Canadian Electroacoustic Community. Founding member of Réseaux (1991), a concert society. Prize-winner of the Bourges, Fribourg, Luigi-Russolo, Musica Nova, Noroit-Léonce Petitot, Phonurgia-Nova, Stockholm and Ars Electronica (Golden Nica in 1996) international competitions. His work appears on many compact discs, among them six solo discs: Lieux inouïs, Tangram, Figures, Clair de terre and the DVD Puzzles, published by empreintes DIGITALes, and Sonars, published by Rephlex (England). He was awarded two Opus Prizes from the Conseil québécois de la musique in 1999: «Composer of the Year» and «Record of the Year in Contemporary Music» (Figures, on the empreintes DIGITALes label). He was awarded the Masque 2001 for Malina and the Masque 2005 for La cloche de verre, prizes for the best music composed for a theatre production, given by the Académie québécoise du théâtre. He has been Professor of Electroacoustic Composition at the Université de Montréal since 1999.

Benjamin Thigpen

Compositional Approaches to Multi-channel Space: Spatialization Without Panning

I would like to talk very concretely about working with multi-channel spatialization at the compositional level of electroacoustic music, focusing on the use of a circle of loudspeakers positioned in a horizontal plane surrounding the public. Thus I will not consider the vertical dimension, nor questions of sound projection/diffusion in concert. I take it for granted that spatial properties and relations can be important and musically powerful compositional parameters, and also that the perceived effect (hence the musical importance) of spatialization is completely dependent on the particularities of human auditory spatial perception.

I will present approaches to spatialization that I have used as a composer, basing my talk entirely on my own personal composing and listening experience. The general principles discussed will be substantiated by sound examples drawn from my work and accompanied by explanations and demonstrations of the specific spatialization techniques used.

Given that the circle of loudspeakers defines a horizontal plane and that it is possible to simulate sound localization at almost any point on this plane, the most obvious approach to multi-channel spatialization is to position virtual sound sources within this plane and then move them around (as John Chowning did in 1971). I have often worked in this way, using techniques of multi-channel panning, distance control and the Doppler effect. In 2006, I wrote a program for NAISA which performs this sort of spatialization, and just prior to the symposium I will be giving a workshop on the program. In this paper, therefore, I will discuss other, alternative approaches.
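As a point of reference for this "most obvious approach," the sketch below shows the three ingredients named here: panning on a speaker ring, distance control, and a Doppler effect realized as a time-varying delay. It is a minimal Python/NumPy illustration under assumed simplifications (pairwise equal-power panning, 1/r attenuation); it is not the program Thigpen wrote for NAISA.

```python
import numpy as np

def spatialize_ring(mono, sr, azimuth, distance, n_speakers=8, c=343.0):
    """Move a virtual source around a speaker ring: pairwise equal-power panning,
    1/r distance attenuation, and Doppler via a time-varying fractional delay.
    azimuth and distance are per-sample arrays (radians, metres)."""
    n = len(mono)
    out = np.zeros((n_speakers, n))
    spacing = 2.0 * np.pi / n_speakers
    delay = distance / c * sr                      # propagation delay, in samples
    for i in range(n):
        # Doppler: read the source "distance/c" seconds in the past (linear interp).
        rp = i - delay[i]
        i0 = int(np.floor(rp))
        if not (0 <= i0 < n - 1):
            continue
        frac = rp - i0
        s = (1.0 - frac) * mono[i0] + frac * mono[i0 + 1]
        s /= max(distance[i], 1.0)                 # simple 1/r distance attenuation
        # Equal-power pan between the two speakers adjacent to the source azimuth.
        pos = (azimuth[i] % (2.0 * np.pi)) / spacing
        a, f = int(pos) % n_speakers, pos - int(pos)
        out[a, i] += s * np.cos(f * np.pi / 2.0)
        out[(a + 1) % n_speakers, i] += s * np.sin(f * np.pi / 2.0)
    return out
```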

1) Fixed speaker assignments: using the speaker as a point-source “instrument.” Each sound-channel, rather than moving among the speakers, is assigned to an individual speaker and remains there. The perceptual result (based on the psychoacoustic phenomenon of auditory streaming) is that different simultaneous sounds fuse much less than when they are panned between speakers. Thus multiple speakers are used to maintain the independence of “voices” in polyphonic or heterophonic musics; this results in a more transparent sound and permits the composer to work with a greater density of sonic “information.”

2) Decorrelation: using or creating small differences between the signals on different channels. Channel decorrelations can result from processing, from non-standard recording techniques or both, and they can produce a wide variety of auditory effects (based again on psychoacoustic phenomena). The listener may perceive often quite complex rapid movements of the entire sound or of only parts of it; s/he may experience a sensation of “spaciousness” or of being “enveloped” by the sound; the sound may seem to be somehow physically, palpably “present,” spread out in the space between and beyond the speakers. Rather than placing the sound in an external space, decorrelation techniques open up the space within the sound itself.
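One common recipe for producing such inter-channel differences is to give each channel its own random-phase all-pass impulse response; the sketch below (Python/NumPy) is offered in that spirit and is an assumption on my part, not Thigpen's stated processing chain.

```python
import numpy as np

def decorrelate(mono, n_channels=8, ir_len=2048, seed=0):
    """Spread one mono signal across n channels by convolving each copy with a
    different short all-pass impulse response (flat magnitude, random phase).
    A classic decorrelation sketch, not a specific composer's technique."""
    rng = np.random.default_rng(seed)
    outs = []
    for ch in range(n_channels):
        # Flat-magnitude, random-phase spectrum -> short all-pass-like IR.
        phase = rng.uniform(-np.pi, np.pi, ir_len // 2 - 1)
        spectrum = np.concatenate(([1.0],
                                   np.exp(1j * phase),
                                   [1.0],
                                   np.exp(-1j * phase[::-1])))
        ir = np.real(np.fft.ifft(spectrum))
        ir *= np.hanning(ir_len)                   # taper to avoid clicks
        outs.append(np.convolve(mono, ir)[:len(mono)])
    return np.stack(outs)                          # shape: (n_channels, n_samples)
```

Shorter impulse responses keep transients sharper; longer ones increase the sense of spaciousness, at the cost of smearing.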

While these approaches may appear less seductive than those which allow sounds to fly around in potentially fascinating patterns, from a perceptual and musical point of view they are equally effective, if not more so.

Benjamin Thigpen, nomad, born in the United States, with degrees in English Literature, Comparative Literature and “Esthetics, Technologies and Artistic Creations,” immigrated to Paris at the age of 31. Since then, he has composed at GRM (Paris), at Musiques et Recherches (Belgium), at SCRIME (Bordeaux), at EMS (Stockholm), at the Visby International Centre for Composers (Sweden), at STEIM (Amsterdam), at Djerassi (California), at l'Espace Totem (Montreal), in his bedroom and in the train. After 6 years teaching computer music at IRCAM (Paris), followed by a brief period at the University of Washington (Seattle), he currently teaches at the Conservatory of Cuneo (Italy) and at the Royal Conservatory of Mons (Belgium). His music is concerned with issues of energy, density, complexity, movement, simultaneity and violence, and he often works extensively with space as a primary compositional parameter. http://thigpen.free.fr/ http://www.myspace.com/bnthigpen

Paper Session 3: Sound and Image (Chair: Jean Piché)

Freida Abtan

A phenomenological time-based approach to videomusic composition

This paper considers the history and discourse of visual music and the multiple ways that sound and image have been engaged through its practice, in order to set up a conceptual framework to analyze videomusic. It focuses on the theory of Michel Chion who studied the perceptually binding relationship of sound and image in the context of film, to turn the dialog toward the relationship between movement and combined gesture in the ocular and aural senses. Finally, it speculates on ways in which relational movement and the perception of sound and image may be exploited to achieve different temporal affect.

Freida Abtan's sound work descends from formal electro-acoustic compositionstrategies and from the great body of experimental electronic music that concentrates on the exploration of the spectral properties of sound. She primarily works with samples of both musical and non-musical objects that she records herself and then manipulates, often beyond recognition, through techniques derived from musique concrete and through successive layers of digital signal processing. Freida uses structures reminiscent of popular music and more abstract compositional variants to sequence these sounds into melodic songs before incorporating her own treated voice.

Jeffrey Hass

Creating integrated music and video for dance: Lessons learned and lessons ignored

Over the past five years, I have composed music and video as part of a series of collaborations with choreographer Elizabeth Shea and lighting designer Rob Shakespeare. During that time, we have created four large-scale works, some twenty minutes or more in duration. While our first piece had a small amount of actual video, it did incorporate video tracking, which in turn affected the music in real time. Each successive work incorporated video and interactive music in varying ways—full live processing of the dancers’ images, fixed video, infrared tracking and projection, etc. Working through the problems created and the problems solved in having competing visual and aural stimuli for an audience to synthesize has been a journey of both discovery and confusion. The language that three artists from different disciplines create in order to work with each other, when exploring a combined medium none is fully experienced in, is one that still continues to evolve. I would like to share examples from each of these works, which are quite different from each other, and discuss the pros and cons of each approach and how each influenced the subsequent work…or how these lessons were ignored in favor of a completely different approach.

Dancing Till the Cows Come Home exemplifies fully live video processing. The Nature of Human has three contrasting movements. The first, Mindstorms, uses infrared tracking and projection of modified silhouettes back onto the dancers—the music was created by sonifying data of synapses firing, which also triggers some video events. The second movement, Magnetic Resonance Music, is fixed video with pre-recorded material from the choreography. Unstrung, for violin, dancer and interactive electronics, also features fixed but more minimal video, with a live musician on stage and an expanded sound palette from the electronics.

Full online streaming examples, credits and program notes for all these works can be found in the “Works for Dance” section at: http://music.indiana.edu/department/composition/Recordings/Hass/Hass.shtml

Jeffrey Hass is currently Professor of Composition at Indiana University, Bloomington, where he serves as the Director of the Center for Electronic and Computer Music (CECM). His compositions have been premiered by the Louisville Orchestra, Memphis Symphony and the Concordia Chamber Orchestra, and have had performances at Lincoln Center and at many national conferences. His band and orchestral works are published by MMB Music, Ludwig Music, and Magnetic Resonance Music.

Awards include the National Band Association Competition, the Walter Beeler Memorial Award, the Lee Ettelson Composer’s Award, the United States Army Band’s Composition Award, the ASCAP/Rudolph Nissim Award, the Heckscher Orchestral Award, a Bogliasco Foundation Fellowship and the Utah Arts Festival Orchestral Commissioning Award. He is currently working on new interfaces for interactions between dance and music. Recordings of his works have been released by the Indiana University Press, the Society for Electroacoustic Music in the US (SEAMUS), Arizona University Recordings, Albany Records and RIAX Records.

Krista Martynes and Julien-Robert Legault Salvail

Collaborative Interpretation: Interactive Video in Performance

In this presentation, we address the question of how to integrate interactive video into a musical performance without it becoming a simple visual effect with no precise discourse, and without the music becoming subordinate to the video, which would result in an unaesthetic dichotomy between these two parts. To avoid this dichotomy, an interaction is necessary. The main object of our research is to find possibilities for creating this interaction. As this is a new form of multimedia performance, experimentation is necessary in order to find which approaches are most effective and elegant.

Contrary to “video-music”, which exists on a fixed medium, the video proposed in our research is interactive and reacts to the musician, leaving him or her almost all of the freedom of expression that was present before the video was added. These possibilities of interaction will be divided into three categories: video that follows music, music that follows video, and a mixture of these two domains. The last category creates the most effective musical result. Another topic addressed in our presentation is whether the effectiveness of the video or the music depends on the staging, that is, the placement of the musician and their relationship to the video screen (for example, placement behind or in front of the screen, working with a transparent screen, mobility of the musician, physical integration of the musician with the projection, etc.), and a comparison of the flow of this visual information as communicated to the listener.
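As one small illustration of the first category, "video that follows music," a control signal can be derived from the audio and mapped onto a video parameter. The sketch below (Python/NumPy) is a hypothetical mapping, not the authors' actual performance system: it computes one smoothed RMS value per video frame that could drive, say, brightness or opacity.

```python
import numpy as np

def audio_to_video_param(audio, sr, fps=30, attack=0.01, release=0.25):
    """One control value per video frame ("video follows music"): an RMS envelope
    follower with separate attack/release smoothing. Purely illustrative."""
    hop = sr // fps
    env, out = 0.0, []
    a_att = np.exp(-1.0 / (attack * fps))      # fast smoothing when level rises
    a_rel = np.exp(-1.0 / (release * fps))     # slower smoothing when level falls
    for start in range(0, len(audio) - hop, hop):
        rms = np.sqrt(np.mean(audio[start:start + hop] ** 2))
        coeff = a_att if rms > env else a_rel
        env = coeff * env + (1.0 - coeff) * rms
        out.append(env)
    return np.asarray(out)   # scale to 0..1 and send to the video engine per frame
```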

After having composed instrumental and electroacoustic music, Julien-Robert Legault Salvail now composes mixed music. To create accessible contemporary music, he uses technological possibilities to integrate video into his mixed music. He is interested in different media, having composed pieces for dance and film.

Krista Martynes did her Bachelor’s in clarinet performance at the University of Northern Colorado, continuing in Europe with a Master’s in Musicology at the University of Paris VIII. As a contemporary musician and improviser she has been featured as a soloist at many festivals throughout North America and Europe, and has worked with many multimedia composers. She is currently working on a doctorate at the University of Montreal, specializing in and researching instrumental and mixed-media possibilities.

Paper Session 4: EA Education and Pedagogy (Chair: Emilie LeBel)

Benjamin Bolden

Musically Enhanced Narrative Inquiry: A New Research Methodology

I am a composer, teacher, and education researcher, schooled and practiced in qualitative research traditions. Recently I have been developing an entirely new way of doing research – a less cautious, more experimental, and more exciting response to my developing understanding of educational inquiry. This paper outlines the development of this new methodology.

‘Musically Enhanced Narrative Inquiry’ sprang from my work with arts-based research and narrative inquiry, in particular the notions that:
- people learn from and through stories
- people learn not only by listening to stories, but also by creating and sharing them
- art can be a vehicle for developing understandings – not only when people experience art, but also when people engage in creating art
- art can tell stories in ways that expository telling cannot – art can express the ineffable

I decided to ‘musically enhance’ a particularly resonant portion of a research interview. The participant, an experienced educator, related an anecdote brilliantly capturing the contrasting pedagogical approaches of two music teachers. In my musical representation of the findings – the audio document – I was able to employ compositional techniques to suggest to the listener particular ways of perceiving and contextualizing this data, and to communicate possible analytical interpretations. In this audio representation of my research the participant’s unique voice is not only captured, literally, but also enhanced with musical illustrations. The infinitely powerful vehicle of music

Dr. Benjamin Bolden is an assistant professor of music education at the University of Victoria and editor of the official journal of the Canadian Music Educators’ Association. His research focus is teaching composing, and his work on the subject has been presented and published nationally and internationally. Ben holds a PhD in music education from the University of Toronto, an M.Mus in composition from the University of British Columbia, a B.Ed from OISE/UT, and a B.Mus from Carleton University. As a teacher, Ben has worked and made music with pre-school, elementary, secondary, and university students. Ben is also an associate composer with the Canadian Music Centre; his music has been performed by a broad variety of professional and amateur ensembles. Ben has extensive performance experience as a singer and professional stage actor.

Simon Kilshaw

Considerations for designing and delivering Electroacoustic awards online, in a “virtual conservatoire”

This paper addresses the challenges and considerations for teaching Electro-acoustic music online, while trying to uphold the 1:1 practical tuition ethos of a conservatoire. The presentation addresses the nature of the blend of technologies and media to nurture the individual learning style of a creative EA composer.

The presentation tackles the considerations for designing the individual learning experience of the student, in terms of synchronous and asynchronous learning, our role as e-moderators, electroacoustic-based e-tivities, signposting, and media-rich VLE (virtual learning environment) content.

And finally, the presentation will demonstrate elements of a virtual conservatoire, linked to a studio back in the UK, with full two-way control of the student’s workstation.

As a lecturer in Music Technology at the Royal Welsh College of Music and Drama, Simon Kilshaw explores the exciting potential of technology to teach, compose, perform and create music. His research and practice primarily relate to interactive gestural performance and composition. More recently, in designing the new online MMus (Masters in Music) at RWCMD, Simon’s research has been in the area of developing interactive learning environments specifically for musicians - an opportunity to “digitise” the individual learning experience of an online conservatoire student. Simon’s latest project is programming computer music systems to enable real-time interactive digital classrooms and creative learning environments. In 2007, Simon founded “Sonic Arts in Wales”, an organisation to foster and promote sonic arts and electroacoustic music in Wales, which supports members right across the country and proudly boasts Bernard Parmegiani as Honorary Patron.

Eldad Tsabary

The Aural Skills of Undergraduate Electroacoustic (EA) Music Majors in the Context of a New Aural Training Method Designed for EA

The aural training needs of electroacoustic (EA) musicians may not be addressed effectively by traditional aural training due to the fact that the broad artistic EA medium rejects the limitations of tonality, metric rhythm, equal temperament, pitched sound sources, notation, performance, and indeed performers. EA artists usually design their own sound material, deal with aspects of space, manipulate spoken/sung content, and shape the overall sound of their works, and therefore need additional listening skills similar to those of instrument builders, acousticians, phoneticians, and sound producers, among others.

In the upcoming academic year 2009-10 I will be conducting an action-research study of a relatively new aural training method designed for electroacoustic musicians at Concordia University. Using action research methodology, this study will examine the aural skills of first-year undergraduate EA music majors at Concordia University as they are taught with a new aural training method designed for EA music, for the purpose of better understanding and improving the students’ skill-acquiring process. At the symposium, I will introduce this study in the context of electroacoustic ear training, its methodology, and its role in bridging the fields of electroacoustics and music education.

Eldad Tsabary has been involved with EA ear training research since 2005 when he taught the first specialized EA ear training course at Concordia University. His deep interest in EA pedagogy and education has brought him to pursue a doctorate in music education at Boston University, which he will be completing with his upcoming action-research study. He has given talks and workshops on EA ear training at CRES/CFRO Co-op radio (Vancouver), TES 2008, and Sound Travels Intensive 2009 and published articles on the topic in eContact! (2009) and Organised Sound (in press). As a composer, Eldad organizes his sounds around the concepts of fusion, metamorphosis, transformation, and evasiveness. He is also very interested in collaborative work as a means of breaking old patterns. His works won prizes and mentions in Miniaturas Electroacústicas 2008, Deep Wireless/CBC Outfront 2008, Bourges 2007, Madrid Abierto 2007, ZKM’s Shortcuts:Beauty 2006, and Harbourfront’s New Canadian Sound Work 2006. Eldad teaches electroacoustics and music technology at Concordia University and at Formation Musitechnic in Montréal. He is the director of the Canadian 60x60 project and a board member (Treasurer) of the Canadian Electroacoustic Community (CEC).

Paper Session 5: (Acoustic) Ecology: Water, Air, Sound (Chair: Nadene Thériault-Copeland)

Salman Bakht

Noise, Nonsense, and the New Media Soundscape

This paper explores various definitions of the related concepts of “noise” and “nonsense” as they apply to the representation of aural landscapes in soundscape composition and sound art. The paper also introduces a category of aural landscapes referred to herein as “new media soundscapes,” sound environments that exist within the communication channels of different media types such as the personal audio player or the Internet.

The concept of noise will be explored from a number of fields including music, information theory, and soundscape ecology. After examining the meanings of the word “noise” as used in the phrases “background noise,” “white noise,” “noise pollution,” and “channel noise,” the term is explored through Trevor Wishart’s methods of developing an “aural landscape” in electronic music and Robin Minard’s methods of “conditioning of space” in sound installations. Nonsense is explored by extending the concept of “literary nonsense” to the realm of the soundscape. Ultimately, both noise and nonsense are extended to the abstract level, where “noise” is a property of a particular communication channel in limiting the transmission of a signal and “nonsense” is a method of constructing a meaningless signal or removing meaning from an existing signal.

New media soundscape composition is an area I am currently exploring, which follows Wishart’s suggestion in On Sonic Art that “the conventions or idiosyncrasies of media landscapes may become the basis of compositional structures.” I propose that effective representation of a media type in composition could benefit not only from the use of the channel’s noise, but also from the incorporation of a nonsense signal to foreground the qualities of communication in the channel while discouraging interpretation of the channel’s content or message.

Salman Bakht is a new media artist and composer currently studying in the Media Arts and Technology Program at University of California, Santa Barbara. Salman's work focuses on the reuse and transformation of recorded audio using algorithmic composition methods. He is interested in creating art that analyzes, represents, and integrates with the physical environment and the media landscape. Salman has a master’s degree from Columbia University and a bachelor’s degree from The Cooper Union, both in Electrical Engineering. Website: http://www.mat.ucsb.edu/sbakht/

Ken Gregory

Wind Coil Sound Flow

An overview of an acoustic electro-mechanical system that poetically reproduces the processes involved in operating an Aeolian Kite Instrument in the field, a wind instrument based on an Aeolian harp. The kite's towline is acoustically coupled to a resonator. The resonator amplifies the wind-induced vibrations of the towline and resonates harmonically: a large one-stringed guitar played by the wind. An audio recording system coupled to the resonator/amplifier records the sound.

Receiving the audio recordings from this outdoor instrument, wind coil sound flow, the electro-magnetic sculpture in the gallery not only becomes a poetic and kinetic representation of a sound speaker, but also mirrors the different components of the Aeolian Kite Instrument used to capture the wind’s voice.

Winnipeg artist Ken Gregory has been working with DIY interface design, hardware hacking, audio, video, and computer programming for over 15 years. His creative performance and installation work has been shown publicly in Winnipeg, other parts of Canada and at many international media and sound arts festivals. Anything is part of Gregory's palette, and by using cut-and-paste techniques, random juxtapositions, and careful manipulations, he crafts unique art works. These works are presented in the form of gallery installations, live performances, live radio broadcasts, and audio compact discs. Recent career highlights include the acquisition of 12 motor bells, a large sound installation, by the National Gallery of Canada, and Cheap Meat Dreams and Acorns, a touring solo survey exhibition shown at the Art Gallery of Windsor, the Confederation Centre in Charlottetown, PEI, and the Art Gallery of Hamilton in Hamilton, Ontario.

Paper Session 6: Interpretation, Criticism and Analysis (Chair: Hilary Martin)

Viv Corringham

Shadow-walks in Toronto

Recordings will be presented and methodology described in relation to my recent work in Toronto in July 2009, which is part of my ongoing electroacoustic project Shadow-walks.

This project investigates our sense of place and our relationship with places, especially very familiar ones. Shadow-walks have occurred with communities in twelve different locations throughout Europe and North America. Three main activities are involved: walking with others, recording environmental sound, and improvised singing.

The project investigates the “special” walk, one that has been repeated many times and has distinct meaning or significance for the person who selects it. As if this ground that has been walked over so frequently might retain traces of that person’s own history and memories, so Shadow-walks attempt to make these traces—their shadows—audible through improvised singing.

The final compositions contain conversations and environmental sounds from my initial walk with the person, combined with my later solo walks in which my singing becomes in a sense the ghost of that person’s memories and associations in the location.

Another aspect of this project is the collection of mundane objects found while walking. Just as the recording of “background noise” enhances its importance, so does the gathering of found objects. These sounds and materials are invited into the foreground, allowing for consideration of their existence as traces that others left behind.

The Shadow-walks made in Toronto will be central to this presentation. I will describe how my compositions and sound installation reflect and respond to the specific character of this place and its residents, through the experience, personal histories and memories of those who live there, as well as through the sounds particular to their chosen places.

Viv Corringham is a British sound artist, vocalist and composer, currently based in Minneapolis, USA. She has given concerts and exhibited sound installations throughout Europe, the USA, Brazil, Turkey, Russia, Mongolia and Australia. Her work usually involves walking, as a method of investigating people’s relationship with place and how that links to an interior landscape of memory and association. The experiences and materials gathered on these walks find their way into installations, recordings and concert pieces. Awards include a McKnight Composer Fellowship for 2006 through the American Composers Forum. Articles about her work have appeared in Organised Sound (UK), Musicworks (Canada), Playing With Words (UK) and For Those Who Have Ears (Ireland). Recent work has appeared at the Abrons Art Center, New York; the Serralves Museum of Contemporary Art, Portugal; Meridian Gallery, San Francisco; Rochester Art Center, MN; the Binaural Artists Center, Portugal; Galata Perform Gallery, Istanbul; and MCAD Gallery, Minneapolis.

Stephen Kilpatrick

Composition as “machine to think with”: Aspects of narrative within electroacoustic music

Musical composition is sometimes described as the act of colouring time, giving form to time, shaping time, etc. Certainly, what is inescapable is the composer’s requirement to deal with sonic materials to create a sense of structure and form that is played out over a given duration.

Narrative, when described as “the principal way in which our species organizes its understanding of time” (Abbott), appears to suggest a parallel with the way both composer and listener interpret events, both musical and non-musical, in a manner that suggests action, causality, conflict and resolution. Indeed, much of the terminology used in the description of narrative can fairly comfortably be used in the description of music. Why then is the idea of narrative in music such an elusive one? Is the problem that when forced to use language to describe music we instinctively make use of narrative when describing events taking place over time, or is there a narrative core even within abstract music?

With much electroacoustic music, we are asked to receive recognizable sound objects or mimetic sounds as abstract sonorities, timbres and gestures divorced from their real-world source or inspiration. Yet, many first time listeners of electroacoustic music instinctively ascribe a programme to the sounds they hear in order to develop an understanding of the piece through narrative. Some pieces like Trevor Wishart’s Red Bird actually depend on source recognition and the use of sound as metaphor or metonymic device. Other compositions, such as Natasha Barrett’s Prince Prospero’s Party, are overtly programmatic and deliberately attempt to recreate a literary narrative.

Drawing on a range of theories on narrative, this paper will discuss how the metaphoric, metonymic and figurative treatment of sound objects is used by the electroacoustic composer to create narrative. This paper also aims to explore how, even when the composer’s intention is abstract, the listener, through the recognizability of sound sources, or the mimetic nature of sounds, can often receive the work as a narrative discourse.

Stephen Kilpatrick studied composition and musicology at the Universities of Manchester and Salford before leaving to live in Hungary where he worked as resident sound artist/composer for the multi-media art group, Vízművek. Since his return to the UK in 2004, he has written for Psappha’s Richard Casey and the BBC Philharmonic. He composed the soundtrack for the Channel 4 documentary The Gathering: The Reek and his sound installation, A University for the 21st Century, was exhibited at the Salford Art Gallery and the Cube Gallery in Manchester. He has taught at the University of Salford and Leeds College of Music, and is currently Senior Lecturer in Music at Leeds Metropolitan University. He has published articles on music technology and 20th century Hungarian music.

Jason Stanford

Referential Sounds, Metaphors, and Compositional Strategies in EA

This paper will focus on my compositional approach and conceptualization of spatial performance, as well as the building and testing of my performance instrument in the NAISA spit~, an open-ended spatial performance environment created in MaxMSP based on IRCAM’s Le Spatialisateur.

A selective analysis of works (with musical examples taken from my three works Nexus, Flux, and Ion, all composed for and performed on the NAISA spit~) will highlight the compositional process, including: the composing of different strands of sound, the mixing of stems for different spatial trajectories, the maintenance of predictability of events in composition while allowing for some open-ended chance elements in performance, and the practical considerations of alternate controllers in the final performance.

There will also be a discussion of the use of referential and abstract sound sources, and the potential of both for creating deep musical metaphors that create a sonic environment conducive to active listening, and a psychologically complex and visceral musical landscape.

Jason Stanford (b. 1976), a Toronto-based composer of instrumental and electroacoustic music, has written works for all manner of forces, from solo and chamber music to a number of works for symphony orchestra. His recent compositional activities involve composing sound in space and 3D surround multi-channel live performance spatialization. The realization of Jason’s three most recent EA works was a direct result of the support and facilities provided by NAISA. His burgeoning research interests concern physical computing, in particular alternate control surfaces for live performance. At present, he is working on compositional projects that will focus on combining these interests with the seamless integration of acoustic instruments and electroacoustic sound diffusion, and with live digital video projection.

Paper Session 7: Live Electronics and Interaction (Chair: David Ogborn)

John Gibson

Spectral Delay Processing as a Compositional Resource

I discuss software I have written to perform spectral delay processing and describe its use in three compositions: Slumber, a surround-sound fixed-media piece; Out of Hand, an interactive piece for trumpet, trombone, and computer; and an improvisatory piece for laptop ensemble (in progress). In a spectral delay, each spectral component has a dedicated recirculating delay line. Using different delay times for the individual delays acts to destroy the time coherence of the signal, generating extravagant washes of sound that nevertheless carry a trace of the original. High delay feedback lets us freeze a sonic moment, extending its timbre by circulating it through the delay and creating, from a short static sound, pulsating textures of vibrant color. I take several approaches in the compositions, randomizing delay times to make many out-of-sync repetitive strands in Slumber, and quantizing delay times to create an eighth-note accompaniment in Out of Hand. The spectral delay software exists in two forms: as an instrument for RTcmix, a script-driven synthesis and processing package, and as a MaxMSP external. Both are available as GPL-licensed source code from http://john-gibson.com/software.htm.
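To make the idea concrete, here is a toy STFT-based sketch in Python/NumPy of the technique described above: a recirculating delay line per spectral bin, with feedback. It is a rough illustration only, not Gibson's RTcmix instrument or MaxMSP external; the random per-bin delays and the crude overlap-add normalization are simplifying assumptions.

```python
import numpy as np

def spectral_delay(x, fft_size=1024, hop=256, max_delay_frames=64, feedback=0.85, seed=0):
    """Toy spectral delay: STFT the input, give every bin its own recirculating
    delay (a random number of hop frames here), then resynthesize by overlap-add."""
    rng = np.random.default_rng(seed)
    win = np.hanning(fft_size)
    n_bins = fft_size // 2 + 1
    delays = rng.integers(1, max_delay_frames, n_bins)         # per-bin delay, in frames
    buf = np.zeros((max_delay_frames, n_bins), dtype=complex)  # circular frame buffer
    out = np.zeros(len(x) + fft_size)
    n_frames = (len(x) - fft_size) // hop
    for f in range(n_frames):
        frame = np.fft.rfft(win * x[f * hop : f * hop + fft_size])
        delayed = buf[(f - delays) % max_delay_frames, np.arange(n_bins)]
        wet = frame + feedback * delayed           # recirculate each bin independently
        buf[f % max_delay_frames] = wet
        out[f * hop : f * hop + fft_size] += win * np.fft.irfft(wet, fft_size)
    return out / (fft_size / hop)                  # rough overlap-add normalization
```

Because every bin repeats on its own schedule, a short input smears into a slowly evolving wash while its spectral colour is preserved, which is the effect the abstract describes.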

John Gibson’s acoustic and electroacoustic music has been presented in the US, Canada, Europe, South America, and Asia. His instrumental compositions have been performed by many groups, including the London Sinfonietta, the Da Capo Chamber Players, the Seattle Symphony, the Music Today Ensemble, Speculum Musicae, and at the Tanglewood, Marlboro, and June in Buffalo festivals. Presentations of his electroacoustic music include concerts at the Seoul International Computer Music Festival, the Bourges Synthèse Festival, the Brazilian Symposium on Computer Music, Keio University in Japan, the Third Practice Festival, the Florida Electroacoustic Music Festival, and many ICMC and SEAMUS conferences. Gibson holds a Ph.D. in music from Princeton University, where he studied with Milton Babbitt, Paul Lansky, and Steven Mackey. He has taught composition and computer music at the University of Virginia, Duke University, and the University of Louisville. He is now Assistant Professor of Composition at the Indiana University Jacobs School of Music. For more information, visit www.john-gibson.com.

Adam Scott Neal

A Continuum of Indeterminacy in Laptop Music

The laptop ensemble (sometimes dubbed “laptop orchestra” depending on its size) is an exciting new type of ensemble emerging all over the world. The aural and interactive possibilities afforded by these ensembles are attractive to composers, but many of the performances by these groups favor improvisation over composition. This project includes five compositions for laptop quartet which explore a continuum between deterministic composition and pure improvisation. Each work is an attempt to engage the performer as well as the listener, encouraging exploration and expression while controlling form and ensemble interaction in order to create coherent and identifiable compositions. In this abbreviated presentation, I will speak about three of the works: Presets, (Not) For Tape, and Baffin Bay. These works display what I consider to be both ends and the middle of the continuum. Presets is fully improvised, but informed by the provided interface and network processes. (Not) For Tape has an indeterminate form but a restricted sound palette. Baffin Bay has a predetermined form, timbres, and pitch material (but still allows a moderate amount of improvisation).

Adam Scott Neal (b. 1981) is originally from Atlanta and is now based in New York. Neal holds an MA in Sonic Arts from Queen's University Belfast, where he studied with Pedro Rebelo. He previously studied with Robert Scott Thompson at Georgia State University, where he earned a BM in music technology and an MM in composition. Neal's music has been performed in the US, Europe, and Canada by such artists as the New York New Music Ensemble, the neoPhonia New Music Ensemble, rarescale, and Tadej Kenig. Festival appearances include June in Buffalo, the Florida Electroacoustic Music Festival, the New York City Electroacoustic Music Festival, Harvest Moon V, and Electronic Music Midwest.

Arne Eigenfeldt

Multi-Agency and Realtime Composition: In Equilibrio

Live electronics has a history that dates back to as early as the 1920s. Electroacoustic instruments, such as the Theremin, the Ondes Martenot, the Trautonium, and even the Hammond organ, date from this decade. John Cage’s use of performative actions on variable speed phonographs in his Imaginary Landscape #1 of 1939 is another early landmark. However, it was the 1970s, and the work of David Behrman, Salvatore Martirano, and Joel Chadabe, that brought forth the possibilities of interactive music when they created systems that could participate in the compositional process by making musical decisions. The composer/performer could then react to these decisions, thereby making the entire process interactive.

The evolution of what Chadabe termed “interactive composition” is “realtime composition”: the ability to deliberately compose material in performance (as opposed to improvisation) with the help of software. This has become possible through the use of methods found in artificial intelligence, one of these being multi-agency. Aspects of multi-agents, and their application in musical performance, will be introduced in this presentation, specifically in the context of the author’s own research project, Kinetic Engine.

Kinetic Engine is an ongoing investigation into intelligent music software. Supported by a multi-year SSHRC research-creation grant, it has been presented at conferences such as the International Computer Music Conference (New Orleans 06, Copenhagen 07, Belfast 08, Montreal 09), Sound and Music Computing (Marseilles 06), Expo (Plymouth 07), New Interfaces for Musical Expression (Genoa 08), ArTech (Porto 08), NUS Arts Festival (Singapore 08), and the European Evolutionary Computing Workshops (Tübingen 09). Ironically, it has never been presented to the Canadian electroacoustic community.

The author’s recent work In Equilibrio, for software system and Disklavier (or sampled grand piano), will be performed in an adjoining listening session, providing examples of the two levels of multi-agents used within the custom software written in MaxMSP.

Arne Eigenfeldt is a composer of live electroacoustic music, and a researcher into the creation of intelligent software tools that encode musical knowledge. His music has been performed around the world, and his collaborations range from Persian Tar masters to contemporary dance companies to musical robots. His research has been presented at conferences such as ICMC, NIME, SEAMUS, ISMIR, EMS, and SMC. He is an associate professor of music and technology at Simon Fraser University, Canada.

Listening Sessions

Strike!

by Stephen Kilpatrick

Although the title of this piece has an obvious connection to one of the sound sources used, a striking match, it also implies one of the more desperate forms of industrial action.

This piece was composed very recently in a period of global economic instability where redundancy and unemployment were rising and various UK unions were discussing a return to strike action. In both the lighting of the match and the undertaking of industrial action, the strike can be an explosive catalyst for change and a painful process of transformation.

In Strike!, my aim was to explore change and transformation through the creation of a number of opposing sound worlds and landscapes, some of which are intended to evoke the natural and industrial worlds. Other sections are intended to be more abstract and are concerned with creating microtonal harmonic textures. Always, however, at the work’s core is the explosive power of the striking match and the potentially painful, yet transformative, nature of the flame.

Santa Barbara Soundscape

by Salman Bakht

Santa Barbara Soundscape is a piece in two movements. The first movement, Santa Barbara Etude, is a soundscape composition based on field recordings of a nature walk near Santa Barbara, California. At first, the listener is presented with a seemingly accurate representation of the sonic environment: the sound of waves mixes with cars driving by and planes flying overhead; the constant song of birds is paired with the occasional conversation of those passing by. But upon closer listening, one may realize that many of the birdsongs heard are in fact transformed samples of human speech. Likewise, the realistic soundscape presentation is intertwined with musical intention: birds repeat rhythmically while a clarinet duo plays nearly imperceptibly.

The second movement of the piece, rough4radio3, is based on an analysis of the soundscape presented in Santa Barbara Etude. The sound sources in this environment are defined in terms of sonic qualities. For example, ocean waves have a consistent envelope shape but varying amplitudes and timing. This movement is for an 8-channel speaker ring with each channel corresponding to a sound source. However, each source is represented not by environmental recordings, but by stochastic triggering of samples of radio noise with parameters based on the original source’s sonic qualities. Combined, these streams present a fabricated noise soundscape that continues for the length of the piece. At some point within this duration, a live performance occurs. This live performance may be spoken word, instrumental performance, or even another tape piece. The only constraint on this performance is that it is masked by the noise source, so that the listener must strain to listen. This movement presents a dichotomy between signal and noise (or object and environment) while the entire piece forms a relationship between environmental soundscapes and the soundscape of radio as a medium.
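A minimal sketch of the stochastic-triggering idea, assuming Poisson-like onset spacing and uniform amplitudes per channel; the eight-channel mapping and all parameter values below are invented for illustration and are not taken from the piece’s actual analysis data.

```python
import random

def schedule_channel(duration, mean_interval, amp_range, rng):
    """Return (onset_time, amplitude) trigger events for one channel."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_interval)   # Poisson-like spacing
        if t >= duration:
            return events
        events.append((round(t, 3), round(rng.uniform(*amp_range), 3)))

# Eight channels, one per modelled source; slower channels stand in for
# wave-like sources, faster ones for bird-like sources.
channels = [schedule_channel(duration=60.0,
                             mean_interval=0.5 + 0.5 * i,
                             amp_range=(0.2, 0.9),
                             rng=random.Random(i))
            for i in range(8)]
```

Each event list would then trigger a radio-noise sample on its own loudspeaker, shaped by the envelope characteristics of the source it replaces.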

Slumber (2006)

by John Gibson

Slumber was commissioned by the Third Practice Festival for a DVD of multichannel pieces that engage the music of the past in some way. In Slumber I looked to music from Schumann’s Kinderszenen, specifically “Kind im Einschlummern” (Child Falling Asleep). I asked pianist Mary Rose Jordan to record this piece. Then I subjected parts of the recording to my own software, which stratifies the spectrum of a brief sound and creates many shimmering, out-of-sync repetitive patterns. Slumber begins noisily but eventually settles into a quotation from the end of the Schumann. The listener slowly senses the presence of the piano: first only as a subtle timbral reference, then as explicit piano notes reconstructed from the recording, and finally as the unprocessed Schumann phrase.

Almost all of the sounds in the piece come from the piano recording. The synthesizer solo in the middle section was performed by me, using a glove controller. Thanks to Mary Rose Jordan for playing the piano, and to Neil Cain for engineering the piano recording.

The surround sound version of Slumber is available on [re], a DVD on Everglade Records (EVG06-01). A stereo version is included on the CD Music from SEAMUS, Volume 18.

For Tape (2008)

by Adam Scott Neal

Since the first medium for composing electronic works was magnetic tape, many concert programs describe a piece as written “for tape,” much like a piece might be written “for piano.” This tradition persists today, despite the fact that most pieces are no longer presented from tape, but rather from digital media (e.g. hard disk). This is a companion piece to my laptop quartet work (Not) For Tape. In that piece, four players follow an indeterminate score and manipulate sounds of adhesive tape. This version puts the same sound processes into a predetermined form. As an homage to early works “for tape,” I relied on sound manipulations used since the birth of musique concrète, namely speed manipulation, filtering, reverberation, and ring modulation. Initial improvisatory materials were created in MaxMSP, while final edits and mixing were completed in ProTools.
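Ring modulation, one of the classic transformations named above, simply multiplies the source signal by a sine-wave carrier. The sketch below is a generic Python illustration with an arbitrary carrier frequency; it is not material from the piece itself.

```python
import numpy as np

def ring_modulate(x, sr, carrier_hz=440.0):
    """Multiply the input signal by a sine carrier (classic ring modulation)."""
    t = np.arange(len(x)) / sr
    return x * np.sin(2 * np.pi * carrier_hz * t)

# e.g. y = ring_modulate(tape_recording, 44100, carrier_hz=273.0)
```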

In Equilibrio

by Arne Eigenfeldt

In Equilibrio (Italian: "In Balance") is a realtime composition created by Kinetic Engine, multi-agent software designed by the composer. Responding to performance control over density, the first set of six agents interact to create an evolutionary rhythmic structure, communicating amongst themselves and altering their patterns in an effort to balance their own goals with those of the other agents. Rhythmic events are passed to a second set of six agents, which assign specific pitches: these decisions are mediated by their own desire to explore their environments (which are under performance control), while balancing the ensemble goal of harmonic stability.
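As a rough illustration of the two-level agent idea described above, the Python sketch below has rhythm agents nudging their patterns toward a performer-set density target, and pitch agents trading exploration against a fixed "stable" pitch set. All class names, scales, and thresholds are invented for illustration; Kinetic Engine itself is custom MaxMSP software and works differently in detail.

```python
import random

class RhythmAgent:
    """Holds a 16-step onset pattern and nudges it toward a shared density."""
    def __init__(self, rng, steps=16):
        self.rng = rng
        self.pattern = [rng.random() < 0.3 for _ in range(steps)]

    def negotiate(self, ensemble_density, target_density):
        # Add or remove one onset depending on how the ensemble compares
        # to the performer-set target density.
        i = self.rng.randrange(len(self.pattern))
        if ensemble_density < target_density:
            self.pattern[i] = True
        elif ensemble_density > target_density:
            self.pattern[i] = False

class PitchAgent:
    """Chooses pitch classes, trading exploration against a fixed stable set."""
    STABLE = [0, 2, 4, 5, 7, 9, 11]           # hypothetical "stable" pitch classes

    def __init__(self, rng, exploration=0.2):
        self.rng = rng
        self.exploration = exploration         # stands in for performer control

    def choose(self):
        if self.rng.random() < self.exploration:
            return self.rng.randrange(12)      # explore outside the stable set
        return self.rng.choice(self.STABLE)    # favour harmonic stability

def step(rhythm_agents, pitch_agents, target_density):
    # Ensemble density = fraction of active steps across all rhythm agents.
    total = sum(sum(a.pattern) for a in rhythm_agents)
    density = total / (len(rhythm_agents) * len(rhythm_agents[0].pattern))
    for a in rhythm_agents:
        a.negotiate(density, target_density)
    # Hand each onset of the first agent's pattern to a pitch agent.
    return [(i, pitch_agents[i % len(pitch_agents)].choose())
            for i, hit in enumerate(rhythm_agents[0].pattern) if hit]

rhythm = [RhythmAgent(random.Random(i)) for i in range(6)]
pitch = [PitchAgent(random.Random(100 + i)) for i in range(6)]
print(step(rhythm, pitch, target_density=0.4))   # (step index, pitch class) pairs
```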

Links

Toronto Electroacoustic Symposium
New Adventures in Sound Art
Communauté électroacoustique canadienne / Canadian Electroacoustic Community
eContact!
Artscape Wychwood Barns
