
Mixtering

A Working Model for Enhanced Sound Quality in Electroacoustics

1. Abstract

Since the beginning of the 1980s, there has been an ever-widening gap in terms of sound quality between the electroacoustic and commercial music milieux, to the detriment of the former. Building upon their extensive experiences in both of these milieux, the authors propose a working model that aims to re-establish a certain standard of sound quality in electroacoustics, which involves expanding the role of mastering to incorporate part of the role of mixing. The term “mixtering” is proposed for this practice.

2. Context

2.1 A few Conventions

A discussion of this nature requires dissociating the “content” from the “container”: the identity of electroacoustics does not reside entirely “in sound” but rather in the use of sound. Consequently, and no more than in other musical genres, correcting the overly aggressive attack of an iteration, adjusting the amplitude of low-mid frequencies in a muddy-sounding mix or clarifying the stereo image of a narrow source does not pose a challenge to the identity of the work. To paraphrase Bob Katz (2002, p. 11), mastering is the final creative step in the audio production process, where the music is viewed through an aural magnifying glass in a reference environment by a non-involved specialist.

A further distinction should be made between the treatments applied at the sonic conception stage — where typically a part of the sound will be preserved for its musical interest — and those made at the mixing stage, where the different individual sounds are transformed to optimise both their co-existence and the quality of the whole. While the line between these two types of treatment is vague, in part because they share the same tools, there is a great difference in their approach: the former requires involved listening, while the latter requires a certain distance on the part of the listener.

2.2 Observation: the Increasing Quality Gap

Since the Beatles’ pioneering 1965 album Rubber Soul, the studio has served in the commercial music milieu as the principal production tool, yielding works that are no longer simply recordings of a live performance — or attempts to give such an impression — but creative works that exist only on tape (Chion, 1994). Up to the 1980s, electroacoustic and commercial music were produced in very similar studios, which explains the almost identical sound quality of the different productions, on the dynamic as much as the timbral plane. For example, Cream’s I Feel Free (1966) and Bernard Parmegiani’s Violostries (1963) were produced in the same era, and share a timbre and dynamic representative of the technological possibilities of the time.

From the mid-1980s onward, however, we note a sharp improvement in commercial production, unmatched in the electroacoustic milieu. Even today, no electroacoustic disc has achieved a level of sound quality comparable to Sting’s Brand New Day (1999) in terms of dynamic range, depth, transparency and timbral richness. The growth of this gap is intimately related to the increasing prevalence of the home studio, which, as Robert Normandeau has noted (2004), allows electroacoustic composers to work as they wish, when they wish. The general increase in the accessibility of resources has affected educational institutions as well, where, for the price of one professional studio, a dozen smaller but inferior studios can be built.

While the effects this has had on musical creativity are undeniably positive, it has also had serious and increasing repercussions on sound quality over the past fifteen years. Unlike professional studios, which have remained up to date and reflect recent advances in acoustic research and signal processing, few studios maintained by composers offer cutting-edge equipment or, more importantly, a reference monitoring environment. Needless to say, such a monitoring system is essential for discerning work on sound quality, and its absence in electroacoustics is the principal cause of the rift in quality between contemporary productions in the pop and electroacoustic milieux.

Recognizing its creative advantages, the authors do not intend to call into question the validity of the home studio, but rather propose a working method to ensure that electroacoustics regains its position as a vanguard, as much for its production quality, as for its compositional propositions.

2.3 Individual and Shared Experiences of Mastering

In order to help the reader put the proposition of mixtering in context, the authors provide an overview of the path leading to this collaborative article.

Dominique Bassal is an electroacoustic composer and sound engineer. He owns a studio that has been conceived and calibrated according to the standards necessary for mastering and therefore offers a reference monitoring environment. Since 1997 he has been mastering professionally, and has specialized in mastering for electroacoustics since 2002. Pierre Alexandre Tremblay is a composer, performer and producer of several complementary styles: electroacoustics, free jazz and popular music stemming from Afro-American traditions. He has had mastering experience in the role of a client, having enlisted the services of five different mastering studios (Montréal, Toronto and New York) for rap and jazz albums he has produced.

In 2004, Pierre Alexandre attended a lecture given by Dominique Bassal on mastering in electroacoustics, the main points of which are available in eContact! 6.3 (Bassal, 2003). The authors both agree that the objective quality of the monitoring situation, corrected by a competent acoustician, is paramount. For an explanation of the impact of an objective listening situation, see Katz (2002).

Following the conference, Pierre Alexandre approached Dominique and asked him to master his first disc, alter ego (Tremblay, 2006), which was to appear on the electroacoustic label empreintes DIGITALes.

It was a difficult process. Briefly, the mastering began with the delivery of the stems mixed by the composer, accompanied by a description of his gestural intentions, according to Bassal’s “Guide to Producing Stems” (2004). To the engineer’s first proposition, the composer responded with several pages of recommendations. After the second, the list had diminished by half, but a consensus was still not in sight. Following the third proposition, the two parties accepted a result that was nonetheless a compromise for both, and drew up a summary of the satisfying results, some of which still contained unresolved and contentious points.

It is important to note that for Pierre Alexandre the conclusions of this experience were consistent with all his other experiences of mastering: the finished product always sounds better everywhere except in the original mixing environment, where the unmastered mix remains more convincing.

2.4 The Revelation

The discussion was interrupted by the demands of other projects, but a few weeks later, at the engineer’s invitation, the composer returned to the mastering studio for a listening session of some reference works. The session began with a few pop discs recognized for their sound quality, and right away the listening conditions spoke for themselves: an extended and precise bass response, depth in the sound image, and the quality of the stereo field on both the horizontal and the vertical plane. The mastering engineer then proposed a comparative A/B/C audition of the original version of alter ego, the first mastering proposition and the “compromise” version.

The result was a breathtaking experience, on the order of a revelation. The principal contentious points were no longer justifiable: in the reference monitoring environment, the authors found themselves in agreement on almost every point. Further, the corrections requested by the composer during the mastering process had in fact diluted the impact of the improvements gained in the first stage.

The composer was greatly relieved, but an important question remained. Clearly the problems had not arisen from a lack of technical competence, nor from a rudimentary capacity for sonic analysis: the problems and their solutions were immediately apparent to his ears. The imbalance between planes, the unnecessary harshness, the ambiguity of the image due to unfocused reverberation and the impoverished timbres resulting from EQ and compression were pervasive. Further, as the engineer cleaned up the files, certain defects in the sources were revealed.

The question that arises is: What is the cause of this problem, and more importantly, how can it be resolved?

3. Procedure

3.1 Hypotheses: The Source of the Problem

In our opinion, the composer loses a significant amount of sound quality at exactly the point where he attempts to improve it: during the mixing. The following problems, which represent only a portion of the whole gamut, are encountered almost everywhere:

Understanding and accepting these points, the composer realizes that he would not have made the same decisions had he been working with a reference monitoring environment, i.e. in a studio calibrated by an acoustician.

Another point that bears repeating is that during these protracted discussions the composer auditioned the mastered versions on the same system used to mix the work. This would seem to explain why he had the impression that something had been lost on some levels: the decisions made while mixing were made according to a biased, non-linear monitoring situation, so that a mirror image of its problems was incorporated into the mix. For example, a system with an overly rich bass response will inevitably produce a mix that is lacking in bass frequencies, while a system with a strong presence of mid-high frequencies will lead to a dull mix lacking energy. The composer thus finds the mastered result — heard on the original system — to be quite strange, as the audition is always biased in favour of the original mix. Again, Katz (2002, chapters 6 and 14), and even some studio monitor user manuals, confirm this point and underline the importance of acoustic control. Strangely, a reference as important in electroacoustics as the 1204-page The Computer Music Tutorial (Roads, 1996) offers no more than a meagre four-page section within a chapter on the subject of mixing; surely a sign of the importance the author attributes to sound quality.
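
To make this mirror-image effect concrete, here is a deliberately simplified numerical sketch — our own illustration, not drawn from the article or from Katz — in which the band names and dB figures are purely hypothetical:

```python
# Illustrative sketch only: band names and dB values are hypothetical.
# A composer balancing "by ear" on a coloured system compensates for what the
# system adds, so the delivered mix carries the inverse of that coloration.

monitor_coloration_db = {"low": +6.0, "mid": 0.0, "high": -3.0}  # what the room/speakers add

# Mixing until every band *sounds* balanced on this system means the mix itself
# ends up deviating by the negative of the coloration.
mix_deviation_db = {band: -dev for band, dev in monitor_coloration_db.items()}

# On a neutral, calibrated system the listener hears exactly that inverse:
for band, dev in mix_deviation_db.items():
    print(f"{band:>4}: {dev:+.1f} dB relative to the intended balance")
# A bass-heavy room (+6 dB) thus yields a bass-shy mix (-6 dB) everywhere else.
```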

Further proof that the problem arises at the mixing stage can be seen by comparing the list of corrections requested by the composer for the piece Au Croisé, le silence, seul, tient lieu de parole (Tremblay, 2006) with that for alter ego. Au Croisé was mixed while the composer was working on the production of a rap disc at Victor Studio. He was thus able to evaluate the mix in a studio that, while not perfect, offered a far superior acoustic environment to his home studio. The results are remarkable: not only was the list of modifications proposed by the engineer succinct, each modification was also more subtle, while the list of corrections requested by the composer was by far the shortest of all the pieces on the disc.

3.2 The Proposed Solution: Mixtering

The proposed solution is for the electroacoustic composer to share some of the mixing responsibilities with the sound engineer, a procedure the authors have named “mixtering”. This approach allows intervention directly on the sources, before any optimization or mixing treatments — equalization, compression or even reverberation — have been applied.

The following steps are proposed:

  1. The composer submits a set of synchronized files containing the following elements:
    • the working mix;
    • the stems used for the mix (the sum of which is the working mix — a simple check of this summation is sketched after this list);
    • the source stems (to be able to refer to, and even use, the sources if necessary).
  2. The engineer proposes a mixtered version in which, where necessary, the EQ, compression, limiting and reverberation added by the composer while preparing the working mix are replaced; he attempts to reconstruct a mix that he believes represents the composer’s intentions as heard in the working mix.
  3. The composer sends a list of comments and/or corrections, if necessary.
  4. The engineer delivers the final version.
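
As a practical aside, the requirement that the stems sum back to the working mix can be verified with a simple “null test”. The sketch below is only an illustration of that check, not part of the authors’ protocol; the file names, the audio-reading library and the tolerance are hypothetical, and it assumes all files share the same sample rate, length, channel count and gain staging.

```python
# Minimal null-test sketch (illustrative; file names and threshold are hypothetical).
# Checks that the delivered stems sum back to the working mix, sample for sample.
import numpy as np
import soundfile as sf  # assumes the 'soundfile' package; any WAV reader would do

working_mix, sample_rate = sf.read("working_mix.wav")
stem_files = ["stem_01.wav", "stem_02.wav", "stem_03.wav"]
stem_sum = sum(sf.read(path)[0] for path in stem_files)   # sample-accurate sum

residual = working_mix - stem_sum                          # should be near-silence
peak_db = 20 * np.log10(np.max(np.abs(residual)) + 1e-12)  # peak residual in dBFS

print(f"peak residual: {peak_db:.1f} dBFS")
if peak_db > -60.0:  # arbitrary threshold for "does not null"
    print("Stems do not null against the working mix; check gains, offsets or edits.")
```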

This demands more effort on the part of the composer — the effort required to prepare and transfer the stems (detailed in Bassal, 2004) must be made twice over — but the mixtering engineer’s contribution will often prove faster, more meaningful and more efficient. In actual fact, the work involved in correcting a poor mix — often the result of ill-advised attempts to fix problems introduced into that mix by deceptive monitoring — can be far more involved and problematic than simply correcting the flagrant problems at their source.

3.3 Validation of the Proposition: Comparison between mastering and mixtering

The annex to this article provides an analysis document prepared by the engineer that describes a session based on the new protocol. The first part compares the sums of the source stems and the sums of the working mix stems (excerpts Audio 1–3). In the next part, the mixed stems are mastered, and commentary by the engineer concerning the problems and the solutions is offered, supported by audio examples and screen captures (Audio 4–19, Figs. 2–9). The combination of the improved stems is then optimised, and the mastering completed (Audio 20–22, Figs. 10 & 11). An interesting comparison between the sum of the sources and the mastered product follows (Audio 22 and 23). Then, each of the stems available in its source form is mixtered, with comments by the engineer (Audio 24–31, Figs. 12–14) leading to the final presentation of the mixtered product (Audio 32, Fig. 15).

We invite readers to draw their own conclusions; those of the authors are presented below.

The mixtering of this work has allowed undeniable improvements in its sound quality: out of a wash of mid-high aggressiveness, booming bass and a flat, lifeless dynamic, a product has been distilled whose image is more intelligible and has more depth, and whose dramaturgy is more vibrant and alive. Many of the problems in the composer’s mix that were due to a biased monitoring environment have been resolved.

Unlike the mastering of the previous album, the composer here has only three remarks to make, each of them clear, minimal and æsthetic in nature. Further, two of these result from an error in judgement by the composer concerning treatments at the conception and mixing stages. The mixtering procedure was still in development and some sources were sent with too little initial treatment, which left the engineer with an ambiguous amount of leeway. For example, comparing excerpts Audio 4 and 5 with Audio 24 and 25, we see that a noise gate is used in the mixed stems (Audio 4 and 5) for articulation. Even though — as the engineer notes — the difference in the finished product is negligible, it should have been included in the source stems because of the importance of its articulation to the composer.

The next example of a problem in the production of the source stems is even more ambiguous. Compare Audio 14 and 15 with Audio 30: the poor-quality reverb in the first two excerpts is compositionally justified as part of the gesture’s articulation. After several unsuccessful attempts to find a decent replacement reverb, the engineer rightly decided to leave the original reverb as is, in order to retain a clear sonic image. For this reason, and despite its divergence from the source, Audio 15 would probably be the version of which the composer would approve: after his initial surprise, he did in fact enjoy the clarity of image in the overall mix offered by a reduction of the spectrum.

Incidentally, it was not deemed useful to pursue the work related to these two errors any further; correcting them or not would not have changed the conclusions of the experiment. Mixtering allows a more far-reaching compensation for the coloration of the composer’s studio and more effectively reveals the mixing intentions perceptible in the working mix.

The only point remaining before the work is finished is the low-mid content of excerpt Audio 29. The composer was aiming for a deeper bass sound, though one not subjected to as draconian an equalization as in his own mix (Audio 10). Indeed, the recovery of the sub-bass that was present in the original premix is an incontestable improvement; in comparison, the divergence of opinion on the middle register carries little weight.

It is important to note that if the composer is now able to judge the impact of mastering and mixtering “objectively”, it is because he performed numerous comparative auditions in several studios. Nonetheless, his judgement was sometimes clouded by the fact that none of the monitoring situations available to him could be considered reliable. That said, the memory of the “enlightening” listening session in the mastering studio engendered a relationship of trust between the two authors, and allowed the composer to distance himself from the material.

4. Conclusion

The first pressing conclusion concerns the paramount importance of neutral monitoring conditions during the mixing stage. It is also to the electroacoustic composer’s advantage, however, to have access to such an environment throughout the entire course of the composition, as the listening bias that influences the optimisation process also affects the sonic conception of the work: compositional judgement is directly affected by the surrounding environment. In such optimal conditions for composing, mixtering, like mastering, would become an important validation stage in the completion of a polished work because, as Katz illustrates (2002, p. 11), the approval of a mix on a professional monitoring system can counterbalance the effects of emotionally involved listening, often biased by hours and hours of work on the materials.

In relation to a point raised by Robert Normandeau in his 2004 lecture, we could add: in the present era, most composers can afford the machines and tools to build their own home studios — allowing them to work as they wish, when they wish — and so the relevance of institutions now lies in providing access to, amongst other things, optimal monitoring conditions, thereby contributing to an increased awareness of the importance of the listening environment. In fact, very few young composers can afford a studio in which the acoustics and loudspeakers have been calibrated by an acoustician. Electroacoustics would gain much in terms of sound quality if the upcoming generation were exposed to situations encouraging the kind of revelation described above.

The second conclusion is that mixtering, like mastering, relies on a trusting relationship between the composer and the engineer, as the former is not always in a position to judge the quality of the corrective interventions. Once this relation has been established, mixtering can effectively have much greater impact on the sound quality in electroacoustics, permitting interventions that are more precise and efficient, as well as reducing the number and importance of the revisions.

Finally, the authors would like to call particular attention to the two principal points of mixtering. Firstly, the composer must clearly understand, and be able to work within, the functional difference between the two types of treatments — conception and mixing — during the production of the source stems. Secondly, the mixtering engineer should be careful not to deliver the individually optimised stems to the composer (as was done for this particular article), so as to allow him a holistic judgement of the result: auditioning some of the radical treatments applied to individual stems can cause such a shock to the composer that the perspective of the whole is severely compromised, if not lost altogether.

5. Developments

In the short-term, the engineer and composer will run a mixtering test together on a project involving instruments and technologies, due to be released in June.

The engineer plans to test this approach on another electroacoustic project to see whether the results are as conclusive when working with another composer.

The composer will invest in improvements to his system and monitoring studio, and will work towards implementing improvements in the studios of the institution where he teaches.

6. Bibliography

Bassal, Dominique. “The Practice of Mastering in Electroacoustics” (2002). eContact! 6.3 — Questions en électroacoustique / Issues in Electroacoustics (2003). Montréal: Canadian Electroacoustic Community.

_____. “A Guide to Producing Stems,” eContact! 6.3 — Questions en électroacoustique / Issues in Electroacoustics (2004). Montréal: Canadian Electroacoustic Community.

Beatles (The). Rubber Soul. London: E.M.I., 1965.

Cream. Fresh Cream. London: Polydor, 1966.

Chion, Michel. Musiques, médias et technologies. Paris: Flammarion, 1994.

Katz, Bob. Mastering Audio: The Art and the Science. Burlington: Focal Press, 2002.

Normandeau, Robert. “Le studio personnel, la véritable innovation du second cinquantenaire.” La musique électroacoustique : un bilan. Lille: Université Charles-de-Gaulle Lille 3, 2004, pp. 65–71.

Parmegiani, Bernard. Violostries. Paris: INA CD, 1963.

Roads, Curtis. The Computer Music Tutorial. Cambridge: MIT Press, 1996.

Sting. Brand New Day. London: A&M, 1999.

Tremblay, Pierre Alexandre. alter ego. Montréal: empreintes DIGITALes, 2006.

See following section:
7. Annex: Comparative Mastering / Mixtering Session
