Diffusion: theories and practices,
with particular reference to the BEAST system

Dr. Jonty Harrison
Reader in Composition and Electroacoustic Music
Director, Electroacoustic Music Studios & BEAST
The University of Birmingham, UK
d.j.t.harrison@bham.ac.uk
http://www.bham.ac.uk/music/ea-studios/BEAST/

Abstract

The issue of whether diffusion is a legitimate continuation of the compositional process or merely a random throwing-around of sound which destroys the composer’s intentions continues to be a matter of great debate, further complicated by the proliferation of cheap multi-track formats and the emergence of Dolby 5.1 and DVD. The differing attitudes to the public presentation of electroacoustic music will be traced back, through various theories and practical implementations (particularly the BEAST system), to fundamental differences in compositional approach and, ultimately, to different definitions of ‘music’ itself.

Introduction

Questions about diffusion can be reduced to three main issues: ‘what’, ‘how’ and ‘why’. Whilst it is relatively easy to deal with ‘what’ and ‘how’ because they are primarily technical in nature, there is little point in doing so without also considering the much more involved and difficult question of ‘why’. For it is here that we find the reasons for sound diffusion – the real-time, usually manual, control of relative levels and spatial deployment during performance – remaining one of the most contentious issues in electroacoustic music. Why should this be so? Why should it arouse such passion? I offer two theories.

Firstly, diffusion is often seen only as a performance issue – something separate from composition. I should like to challenge this assumption and assert that within the acousmatic tradition, descended from musique concrete, composition and performance are inextricably linked – diffusion being, in effect, a continuation of the compositional process.

Secondly, much contemporary music, including electroacoustic music, has taken little heed of what Pierre Schaeffer opened up to composers over fifty years ago, continuing instead the traditional musical paradigm, predicated predominantly on the supposedly ‘objective truth’ of the score as documentation of the composer’s thought. Being readily susceptible to measurement (of pitch, duration, dynamic, etc), notation has enabled generations of musicologists and analysts to persuade us that musical value resides in architectonic, quantitative criteria, which are assumed to be, and can be demonstrated to be, part of a conceptual construct which precedes sound. I should like to challenge this view of music, offering instead organic, qualitative criteria for musical construction, based on the perceptual realities to be found in sound material itself – and this is the precise basis of musique concrete.

From musique concrete to acousmatic music

We must now grapple with history and, more ominously, with terminology. This task is further complicated by problems of translation – not only between languages, but also between cultural understandings.

Among English speakers, the term musique concrete – the name given by Schaeffer to the use of sound stored 'on a fixed medium' as the basis of composition – has usually been taken to mean only that the sounds used were 'real', recorded from acoustic sources via microphone – a definition which then affords a convenient historical contrast with elektronische Musik, which emerged shortly afterwards in Cologne. I would suggest that this is too simple a distinction, based on a reading of only the most obvious surface features.

In the French-speaking world, it is widely understood that a further dimension of what was 'concrete' about musique concrete was the method of working and, by extension, the relationship between composer and material: as in sculpture or painting, where the artist produces the finished product on or in a fixed medium by manipulating the materials (paint, wood, stone) directly, so in musique concrete the composer works directly with sound. As Francis Dhomont points out, echoing Schaeffer, the musical process thus moves from:

'... the concrete (pure sound matter) and proceeds towards the abstract (musical structures) – hence the name musique concrete – in reverse of what takes place in instrumental writing, where one starts with concepts (abstract) and ends with a performance (concrete)' [Dhomont, 1995; 1996].

This was, effectively, a reversal of everything music had, in recent history, understood itself to be.

Elektronische Musik, by contrast, was very much a continuation of traditional (what Dhomont calls ‘instrumental’) musical thinking. The apparent need for 'objective justification' of musical utterance, through analysis (i.e. ‘measurement'), is one of the central creeds of western art music (especially in academia). The high modernist agenda of serialism (in which elektronische Musik had its origins) was heir to this tradition and continued the prevailing view that the 'text' of the score was the true representation of the composer's thoughts because it was amenable to an 'out of time' analysis of the distances between musical events: pitch intervals, rhythmic durations, dynamic levels, fixed (‘instrumental’) timbre and, eventually, spatial location.

Because it used generators to build up synthetic sounds, elektronische Musik actively required composers to adopt a measurement-based, conceptually-driven approach, and thus quickly gained intellectual approval – not least in anglophone music histories. It is a mistake, however, to assume that musique concrete lacks an intellectual dimension; it is merely that this is not generally available outside the francophone world – a situation epitomised by the scandalous fact that Schaeffer’s 1966 Traite des objets musicaux still awaits translation into English – and that its phenomenological roots do not sit well with the dominant central European canon, to which the English-speaking world subscribes.

Continuing our survey of terminology, another poor translation which causes confusion is ‘music on a fixed medium’. In the French original (musique de support) the fixity is implicit rather than explicit – and it is worth pointing out that, even in English, it is the medium which is fixed, not the music. The hybrid word ‘electroacoustic’, used by this organisation and by myself in founding BEAST, is also problematic – rather than reconciling antagonisms, it merely creates confusion because it doesn’t really mean anything at all! ‘Computer music’ tells us very little, too, because it describes the tool, not the music – it is hardly any more helpful than a term such as ‘piano music’.

This is a rather indirect way of arriving at ‘acousmatic music’ which can at least be said to have certain characteristics (though I doubt this is an exhaustive definition):

Conditions for acousmatic music:
heard over loudspeakers;
displays an acousmatic intent (not merely a substitute for another listening mode);
composed on and exists on a fixed medium;
the physical source (if any) of the sounds is not actually present at the time of listening;
the source, nature or cause of the sound may be unknown or unknowable;
the compositional criteria extend beyond what is normally considered ‘musical’; these criteria may be spectromorphological, referential/anecdotal, or both.

Qualitative and quantitative: organic and architectonic musical thinking

We can thus say that acousmatic music continues the traditions of musique concrete and has inherited many of its concerns: it admits any sound as potential compositional material, frequently refers to acoustic phenomena and situations from everyday life and, most fundamentally of all, relies on perceptual realities rather than conceptual speculation to unlock the potential for musical discourse and musical structure from the inherent properties of the sound objects themselves. In other words, acousmatic music is a qualitative art, displaying strongly organic characteristics of form and musical motion. It springs entirely from the specific, ‘concrete’ qualities of the sound material used. To illustrate: two recordings of a violin playing G4 could be sonically quite distinct – in Schaeffer’s terms they would be two sound objects – so their notated equivalence, the very basis of instrumental music, is no longer tenable.

By contrast, the kind of thinking which led Stockhausen to talk about ‘…a relatedness among the proportions… Not similar shapes in a changing light. Rather this: different shapes in a constant, all-permeating light' [Stockhausen, 1956, quoted in Worner, 1973], and Worner to elaborate Stockhausen’s position by saying, ‘…the proportion existing between given elements placed in conjunction may remain identical while what is actually placed in conjunction may be constantly changing' [Worner, 1973] suggests a point of view in which sound events have no intrinsic interest, but exist only to articulate the distances between them, on the measurement of which distances rests the notion of 'musical structure'. And when Boulez observes that, ‘Any sound which has too evident an affinity with the noises of everyday life…, with its anecdotal connotations… could never be integrated, since the hierarchy of composition demands material supple enough to be bent to its own ends, and neutral enough for the appearance of their characteristics to be adapted to each new function which organises them’ [Boulez, 1971], he is implying that composition (the creation of musical structure) is merely the imposition of quantifiable values on (fundamentally inert) sound material.

These views ignore the specific characteristics of individual sound objects (which are anything but inert), to which we now have access via recording – the single most significant event in twentieth-century musical development. But much ‘electronic’, ‘electroacoustic’ and ‘computer’ music continues to be quantitative and architectonic. It leans strongly on instrumental thinking (and one need look no further than the MIDI protocol and Csound ‘instruments’ playing ‘notes’ from a ‘score’ for proof), the studio being seen only as a way of extending instrumental possibilities, and not for what it really is: a completely new means of interacting with sound and fundamentally rethinking music.

Composition and performance

The model of composition I would propose is a collaboration, a partnership between composer and material, each listening and responding to the other, through a (concrete) process of exploration, rather than the traditional one of an ‘inspired genius’ imposing his (usually!) will on neutral material. The manipulation of sound materials was, historically, a physical, manual process – it was, in other words, ‘performing’ in the studio. Even though this is now often done via digital surrogates, our aural understanding of the essential ‘physicality’ of performance gestures in shaping musical utterance remains intact. Thus we can assert that elements which we would readily associate with performance were and remain embedded in the composition of musique concrete and its descendants. In performing this music, therefore, it is appropriate that the same type of ‘physical’ gestures that were used to shape material during the process of composition should be used again in performance to reinforce that shape in the audience’s perception and to enhance further the articulation of the work’s sonic fabric and structure.

Diffusion – theory and practice

In addressing the ‘why’ of sound diffusion, I have so far focused on the compositional context from which diffusion springs, for this is clearly the most important issue. But it is not the only one.

For most of its 50-year history, electroacoustic music was stored on analogue magnetic tape, whose restricted dynamic range necessitated a manual expansion of the implied dynamic contours of a work during concert playback. Of course, even such a simple operation is fraught with danger, as inappropriate re-contouring can damage the work. Public spaces also demand an analogous enlargement of the spatial cues on the tape, to avoid the hall ‘swallowing’ the spatial detail.

Bearing in mind that most electroacoustic music has been stereo, let us examine more closely the question of the stereo space. Even on a good hi-fi system, with the listener in the 'sweet spot', the stability of the stereo image is notoriously fickle – turning or inclining the head, or moving to left or right by just a few inches, can cause all kinds of involuntary shifts in the stereo image. So if a stereo piece is played only over a single pair of loudspeakers in a large hall (which will probably also have a significant reverberation time), the image will be even less stable and controllable than in a domestic space, and will certainly not be the same for everyone in the audience. In the equivalent of the ideal listening position at home, everything is relatively fine, but elsewhere the story is very different. Listeners at the extreme left or right of the audience will receive a very unbalanced image; someone on the front row will have a 'hole in the middle' effect, whilst a listener on the back row is, to all intents and purposes, hearing a mono signal! The listener at the front will also experience everything as ‘close’, and the listener at the back as ‘distant’, simply because these listeners are in those real physical relationships with the loudspeaker cabinets. The shape and size of the hall have a huge influence on how marked these effects will be. But in any public space, some or all of these effects will occur. Events carefully oriented by the composer within the space of the stereo stage will simply not ‘read’ in a concert unless something more radical is done.
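
The geometry alone illustrates the problem. The following sketch in Python (the hall dimensions and loudspeaker positions are purely illustrative, and it assumes simple inverse-distance attenuation with no reverberation) compares the level balance and the angular width of the stereo image at different seats:

```python
import math

LEFT_SPK, RIGHT_SPK = (-2.0, 0.0), (2.0, 0.0)   # stereo pair, 4 m apart (x, y in metres)

def level_difference_db(listener):
    """Level of the left speaker relative to the right at the listener (1/r law)."""
    dl = math.dist(listener, LEFT_SPK)
    dr = math.dist(listener, RIGHT_SPK)
    return 20 * math.log10(dr / dl)              # positive = left appears louder

def image_width_deg(listener):
    """Angle subtended by the pair at the listener (smaller = closer to mono)."""
    vl = (LEFT_SPK[0] - listener[0], LEFT_SPK[1] - listener[1])
    vr = (RIGHT_SPK[0] - listener[0], RIGHT_SPK[1] - listener[1])
    cos_a = (vl[0] * vr[0] + vl[1] * vr[1]) / (math.hypot(*vl) * math.hypot(*vr))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

for label, seat in [("centre, mid-hall", (0.0, 8.0)),
                    ("extreme left, mid-hall", (-6.0, 8.0)),
                    ("centre, back row", (0.0, 25.0))]:
    print(f"{label:24s} balance {level_difference_db(seat):+5.1f} dB, "
          f"image width {image_width_deg(seat):5.1f} degrees")
```

Even this crude model shows the image skewing towards the nearer loudspeaker at the sides and narrowing towards mono at the back; a real, reverberant hall makes matters considerably worse.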

BEAST basics

Here we have the beginnings of such a radical solution – what in BEAST is called the 'main eight', which I regard as the absolute minimum for the playback of stereo tapes. The original stereo pair is narrowed to give a real focus to the image (Main); this removes the hole in the middle, enhances intimacy and offers the effect of 'soloist' speakers. A Wide pair is added so that dramatic lateral movement can be perceived by everyone. In the BEAST system, these four speakers are of the same type (ATC) and driven by matching amplifiers (approximately 500 watts each), as this frontal arc represents the orientation in which our ears are most sensitive to spectral imbalance among loudspeakers. They would normally be set around ear height. The rest of the system consists of various speakers with differing characteristics (Tannoys, Volts, KEFs and Ureis), driven by 500, 250 or 100 watt amplifiers, depending on the speaker itself and its function in the system in that particular space. For effects of distance (which on the original stereo tape are implied by careful balancing of amplitude, filtering and reverberation characteristics, and which are very susceptible to being drowned by actual concert hall acoustics), it is useful to be able to move the sound from close to distant in reality, following the cue on the tape – hence the Distant pair. These are angled quite severely across the space to hold the stereo image in a plane behind the Mains. (This off-axis deployment also reduces the treble, adding to the distant effect.) They are usually on towers about 8 feet above ear height. The Rear pair, also positioned above ear height, helps fill the space, adding a sense of being enveloped in sound. Implications of circular motion on the tape can actually be made to move round the room, and the introduction of sounds behind the listener can still have a startling effect.
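
As a rough schematic of how a stereo signal reaches these eight loudspeakers, the sketch below treats each pair as a fader scaling the left or right channel of the source. The table, the function names and the gain values are my own shorthand for illustration, not a specification of the BEAST hardware:

```python
# Hypothetical routing table for the 'main eight': each loudspeaker takes either
# the left or the right channel of the stereo source, scaled by its pair's fader.
MAIN_EIGHT = [
    # (pair,     channel, nominal placement)
    ("main",    "L", "narrowed frontal pair, ear height"),
    ("main",    "R", "narrowed frontal pair, ear height"),
    ("wide",    "L", "wide frontal pair, ear height"),
    ("wide",    "R", "wide frontal pair, ear height"),
    ("distant", "L", "angled across the stage, ~8 ft above ear height"),
    ("distant", "R", "angled across the stage, ~8 ft above ear height"),
    ("rear",    "L", "behind the audience, above ear height"),
    ("rear",    "R", "behind the audience, above ear height"),
]

def speaker_feeds(stereo_sample, pair_faders):
    """Return one output sample per loudspeaker for a single stereo input sample.

    stereo_sample: (left, right) floats; pair_faders: dict of linear gains,
    e.g. {"main": 1.0, "wide": 0.5} (unlisted pairs are silent).
    """
    left, right = stereo_sample
    return [
        (left if channel == "L" else right) * pair_faders.get(pair, 0.0)
        for pair, channel, _ in MAIN_EIGHT
    ]

# Example: bring the image forward and wide, with nothing distant or behind.
print(speaker_feeds((0.8, -0.3), {"main": 1.0, "wide": 0.6}))
```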

After the main eight, the next most significant additions to a system would be bass bins/sub-woofers (actively crossed-over at 80Hz, with a roll-off of 18dB/octave, so that the whole output of a large amplifier can be used to maximum efficiency) and tweeters, suspended over the audience in up to ten 'stars' of of six units each and around 6 feet in diameter, and/or in up to four clusters of three wide-dispersion units which can be flown or floor-mounted on extending poles. These speakers extend down to 1kHz, so they are filtered to ensure that they only receive the top octave or so, to enhance, rather than confuse, the overall spatial image.
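
In signal terms, both additions are simple band-limiting stages. A minimal sketch follows, assuming SciPy and a 48 kHz sample rate; the third-order Butterworth responses and the 10 kHz tweeter corner are my assumptions, chosen to match the 18dB/octave figure and the 'top octave or so' described above:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # sample rate in Hz (assumed)

# A 3rd-order Butterworth rolls off at 18 dB/octave beyond its corner frequency.
bass_lp    = butter(3, 80,     btype="lowpass",  fs=FS, output="sos")
tweeter_hp = butter(3, 10_000, btype="highpass", fs=FS, output="sos")

def split_for_bins_and_tweeters(signal):
    """Return (bass-bin feed, tweeter feed) for one diffusion send."""
    return sosfilt(bass_lp, signal), sosfilt(tweeter_hp, signal)

# Example: filter one second of noise and compare the band energies.
x = np.random.default_rng(0).standard_normal(FS)
low, high = split_for_bins_and_tweeters(x)
print(f"bass-bin RMS: {np.sqrt(np.mean(low**2)):.3f}, "
      f"tweeter RMS: {np.sqrt(np.mean(high**2)):.3f}")
```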

Beyond this, the number and positioning of loudspeakers is primarily a function of the concert space. Long, thin halls may need Side Fills, to achieve smooth transitions from frontal to rear sound (these are often angled up and/or reflected off the side walls so that the audience is less aware of the ‘headphone’ effect). Wide halls may need Stage Centre speakers, positioned quite close together, higher than the Mains and pointing slightly out, to avoid ‘hole in the middle’ effects. In some halls, a stereo pair of Front/Back speakers positioned quite high in the stage centre and centrally behind the audience can be useful in overcoming this problem (and creates the possibility of cruciform patterns with the Wides or Side Fills). Punch speakers, again central and outward-pointing but fairly low for maximum impact, can be useful for sforzando reinforcement of strong articulation.

If the hall has lighting gantries, then height can be used to good effect, Front Roof and Rear Roof speakers enabling front/rear motion via a canopy, rather than by moving the sound only round the edges of the hall. Proscenium speakers hanging over the front edge of the stage also add height to the frontal image. Differing heights can be further exploited by angling speakers on the Stage Edge down to the floor. In short halls, it can sometimes be difficult to achieve a real sense of distance, but if the wall at the back of the stage is brick or stone, Very Distant speakers facing away from the audience and reflecting off the wall can be effective, the high frequency attenuation and general reduction in source location delivering remarkably well the sensation of distance. Finally, in extremely large halls, speakers placed immediately by the Mixer can help overcome the sensation that the sound is predominantly at the periphery of the listening space.

Of course, not all of these speaker locations are likely to be necessary, nor are they the only ones possible – it depends entirely on the nature, character and sound of the performance space – but it would be wrong to assume that small halls necessarily require fewer speakers (in a 100-seater hall in Birmingham we use 28 channels).
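
To make this space-dependence concrete, a rig for a particular hall can be described simply as a list of the groups drawn upon and the channels each occupies. The sketch below is purely hypothetical – the group names follow the text above, but the choice of groups and the channel counts are invented for illustration, not taken from any actual BEAST deployment:

```python
# Hypothetical channel plan for a long, narrow hall with lighting gantries.
RIG_LONG_NARROW_HALL = {
    "main": 2, "wide": 2, "distant": 2, "rear": 2,   # the main eight
    "bass_bins": 2, "tweeter_stars": 4,
    "side_fills": 4,        # smooth the front-to-rear transition in a thin hall
    "front_roof": 2, "rear_roof": 2,                 # height via the gantries
    "very_distant": 2,      # reflected off the back wall of the stage
}
print(sum(RIG_LONG_NARROW_HALL.values()), "channels")
```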

Mixer configuration

In a diffusion system every speaker or group of paralleled speakers needs a separate channel of amplification controlled by a fader. In small systems this can be achieved by using the group faders, but for bigger systems, direct channel outputs are needed. This necessitates splitting the stereo signal out from the source and running several left/right signal cable pairs into successive pairs of inputs. BEAST has developed an elegant way of achieving what is effectively a 'mixer in reverse' (few in/many out) by using a switching matrix through which several incoming stereo signals can be routed (and mixed) to any stereo outputs, without replugging between pieces. The DACS 3-D is a 12 (soon to be 24) in/32 out design, allowing easy pre-configuration for multiple stereo sources, multi-channel operation and microphone mixes from secondary mixers when needed. It offers inserts on every channel for outboard EQ, and the outputs go via a multiway connector and a balanced multicore cable to the remote amplifier racks, which total over 7kW of power.
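
The principle of such a matrix can be sketched as follows. This is a schematic in Python, not a description of the DACS 3-D's internal design, and the input/output assignments in the example are arbitrary:

```python
import numpy as np

N_IN, N_OUT = 12, 32
routing = np.zeros((N_OUT, N_IN))   # routing[out, in] = linear crosspoint gain

def patch_stereo(source_pair, output_pair, gain=1.0):
    """Route a stereo source (two input channels) to one loudspeaker pair."""
    (in_l, in_r), (out_l, out_r) = source_pair, output_pair
    routing[out_l, in_l] = gain
    routing[out_r, in_r] = gain

# Pre-configure before the concert, without replugging between pieces:
patch_stereo((0, 1), (0, 1))   # source A to the Mains
patch_stereo((0, 1), (2, 3))   # source A also to the Wides
patch_stereo((2, 3), (6, 7))   # source B to the Rears

# In performance, one block of input samples becomes 32 amplifier feeds.
inputs = np.random.default_rng(1).standard_normal((N_IN, 256))
outputs = routing @ inputs      # shape: (32, 256)
print(outputs.shape, np.count_nonzero(routing), "active crosspoints")
```

The per-output fader gains of the diffusion performance itself would then scale these 32 feeds in real time; the matrix only fixes which sources can reach which loudspeakers.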

The fader layout on the mixer raises interesting points. In some systems, the most distant speakers in front of the audience are controlled by the leftmost pair of faders, the next most distant speakers by the next pair of faders, and so on, until the rearmost speakers are reached on the extreme right of the run of faders in use. This is a good configuration for certain kinds of motion (front to back, for example) but is less convenient for more dramatic articulation of material/space. BEAST has evolved a grouping of faders by function in any given performance space: the Main Eight (which also happen to fall under eight fingers on the faders, so that sudden, dramatic gestures on these faders result in the most significant changes in spatial perception by the audience) are always in the centre of the console, with bins and tweeters to the extreme left; beyond this, the layout varies according to the unique design of the system for that space/event. This is the basis of a typical mixer layout for a 28-channel system.

I have not discussed techniques of diffusion performance itself: relative signal levels, how many speakers might be blended at any given moment in a piece, what rate of change or what kind of spatial motion is appropriate and effective for given pieces or sections of works – the things which articulate both the sounding surface and the deep structure of the music. These are musical questions, answered by knowing the individual pieces and getting to know the space in rehearsal (informed, of course, also by knowing the system and by being willing to change loudspeaker positions during rehearsal – which is practically and economically difficult, as there is never enough time to rehearse in the actual performance space). But it cannot be stressed too strongly that decisions about loudspeaker placement should be made with reference to musical, perceptual and practical considerations, not technical, conceptual or theoretical demands.

Other practice

Of course, BEAST is not the only diffusion system in the world. Other solutions and approaches have been proposed. The GRM’s Acousmonium was conceived as an 'orchestra of loudspeakers', positioned in a mainly frontal array on the large stage of the Salle Olivier Messiaen at French Radio. Here the analogy of 'family groups' of speakers to the instrumental 'sections' of an orchestra was extended into the physical layout of the loudspeaker ensemble, each 'section' typically being made up of a number of the same make and model of speaker (thereby imparting a characteristic 'colour' to the sound) and positioned in a block or line. The physical deployment of these groups sometimes departed from the equal left and right pairings normally associated with stereo – such as, for example, a line of speakers running diagonally across the stage. The Acousmonium derives largely from Francois Bayle's ideas about '...a music of images that is "shot and developed in the studio, and projected in a hall, like a film"' [Dhomont, 1995, quoting Bayle 1993], giving rise to diffusion based on a series of planes running across the stage, with stereo images of differing widths and locations deliverable to different positions in the auditorium. This requires careful handling if the multiple left/right images are not merely to dissolve into mono.

Automation and multi-track

Various solutions have been proposed to counter the difficulties of handling 32 faders in performance – new controllers, or automation. As well as the cost implications of developing an automated system, there are compositional and performance issues involved. Ideally, the automation would be composed on the performance system in the performance space (and the cost of that is usually prohibitive). If another performance were to take place in a different space with a different system, the automated version would be hardly any more durable than an individual real-time manual performance. Even at the most fundamental level, performance spaces behave differently in concerts from the way they behave in rehearsals – some kind of intervention to update and correct for the presence of the audience is musically inevitable.

It is a short step from automation (and the standardised playback conditions which would have to be adopted to make it possible) to the issue of multi-track formats, now gaining in popularity with the advent of cost-effective digital multi-track systems. I cannot claim not to be interested in the possibilities offered by multi-track – especially true spatial counterpoint – though there are some dangers, not least those related to 'four corner quadraphonics' and related systems, which are well enough documented not to need repetition here. Not surprisingly, the fact that what I have called architectonic music has historically sought a fixed, repeatable mode of performance has led many composers towards these new formats.

Multi-track working also raises questions about the nature of the signal itself – specifically over whether to use mono or stereo sources. Many pieces composed in 4-channel format over the past four decades, and many now being composed in 8- or 16-channel digital formats for replay in concert over the equivalent number of loudspeakers, actually use mono source material. In such multi-mono works, lateral movement tends to gravitate to the speaker cabinets, and the richness of a three-dimensional 'stage' is rare. It seems glaringly obvious to me that the 'space' of such pieces is unlikely to work, for there is little or no phase information on the tape. Works which use stereophonic (i.e. three-dimensional) images within the multi-track environment tend to be the most spatially convincing, though the logistical problems of doing this within a program like ProTools are significant.
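
The point about phase information can be illustrated numerically. In the sketch below (my own illustration, using synthetic signals to stand in for real material), two channels derived from the same mono source remain perfectly correlated however they are panned, whereas a genuinely stereophonic pair carries partially decorrelated detail – which is precisely what supports a three-dimensional image:

```python
import numpy as np

rng = np.random.default_rng(0)
mono = rng.standard_normal(48_000)

# Multi-mono: the same signal sent to two channels at different levels (panning).
multi_mono = np.stack([0.9 * mono, 0.4 * mono])

# Stereo-like: a shared component plus channel-specific (e.g. reverberant) detail.
detail_l, detail_r = rng.standard_normal(48_000), rng.standard_normal(48_000)
stereo = np.stack([0.7 * mono + 0.5 * detail_l, 0.7 * mono + 0.5 * detail_r])

def interchannel_correlation(pair):
    """Correlation coefficient between the two channels of a stereo pair."""
    return np.corrcoef(pair[0], pair[1])[0, 1]

print(f"multi-mono correlation:  {interchannel_correlation(multi_mono):+.2f}")  # ~ +1.00
print(f"stereo-pair correlation: {interchannel_correlation(stereo):+.2f}")      # well below 1
```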

Conclusion

Although ostensibly about diffusion, much of this discussion has, necessarily, taken place against a backdrop of musical attitudes. Without an understanding of the motivation and concerns of musics descended from musique concrete, the need for diffusion makes little sense – historically, there was little knowledge, understanding or sympathy outside the French-speaking world for what appeared merely subjective and 'capricious' when compared to the objective 'truth' of achievements in elektronische Musik, supported as they were by the inherited traditions of musical 'meaning' embodied in architectonic structure.

For all the claims that the divisions between musique concrete and elektronische Musik disappeared in the late 50s, there remain significant vestiges of the two approaches in the contrasts between what I have called architectonic and organic elements in the music. Architectonic structure is built on the quantifiable distances between musical events (in all parameters), whereas organic structure explores the qualitative evolution, the spectromorphology of the events themselves. Similarly, architectonic space is built on the quantifiable distances between events in the additional parameter of spatial location, whereas organic space explores the qualitative spatial evolution of an already evolving spectromorphological sound object. Organic material is already sculpting time and space – diffusion is merely the necessary continuation and enhancement of this process – articulating spatial qualities already possessed by the sound, not arbitrarily imposing spatial behaviour on it.

Generally speaking, works exhibiting architectonic structure and space are not well suited to diffusion, whilst those displaying organic structure and space require it. My personal plea is for composers to immerse themselves in the essentially new ways of musical thinking which Schaeffer offered us fifty years ago, and to explore the qualities of unique sound objects themselves for appropriate and organic models of musical structuring. When elaborated through the process of composition into the realm of performance practice, this music has the power to transport us – quite literally, at the speed of sound – into other places, other situations and even, because of its interactions with our personal memories and histories, other times. Ultimately, therefore, it can reach deep into the most fascinating space of all: our imagination.

References

Francois Bayle, Musique acousmatique, propositions, positions, Paris, 1993
Pierre Boulez, Boulez on Music Today, London, 1971
Michel Chion, Guide des objets sonores, Paris, 1983
Francis Dhomont, Rappels acousmatiques/Acousmatic Update, in Contact! 8(2), Montreal, 1995
Jonty Harrison, Space and the BEAST Concert Diffusion System, in Francis Dhomont (Ed.) L'espace du son, special issue of Lien, Ohain, 1988
Jonty Harrison, Sound, space, sculpture: some thoughts on the 'what', 'how' and 'why' of sound diffusion, in Journal of Electroacoustic Music, London, 1998; revised version in Organised Sound, 3(2), Cambridge, 1998
Jonty Harrison, Imaginary Space – Spaces in the Imagination, Keynote address, Australasian Computer Music Conference, Wellington, NZ, 1999
Pierre Schaeffer, Traite des objets musicaux, Paris, 1966
Denis Smalley, Spatial Experience in Electroacoustic Music, in Francis Dhomont (Ed.) L'espace du son II, special issue of Lien, Ohain, 1991
Denis Smalley, Spectromorphology and Structuring Processes, 1981, rev. 1986, in Simon Emmerson (Ed.) The Language of Electroacoustic Music, London, 1986
Denis Smalley, Spectromorphology: explaining sound-shapes, in Organised Sound, 2(2), Cambridge, 1997
Annette Vande Gorne (ed), Vous avez dit acousmatique?, Ohain, 1991
Trevor Wishart, Audible Design, York, 1994
Trevor Wishart, On Sonic Art, York, 1985; London/Amsterdam, 1996
Karl-Heinz Worner (transl. Bill Hopkins), Stockhausen: Life and Work, London, 1973

This article was presented by the author as a paper session at SEAMUS Y2K in Denton, Texas, in March 2000.