
Resources used in lecture:

-Stereo audio playback facility
-Laptop with Audacity (stereo playback from laptop)
-Audacity project named “17-01 White noise bursts - for panning while looping playback.aup” – an Audacity project containing periodic white noise bursts; playback can be looped while panning the sound from left, to right, and back again, in order to demonstrate stereo ‘panning’ in action

[To follow – a sound file of the above.]

1
2
3
4
The slide shows wave rays travelling from a sound source to each of a listener’s
ears.

5
In this example, the distance between the sound source (S) and the listener’s left
ear (L) is greater than that between the sound source and the listener’s right ear
(R).

6
Interaural intensity difference is greater at high frequencies and smaller at low frequencies (because the head casts a stronger acoustic ‘shadow’ for wavelengths that are short relative to its size). Interaural time difference does not vary with frequency (because the speed of sound does not vary with frequency).

ITD and IID are collectively referred to in the literature as directionality/localisation ‘cues’ – they ‘prompt’ us as to a sound source’s location.
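The interaural time difference can be sketched numerically. The following is an illustrative calculation only (the distances are hypothetical, and the speed of sound is taken as roughly 343 m/s in air at about 20 °C):

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at ~20 °C

def interaural_time_difference(dist_left_m, dist_right_m):
    """Arrival-time difference (seconds) between the two ears,
    given the path length from the source to each ear."""
    return (dist_left_m - dist_right_m) / SPEED_OF_SOUND

# Hypothetical source 1.02 m from the left ear and 1.00 m from the right
# (as in the slide, the source is nearer the right ear):
itd = interaural_time_difference(1.02, 1.00)
print(f"ITD: {itd * 1e6:.0f} microseconds")  # ITD: 58 microseconds
```

Even a path difference of only 2 cm yields a delay of tens of microseconds, which the auditory system can use as a localisation cue.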

7
8
9
10
11
A phantom image can be produced—and the apparent location of the sound
source in between the two loudspeakers adjusted—by producing and controlling
an inter-channel difference, i.e. a difference in the signals sent to channels A and B.

This could be an amplitude, or intensity, difference – i.e. the same signal sent to both loudspeakers, but slightly louder in one channel than the other. Or, it could be a time difference between A and B – i.e. the same signal sent to both loudspeakers, at the same loudness, but with one channel slightly time-delayed in comparison with the other.
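Both kinds of inter-channel difference can be sketched on sample buffers. This is a minimal illustration, not the lecture’s own material; the sample rate, test tone, and gain/delay values are all assumptions chosen for the example:

```python
import numpy as np

SR = 44100  # assumed sample rate, Hz

def pan_by_amplitude(mono, left_gain, right_gain):
    """Same signal in both channels, at different levels."""
    return np.stack([mono * left_gain, mono * right_gain], axis=1)

def pan_by_delay(mono, delay_samples):
    """Same signal and level in both channels, but one channel delayed.
    Here the left channel is delayed, which shifts the image to the right."""
    delayed = np.concatenate([np.zeros(delay_samples), mono])
    padded = np.concatenate([mono, np.zeros(delay_samples)])
    return np.stack([delayed, padded], axis=1)

tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)  # 1 s of 440 Hz
louder_right = pan_by_amplitude(tone, 0.5, 1.0)  # image pulls right
delayed_left = pan_by_delay(tone, 22)            # ~0.5 ms delay; image pulls right
```

Writing either array to a stereo file and listening on loudspeakers would demonstrate the phantom image shifting off centre.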

12
(Panning refers to the position of a phantom image in between the two
channels/loudspeakers.)

13
14
15
16
‘A directional microphone is most sensitive to sounds in front of it (on axis) and
progressively less sensitive to sounds arriving off axis. That is, a directional mic
produces a relatively high-level signal from the sound source it’s aimed at and a
relatively low-level signal for all other sound sources. The coincident-pair method
uses two directional mics symmetrically angled from the center line [as shown in
the slide]. Instruments in the center of the ensemble produce an identical signal
from each microphone. During playback, an image of the center instruments is
heard midway between the stereo pair of loudspeakers. That is because identical
signals in each channel produce a centrally located image. If an instrument is off
center to the right, it is more on axis to the right-aiming mic than to the left-aiming
mic; therefore, the right mic will produce a higher-level signal than the left mic.
During playback of this recording, the right speaker will play at a higher level than
the left speaker, reproducing the image off center to the right, where the
instrument was during recording. The coincident array codes instrument positions
into level differences (intensity or amplitude differences) between channels.’
(Bartlett & Bartlett, pp.79-80)

17
‘If an instrument is off center, it is closer to one mic than the other, so its sound
reaches the closer microphone before it reaches the other one. Consequently, the
microphones produce an approximately identical signal, except that one mic signal
is delayed with respect to the other. If you send an identical signal to two stereo
speakers with one channel delayed, the sound image shifts off center. With a
spaced-pair recording, off-center instruments produce a delay in one mic channel,
so they are reproduced off center.’ (Bartlett & Bartlett, p.82)

18
A specific version of the near-coincident technique—called the ORTF system, after the Office de Radiodiffusion-Télévision Française (a French broadcaster)—uses two cardioid microphones angled 110° apart and spaced 17 cm apart.

19
20
With stereo microphone techniques:
-You capture a lot of the venue’s natural acoustics, e.g. reverberation
-The balance between instruments in the recording—assuming you’ve positioned
the microphones sensibly—will be a reasonably accurate reflection of the balance
between instruments ‘in real life’

These are characteristics that might be desirable, for example if you want to create
a ‘natural’ sounding recording that captures some of the acoustic characteristics of
the performance venue.

On the other hand, there are other situations where you may not want to capture the natural ‘ambience’ of the venue. And, in a stereo microphone recording, the balance between instruments, once recorded, cannot be adjusted (at least, not particularly easily or effectively), and in some situations this might be restrictive…

21
22
In such a scenario (as described in the quote on the previous slide), a
microphone—usually directional—is placed close to each sound source, the idea
being that this reduces the amount of spill, or ‘bleed,’ from other sources.

For example, a drum-kit may be ‘miked up’ with eight directional microphones, one each for bass-drum, snare, hi-hat, three tom-toms and two cymbals. These microphones are positioned so that, ideally, each microphone receives only the sound of the drum (or cymbal) at which it is pointing, minimising sound picked up from any of the other drums or any other instruments in the room. At the same
time an acoustic guitar and electric bass may be ‘close miked’ with a single
directional microphone each, so that the guitar’s microphone—ideally—receives
only the sound of the guitar, and the bass’s microphone receives only the sound of
the bass.

Microphones, when positioned in this way, are called ‘close microphones’ or ‘spot
microphones,’ and the technique is sometimes called ‘close miking,’ or ‘spot
miking.’

In practice, of course, there will always be a certain amount of ‘bleed’ between microphones—but the idea of close miking is that the amount of bleed is minimised.

23
What we have, then, is ten microphones in total—eight for the drum-kit, plus guitar and bass—each representing a different sound source in isolation. Because we have multiple microphones this kind of technique is sometimes called a ‘multi-mike’ technique. We might record each of the microphone signals separately, on multiple audio tracks, and that would be a ‘multi-track’ technique.

The advantage of multiple microphones, and multiple tracks, is that they can be controlled separately. We can set the level of each microphone, or each recorded track, independently, so that we can control the relative balance of instruments in a mix. We can also apply all other kinds of processing to the tracks independently—such as reverb, filtering, and so on.
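Setting independent levels and summing tracks into a mix is, at its core, just a weighted sum of sample buffers. A minimal sketch (the tiny two-sample ‘tracks’ are purely illustrative; real tracks would be full-length audio arrays):

```python
import numpy as np

def mix_tracks(tracks, gains):
    """Sum equal-length mono tracks into one mix, each at its own level."""
    mix = np.zeros_like(tracks[0])
    for track, gain in zip(tracks, gains):
        mix = mix + gain * track
    return mix

# Two tiny illustrative 'tracks':
t1 = np.array([1.0, 1.0])
t2 = np.array([1.0, -1.0])
print(mix_tracks([t1, t2], gains=[0.5, 0.5]))  # [1. 0.]
```

Changing a gain value changes that track’s contribution to the mix without touching the others—which is exactly the independence the paragraph above describes.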

We can also individually control the stereo positions of the multiple tracks, which
brings us on to the second method of producing inter-channel differences…

24
In a live-sound scenario—or in an analog studio—the 10 microphones in the
scenario just described would be hooked up to a mixing desk (shown in the slide).
Ten white ‘channel faders’ would be used to individually control the amplitude of
each of the 10 microphone signals. However, the relevant control for the purposes
of our present discussion is the control labelled ‘pan’ (also shown on the slide).

By turning the ‘pan’ control fully to the left, the signal will be sent only to the left
loudspeaker. By turning the control fully to the right, it will be sent only to the right
loudspeaker. In the middle, the signal will be sent equally to both loudspeakers.
The pan control, in other words, controls the proportion of the audio signal that is
sent to the left and right stereo channels.

The pan control, that is, produces an inter-channel amplitude difference—not by positioning microphones, but electrically.

The pan control is sometimes called a ‘panpot,’ which is short for ‘panoramic
potentiometer.’ (A potentiometer is the name of the electrical component in
question.)

Remember, in our imaginary scenario, we have 10 microphones, 10 channels of audio, and 10 panpots, and by ‘panning’ these individual channels separately we can build a complete stereo image artificially. We could, for example, have the drum-kit spread across the entire stereo panorama, or—on the other hand—we could have the entire drum-kit positioned off to the left. We could position the bass guitar in the middle and the acoustic guitar off to the right, and so on. We could even simulate movement from left to right by moving the relevant panpot while the performance is in progress, or while the recording is playing back.

25
In multi-track digital audio software—such as Cubase, Pro Tools, or Logic—the
principles of an analog mixing desk, including pan controls, are imitated.

In Cubase, the pan control for each channel—shown magnified in the slide—is simply a blue line that can be moved from left to right, changing the level of the signal sent to the left and right channels in the same way as a panpot on a mixing desk. (In the slide the Cubase pan control is set fully to the left.)

26
27
