
There are many different features of psychoacoustics, which is the study of how the brain receives, perceives, and interprets sound. I will be going into depth on the physics of sound, the principles of musical instruments, the mechanisms of human hearing, and the acoustic characteristics of spaces.

Sound is omnidirectional, meaning the sound energy is dispersed in all directions across a medium rather than arriving from a single direction. Although it is omnidirectional, we are able to tell where a sound is coming from, due to the compression and rarefaction of the waveforms, with help from the reflections in the room. For example, when someone claps, they momentarily increase the density of air molecules next to their hands, which gives the initial sound. Because there was an increase in air molecules in that area, the next area in the medium is left with fewer air molecules. Where there are more air molecules it is called compression, and where there are fewer it is called rarefaction. In musical terms, this is represented in waveforms.

There are many different characteristics of waveforms that change the overall sound. For example, the amplitude is the intensity of the compression: a larger amplitude results in an increase in volume, measured in decibels. Additionally, the width of each wavelength will change the frequency. A wavelength is the distance from a specific point in one cycle of compression and rarefaction to the same point in the next cycle. The frequency is the number of completed cycles in a second, and it is measured in Hertz. So say we played a standard A note, with a frequency of 440Hz: this would mean there were 440 completed cycles of compressions and rarefactions in a second. If the wavelengths are closer together, the frequency and therefore the pitch increase. 880Hz is the octave above 440Hz, which means the wavelengths are closer together than they were before. The same applies to lowering the pitch: at a lower frequency, for example 220Hz, the wavelengths are further apart, and the pitch is an octave lower. A minimal calculation of this relationship is sketched below.
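Here is a minimal Python sketch of that relationship, using the 344 m/s speed of sound in air quoted later in this text; the note choices are illustrative.

```python
# A minimal sketch of the frequency/wavelength relationship described above,
# using the 344 m/s speed of sound in air quoted later in this text.
SPEED_OF_SOUND = 344.0  # metres per second, in air

def wavelength(frequency_hz: float) -> float:
    """Wavelength in metres: lambda = v / f."""
    return SPEED_OF_SOUND / frequency_hz

for f in (220.0, 440.0, 880.0):  # A one octave below, concert A, A one octave above
    print(f"{f:>5.0f} Hz -> wavelength {wavelength(f):.3f} m")
# Each octave doubles the frequency and halves the wavelength:
#   220 Hz -> 1.564 m, 440 Hz -> 0.782 m, 880 Hz -> 0.391 m
```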
As well as having a waveform with a corresponding frequency, each sound has an envelope, which you can analyse and, in some cases, change. Each envelope has four parts, known as the ADSR (Attack, Decay, Sustain, and Release) envelope. This shapes the start, core, and end of the sound, and can be adjusted to get numerous desired effects. For example, a short attack with a short decay, sustain, and release would give you a pluck. On the other hand, if you had a long attack and sustain, you would get a sweeping sound that lasts for a while, similar to strings or pads. You can also change the wavetable shape to edit the sound, and you can set the synth to a higher or lower octave, by any amount you like, on any of the different oscillators. All of these features can be found in the ES2 synth in Logic Pro X, for example; however, I used Native Instruments Massive for this demonstration. In addition to how you want your track to sound, you have to keep in mind whether any of your tracks end up phasing in the mix, whether intentionally or by accident. A minimal sketch of an ADSR envelope follows.
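To illustrate the idea, here is a conceptual Python sketch of how a linear ADSR amplitude envelope could be generated. This is a sketch of the concept only, not the actual engine inside Massive or the ES2; the numpy dependency and all parameter values are my own illustrative assumptions.

```python
import numpy as np  # assumed available; a conceptual sketch, not Massive's or the ES2's engine

def adsr(attack, decay, sustain_level, release, sustain_time, sr=44100):
    """Build a simple linear ADSR amplitude envelope; times are in seconds."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)           # rise to peak
    d = np.linspace(1.0, sustain_level, int(decay * sr), endpoint=False)  # fall to sustain
    s = np.full(int(sustain_time * sr), sustain_level)                    # hold while note is on
    r = np.linspace(sustain_level, 0.0, int(release * sr))                # fade out
    return np.concatenate([a, d, s, r])

# A "pluck": short attack, short decay, no sustain, short release (illustrative values).
pluck = adsr(attack=0.005, decay=0.15, sustain_level=0.0, release=0.05, sustain_time=0.0)

# A pad/string-like sweep: long attack and a held sustain.
pad = adsr(attack=1.5, decay=0.5, sustain_level=0.8, release=2.0, sustain_time=3.0)
```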



Sometimes when I DJ and beatmatch the next song with the last one, the kicks occasionally cancel out. This is because they are at the same frequency with the same amplitude, but at opposite positions in their cycle, meaning they are completely out of phase.






Other times, the waveforms don't exactly line up, and the audible effect is smaller. Sometimes if the kick has been layered, you can get some cancellation in parts of the kick where different frequencies have been used. But for the most part, when tracks are partially out of phase, you will have either no change in volume or a slight dip in volume where the waveforms don't exactly line up.

You can also have a scenario where the frequencies are the same, with the same amplitude and the same position in the cycle. This is where they are in phase, and it will result in a doubling in volume.
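To make the phase scenarios above concrete, here is a minimal Python sketch, assuming numpy; the 50 Hz tone standing in for a kick is an illustrative choice, not taken from the text.

```python
import numpy as np  # a minimal sketch of the phase scenarios described above

sr = 44100
t = np.arange(sr) / sr                         # one second of time
kick = np.sin(2 * np.pi * 50 * t)              # a 50 Hz tone standing in for a kick

in_phase  = kick + np.sin(2 * np.pi * 50 * t)          # same position: amplitudes add
out_phase = kick + np.sin(2 * np.pi * 50 * t + np.pi)  # half a cycle apart: they cancel

print(np.max(np.abs(in_phase)))   # ~2.0, roughly a doubling in level
print(np.max(np.abs(out_phase)))  # ~0.0, complete cancellation
```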

This is very important when producing or mixing. It is pivotal that the mix is as clean as possible; we can't have parts of the mix cancelling out other parts, or doubling them in volume, because it will throw everything off. In Logic Pro X, if you bounce a MIDI region to an audio file, you will then have the waveform to work with. Here, you will be able to compare that waveform to the rest of the waveforms in your project, and not only hear if anything gets phased, but see it visually as well.
With regard to this, you also need to keep the harmonics in mind, as they help keep the project in key and in phase. By playing in key, you will be using notes whose frequencies are simply related, and if they are played at the same velocity, you will have similar waveforms. This means the compressions and rarefactions will sit in similar areas, giving a smooth and clean mix. If you aren't playing in key, you will have numerous unrelated frequencies, which could result in parts drifting out of phase or completely out of phase, and that won't sound as nice as an in-phase, in-key project will. A small sketch of this harmonic overlap follows.
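As a hedged illustration of that overlap, two notes a perfect fifth apart share many harmonics; the note choice below is illustrative, not from the text.

```python
# A hedged illustration of why in-key notes blend: their harmonic series overlap.
# A (220 Hz) and E (330 Hz) are a perfect fifth apart (a 3:2 frequency ratio).
a_harmonics = {220 * n for n in range(1, 13)}
e_harmonics = {330 * n for n in range(1, 13)}
print(sorted(a_harmonics & e_harmonics))  # [660, 1320, 1980, 2640]: shared partials
```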


There are many different types of musical instruments, such as strings, wind, brass, drums, and percussion. Each type of instrument has different characteristics, which identify which group it belongs to. Some characteristics include frequency range, structure, and components. Each type of instrument has a different frequency range. The piano has the widest, as it can play notes as low as 28Hz and as high as 3951Hz. Although other string instruments don't have as wide a range as the piano, it is clear that many of the string instruments have a wider frequency range than other types of instruments.


Without vibrations in air, we would have no sound. But for wind instruments, it isn't as easy as hitting a drum skin or plucking a string. Wind instruments generate their sound and vibrations in a couple of different ways: one is by vibrating a reed in the mouthpiece; the other is by the musician moving their lips in different ways to generate different vibrations in the air. Either way, the air then resonates in the cylinder of the instrument and exits, producing the sound. You always see wind instruments being separated and cleaned, as it is imperative they stay clean, and not just from a hygiene point of view: if the instrument is dirty, the waveform produced won't be as neat, producing a harsher sound, whereas a clean instrument produces a clean waveform that is easier on the ear. Some examples of wind instruments are the flute, clarinet, oboe, piccolo, and bassoon.
In the mouthpiece of a reed instrument, the tuning wire can be moved, which in turn makes the reed move; this gives the waveform for the instrument to produce. Originally, wind instruments were made of wood, hence their old name, woodwinds. Nowadays, they are made of metal, wood, plastic, or a combination of the three. These vibrations in air are carried down a clean cylinder, as mentioned before, and exit, producing the harmonised notes being played. Later on, we will see that cylinders appear in other types of instruments for the same reason: to carry the vibrations in the air until they exit the instrument. This all happens very fast, as the speed of sound is about 344 m/s when air is the medium; in denser mediums such as water or metal, it actually travels even faster. Additionally, across all instruments, a rule of thumb is: the bigger the instrument or air column, the lower the pitch; and the shorter the instrument or air column, the higher the pitch. Compare, for example, a guitar with a ukulele. A minimal calculation for an air column is sketched below.
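As a rough illustration of this rule for an idealised open cylindrical air column, where the fundamental is f = v / (2L); the tube lengths below are illustrative, not measurements of real instruments.

```python
# A rough illustration of the "longer tube -> lower pitch" rule for an idealised
# open cylindrical air column, where the fundamental frequency is f = v / (2 * L).
v = 344.0  # speed of sound in air, m/s, as quoted in the text

for name, tube_length_m in (("short tube (ukulele-sized)", 0.30),
                            ("long tube (guitar-sized)", 0.60)):
    fundamental = v / (2 * tube_length_m)
    print(f"{name}: fundamental ~ {fundamental:.0f} Hz")
# Doubling the length halves the fundamental: ~573 Hz vs ~287 Hz.
```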


Brass instruments are still played through a mouthpiece, but it doesn't have a reed. The musician vibrates their lips to push air into the instrument, and the faster the lips vibrate, the higher the frequency of the waveform produced, and in turn the higher the pitch. However, lip tension alone limits the notes that can be produced, hence the introduction of valves and tubing. We all know the trumpet has a characteristically high pitch, but when its valves are pressed, the air is routed through extra lengths of tubing, lengthening the vibrating air column, lowering the frequency and, in turn, lowering the pitch. Trombones are well known for the slide. When the slide is pulled towards the mouthpiece, it shortens the length of the tube, reducing the amount of air in the tube and increasing the frequency and the pitch; when it is pushed away from the mouthpiece, the tube lengthens, which decreases the frequency and the pitch. Another noted brass instrument is the tuba, well known for its massive size. Because of that size, there is a very long column of air in the instrument, giving a very low frequency and pitch.
String instruments can be misleading due to their wooden body; however, the wooden body has a purpose. The wood is hollow, which allows maximum vibration to occur once the strings have been drawn with the bow. The bow has a wooden handle, strung with hair from horses' tails. The strings themselves are usually made of nylon or steel, as these have a good timbre (timbre is simply the characteristic sound of the instrument). Sometimes, to achieve a different sound, the musician can pluck the strings with their fingers, or use the wooden side of the bow.
As mentioned previously, the smaller instruments have a higher pitch, due to their shorter, tighter strings and smaller bodies of air. This means that among the strings, the violin has the highest pitch, and the double bass has the lowest, as it is the biggest instrument in the category. The harp has two different sizes on each side, which gives it a good frequency range: the taller left-hand side has longer strings to produce lower frequencies, and the shorter right-hand side produces higher-pitched notes. When the musician plays with a bow, the sound has a different envelope shape compared to when the string is plucked. I mentioned previously that plucks have a short attack, decay, sustain, and release. The same is true of a piano: when a key is pressed, a hammer strikes the string, giving a short note, unless a sustain pedal is used. However, when strings are played with a bow, the attack is slower, with a longer decay, sustain, and release, because the string is being pulled into vibration rather than plucked with the fingers. With regard to non-orchestral strings, the same applies. However, basses aren't usually much bigger than guitars, although they can be; their low-frequency notes come from thicker strings and playing style. Both guitar and bass can be played with a plectrum or with the fingers.


Percussion instruments are any instruments that make a sound when hit or shaken. This implies that their envelope will consist of a short attack, decay, sustain, and release. A challenge when playing percussion instruments is hitting them in time, with the right velocity, in the right area, and having them tuned in key with the rest of the song. Some percussive instruments can be tuned; others cannot, which means their frequency range is limited. Their main role is to add rhythm and excitement to the song, by providing layers under the main piece. Percussionists tend to play more than one instrument; they are rarely restricted to one type of percussion instrument, unlike musicians from other families. Although I mentioned the piano in the string family, it could also be classed as a percussion instrument, because it produces sound by a hammer hitting strings. Some other examples of percussion instruments are drums, cymbals, tambourines, shakers, triangles, glockenspiels, and xylophones.

Without a doubt, the ear is the most important part of music. Without it, we would not be able to hear anything, or appreciate the beauty of music. As musicians, it is imperative we understand the role each part of the ear plays, and how we can look after our ears so they serve us at full potential for as long as possible. I will go into depth on each part of the ear, how it interprets sound, and how it can be protected.
There are three core parts to the ear: the outer ear, middle ear, and inner ear. Working from the outside in, we will begin with the outer ear, which will most likely be more familiar to most people than the rest of the ear. The outer ear consists of the earflap, or pinna. This provides protection for the middle ear, helping to avoid damage to the tympanic membrane, or eardrum. The outer ear is specifically shaped to catch sound waves and channel them into the middle ear. Due to the length of the external auditory meatus, or ear canal, it naturally amplifies sounds with frequencies around 3,000Hz. Until the sound reaches the eardrum, it still travels as compressions and rarefactions in the air; only when it reaches the eardrum is it turned into mechanical vibrations.
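As a back-of-envelope check of that 3,000Hz figure, treating the ear canal as a quarter-wave resonator (a tube closed at the eardrum end) predicts a resonance in this region. The canal length used below is a typical textbook figure, not a measurement from this text.

```python
# A back-of-envelope check of the ~3,000 Hz claim above, treating the ear canal
# as a tube about 2.5 cm long, closed at the eardrum end (a quarter-wave resonator).
# These figures are illustrative assumptions, not measurements from the text.
speed_of_sound = 344.0   # m/s in air
canal_length = 0.025     # metres (~2.5 cm, a typical textbook figure)
resonant_freq = speed_of_sound / (4 * canal_length)
print(f"{resonant_freq:.0f} Hz")  # ~3440 Hz, close to the 3,000 Hz region
```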

The middle ear is an air-filled cavity containing the eardrum and a group of three small bones called the ossicles, which help to amplify the sound. These bones are referred to as the malleus, incus, and stapes, or the hammer, anvil, and stirrup, as that is what they look like. The eardrum is a durable, tightly stretched membrane that vibrates in the pattern of the compressions and rarefactions of the sound: a compression pushes the membrane inwards, whereas a rarefaction pulls it outwards, replicating the frequency of the wave entering the ear. The eardrum connects to the hammer, anvil, and stirrup, so as the membrane moves, the ossicles move in the same pattern at the same time. The stirrup then connects to the inner ear, transmitting the pattern of vibrations into the fluid there, which carries the same waveform onward. By the time the vibrations reach the stirrup, their pressure has been increased nearly fifteen times, which in turn increases the intensity of the vibration. This results in an increase in amplitude, meaning the sound has been amplified and becomes louder; this is how we can hear the smallest of sounds.

Finally, the inner ear consists of the cochlea and the semi-circular canals. The fluid and nerve cells in the semi-circular canals have no role in hearing; they are purely there to help you keep your balance. It may seem strange having your balance located in your ears, but the liquid there accounts for whether you feel dizzy or not. The cochlea is shaped similarly to a snail. It also contains liquid, but in addition it has around 20,000 minuscule hair-like cells that are pivotal to your hearing. As a sound wave enters the ear and works its way to the cochlea, the compression of the wave pushes the liquid over the hair-like cells and sets them to work. Together, these cells cover frequencies between 20Hz and 20kHz. When the frequency of a sound matches the frequency a cell responds to, that cell resonates with the wave at a larger amplitude. This triggers an electrical signal that passes along the auditory nerve towards the brain, where the information is interpreted so we understand what we have heard. If we don't immediately comprehend what we have heard, the brain is able to take the surrounding frequencies and use context clues as to what it was. Impressively, this all happens in fractions of a second.


Some factors of hearing, however, are to do with the brain rather than the physical ear itself. Psychoacoustics is the study of how sound is perceived psychologically. For example, the cocktail party effect: notice how, when you're talking to someone, you can only pay attention to your own conversation, while everyone else's conversation drowns out into background noise. This is because of the cocktail party effect. Your brain automatically tunes in to the frequencies of whatever you are paying attention to. It is the same when some people disappear into phone land: they are so engrossed in their phone that people who try to talk to them have to repeat themselves a few times over to break the cocktail party effect.
Another example of psychoacoustics is the Doppler effect. This is the apparent change in a sound's frequency over time, from the listener's point of view. For example, when you are standing by a busy road and a car whizzes past you, its engine sounds as if it is in two pitches: a higher pitch, which then sweeps down to a lower pitch. But to the people in the car, the engine is one constant tone. So how can this be? I mentioned earlier that sound is omnidirectional, meaning the engine is emitting sound all around the car. However, in the direction the car is travelling, those sound waves get squashed together by the car's motion, so the compressions and rarefactions become closer together, the frequency is higher, and the pitch is higher. Equally, the sound waves behind the car are stretched out, increasing the distance between each cycle of compression and rarefaction, so the frequency is lower, which in turn results in a lower pitch. A minimal calculation is sketched below.
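As a hedged illustration, the standard moving-source Doppler formula reproduces this pitch sweep; the engine tone and car speed below are hypothetical values, not taken from the text.

```python
# A hedged sketch of the Doppler shift using the standard moving-source formula;
# the engine tone and car speed are hypothetical values, not from the text.
v = 344.0         # speed of sound in air, m/s, as used elsewhere in this text
f_engine = 200.0  # hypothetical engine tone, Hz
v_car = 30.0      # car speed, m/s (roughly 108 km/h)

approaching = f_engine * v / (v - v_car)  # waves squashed ahead of the car
receding    = f_engine * v / (v + v_car)  # waves stretched out behind it

print(f"approaching: {approaching:.0f} Hz, receding: {receding:.0f} Hz")
# approaching: 219 Hz, receding: 184 Hz -> the downward pitch sweep you hear
```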
Another example of psychoacoustics is the Haas effect. This describes our ears' ability to localise the direction of a sound even among the reflections in a room. We've all heard a loud bang in a room before, and seen everyone's head rapidly turn in the direction of the source. The sound waves are projected in every direction, but some reach us more directly than others, and we can tell the direction because that information arrives at our ears first. Not only this, but every other sound wave will be coming at us from an angle due to it bouncing off surfaces, so our brain can interpret the angle we receive the waves at and work out the general direction of the sound. Sometimes, producers have sounds in a recording that they don't want but can't necessarily delete. When this happens, they can mask the sound with another sound, making the undesired sound not stick out as much, or seem not to exist at all.
It is important to remember phase when talking about the next topic of psychoacoustics: beats. Essentially, this is when you have notes at very similar frequencies playing at the same time at the same amplitude. Initially, they will be in phase, as the frequencies are so close to each other. However, as time goes on, the different waveforms drift out of phase, producing a waving wobble in volume, and later become completely out of phase, until the two waves catch back up to where they were positioned at the beginning. A minimal sketch follows.
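Here is a minimal Python sketch, assuming numpy; the 440 Hz and 442 Hz pair is an illustrative choice that produces a 2 Hz beat.

```python
import numpy as np  # a minimal sketch of beats between two close frequencies

sr = 44100
t = np.arange(2 * sr) / sr                    # two seconds of time
mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 442 * t)

# The two waves drift in and out of phase, so the mix wobbles in amplitude at
# the difference frequency |442 - 440| = 2 Hz: two loud/quiet cycles per second,
# after which the waves have caught back up to their starting alignment.
print(np.max(mix), np.min(mix))  # peaks near +/-2 where the waves align
```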


As well as protecting your ears, it is also important that you protect your workstation. As a musician, your ears are the most important part of your career. To keep them from losing their ability to hear, you need to look after them by avoiding listening to unnecessarily loud music. In some scenarios that is impossible, so to prevent damage you can wear ear plugs to lessen the intensity of the sound waves entering your ears. When mixing in the studio, it is important to achieve a clean mix without clipping or distortion, because not only does it not sound good, it isn't healthy for your ears either. Clipping and distortion occur when you reach the red zone in the mixer. This could result in an uneven mix, and could possibly mask other, more subtle parts of the mix. It is a good general idea to put compressors on your software instruments. A compressor lets you set a threshold for your sound, which tends to lower the volume; but not to worry, as this can be adjusted in the compressor as well, because you can turn up the make-up gain, bringing the volume back up to where the artist wishes. A small sketch of this threshold and make-up behaviour follows.
Clipping and distortion aren't healthy for your speakers either, just as for your ears; they cause the speakers to work harder than they need to or should. Another health and safety point for your speakers: always lower the volume, then turn them off, and only then disconnect them. When people just pull out the connection without taking these precautions, you hear a popping noise, and this could blow the speakers. The same goes for connecting them: always make sure they are turned off with the volume down first. Another health and safety consideration, out of respect for others, is hygiene. There is nothing worse than working in a confined space with other people when one of you smells. It sounds silly, but this could be off-putting for some people and block their ideas. Continuing from this, music equipment is very expensive, so it is important you look after it well. Limit the number of people in a studio to as few as possible, and avoid eating and drinking while working; this will keep the workspace in as good a condition as possible. Also consider cleaning your equipment at least weekly to ensure everything runs as smoothly as it possibly can. Obviously most equipment requires electricity, which can cause many hazards itself. Firstly, make sure you don't overload one socket, which could blow a fuse; have numerous sockets around the studio to avoid overload. Having plenty of sockets will also limit trip hazards. Make sure all wires are as neat as possible, whether that means taping them to the floor or buying extension cables so the bulk of the wire can be hidden away neatly. Working with electricity can also bring fire hazards, so if you don't have a fire extinguisher in the studio, make sure you have one in close proximity. It is common knowledge that electricity and water don't mix well, so for electrical equipment use a CO2 extinguisher rather than water or foam.



There are many different areas in which music can be played; locations range from a recording studio, to a festival outside, to a concert indoors. However, no matter where the music is being played, you always have to keep some key features in mind. For example, RT60, which is essentially the amount of time reverb takes to decrease by sixty decibels. I went to the music venue at college to show an example of how this can be worked out. To do so, I had to take the measurements of the ceiling, floor, and walls, which allowed me to work out the volume and surface area of the room. I then had to take into consideration the type of material each part of the room was made of. Each material has a different absorption coefficient, which describes how much of the waveform gets absorbed by the material: 0 means no absorption at all, whereas 1 means the whole sound wave gets absorbed. Another feature you have to keep in mind is whether the room will have standing waves. This is when a sound wave is reflected back and forth between surfaces in a repeating pattern, causing a build-up in volume; it happens when the wavelength fits the room's dimensions, so the reflections reinforce the original wave rather than being absorbed. This can be reduced by acoustically treating the studio with materials that have a high absorption coefficient, such as upholstery or acoustic tile. A hedged sketch of the RT60 calculation follows.
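The text does not state the formula, but the standard way to estimate RT60 from a room's volume, surface areas, and absorption coefficients is Sabine's equation, RT60 = 0.161 V / A. The sketch below uses hypothetical dimensions and coefficients, not the actual measurements taken at the college venue.

```python
# A hedged sketch of estimating RT60 with Sabine's equation, RT60 = 0.161 * V / A,
# where V is the room volume (m^3) and A is the total absorption: the sum of each
# surface's area times its absorption coefficient. All dimensions and coefficients
# below are illustrative placeholders, not the actual college venue measurements.

length, width, height = 10.0, 8.0, 4.0    # metres (hypothetical room)
volume = length * width * height           # m^3

surfaces = [
    # (area in m^2,                               absorption coefficient, 0..1)
    (length * width,                              0.30),  # floor: carpeted/upholstered
    (length * width,                              0.60),  # ceiling: acoustic tile
    (2 * length * height + 2 * width * height,    0.05),  # walls: painted plaster
]

total_absorption = sum(area * coeff for area, coeff in surfaces)  # sabins (m^2)
rt60 = 0.161 * volume / total_absorption
print(f"Estimated RT60: {rt60:.2f} s")     # ~0.65 s for these numbers

# The lowest standing wave (axial room mode) fits half a wavelength between the
# most distant pair of walls: f = v / (2 * length) = 344 / 20 = 17.2 Hz here.
first_mode = 344.0 / (2 * length)
```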



Considering my bedroom would be transformed into a recording studio, I would have to consider how the shape and materials used could affect the overall recording. I would design the studio with materials that have a high absorption coefficient. This way, the waves produced by instruments or people in the studio would be absorbed rather than reflected, preventing standing waves and a build-up in volume; I'd avoid feedback this way as well. Moving on to the hardware and software, I would have both Ableton Live 9 and Logic Pro X. Why do I need two DAWs? To begin with, I'd use a Novation Launchkey 49 as my MIDI keyboard, and this doesn't map well with Logic Pro X; however, it does work well with Ableton. Additionally, Ableton has more editing features and live performance capabilities than Logic, though I do prefer Logic for my overall workflow. I'd also have a Native Instruments Maschine MK2, an advanced drum machine with numerous pages and sample options for recording and editing. Finally, I'd need an audio interface to connect my condenser microphone, letting me record waveforms into Logic or Ableton.
