
Introduction to the Recording Arts
By Shannon Gunn

Cover image: SSL console. Image source: https://en.wikipedia.org/wiki/Solid_State_Logic

Copyright 2015 Shannon Gunn


ISBN-10: 1517246911
ISBN-13: 978-1517246914

DEDICATION

This book is dedicated to high school students who wish to study the art of audio
production. In the words of Harry Watters, "Always aim for the stars, and even if
you don't reach it, at least you'll land on the moon."

CONTENTS

Dedication
Chapter 1: Curriculum Outline
Chapter 2: Introduction to Advanced Music Technology
Chapter 3: Physics of Sound
Chapter 4: Electronics Primer
Chapter 5: Microphones
Chapter 6: Cables
Chapter 7: Sound Boards
Chapter 8: Digital Terms
Chapter 9: DAW Processing and Sound Effects
Appendix A: Skills-based tutorials and activities


CHAPTER 1
ADVANCED MUSIC TECHNOLOGY CURRICULUM OUTLINE
Unit 1: Introduction to Advanced Music Technology
1. What is Music Technology?
2. Careers in Music Technology
3. A Brief History of Recording
4. Analog vs. Digital recording
5. Introduction to class recording equipment
6. Sound Board Level 1 Certification (done in person, not in book)
a. Student can turn the system on and off correctly.
b. Set levels for wireless mics.
c. Set levels for mp3 and CD player.
d. Set up a projector and adjust the screen with keystone
settings.
e. Understand signal flow.
f. Troubleshoot feedback.
g. Troubleshoot issues such as the LR button or power issues.
h. Set up a portable PA system with wired mic, CD, and mp3
player.
Unit 2: Physics of Sound - Sound Waves
1. Sound waves
2. Compression and Rarefaction
3. Frequency
4. Resonant frequency (optional)
5. Frequency Spectrum
6. Law of Interference
7. Pure and Complex Tones
8. Waveforms
9. Nodes vs. Antinodes
10. Harmonics (optional)
11. Overtones (optional)
12. Octaves and the 2nd, 4th, and 8th Harmonics (optional)
13. Amplitude
14. Decibels
15. Psychoacoustics - How we hear
16. Noise Induced Hearing Loss (NIHL)
17. Wavelength
18. Speed of Sound
19. Phasing (optional)
20. How sound waves act in different materials (optional)

Unit 3: Electronics Primer


1. Electricity
2. Voltage
3. Current
4. Series and Parallel circuits
5. Impedance
6. Impedance Matching
7. Power
Unit 4: Microphones
1. Types of Microphones
2. Microphone polarity patterns
3. Microphone setup for guitar, voice, instruments, drums
4. Phasing
Unit 5: Cables
1. Analog vs. Digital cables
2. Parts of a Cable
3. Balanced and Unbalanced
4. Mic stands and mic clips
Unit 6: Sound Boards
1. Signal flow
2. Home studio setup
3. Mixing consoles
4. Inputs and Outputs
5. Channel strip
6. Bus/sub mix
7. Inserts
8. Aux output
9. PFL and AFL
10. Groups
11. DAW
12. Send mix


Unit 7: Sound Boards Level 2 Certification


1. Understand signal flow for various outputs such as groups or one
monitor mix.
2. Be able to set up and adjust individual tracks for a single monitor
mix.
3. Plug wired mics into a snake and understand how to adjust them on
the sound board.
4. Understand how and when to use the pad button, phantom power,
and roll off buttons.
5. Understand the different mics and cables and when to use them.
6. Troubleshoot issues such as tape input/output, cables, group
buttons.
7. Understand how and when to use stereo vs. mono tracks.
Unit 8: Digital Electronics
1. Analog vs. Digital
2. Digital Electronics
3. Sample Rate
4. Bit depth
5. Buffer size
6. Latency
7. Nyquist Frequency
Unit 9: DAW
1. DAW tracks
2. Send mix
3. Automation
4. Dynamics processing - compression
5. Reverb


How to Use This Book


This is an explanation designed for music teachers who wish to use this book for
music technology classes but may not have a background in audio recording.
Basically, audio production has three strands: theory, practice, and composition.

Diagram: Music Technology as the overlap of three strands: Theory, Practice, and Composition.
The Practice strand involves learning how to set levels, run a sound board,
troubleshoot, and set up a sound system. The Theory strand involves understanding
the physics of sound, acoustics, and electronics. The Composition strand involves
the process of creating sounds with MIDI or loops, which is a rising interest among
many adolescents. It would be incomplete to teach one without the others. For
instance, you need theory (physics of sound and electronics) to understand how to
properly use the sound board (practice). Each of these strands informs the
others, and each is necessary for a well-rounded education in music
technology.
Most careers in audio production focus more on theory and practice and
less on composition. Therefore, this book is structured more toward the
theory/practice side. This book is actually the second book in a two-book series.
The first book teaches more of the composition side, with assignments geared
toward creating songs with loops, MIDI, and synthesizers. In the first book,
students learn how to construct sounds from scratch using oscillators and synth
plugins, as well as do some light recording with podcasting. The goal of this book is
to get students more comfortable running and troubleshooting a live sound system,
as well as to give them training in the art of audio engineering.

Every class may have three parts: new theory concepts, applying those concepts to
a sound board, and then working on a computer with music technology software.
Some of the activities for the practice strand have been incorporated into this
book.
At the end of each lesson, an activity will be listed which would be used to apply
that lesson to practical use in a live sound setup.
If you are a teacher and you wish to use this book in your classes, you are welcome
to access the keys, tests, quizzes, source audio files, and PowerPoint presentations
used in class. Please use a valid teacher email address and contact Shannon Gunn at
jazztothebone@gmail.com with a subject line such as "Request for Book Keys."
She will send you a zip file with everything on it and may ask you to write a
review on Amazon in exchange. Amazon reviews will help the book gain traction
as others look for resources to teach music technology. Thanks for your
interest, and hopefully this book can be helpful to you!
Regarding the structure of the course, here is a suggested Curriculum Map:
Weeks 1-2
  Theory: Careers; History of Recording; Analog vs. Digital
  Practice: Sound Boards Level 1; Portable PA systems
  Composition/Computer Skills: 1st Song; Skills Tutorials #1 and #2 in Appendix A (splitting clips, editing)

Weeks 3-4
  Theory: Frequency; Frequency Spectrum
  Practice: How to set up a projector and adjust keystone
  Composition/Computer Skills: Skills Tutorial #3 (Crossfade) and #4 (Stereo vs. Mono) in Appendix A

Weeks 5-6
  Theory: Law of Interference; Pure and Complex Tones; Nodes/Antinodes; Waveforms; Amplitude
  Practice: EQ on Sound Board
  Composition/Computer Skills: Skills Tutorials #5 and #6 in Appendix A (EQ demonstration, Telephone Voice); Skills Tutorials #7 and #8 in Appendix A (spatial aspect of sound, listening exercises)

Weeks 7-8
  Theory: Noise Induced Hearing Loss (NIHL); Psychoacoustics; Wavelength; Speed of Sound
  Practice: Hearing Test
  Composition/Computer Skills: Halloween or Winter Song

Week 9
  Theory: Review; Unit 1 Physics of Sound Test
  Composition/Computer Skills: Skills Tutorials #9, #10, and #11; adjust levels on a previously recorded song as practice for setting levels

Weeks 10-14
  Theory: Microphones Unit; Law of Interference; Phasing; Diffraction
  Practice: Set up mics for guitar, vocals, instruments, and drums
  Composition/Computer Skills: Record vocals and instruments into the computer

Weeks 15-18
  Theory: Cables Unit
  Practice: Set up and use various cables
  Composition/Computer Skills: Video Game Project (propose a video game, make up background music)

Weeks 19-21
  Theory: Electronics Unit
  Practice: Take apart and fix cables
  Composition/Computer Skills: Present Video Game Projects to class

Weeks 22-23
  Theory: Digital Electronics Unit
  Practice: Listen to the same song on various formats (analog/digital)
  Composition/Computer Skills: Recording projects

Weeks 25-28
  Theory: Sound Boards (More Advanced)
  Practice: Sound Boards Level 2 Certification
  Composition/Computer Skills: Song for Somebody

Weeks 29-36
  Theory: Processing - EQ, Reverb, Compression
  Practice: Practice processing with a sound board
  Composition/Computer Skills: Apply processing to student recording projects; Graduation Song


Composition/Music Assignments:
1st Year Assignments
1. First Song - Using loops and the keyboard, create your own first song.
2. Halloween Song - Create a scary song using audio effects and loops.
3. Game Song - Create a song that is the introduction to a game.
4. Winter Song - Create a song inspired by the sounds of winter. May or may not be holiday oriented.
5. Ring Tone - Create a ring tone for your phone.
6. Superhero Song - Everyone in the class will write a superhero's name on a sticky sheet. Then these will be balled up and placed in a basket. Each class member will take one out of the basket and then write an original theme song for that character.
7. Song for Somebody - Create a song for someone you care about. Share it with them for Valentine's Day.
8. Podcast - Record a podcast with your team members about a topic of your choice. All topics must be school-appropriate.
9. Sample Song - Create a song that incorporates a vocal sample.
10. Synth Song - Create a song that incorporates a synth such as the MiniMoog.
11. Summer Song - Create a song for summer.
12. iPad Commercial - Create background music for an iPad commercial.

2nd Year Assignments


1. Commercial Music - Create background music for a commercial (not the iPad one).
2. Audio Editing Assignments - Sometimes musicians make mistakes while
they are recording. As an audio engineer, sometimes you need to be able to
combine multiple takes into one track. This assignment will teach the art of
the crossfade to splice tracks together, in addition to cropping and splitting
tracks.
3. Mixdown Assignments - Work on various songs with different tracks to set
levels, add reverb, add EQ, and add effects.
4. Video Game Project - The class has morphed into a Research and
Development department in a major game company. You will work in a
small team to come up with an idea, write out the scenes, and then create a
presentation with original background music that presents a new game. The
original background music must play across slides and relate to the scenes.
5. Graduation Song - Create a song themed to graduation.


Chapter 2
Introduction to the Recording Arts
Welcome to the recording arts! In this class you will learn the art of recording,
including the use of sound boards, mics, and cables. Additionally, you will learn
about how to do a mixdown properly, how to create a spatial sense of the sound,
and how to make your songs sound more produced. This course utilizes Mixcraft
5 software, which can be downloaded at http://acoustica.com. Assignments will
focus on skills such as running a sound system, audio editing, mixing, and
troubleshooting sound systems. Additionally, we will discuss the music industry,
which is constantly changing due to new technology.
What is Music Technology?
Music technology can have many definitions to many people. For some, it may
include more of a compositional slant, including the creation of new music. For
others, it may relate more to beats and hip-hop. For the purposes of this class,
the definition of music technology is focused on the art of live and studio
sound recordings. The skills involved in running live sound, troubleshooting
equipment, and audio editing are in high demand. Additionally, audio engineering
skills transfer very easily to video editing and video production. There are
opportunities in the music industry like never before, such as the creation of new
apps and resources for music-minded people. The industry is constantly changing
due to new technology.
Careers in Music Technology
All students should download and read the Berklee report: Music Careers in
Dollars and Cents, 2012. You can find this by clicking on this link here:
http://www.berklee.edu/pdf/pdf/studentlife/Music_Salary_Guide.pdf .


Generally, colleges and universities offer three different types of tracks for music
technology. The music technology program may include audio engineering, music
business classes, and/or composition. You will need to determine which of these
three strands interests you when looking at colleges and universities. Typically, you
have to get a degree in music to get a music technology-related degree. If the
study of classical music does not interest you, you can always get a degree in
business or a related field and then work in a music-related company. Additionally,
many colleges and universities are offering media arts degrees which are similar to
audio engineering but do not require the upper level classical music classes. There
are also programs at the local studios, usually ranging from nine months to two
years. Studios do not give you a degree, just a certification. The most important
part of your post-high school education is a quality internship. Be sure to look
for a professor or studio program that is well-connected in the industry and can
place you in a major or successful company. Internships tend to lead to jobs if you
work hard and do well.
Students who wish to become successful singer-songwriters or beat creators will
have difficulty finding a college program that will teach either of these two topics.
Basically, you have to network and meet people and work for free until there is
demand for your music. Once demand becomes overwhelming you can start
licensing beats according to the number of downloads allowed. Singer-songwriters
will have to utilize social media, email, text, and publicity to gain a following and
create demand for their songs. This track is very entrepreneurial.
Careers in the music industry may include:

Audio Engineering: live sound engineering, audio editing, audio tracking, mixing engineer, producer, mastering engineer, theatre production
Media Arts: audio/video editor, video producer
Music Business: marketer, publicist, A&R Representative (Artists and Repertoire), accountant, operations
Arts Management: development associate, marketing, management, facilities production, box office management
Film: music supervisor, audio editor, music composer, copyist, orchestrator

Here is a brief outline of a lecture regarding careers in the music industry.


1. What is a producer?
a. Visionary
b. Engineer/technical side
c. Motivator
d. Go-between
e. Coordinator

2. Audio Engineering
a. Live Sound Reinforcement
i. Sound and Lights
ii. Technical Theatre
b. Studio work
i. Mixer
ii. Mastering
iii. Audio tech
3. Film/TV/Video work
i. A/V Editor
ii. Sound Designer
iii. Foley
iv. ADR
v. Composer
1. Orchestrator
2. Arranger
3. Synthesizer
4. Installations
a. AV installer
5. Related Fields
a. Copyright Law (Intellectual Property)
b. Working in accounting, marketing, or management at a music-oriented
company, such as
i. Independent Record labels
ii. Sound Exchange, ASCAP, BMI, Copyright office, etc.
c. Radio
i. NPR
ii. Other radio stations - program manager, audio editor, etc.
d. DJ
e. Music Supervisor for film, tv
f. Every organization needs interns and secretaries
g. Every venue needs sound techs


A Brief History of Recording


Analog vs. Digital Recording
This class will focus on techniques used with digital recording. However, it is
important to understand the difference between analog and digital recording
formats before you get started.
Sound is mechanical energy. For centuries, people wanted to record that sound so
they could hear it back later. Thomas Edison invented the first successful device to
record sound, called the phonograph, in 1877. This device was created by attaching
a stylus to a rotating cylinder. When Edison shouted into the horn attached to the
stylus, it caused the stylus to move, which then created an imprint onto the cylinder
which was covered with tin foil. This tiny groove could then be played back by
another stylus attached to a horn that functioned as a speaker, reversing the
process. A stylus is like a needle.

Thomas Edison and his phonograph invention.

Phonograph with cylinder.


Cylinders for a phonograph.


Example of Edison's phonograph:
https://www.youtube.com/watch?v=lCej78LLudw
In the 1880s, scientists in Washington D.C. invented the Graphophone. This was
similar to the Phonograph, but it used a wax cylinder instead of tin foil.

Then, in 1889, a German immigrant to the U.S. named Emile Berliner invented the
Gramophone. The Gramophone was similar to the Graphophone in that it used
wax, but instead of cylinders it used a disc. The groove was etched onto a metal
zinc master disc covered in wax. The master disc could be reproduced onto a hard
rubber material which could then be mass produced.
For the first time, people could purchase music that they could listen to on
demand. The first gramophones did not require electricity: you would turn the
crank, which caused the record to rotate so that the groove could be read by the
stylus. This was called a "record" because it was a recording of sound.
Demonstration of gramophone player:
https://www.youtube.com/watch?v=AApsSZq0g-c

Eventually people wanted to listen to longer sessions on their recordings. The first
popular format was the "78," which was named as such because the disc would
rotate 78 times per minute, or 78 rpm. You could listen to about three to five
minutes on each side, depending on whether it was a 10-inch or 12-inch disc.
Then, after World War II, Columbia Recording Company started manufacturing
the first "long playing" record. This is the standard vinyl record that we know
today that turns 33 1/3 times per minute, or 33 1/3 rpm. The concept is similar to
Edison's phonograph, though. A stylus, or needle, lies on top of the disc, moves
with the groove, and is then amplified by a speaker.

Above are four kinds of phonograms with their respective playing equipment.
From left: phonograph cylinder with phonograph, 78 rpm gramophone
record with wind-up gramophone, reel-to-reel audio tape recording with tape
recorder and LP record with electric turntable. Photo from the exhibition "To
preserve sound for the future", showcased at Arkivens dag ("Day of the
Archives") at sv:Arkivcentrum Syd in Lund, Sweden, November 2012. Photo
credit FredrikT on Wikimedia Commons.
Much of the above information is sourced from the website
http://www.recording-history.org/recording/?page_id=12.


All of the above types of recordings are considered "analog" because they recorded
a direct copy of the sound energy to some sort of medium such as a disc or
cylinder. At the same time as the invention of the phonograph and gramophone,
sound recording began to evolve to record to tape for better audio quality. This
wasn't like sticky tape but was a type of plastic film with a magnetic coating. The
coating of the tape was made of iron filaments. A microphone was attached to a
stylus which would move with the sound energy. When the stylus moved, it caused
a disturbance in the organization of iron filaments. This could then be read back by
another stylus attached to a speaker, essentially. This is the same concept as the
phonograph but using magnetic tape instead of cylinders or discs to record the
sound.

Recording to tape was cheaper and easier than creating masters on metal discs
covered with wax. You could erase the tape with a magnet and then start over
again. Additionally, engineers figured out how to record two tracks on one piece of
tape, creating the first stereo recording. The word "stereo" indicates a different
signal for the left and right speakers. The Beatles championed the first multi-track
recording process by recording to four-track in 1963 and then eight-track in 1968.
This allowed the band to experiment with multiple takes, overdubs, and layering
multiple instruments. Before 1963, all recordings were made to sound like a live
performance and were typically mixed down to mono, or one track. At that time, an
audio engineer would have to get the levels exactly right before pressing record so
that when the sound was imprinted to tape, all the relationships between the levels
of instruments would be acceptable. In today's digital age, you can layer as many as
128 tracks at one time, and audio engineers can lay down tracks and then fix the
levels later if they are off.
Examples of analog recording mediums include reel-to-reel, 8-track, cassette tape,
and vinyl.


Reel-to-reel recorder:

8-Track Player and 8-Tracks:


(Source: CZmarlin on Wikimedia Commons)

The 8-track format was significant because it was the first tape format that
was available in a car. You could take your music with you on the road. The format
wasn't highly sustainable, though, because the tape would get twisted and ruined
by the design of the machine that played it.


Cassettes
Cassettes became popular because they were much more portable than a record
and more reliable than an 8-Track.
This is a picture of the inside of a good quality cassette recorder.

Inside you can see the magnetic head which can read, erase, or record the audio
signal onto the iron filaments on the magnetic tape.

Cassette side view.

Cassette top view.


In the 1980s, the Walkman became popular as a way to listen to your songs on a
portable cassette player.

Additionally, in the late 1980s, people would record songs to a blank cassette in a
certain order. This was called a "mix tape." The term "mix tape" did not indicate
any sort of desire for fame; it was used for personal listening and at social
events. You could record from one cassette to another using a dual cassette deck.
You could also record directly from the radio.
All of the above types of audio formats are considered analog because the format
represents an exact representation or imprint of the sound. Digital recording was
introduced in the late 1980s and is created when all sound wave information is
converted into binary code made up of ones and zeroes.
Digital Recording
Digital recording is different from analog recording because of the concept of
"non-linear editing." Basically, if you wanted to change an analog recording, you
had to re-record it on new tape or overdub the original. This can get very expensive
with new tape required for each take. With digital recording, you can edit, change,
or add to a recording without changing the original. The implications for recording
technology are huge. As computer technology has grown, the number of possible
tracks has grown tremendously. It doesn't cost extra to re-record or add layers to a
recording because the hard drive space is generally available. Studios have had to
consistently upgrade equipment to keep up with the latest technology. A console
that would take up an entire room in the 1970s can now be replicated on a tablet
that can be held in one hand.
Digital recording formats used to include ADAT, which is a type of digital tape, but
now most studios record to a hard disc on a computer.
Digital consumer formats include audio files and CDs. Please refer to the Digital
Electronics unit for more information on digital recording.


Vocabulary for this Chapter:


Analog: An exact representation or imprint of the sound.
Digital: When all sound wave information is converted into binary code made up
of ones and zeroes.
Phonograph: Edison's first invention that allowed sound to be recorded and then
played back from a cylinder.
Gramophone: Berliner's invention which played back sound from a disc.
Reel-to-reel: A recording device which recorded sound to tape.
8-Track: The first portable format for recorded music. Played back sound from
a tape and could hold eight tracks. 8-track players were installed in cars, so
people could listen to their own music while driving for the first time. (Before 8-tracks, they only had FM or AM radio.)
Cassette: A small analog format that plays back audio from a tape.
CD: Compact disc. A digital disc format for audio.


Introduction to Class Equipment


Tascam Audio Interface US-144 MKII
We use this to process MIDI, mostly. Typically, the audio drivers are not stable
with our particular computers.


Alesis iO2 Express


We use this audio interface for recording and listening to audio.

Please note there are three knobs for main loudness on the Alesis: the headphone
level, the direct/USB mix, and the main level. The Direct/USB mix determines
how much of the computer and how much of the recording you will
hear in playback. The main level knob controls the levels of all the tracks combined
as they go to the computer. The headphone level determines how loud the sound is
in the headphones. There are two inputs for recording, and each has its own
level control as well.


How to Record the MIDI Keyboard


There are two types of tracks in Mixcraft 5: instrument tracks and audio tracks.
Instrument tracks are connected to the keyboard and record MIDI messages that
can be converted into any instrument or edited in any way. Instrument tracks have
a little keyboard icon. Audio tracks have a speaker icon and can record your voice,
loops, or a real instrument.
1. Open Mixcraft.
2. Click the purple icon or add a virtual instrument track by clicking the +track button.
3. Arm the virtual instrument track. This enables it for recording.
4. Click Record.


How to Record With A Microphone


1. Plug your microphone into the left Microphone XLR input

2. Make sure it is set to Mic/Line level. Adjust the gain for the track so that
you can hear it in the headphones. Make sure your Monitor/USB mix is at
about 12 o'clock so you can hear your recording and the computer
playback.
3. Arm your audio track in Mixcraft.

4. Click on the drop down menu next to the Arm button and select the iO2
Express or whatever audio interface you are using. Select the channel you
are using (left or right input on the device.)

5. Click Record.

Please note: for recording with the Alesis, you will need to select the input to be the
iO2 Express left channel.


Sound Board Level 1 Certification (done in person, not in book):


1. Student can turn the system on and off correctly.
2. Set levels for wireless mics.
3. Set levels for mp3 and CD player.
4. Set up a projector and adjust the screen with keystone settings.
5. Understand signal flow.
6. Troubleshoot feedback.
7. Troubleshoot issues such as the LR button or power issues.
8. Set up a portable PA system with wired mic, CD, and mp3 player.

Assignment 1: Create a song on the computer using loops. Learn to record into
the computer using a microphone and audio interface.


Chapter 3
Physics of Sound


Sound Waves
Sound waves exist as variations of pressure in a medium such as air. They are
created by the vibration of an object, which causes the air surrounding it to
vibrate. The vibrating air then causes the human eardrum to vibrate, which the
brain interprets as sound. The source location is where the sound originates
and is the most intense area of vibration.

Sound Waves: http://youtu.be/thlWZzfTIyQ


A sound is caused by a vibration. When a source vibrates, it moves particles
beside it, which then causes particles beside those particles to vibrate, causing a
chain reaction where energy is passed along from one place to another.
This energy (called a sound wave) flows in a wave and is an example of
mechanical energy.
Mechanical energy is the energy associated with the motion and position of an
object and is defined as the sum of potential energy and kinetic energy present in
the components of a mechanical system.
The particles vibrate in a cycle, back and forth in their own little area, a certain
number of times per second according to how high or low the sound is. Do the
particles actually change location with the sound wave? The air particles don't
actually move through the air like a bullet; they just vibrate in a cycle in the general
area where they already are. The sound wave disturbs the area around it, which
disturbs the area around that, getting weaker as it goes farther away from the sound
source. It's like a rock falling into the water: the water changes shape with the
waves and then goes back to the way it was.


The medium is the material through which sound can travel. The medium for
sound waves can be air particles, or solid particles, or liquid. Remember that
Earth's atmosphere is actually very dense as compared to other parts of the
universe, so it's the gases in the air that vibrate.
Because sound travels in a medium (air, solid, liquid), it cannot travel in a vacuum,
and therefore there is no sound in space. If there were a spaceship battle in space,
the explosions would be silent. Well, there may be some sound heard in any gases
emitted from the fire, but the vacuum is so great that the gases (and
sound) would dissipate immediately. The astronauts, when talking on the moon,
talked to each other through the radios in their helmets. You would not hear
someone scream while in space. They might hear themselves through the material
of their own body, but the sound itself wouldn't travel through the air to another
person. Light waves and radio waves travel in space, but sound waves do not.


Compression and Rarefaction


Sound waves are passed along via mechanical energy that flows in a wave. With
each vibration, there are times where the source location is moving in one direction
and then another. As it moves in the first direction, it creates an area of high
particle density, or high pressure. Then, when the source location moves back, it
creates an area of low particle density, or low pressure. These areas of high and low
pressure are considered to be the actual sound wave.
Once the displacement of particles has occurred, there are two regions. One region
has high particle density, and the other region has low particle density (also known
as high pressure and low pressure).
COMPRESSION - also known as condensation; a region of high particle
density.
RAREFACTION - a region of low particle density.

Another video that explains sound waves: http://youtu.be/zXJAPcZyA70


See https://commons.wikimedia.org/wiki/File:Molecule2.gif for a
simulation of a sound wave.


Frequency
FREQUENCY: the number of cycles in a second
As you recall, when something vibrates, it causes the air particles around it to
vibrate, which causes the mechanical sound energy to move out in a wave fashion.
These particles don't actually move the distance of the sound wave, they just
vibrate in a cycle within their own little area. One cycle is when a particle moves
from its starting position to the maximum displacement distance in one direction,
back to its starting position, and then to the maximum displacement distance in the
other direction.
In sound terms, 1 cycle per second is known as 1 Hertz, or 1 Hz.
1000 cycles per second is known as 1000 Hz, or 1 kHz (1 kilohertz).
Particles can vibrate thousands of times per second in this fashion. The number of
cycles completed in one second is called the FREQUENCY of vibration.
Frequency is interpreted by the human ear as the pitch, or how high or low
the sound is. (Note: high and low meaning opera singer versus subwoofer, not
talking about loud or soft here, yet.)
Frequency = pitch.
Normal human hearing is between 20 Hz and 18,000 Hz, but some humans can
hear from 16 Hz to 20,000 Hz.
The picture below is an example of how the sound waves are closer together as the
frequency gets higher. The X axis is time. You can see that there are more cycles
per time in the 2200 Hz as opposed to the 1000 Hz examples.
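For readers who want to see the arithmetic behind these conversions, here is a minimal Python sketch (an illustration added to this edition, not part of the original text) that converts between Hz and kHz and computes the period of one cycle:

    # Frequency conversions: 1 cycle per second = 1 Hz; 1000 Hz = 1 kHz.
    def hz_to_khz(hz):
        return hz / 1000.0

    def khz_to_hz(khz):
        return khz * 1000.0

    def period_seconds(hz):
        # The period is how long one cycle takes: 1 divided by the frequency.
        return 1.0 / hz

    print(hz_to_khz(2000))      # 2.0 kHz
    print(khz_to_hz(9.5))       # 9500.0 Hz
    print(period_seconds(440))  # roughly 0.00227 seconds per cycle at A 440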

Examples of videos about frequency:


Hearing test: http://www.youtube.com/watch?v=JiD9I6CP9eI
Hearing test with various frequencies: http://www.youtube.com/watch?v=o810PkmEsOI
Mosquito noise: http://www.youtube.com/watch?v=y7KDM0RcJ1s&feature=fvwrel


Name____________________
Date _____ Period ______
Frequency Vocabulary
1. Define Frequency

2. Define Hertz

3. Hertz is also known as the number of _____________ per _____________


4. What is the particle displacement for one cycle for a sound wave? Fill in the
blanks below.
One cycle is when a particle moves from its _____________ to the ___________
displacement distance in one direction, back to its starting position, and then to the
______________ displacement distance in the ________ direction.
5. Conversions:
a) 1000 cycles/second = 1 _____
b) 2000 cycles/second = ______ kHz
c) 3000 Hz = _____ kHz
d) 4000 Hz = _____ kHz
e) 5000 Hz = _____ kHz
f) 6 kHz = _________ Hz
g) 7 kHz = _________ Hz
h) 8 kHz = __________ Hz
i) 9.5 kHz = ____________ Hz

6. What is the frequency range of extremely good hearing? ___ Hz to ___ kHz


Resonant Frequency (Optional)


FREQUENCY: the number of cycles in a second
RESONANT FREQUENCY: the frequency that a certain material amplifies
more loudly than other frequencies
Resonant frequency is an important and interesting phenomenon to study for
acoustics and sound design. Different materials vibrate in different ways when a
sound wave passes through them. Additionally, the shape and size and whether or
not the object is hollow will determine the resonant frequency of the material.
When you place an object with a certain resonant frequency next to a sound source
at that frequency, it will cause the entire object to vibrate at the same pulsation as
the frequency. So, for instance, if the resonant frequency is 40 Hz, the object itself
will vibrate 40 times per second if a 40 Hz wave passes through it.
This is what caused the Tacoma Narrows Bridge to collapse in 1940. It can also cause a beaker to
break if there is enough power in the sound.
Any type of box or hollow object will have a resonant frequency.
A clarinet has a resonant frequency.
A trombone has a resonant frequency.
A room has a resonant frequency. To find the resonant frequency of a room, walk
around with a known frequency playing, and measure the loudness of that
frequency in different areas. The area where that frequency is loudest is the area
with that resonant frequency.
Glass breaking Video
http://virginia.pbslearningmedia.org/content/lsps07.sci.phys.energy.glassbreak/

http://shannongunn.net/audio/2012/08/31/acoustics/video-of-glassbreaking-due-to-resonant-frequency/
Tacoma Narrows Bridge breaking when wind passes over it at its resonant
frequency:
http://www.youtube.com/watch?v=3mclp9QmCGs


Frequency Spectrum
The frequency spectrum ranges from 20 Hz to 20,000 Hz for human hearing. Each
pitch has its own frequency within that range. A piano ranges from about 28 Hz to
about 4 kHz. Below is a picture of the frequency range for different instruments
grouped by category. This picture gives you a good idea of the frequency range of
each instrument as it relates to the notes on the piano. At the top are the actual
frequency numbers in Hz.

This photo was used with permission from


https://www.flickr.com/photos/ethanhein/2272808397/in/album72157603853020993/.


Octaves (optional)
When you play a piano, you notice that the same note name is repeated several times. Each note sounds
the same, just higher or lower. An octave is the distance from one note to the same note 12 half steps
higher or lower. This applies to all instruments, including the keyboard.
Each octave on the keyboard is labelled as octave 0, 1, 2, 3, 4, etc. Each note after C in that octave
has that number.

Photo credit https://www.flickr.com/photos/ethanhein/2272808397/in/album72157603853020993/.


When you go up one octave, the frequency is doubled.
When you go down one octave, the frequency is halved.
So, for instance, the tuning note used in all US orchestras is A4 at 440 Hz (also known as A 440).
The A above that is 880 Hz. The A below that is 220 Hz.
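As a quick check of the doubling rule, here is a short Python sketch (an added illustration, not from the original text) that shifts a frequency up or down by whole octaves:

    def shift_octaves(freq_hz, octaves):
        # Going up one octave doubles the frequency; going down one octave halves it.
        return freq_hz * (2 ** octaves)

    print(shift_octaves(440, 1))   # 880 Hz, the A one octave above A 440
    print(shift_octaves(440, -1))  # 220.0 Hz, the A one octave below
    print(shift_octaves(440, -2))  # 110.0 Hz, two octaves below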

Note that the actual frequencies don't quite line up with the frequencies in the
picture. That's because we use well-tempered tuning instead of even-tempered
tuning. Basically, we adjust the upper notes by ear so that they sound good.


Law of Interference
LAW OF INTERFERENCE: The Law of Interference is a physics rule that
states that when two sound waves hit each other, they will either reinforce each
other or cancel each other out. Two or more sound waves will travel through any
medium and combine together to make a new complex tone.
Read this tutorial to see pictures of the law of interference in action:
http://www.physicsclassroom.com/class/waves/u10l3c.cfm
Example:
In a piano, each hammer hits up to three metal strings at the same time. Each
string vibrates at a certain frequency, and they combine together to create the
piano's own distinct tone.
Piano Hammer Action Animation:
http://www.youtube.com/watch?v=xr21z1CZ54I
Inside the grand piano: http://www.youtube.com/watch?v=I6SvIbKIWPQ
The picture below shows that when you superimpose two waves with similar
displacement, a new wave is created that is twice as big. On the right, you
can see that if two waves in opposite phase hit each other, they will cancel each
other out.

Photo credit
https://he.wikipedia.org/wiki/%D7%92%D7%9C_%D7%A2%D7%95%D7
%9E%D7%93.
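The reinforcement and cancellation described above can also be checked numerically. The following Python sketch (an illustration added to this edition; it uses the numpy library, which the book does not otherwise require) adds two identical 440 Hz sine waves, once in phase and once in opposite phase:

    import numpy as np

    t = np.linspace(0, 0.01, 1000)                         # 10 milliseconds of time
    wave_a = np.sin(2 * np.pi * 440 * t)                   # a 440 Hz sine wave
    wave_b_in_phase = np.sin(2 * np.pi * 440 * t)          # an identical wave
    wave_b_opposite = np.sin(2 * np.pi * 440 * t + np.pi)  # the same wave, opposite phase

    constructive = wave_a + wave_b_in_phase  # the waves reinforce each other
    destructive = wave_a + wave_b_opposite   # the waves cancel each other out

    print(round(constructive.max(), 2))       # about 2.0: twice as big as one wave
    print(round(abs(destructive).max(), 10))  # about 0.0: essentially silence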


Pure and Complex Tones


PURE TONES: sound waves that are a single-frequency sine wave
COMPLEX TONES: sound waves that consist of multiple frequencies
Pure Tones are produced when an object consistently vibrates at a single
frequency. An example of an instrument that creates a pure tone is the tuning
fork. Another example would be if you selected a sine wave on a synthesizer.
These are really the only two common examples of pure tones. Everything else is complex.
In order to have a complex tone, the object must vibrate at different
frequencies. What is happening is that the instrument, whether it be a box or a
tube, has various resonant frequencies that are produced, combined together,
and then heard by humans as a complex tone. So while an instrument can play one
note, you are actually hearing that instrument vibrate in several different places, all
adding up together to create this complex tone.
We use a sine wave to explain a lot of physics of sound, but most music is made up
of complex tones.
Noise is another example of a complex sound wave. A major triad is a complex
tone because there is more than one frequency present.
Pure and Complex Tones: http://capone.mtsu.edu/wroberts/purecomp.html

Photo credit Ethan Hein at http://ethanhein.com. Photo not modified. Source:


https://www.flickr.com/photos/ethanhein/2441692002/in/album72157603853020993/.


Name __________________________________ Date ____ Period _____


Notes on Pure vs. Complex tones and the Law of Interference
1. Define Pure Tone
2. Give an example of an instrument or waveform that produces a pure tone.
3. Define a complex tone.
4. In order for an object to have a complex tone, the object must vibrate at
________________ ________________________.
5. Define the Law of Interference:


Nodes and Antinodes


In a standing wave, the nodes are the points where the wave does not move (the
closed part of the wave), and the antinodes are the points of maximum movement
(the open part of the wave).

Source: Wikipedia commons (public domain.)


Common Waveforms
WAVEFORMS: the shape of the sound wave
There are two types of sound waves: one with a definite pitch we call a note, the
other with no definite pitch we call noise. Music has both of these properties:
think of cymbals (noise) in a rock song (pitch). We can definitely hear the
difference, but what is the difference in acoustic terms? Well, a pitch contains
regular vibrations (periodic motion) and a noise contains irregular vibrations (non-periodic motion).
There are a few common wave forms that are found in popular music, especially
synthesizers. Live musical instruments tend to produce sine wave forms, unless
playing an instrument with a buzzy sound such as a distorted guitar. There are
different aesthetics among cultures as to how much of a pure sound is actually
beautiful. Synthesizers are designed to allow the user to control the timbre of the
sound through filters of different harmonics. The following are the four main wave
forms used as building blocks to create new synthesized sound in electronic music:
sine, square, sawtooth, and triangle.

Source: Wikipedia commons, public domain.


SINE WAVE - pure fundamental tone
SQUARE WAVE - fundamental tone with odd harmonics decreasing at a rate of 1/n
TRIANGLE WAVE - fundamental tone with odd harmonics decreasing at a rate of 1/n^2
SAWTOOTH WAVE - fundamental tone with all harmonics decreasing at a rate of 1/n
Click here to hear the sound of each type of waveform:
http://shannongunn.net/audio/2012/08/26/acoustics/demonstration-of-different-waveforms/
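The harmonic recipes above can be heard directly through additive synthesis. Here is a rough Python sketch (added as an illustration, using numpy; it follows the amplitude recipes in the text and simplifies the phases, so the shapes are approximate) that builds square-, triangle-, and sawtooth-like waves by summing sine-wave harmonics:

    import numpy as np

    def additive_wave(fundamental_hz, t, harmonics, amplitude_rule):
        # Sum sine waves at multiples of the fundamental, each scaled by the given rule.
        wave = np.zeros_like(t)
        for n in harmonics:
            wave += amplitude_rule(n) * np.sin(2 * np.pi * n * fundamental_hz * t)
        return wave

    t = np.linspace(0, 0.02, 2000)   # 20 milliseconds of time
    odd = range(1, 30, 2)            # odd harmonics: 1, 3, 5, ...
    all_harmonics = range(1, 30)     # all harmonics: 1, 2, 3, ...

    square = additive_wave(220, t, odd, lambda n: 1.0 / n)              # odd harmonics at 1/n
    triangle = additive_wave(220, t, odd, lambda n: 1.0 / n ** 2)       # odd harmonics at 1/n^2
    sawtooth = additive_wave(220, t, all_harmonics, lambda n: 1.0 / n)  # all harmonics at 1/n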


Harmonics (optional)
FUNDAMENTAL TONE: the most intense vibration frequency, or the main
pitch that we hear.
The FUNDAMENTAL TONE is the most intense vibration frequency in any
given note on the instrument. It is also the lowest vibrating frequency (pitch) on
that instrument with that particular fingering. It is also the loudest resonant
frequency. Within each complex tone there are multiple frequencies present.
These additional frequencies are known as harmonics.
HARMONICS: multiples of the fundamental frequency
It just so happens that each HARMONIC is a multiple of the fundamental
frequency (x2, x3, x4, x5, x6, x7, etc.), and the harmonics are named as such. The presence of
different harmonics within a complex tone gives the instrument its timbre, or tone
color. The harmonics present within a complex tone are what make
instruments sound different even if they play the same note.
For instance, when you bow across a violin string, it causes the string to vibrate at a
certain frequency, which is the most intense amount of particles moving, and thus
heard as the main frequency, or FUNDAMENTAL TONE. But there are other
parts of the violin body that are vibrating at two, three, four, or even five times the
main frequency. These are the harmonics and they are present in every note.
You can isolate the harmonics and hear them by themselves if you bow while
you press the string lightly. This is because the vibrating length of the string has changed and
therefore the harmonic becomes the fundamental tone.
How is this possible? After all, an A is 440 Hz, whether it's played on a piano or a
harp or a tuba, right? That A at 440 Hz is just one number; it's not like we're calling
it 440 Hz plus a little 880 Hz and some 1320 Hz on the side. Well, actually, when
instruments vibrate, they have many different levels of vibration going on,
including all of the frequencies mentioned above. These upper frequencies, or
harmonics, are very soft (not intense) and not easily heard by the human ear. We
call that tone 440 Hz because the 440 Hz is the loudest part of the sound heard,
and is the fundamental for that particular string.
Why should you learn this for music technology? The entire world of synthesizers
exists around manipulating harmonics because they give each sound its
characteristic timbre. The entire world of audio engineering rests upon the
understanding that within each sound are fundamental frequencies and harmonics
that can be boosted or attenuated.


How the piano tone has several harmonics present - http://youtu.be/MBrQYSGJgck


Unequal, well-tempered tuning systems - http://youtu.be/xjhNt-ZksVw
Bowed Violin String in Slow Motion - http://youtu.be/6JeyiM0YNo4
How to tune a guitar with harmonics - http://www.youtube.com/watch?v=NFfiozcLQ1w
When the fundamental frequency is 55 Hz, the frequencies of the harmonics would
be as follows (where f = 55 Hz, the harmonics equal 1f, 2f, 3f, and so on):

Fundamental: f = 55 Hz

Harmonic         Multiple of the fundamental    Frequency of the Harmonic
1st Harmonic     1f                             55 Hz
2nd Harmonic     2f                             110 Hz
3rd Harmonic     3f                             165 Hz
4th Harmonic     4f                             220 Hz
5th Harmonic     5f                             275 Hz
6th Harmonic     6f                             330 Hz
7th Harmonic     7f                             385 Hz
8th Harmonic     8f                             440 Hz
9th Harmonic     9f                             495 Hz
10th Harmonic    10f                            550 Hz
11th Harmonic    11f                            605 Hz
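The table above is simple multiplication, which the following short Python sketch (an added illustration) reproduces for any fundamental frequency:

    def harmonic_series(fundamental_hz, count=11):
        # The nth harmonic is n times the fundamental frequency.
        return [n * fundamental_hz for n in range(1, count + 1)]

    print(harmonic_series(55))
    # [55, 110, 165, 220, 275, 330, 385, 440, 495, 550, 605]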


Even if an instrument produces fundamental tones at 2 kHz, it has harmonics at


much higher frequency values. This is why most audio interfaces need to have a
much higher frequency range than can actually be produced by musical
instruments. If their sampling rate cannot capture those upper level frequencies (20
kHz) then the sound will be deadened.
There are also harmonics that are not exact multiples of the fundamental. For
instance, a cymbal may produce a fundamental frequency of 500 Hz, but the
majority of its vibrations are at much higher frequencies ranging from 900 Hz to 16
kHz depending on the type of cymbal. These upper frequencies are what make
each cymbal sound different and give it that crash sound.
Slow motion video of a cymbal crash: http://youtu.be/kpoanOlb3-w
Videos that explain Harmonics
Overtones, Harmonics, and Additive Synthesis - http://www.youtube.com/watch?v=YsZKvLnf7wU
Test your hearing: how high can you hear?
http://www.youtube.com/watch?v=h5l4Rt4Ol7M&NR=1&feature=fvwp
Octave spiral with overtone series:
http://www.youtube.com/watch?v=vS8PEM-ookc
Fibonacci sequence as it relates to music:
http://www.youtube.com/watch?v=2pbEarwdusc
http://www.youtube.com/watch?v=SUxcFA0r4oQ


Name______________________ Date _____ Period ______


HARMONICS VOCABULARY
1. Fundamental Tone:
2. Harmonics:
3. What determines timbre, or tone color?
4. What are the harmonics for the following fundamental frequency? 65 Hz
1st Harmonic = ____ Hz = 65 Hz x 1
2nd Harmonic = ____ Hz = 65 Hz x 2
3rd Harmonic = ____ Hz = 65 Hz x 3
4th Harmonic = ____ Hz = 65 Hz x 4
5th Harmonic = ____ Hz = 65 Hz x 5
6th Harmonic = ____ Hz = 65 Hz x 6
7th Harmonic = ____ Hz = 65 Hz x 7
8th Harmonic = ____ Hz = 65 Hz x 8
9th Harmonic = ____ Hz = 65 Hz x 9
10th Harmonic = ____ Hz = 65 Hz x 10


Overtones (optional)
OVERTONES: Overtones are the same as harmonics, except the 1st overtone is
the same as the second harmonic, and so forth.
In physics, we call the multiples of the fundamental "harmonics" and refer to the
fundamental tone as the first harmonic. In band class, they call the fundamental
tone the fundamental, and the 2nd harmonic is called the 1st overtone. The second
overtone is the 3rd harmonic, and so forth. It can be confusing to switch back
and forth between the two nomenclatures.
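Because the two naming systems are offset by one, a tiny Python sketch (added here for clarity, not from the original text) makes the mapping explicit:

    def overtone_to_harmonic(overtone_number):
        # The 1st overtone is the 2nd harmonic, the 2nd overtone is the 3rd, and so on.
        return overtone_number + 1

    def overtone_frequency(fundamental_hz, overtone_number):
        # An overtone's frequency is its harmonic number times the fundamental.
        return overtone_to_harmonic(overtone_number) * fundamental_hz

    print(overtone_to_harmonic(1))     # 2: the 1st overtone is the 2nd harmonic
    print(overtone_frequency(440, 1))  # 880: the 1st overtone of A 440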
When you pluck or bow a string, it will vibrate at the fundamental frequency. The
picture below demonstrates what happens when you pluck or bow a string at the
halfway point, the 1/3rd point, and so on along the string. Each time you shorten the
vibrating length of the string to a simple fraction of the original length, you will play
a multiple of the fundamental, or a harmonic.

Over time, instrumentalists have figured out the "Overtone Series" - the tones
that are resonant on a particular instrument, which also line up with the harmonics. A
skilled musician can play all of the notes below with one fingering on a brass
instrument.

Image source: Hyacinth at the English language Wikipedia



Octaves and the 2nd, 4th, and 8th Harmonics
Harmonics are hidden tones within a sound that are multiples of the fundamental.
Harmonics are x1, x2, x3, x4, x5, etc. (not doubled)
Within the sequence of harmonics you have some octaves. For instance, when you
multiply the fundamental times 2, you get the second harmonic, but that also
happens to be one octave above the fundamental. This is because all frequencies,
when doubled, are one octave higher. This isn't to say that the fundamental
frequency is now being played an octave higher; it just means that within the tone
of the sound you have a sound that is also one octave higher.
If you do the math, you'll notice that the frequency gets doubled on the 2nd harmonic,
4th harmonic, and 8th harmonic. All three octaves are present in the sound, which
helps create the sound's timbre, or tone color.


Name ______________________
Date _____ Period _____
Octaves Review
1. What is the frequency one octave above 400 Hz? _____ (400 x 2 )
2. What is the frequency one octave above 800 Hz? _____ (800 x 2)
3. What is the frequency one octave above 1000 Hz? _____
4. What is the frequency one octave above 326 Hz? _____
5. What is the frequency one octave below 400 Hz? _____ (400/2)
6. What is the frequency one octave below 500 Hz? _____ (500/2)
7. What is the frequency one octave below 440 Hz? _____
8. What is the frequency one octave below 110 Hz? _____


Amplitude
AMPLITUDE: the maximum displacement of the particles from their original
place. Amplitude is measured as the intensity of the sound pressure level (SPL).
Amplitude is known as the strength or power of a wave signal. In acoustics, it is
the height of a wave when viewed as a graph. It is heard as volume, or loudness.
Thus the name "amplifier" for a device which makes the guitar louder. As the
sound wave continues to displace particles in a wave fashion, it loses energy.
That is why the sound gets weaker as it goes farther from its source. The energy is
dissipated in the form of heat.
Amplitude is graphed as the height of the sound wave. The higher the wave, the
more the particles are being displaced, thus the denser the air, and the louder the
sound.


Decibels
DECIBELS are the term we use to measure perceived loudness. Or, more
specifically, the term to measure SOUND PRESSURE LEVEL. The sound
pressure level is the intensity of the displacement of particles.
The chart below describes the amount of time you can listen at a certain loudness
before you have hearing loss or hearing damage.

This is a chart that describes the perceived loudness of different sound sources. On
the left is the measurement in dB, or decibels. On the right is the measurement in Pa,
or pascals. Note: if there is hearing loss, hopefully it is temporary and can be
rectified by letting the ears rest.
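The dB and pascal scales on such a chart are related by a standard formula that the text does not spell out: sound pressure level in dB is 20 times the base-10 logarithm of the pressure divided by a reference pressure of 20 micropascals (the usual threshold-of-hearing reference, assumed here). A minimal Python sketch:

    import math

    P_REF = 20e-6  # assumed standard reference pressure: 20 micropascals

    def pascals_to_db_spl(pressure_pa):
        return 20 * math.log10(pressure_pa / P_REF)

    def db_spl_to_pascals(db):
        return P_REF * 10 ** (db / 20)

    print(round(pascals_to_db_spl(1.0)))  # about 94 dB SPL
    print(db_spl_to_pascals(60))          # about 0.02 Pa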


Inverse Square Law


The INVERSE SQUARE LAW, when applied to audio, states that the intensity
of a sound drops by 6 dB for each doubling of distance from the source. In other
words, this means that for each time you double the distance between yourself and
the sound source, the power of the audio drops by 75% - a fairly significant
amount!

This is a picture of the inverse square law as it relates to light waves. The concept is
the same for sound waves.
Source: https://commons.wikimedia.org/wiki/File:Inverse_square_law.svg
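A small Python sketch (added as an illustration; it assumes a simple point source in a free field, which the text implies but does not state) shows the roughly 6 dB drop per doubling of distance:

    import math

    def spl_at_distance(spl_ref_db, ref_distance_m, new_distance_m):
        # For a point source in a free field, the level falls by 20 * log10(d2 / d1),
        # which is about 6 dB for every doubling of distance.
        return spl_ref_db - 20 * math.log10(new_distance_m / ref_distance_m)

    print(round(spl_at_distance(100, 1, 2)))  # about 94 dB: one doubling, down about 6 dB
    print(round(spl_at_distance(100, 1, 4)))  # about 88 dB: two doublings, down about 12 dB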


Name ______________________________ Date _____ Period ______


Amplitude Review
1. Amplitude:
2. Amplitude is measured as the ________________ of the sound pressure level.
3. Sound pressure level:
4. Abbreviation for sound pressure level: ______
5. Amplitude is notated as the _______________ of the sound wave.
6. One word to describe amplitude: ________________
7. Decibels:
8. How long can you listen to 85 dB without having hearing loss? _____
9. How long can you listen to 97 dB without having hearing loss? _____
10. How long can you listen to 106 dB without having hearing loss? _____
11. What dB is a conversation? _____
12. What dB is a quiet room? _____
13. What dB is a rock and roll band? _____
14. What dB is a lawn mower? _____
15. About how long can you listen to a rock band before you have hearing loss?
_____
16. Inverse square law:


Psychoacoustics
PSYCHOACOUSTICS: the study of how humans perceive sound.
The pinna scoops the sound forward, focusing energy into the ear canal. It also
blocks high frequencies from behind you. The ear drum is a transducer which
converts acoustic mechanical energy into nerve impulses. The nerve pulses then travel to
the brain where they are perceived. It is important in audio engineering to take into
account not only the mechanics of the environment, but also the fact that the brain
and ear of the listener are involved in a person's listening experience.
Inside the cochlea are tiny little hair cells that move when hit by a sound wave.
Hearing damage occurs when those hairs get bent. They go down after a certain
amount of time and eventually don't come up. Here's a video on how hearing
works: http://hearinghealthfoundation.org/template_video.php?id=3


How Our Hearing Works another way of explaining it


As sound passes through each ear, it sets off a chain reaction that could be
compared to the toppling of a row of dominoes. First, the outer ear collects
pressure (or sound) waves and funnels them through the ear canal. These
vibrations strike the eardrum, then the delicate bones of the middle ear conduct the
vibrations to the fluid in the inner ear. This stimulates the tiny nerve endings, called
hair cells, which transform the vibrations into electro-chemical impulses. The
impulses travel to the brain where they are understood as sounds you recognize.


Hearing Loss
How long can you listen to music on a phone without having hearing loss?

At 100% Volume = 5 minutes to hearing loss


At 90% volume = 18 minutes to hearing loss
At 80% volume = 1 hour 12 minutes
At 70% volume = 4 hours 36 minutes
At 60% volume = 18 hours
At 50% volume = unlimited

Please refer to the image on the following hyperlink to see different levels of
loudness for different devices.
http://www.betterhearing.org/hearingpedia/hearing-lossprevention/noise-induced-hearing-loss
NIHL = Noise Induced Hearing Loss
NIHL is 100% preventable!
Hearing Loss
Symptoms of Hearing Loss
You should suspect a hearing loss if you:
1. have a family history of hearing loss
2. have been repeatedly exposed to high noise levels
3. are inclined to believe that "everybody mumbles" or "people don't speak
as clearly as they used to"
4. feel growing nervous tension, irritability or fatigue from the effort to hear
5. find yourself straining to understand conversations and watching people's
faces intently when you are listening
6. frequently misunderstand or need to have things repeated
7. increase the television or radio volume to a point that others complain of
the loudness
8. have diabetes; heart, thyroid, or circulation problems; reoccurring ear
infections; constant ringing in the ears; dizziness; or exposure to ototoxic
drugs or medications
Click here for an interactive website on safe hearing levels:
http://www.cdc.gov/niosh/topics/noise/noisemeter.html

Psychoacoustics Continued
PSYCHOACOUSTICS: the study of how humans perceive sound
ANECHOIC CHAMBER - a room where there is no sound reflected off the
walls. All sound is absorbed into the walls and other materials. In the chamber
you can hear your stomach, heart, and even your ear.
http://dsc.discovery.com/life/worlds-quietest-room-will-drive-you-crazy-in-30-minutes.html

http://youtu.be/u_DesKrHa1U
During a rock concert, there is a temporary threshold shift. The brain tightens the
muscles attached to the ear drum to turn down the volume within
your ear. As a result, engineers will gradually turn up the volume throughout
the evening. After a while, the muscles get tired and don't hold back the
volume levels as much.
BINAURAL HEARING - We hear with two ears separated by a
space, or baffle (your head).
DYNAMIC RANGE OF HUMAN HEARING: 0-120 dB; normal is 10
dB to 120 dB
STEREOPHONIC SOUND - Stereophonic sound developed in the late
1940s. Also known as Stereo, it is a method of sound reproduction that
creates an illusion of directionality and audible perspective. This is usually
achieved by using two or more independent audio channels along with two or
more loudspeakers in such a way as to create the impression of sound heard
from different directions.
MONITOR SETUP - Monitors are the speakers we use to listen to
recordings. In a recording studio, the two monitor speakers should be placed in
an equilateral triangle with 60 degree angles to closely emulate the human
hearing experience.

PHANTOM IMAGES - The fact that people hear sound as if it's coming from
the middle, even though it is coming out of two different speakers.

TIME OF ARRIVAL - It takes about 7 milliseconds (thousandths of a second)
for the sound to travel from the ear to the brain and get processed.
EQUAL LOUDNESS CONTOURS - An equal loudness contour is a
measurement of sound pressure level (dB SPL) across the frequency spectrum at
which a listener perceives a constant loudness when presented with pure steady
tones. Basically, it is the concept that humans hear different frequencies at different
levels. For instance, when you turn down the volume, you can't hear the bass as
well; that's because of the ear, not the speakers. Humans are most sensitive in
the 3,000 to 5,000 Hz range, which also happens to be where a baby's cry sits.

Image source: Wikimedia Commons (Public domain)


FLETCHER MUNSON CURVES - The Fletcher Munson graph is a picture
of the response of the human ear at different levels. Basically, it shows
how loud a frequency has to be in order to be heard at the same level as another frequency.
For instance, in order to hear 30 Hz at the same loudness as 3000 Hz, you have
to amplify the 30 Hz by about 50 dB more than the 3000 Hz. This all has to do
with the resonant frequencies within the ear canal and the structure of the ear.
Notice the graph's frequencies are on a logarithmic scale (compressed so the
spacing better matches the way we perceive pitch).


This is important to understand while mixing down music because it makes a big
difference regarding which frequencies to emphasize and de-emphasize in your
music. This is also why you have to turn up the bass to hear it at low volumes; hence
the use of the subwoofer, a speaker dedicated to bass sounds only.


Name ___________________________ Date ________


PSYCHOACOUSTICS VOCABULARY
1. Why do we typically mix music to stereo (2 speakers)?
2. Is the ear's sensitivity to sound logarithmic or linear?
3. Why are decibels (dB) better for measuring loudness than absolute
intensity?
Using the Fletcher Munson graph below, answer the following questions:
4. How intense does a tone at 50 Hz need to be in order to be perceived at 40 dB?
_____
5. How intense does a tone at 100 Hz need to be in order to be perceived at 40 dB?
_____
6. How intense does a tone at 1000 Hz need to be in order to be perceived at 40
dB? ____
7. How intense does a tone at 500 Hz need to be in order to be perceived at 40 dB?
_____
8. Let's say you have a speaker putting out 1000 Hz at 40 dB and another speaker
putting out 50 Hz at 50 dB. How loud does the speaker putting out 50 Hz need to
be in order to be heard at the same loudness as the 1000 Hz signal? ____


Wavelength
WAVELENGTH - The length of one complete cycle of the wave. It is also known
as the distance between two of the same points in a sound wave.

Wavelength = Speed of Sound/Frequency

The larger the wavelength, the lower the frequency, and vice versa.
Notice the wave has to go down, then up, then down again to complete the cycle.
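A quick worked example (my own arithmetic, using the approximately 340 m/s speed of sound discussed in the next section): a 440 Hz tone has a wavelength of 340/440, or about 0.77 meters, while a 20 Hz bass tone has a wavelength of 340/20 = 17 meters.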

Photo credit Bryan Derksen (on English Wikipedia) at


https://commons.wikimedia.org/wiki/File:Wavelength.png.

Speed of Sound
340 m/s
The speed of sound is the same for all frequencies, and is typically about 340 m/s.
SPEED OF SOUND - The speed of sound is the same for all sounds. At room
temperature it is about 343 meters per second (1,126 feet per second); this book rounds
it to 340 m/s. How is this possible? Well, you have to remember that frequency is how often the air particles
vibrate per second. The air particles don't actually move in a trajectory, but their
energy is passed from one to the other like a hot potato. The speed of sound is
how fast that energy travels, and it is the same for all frequencies.
The Speed of Sound is also known as the Velocity (v):

v = λ * f

where v = velocity (about 340 m/s), λ (lambda) = wavelength, and f = frequency.
The speed of sound does not change, so wavelength and frequency are inversely
proportional: as wavelength increases, frequency decreases, and as frequency increases,
wavelength decreases.
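Here is a minimal Python sketch (my own illustration, not from the book) that applies the v = λ * f relationship with the rounded 340 m/s figure used above; the function names are just for this example.

SPEED_OF_SOUND = 340.0  # meters per second (the approximate value used in this book)

def wavelength(frequency_hz):
    """Return the wavelength in meters for a given frequency in Hz."""
    return SPEED_OF_SOUND / frequency_hz

def frequency(wavelength_m):
    """Return the frequency in Hz for a given wavelength in meters."""
    return SPEED_OF_SOUND / wavelength_m

for f in (20, 440, 1000, 20000):
    print(f"{f} Hz -> {wavelength(f):.3f} m")

print(f"10 m wavelength -> {frequency(10):.1f} Hz")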


Name ___________________________
Date _______ Period ______
WAVEFORMS VOCABULARY
1. What is the mathematical relationship between wavelength, velocity,
and frequency?
2. If a sound wave is travelling in air and has a frequency of 20 kHz,
what is the wavelength? (take the velocity of sound in air to be 340
m/s) ___________ m

3. If a sound wave travelling in air has frequency of 20 Hz, what is the


wavelength? (take the velocity of sound in air to be 340 m/s)
___________ m

4. Assuming the sine wave travels at 340 m/s, what is the frequency of
a sine wave with a wavelength of 10 meters? ______________ Hz

5. If a sine wave is 10 meters long, where is the first node? ___ meters
6. If a sine wave is 10 meters long, where is the first anti-node? ___ meters
7. If a sine wave is 10 meters long, where is the second anti-node? ___ meters


Law of Interference Review


Review of the LAW OF INTERFERENCE: The Law of Interference states
that when two sound waves hit each other, they will either reinforce each other or
cancel each other out. Two or more sound waves will travel through any medium
and combine together to make a new complex tone.
Review of REINFORCEMENT - when two similar sound waves meet at the
same place and both are at high-pressure points
Review of CANCELLATION - when two similar sound waves meet at the same
place and one is at a high pressure level while the other is at a low pressure level.
If two waves meet, they can either cancel each other out or enhance each other's
loudness.
If they are at the same frequency: (such as two speakers emitting the same
sound)
If the sound waves meet in a place where both waves are at a high sound pressure
level, their amplitudes add together and the sound gets louder. This is why, when two
instrumentalists play exactly in tune, the combined sound is noticeably louder. Conversely, if two similar sound
waves meet and one is at a high pressure level while the other is at a low pressure
level, then they can cancel each other out to create silence. Therefore, with any
two-speaker placement, there will be dead spots in the room. You can test this by
walking around and listening in the auditorium.
If they are at different frequencies: the sound waves either reinforce each other
or cancel each other and a new wave is formed.
Click here to see a good tutorial on how sound waves interact with each other:
http://www.mediacollege.com/audio/01/wave-interaction.html
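Here is a minimal Python sketch (my own illustration, not part of the book; it assumes the NumPy library is installed) that adds two identical sine waves in phase and then with one flipped, showing the reinforcement and cancellation described above:

import numpy as np

sample_rate = 44100                       # samples per second
t = np.arange(sample_rate) / sample_rate  # one second of time values
tone = np.sin(2 * np.pi * 440 * t)        # a 440 Hz sine wave

in_phase = tone + tone                    # both waves peak together: reinforcement
out_of_phase = tone + (-tone)             # one wave is inverted: cancellation

print("peak of one tone:      ", tone.max())                  # about 1.0
print("peak when in phase:    ", in_phase.max())              # about 2.0 (louder)
print("peak when out of phase:", np.abs(out_of_phase).max())  # 0.0 (silence)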


The Universal Application of the Law of Interference
There are two types of sound wave interference: Constructive and Destructive
(same as reinforcement and cancellation).
Interference can happen when:
1. Let's say two speakers are emitting the same frequencies (the same song).
Because these speakers are emitting the same frequency, there is
constructive interference. Constructive interference means that the sound
wave becomes louder.
2. Two instrumentalists or sound sources emit different frequencies, resulting
in a combination of constructive and destructive interference.
Interference can also happen when:
3. Two microphones are used.
When you place two microphones on the same sound source, the sound
signals from each mic will go into the computer software as two separate
tracks. Each track is added together to make a new waveform. Well, the
same laws of interference apply in this situation as well. Depending on the
placement of the mics, the sound will change with various constructive and
destructive interference to create a new waveform. Therefore, it's
extremely important to know how to place microphones properly,
especially if you are using two mics on a single sound source, such as
overhead mics on drums.
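As a rough illustration of the two-microphone situation (again my own sketch, not from the book, and it assumes NumPy), the second "mic" below receives the same tone slightly later; at some frequencies the two tracks cancel and at others they reinforce:

import numpy as np

sample_rate = 44100
delay_samples = 25                        # the second mic hears the tone ~0.57 ms later
t = np.arange(sample_rate) / sample_rate

def mixed_level(freq):
    """Peak level after summing the close mic and the delayed mic."""
    close_mic = np.sin(2 * np.pi * freq * t)
    far_mic = np.sin(2 * np.pi * freq * (t - delay_samples / sample_rate))
    return np.abs(close_mic + far_mic).max()

# A frequency whose half-period matches the delay cancels almost completely.
cancel_freq = sample_rate / (2 * delay_samples)   # 882 Hz
print(mixed_level(cancel_freq))                   # close to 0 (cancellation)
print(mixed_level(cancel_freq * 2))               # close to 2 (reinforcement)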


Here is an example from a recording project. I have recorded two separate stereo
tracks, the same exact sound, and then bounced them down to a third track, which is the
resulting waveform. It's basically the same, just louder.

Here is another example where I have recorded two tracks but the frequencies are
slightly different from each other. The two frequencies are out of phase. The top
two tracks are now different, resulting in a much different waveform at the third
track.


MICROPHONES OUT OF PHASE: when two mics cancel each other out
The same concepts of cancellation and reinforcement for sound waves can apply to
microphones as well. Let's say you're recording with two microphones at the same
time. If they are both picking up the same exact frequency, and one is placed at a
point of high pressure, while the other is placed at the point of low pressure, then
they will actually cancel each other out when you listen to the two tracks combined
on the recording. This is known as being out of phase. This is really important
when applied to putting two microphones on one guitar amp or using multiple
microphones on a drum set.
Here is a video that demonstrates how double microphone placement can cause
different frequencies to go in and out of phase for a guitar amp:
http://youtu.be/7_h9WjfjhMw

Steve Reich is a very famous composer who often uses phasing to create music.
http://youtu.be/JW4_8KjmzZk
Piano Phase Song
By embracing the phasing issues, new sounds and music are created.
Search "phase issues" on YouTube:
http://www.youtube.com/results?search_query=phase+issues


Harmonic Explorer Activity


This lesson uses a free stand-alone VST called the Harmonic Explorer which can be
downloaded from this hyperlink: http://www.vst4free.com/free_vst.php?id=805
Learning Targets:
1. I understand Constructive and Destructive interference as it relates to sound
waves.
2. I can create new sound waves by adding harmonics to the fundamental.
3. I understand how to create a saw and square waveform.
Open the Harmonic Explorer. The lowest row in your keyboard is the major scale.
First create sound.
1. First of all, turn your frequency up to #4 or 5 so you can hear it. This is the
octave or how high or low your notes are. Press Z X C V B N M to hear the
scale.
Then hear the harmonics of that tone.
2. Turn the knob labeled Harmonic up and you will hear the harmonic series
of that frequency. Remember that the harmonics are multiples of the
fundamental. This is each harmonic by itself.
Notice the loudness of each tone.
3. Press the button Hide Frequency Analyzer. This allows you to toggle
between seeing the waveform and seeing the amplitude of that tone.
4. Hold down a note and notice the shape of the wave as you change the
frequency. Notice there are more anti-nodes as you go higher - this is a
higher frequency (more cycles per second). Then toggle to the frequency analyzer
and notice the amplitude of the sounds as you change frequency.
Construct a Saw Tooth Waveform
5. Click on the drop down menu Single and select Saw. Now as you move the
harmonics knob, you will be adding harmonics to the fundamental. Toggle
the frequency analyzer so you can see the waveform.
6. With your frequency knob at 4 or 5, hold down a note while you turn the
harmonic knob.
7. As you add each harmonic, you can hear the sound change into a sawtooth
wave. This is because of the constructive and destructive interference.
8. Notice the lighter waveforms in the background - these are all the different
frequencies of the harmonics.
9. Toggle to the frequency analyzer. Notice the amplitude of each harmonic
gets smaller as the harmonics get higher.
Construct a Square Waveform
10. Select Square instead of Saw.
11. Notice that as you turn up the harmonics knob, nothing happens at the even
harmonics. That's because the square wave is made up of the fundamental
plus all the odd harmonics (see the sketch below).
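For reference, here is a minimal Python sketch (my own illustration, not the Harmonic Explorer plugin; it assumes NumPy) of the same additive idea: summing harmonics at 1/n amplitude approaches a sawtooth, and summing only the odd harmonics approaches a square wave.

import numpy as np

sample_rate = 44100
fundamental = 220.0                       # Hz
t = np.arange(sample_rate) / sample_rate

def additive(num_harmonics, odd_only=False):
    """Sum harmonics of the fundamental, each at 1/n of the fundamental's level."""
    wave = np.zeros_like(t)
    for n in range(1, num_harmonics + 1):
        if odd_only and n % 2 == 0:
            continue                      # square waves skip the even harmonics
        wave += np.sin(2 * np.pi * fundamental * n * t) / n
    return wave

saw_like = additive(20)                    # all harmonics -> sawtooth shape
square_like = additive(20, odd_only=True)  # odd harmonics only -> square shape
print(saw_like[:5], square_like[:5])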


Name _______________________ Date _____ Period _____


Phasing Review
1. Describe what happens when sound waves hit each other.
2. What happens when two sound waves of the same frequency hit each
other and both are at the highest point in the wave?
3. What happens when two sound waves of the same frequency hit each
other and one is at its highest point in the wave and the other is at its
lowest point in the wave?
4. What happens when two sound waves of different frequencies hit each
other?
5. Define constructive interference
6. Define destructive interference
7. Identify any potential issues you might have when you record with two
microphones on the same sound source.


How Sound Waves Act in Different Materials (optional)
WAVE BEHAVIOR - how sound waves interact with different substances and
materials
ABSORPTION - the process where sound waves become deadened by certain
materials (the sound energy is absorbed into the material). High frequencies get
easily absorbed into soft material such as carpet or curtains. Low frequencies will
go through solid material such as walls much more readily than high frequencies.
Low frequencies will also get absorbed into hard materials more easily than high frequencies.
REFLECTION - the process of sound waves hitting a material and bouncing off,
such as when sound waves hit a wall and reflect. High frequencies reflect off of
hard surfaces, while low frequencies are more likely to be absorbed or to pass through
the hard surface. Multiple reflections result in perceived reverberation.
ABSORPTION COEFFICIENT - the percentage of sound energy that is
absorbed into a material
All decimals are percentages (e.g., 0.60 = 60%)

Notice that in a carpeted room, 60% to 65% of the 2 kHz and 4 kHz frequencies
will be absorbed.
A wood floor will absorb a higher percentage of low frequencies than high
frequencies.


HOW SOUND WAVES INTERACT WITH DIFFERENT MATERIALS (continued)
DIFFRACTION - the bending of sound waves around small obstacles and the
spreading of sound waves beyond small openings. Notice how the size of the
opening affects the trajectory of the signal.

See the picture above in action here:
https://commons.wikimedia.org/wiki/File:Wavelength%3Dslitwidthspectr
um.gif
Image credit: Lookang, with thanks to Fu-Kwun Hwang and Francisco Esquembre
(author of Easy Java Simulations), via Wikimedia Commons.


Sound diffraction through a hole.

Photo source: Yggmcgill on Wikimedia commons.


REFRACTION - Refraction is the bending of sound waves as they pass through hotter or
cooler air. In hot air, sound travels faster. In cool air, sound travels slower.
Sound also travels faster in humid air.
When sound waves hit areas with different temperatures, they bend rather than continuing
in a straight line. Generally, sound waves bend toward cooler temperatures. At
night, sound waves bend toward the ground, and during the day, sound waves bend
toward the sky. Therefore, if you are running sound at a rock concert that starts
during the day and ends at night, you may need to move or add speakers toward the back of the
audience to compensate. This is due to refraction.

Photo credits Yggmcgill on Wikimedia commons.


Name ______________________
Date _____ Period ______
SOUND WAVE INTERACTIONS VOCABULARY
1. Which frequency is more likely to reflect off of hard surfaces?

2. Which frequency is more likely to be absorbed into carpet?

3. Which frequency is more likely to be absorbed into wood floor?

4. You are on the beach and you notice that you can hear people talking
from 20 feet away. Would you be able to hear this easier at night or
during the day, and why, assuming nobody else is around?


CHAPTER 4 ELECTRONICS PRIMER
Concepts/Terms

Units of Measurement: Volt, Ampere, Ohm

Voltage

Current

Resistance

Impedance

Power

Skills

Be able to use the equation I = V/R to calculate current, voltage, or
resistance when given the other two.

Be able to use the equation P = V*I to calculate power

Be able to avoid blowing out an amp or overheating speakers


Why learn Electronics?

Electricity is the flow of electrons.

If you understand electricity, you can fix cables, amps, sound boards, and
mics. This makes you extremely valuable to any organization!

According to the engineer at Cue Recording studios (from our field trip last
year) the ability to fix electronics is the number 1 skill needed in studios
right now.

Go slow, take it easy. It can be overwhelming at first to try to understand
something that you take for granted. Do some research on your own, be
curious, and ask lots of questions.

Introduction to electricity:
https://www.youtube.com/watch?v=EJeAuQ7pkpc


Voltage

A Volt is a unit of measurement for absolute electrical potential, potential
difference, and electromotive force.

Basically, voltage is potential - it's the potential for electrons to
move from one atom to another.

Why does the US use 120V while the rest of the world uses 240V?
http://www.straightdope.com/columns/read/1033/howcome-the-u-s-uses-120-volt-electricity-not-240-like-the-rest-ofthe-world
http://askville.amazon.com/difference-110-volt-220-EuropeAsia-Pro-Con/AnswerViewer.do?requestId=724312

An Old Volt Meter

Image source: public domain


Current

Current is the rate that charge flows through a circuit.

Current is conserved in a circuit; current flowing into a circuit must flow
out of it.

Current is measured in amperes (amps).

What is the voltage and current for your keyboard?

Conventional Current vs. Real Current

In electronics, real current usually describes the flow of electrons, which are
negatively charged.

Conventional current describes the flow of hypothetical positive charge in a
circuit. Conventional current flows in the opposite direction of real
current.

Circuit diagrams are drawn using conventional current, while, in reality, the
current flows in the opposite direction.

http://www.mi.mun.ca/users/cchaulk/eltk1100/ivse/ivse.htm

Below is a simple electric circuit, where current is represented by the letter i. The
relationship between the voltage (V), resistance (R), and current (I) is V=IR; this is
known as Ohm's Law.


Direct Current vs. Alternating Current

Direct current flows in one direction and is the type of current available
with a battery

Alternating current flows in two directions and is the type of current that
comes out of the wall socket

Alternating current flows at 60 Hz, or 60 cycles per second, in North America
(50 Hz in much of the rest of the world), and can be
heard when there is a ground loop (or some sort of unfinished electrical
circuit)
Introduction to Electricity:
https://www.youtube.com/watch?v=EJeAuQ7pkpc
Direct Current occurs with a battery.

Alternating Current


Series and Parallel Circuits


So far, all of the demonstrations have been for one device in each circuit. What if
you want to have more than one device in each circuit? How do you add multiple
devices to a circuit? There are two ways to hook multiple devices to one circuit:
parallel and series.
Parallel circuit: each device is added to its own branch from the electrical source.
In a parallel circuit, electricity passes through both devices at the same time.

Parallel circuits give the electrons multiple pathways that they can travel in
order to complete the circuit, so if you unhook one of them, the other
devices still work
Electricity and circuits
https://www.youtube.com/watch?v=D2monVkCkX4

Series circuit: each device is hooked up to the previous device in a daisy-chain
fashion. Each component is connected in a row, with electricity passing first
through one and then the other.

Series circuits give the electrons only one pathway to travel, so if you
unhook one device, none of the other devices works

In the picture below, the left diagram is in series and the right diagram is in parallel.

Image credits Theresa knott at English Wikipedia.

https://commons.wikimedia.org/wiki/File:Series_and_parallel_circuits.png

If you have multiple devices on a series circuit, if one of the devices stops working,
it stops the flow of electrons for all of the devices after that point. Example:
Christmas tree lights.
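Here is a minimal Python sketch (my own illustration, not from the book) of how equivalent resistance is usually calculated for the two wiring schemes; it previews the speaker examples later in this chapter.

def series_resistance(resistances):
    """In series, resistances simply add."""
    return sum(resistances)

def parallel_resistance(resistances):
    """In parallel, the reciprocals add: 1/R_total = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in resistances)

print(series_resistance([8, 8]))    # 16 ohms
print(parallel_resistance([8, 8]))  # 4.0 ohms
print(parallel_resistance([8, 4]))  # about 2.67 ohms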


Impedance

Voltage is the electromotive force that causes charge (current) to flow
through a circuit.

Unlike Voltage, Impedance is a measure of the overall opposition of a
circuit to the flow of current.

Impedance is often loosely referred to as resistance.

Resistance is a measure of the restriction of current flow and can be
calculated by R (resistance) = V (voltage) / I (current).

Resistance, or impedance, is measured in Ohms

The formula above is called Ohm's Law: R = V/I, or Ohms =
Volts/Amps.

Resistance in a circuit is independent of frequency.

Impedance is influenced by resistance, but is also affected by reactance.

Reactance (X) measures the opposition of capacitance and
inductance to current.

Reactance varies according to the frequency of the current.

Resistance, impedance, and reactance are measured in Ohms.


Important Formulas:
These are all the same formula

V = IR

Voltage = Current * Resistance


Volts = Amps * Ohms

I = V/R

Current = Voltage/Resistance
Amps = Volts/Ohms

R = V/I

Resistance = Voltage/Current
Ohms = Volts/Amps
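The same formulas written as a small Python sketch (my own illustration; the function names and example numbers are arbitrary):

def voltage(current_amps, resistance_ohms):
    return current_amps * resistance_ohms   # V = I * R

def current(voltage_volts, resistance_ohms):
    return voltage_volts / resistance_ohms  # I = V / R

def resistance(voltage_volts, current_amps):
    return voltage_volts / current_amps     # R = V / I

def power(voltage_volts, current_amps):
    return voltage_volts * current_amps     # P = V * I (watts)

print(current(120, 8))   # 15.0 amps through an 8 ohm load at 120 volts
print(power(120, 15))    # 1800 watts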


Connecting Speakers and Impedance


Each speaker that is connected to an amp has a circular flow of electrons to
complete the circuit. The electrons flow out of the amp, into the speaker, get some
resistance, and then flow back into the amp to complete the circuit. The speaker
adds a certain number of Ohms of resistance to the circuit, which you can usually
figure out from looking at the back of the speaker at the fine print. The amp can
handle a certain number of Ohms as well, which you can figure out from looking at
the back of the amp at the fine print.
Each amp has two channels in the back - one for the left signal, one for the right
signal.
If you plug one speaker into each channel, the number of Ohms of resistance for
each channel is going to be equal to the Ohms that the speaker is rated for. The
picture below is a typical speaker setup with one speaker, rated at 6 Ohms,
connected to each channel on the back of the amp. The speaker is giving 6 Ohms
of resistance to each circuit on each left and right channel.
What if you want to plug four speakers into the back of your amp? How does this
affect resistance?
Well, each time you add an additional speaker to the same port on the back of the
amp, the amount of resistance changes.


Impedance is Added when You Connect Speakers in Series
When you connect speakers in series, the impedances of the speakers
are added together. For instance, if you had two 8 Ohm speakers connected to one
port on your amp, then the two speakers would have a total of 16 Ohms of
resistance on the circuit (8+8).
Or, if you have three speakers rated at 4 Ohms connected to one port on your amp
in series, then the three speakers would have a total of 12 Ohms of resistance on
the circuit (4+4+4).
Additionally, if you had one 8 Ohm speaker and one 4 Ohm speaker plugged in
series in the back of your amp, you would have 12 Ohms total resistance on the
circuit.


Impedance is HALVED when You Connect Speakers in Parallel
If you have multiple speakers with the same impedance connected in parallel, the
total impedance is the impedance of a single speaker divided by the total number of
speakers.
Source: http://www.bcae1.com/spkrmlti.htm
In the example below, the net impedance on one channel is 8 Ohms because there
is one 8 Ohm speaker on each channel.

If you add another 8 Ohm speaker in parallel to each channel, then the resistance
becomes 4 Ohms on each channel (8/2).

The above picture has 4 Ohms resistance on each channel.


Please visit the following website to determine maximum speaker loads for each
channel, according to the level of resistance.
http://www.bcae1.com/spkrmlti.htm


Impedance Matching
Impedance matching is the process of hooking up your speakers to your amp in
a way where the impedance of the speakers matches the impedance of the amp. If
the speakers have too high of an impedance, then they will not be powered enough
by the amp because they will have too much resistance. If the speakers have too
little impedance, then the amp will overheat and turn off (or blow up if it's an old
one). Therefore, if the amp is rated to be able to handle as low as 6 Ohms, which is
typical, then you want to make sure you hook up your speakers so that the total load is not
going to go below 6 Ohms.
Example:
An amp has two outputs one for the left channel, one for the right.
Let's say you want to hook 4 speakers up to one channel.
If each channel on the amp is rated at 4 ohms, you could hook two speakers at 8
ohms into each channel in parallel. However, if you hooked two speakers at 4
ohms each into each channel, then each channel would see a load of
2 ohms and you would have a potential for overload, especially if you played it too
loudly for too long.
Basically, you have to match the impedance of the speakers to the impedance of
the amp.
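Here is a minimal Python sketch (my own illustration, not from the book) that mirrors the example above: it computes the parallel load on one channel and checks it against an amp's minimum rated impedance (4 ohms in this example):

def parallel_load(speaker_ohms):
    """Total impedance of speakers wired in parallel on one channel."""
    return 1.0 / sum(1.0 / ohms for ohms in speaker_ohms)

def is_safe(load_ohms, amp_minimum_ohms):
    """True if the load stays at or above the amp's minimum rating."""
    return load_ohms >= amp_minimum_ohms

print(parallel_load([8, 8]), is_safe(parallel_load([8, 8]), 4.0))  # 4.0 True  (safe)
print(parallel_load([4, 4]), is_safe(parallel_load([4, 4]), 4.0))  # 2.0 False (overload risk)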

This website has a good explanation:


http://www.bcae1.com/spkrmlti.htm
Safe speaker Ohm load calculator:
http://www.bcae1.com/images/swfs/speakerloadssafenoframe01.swf
Speaker connections in series and parallel with good pictures:
https://www.parts-express.com/resources-speaker-connection
How-to wire speakers in series and parallel:
https://www.youtube.com/watch?v=nzGMdr6SH-E
How to use a combination of series and parallel wiring to hook 8 speakers up to
one amp:
https://www.youtube.com/watch?v=ysuBeGeQ2yM
Wiring in series:
https://www.youtube.com/watch?v=igdapz6xuHc
A 70 volt speaker system is an alternative which allows you to run several
speaker lines for long distances. It is specifically designed to have a regulated
circuit that can handle multiple speakers.


Power
W (power) = V (volts) x I (current, in amperes)

Work is the amount of energy transferred by a force. Work = Force *
Distance

Power measures the amount of work done per unit time. In electronics,
electrical power is the rate at which work is done when current flows
through a circuit.

Electrical Power, measured in Watts, is related to current and voltage:
W (power) = V (volts) * I (current, in amperes).

One Watt is the rate at which work is done when one ampere of current
flows through an electrical potential difference of one volt.
Implications for Audio Engineering:
Make sure your speakers are powerful enough to handle the power from
the amp. Otherwise you may put too much power through a speaker and
thus overpower the speaker. The speaker will start smoking and you will
smell an electrical fire. This is very dangerous and should be avoided at all
costs!
For instance, if you have a 150 watt amp, then each channel in the back of
the amp is going to be 75 watts each. You can power a 75 watt speaker with
that. If you hook a 50 watt speaker up to a 75 watt channel, and turn it up
all the way, you run the risk of overpowering the speaker. If you plug a 300
watt speaker into a 75 watt channel, though, you should be fine because the
speaker can handle a lot more power than what will be put through it.
Additionally, you need to know the power of your PA system. For instance,
if you are providing a PA system for a band with three 300 watt guitar
amps, and your PA system is only rated at 80 watts, you won't be able to
hear the vocals. Never underestimate the power of a guitar amp.


Volts, Amps, and Ohms

Volts, amps, and Ohms are metric units. As such, metric prefixes apply.

1000 milliamps = 1 amp

1 microVolt = 1/1,000,000 Volt

Amperage = Voltage/Resistance

The surface of the Earth is used as the zero-volt reference, also called ground.


Potential (Voltage) Divider

A potential divider is a series of resistors that reduces the output voltage in
a circuit.

Given two resistors, the output voltage is given as follows:

V(output) = V(input) * R2 / (R1 + R2)
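A minimal Python sketch of the divider formula (my own illustration; the values are arbitrary examples):

def divider_output(v_in, r1, r2):
    """Output voltage of a two-resistor potential divider."""
    return v_in * r2 / (r1 + r2)

print(divider_output(10.0, 1000, 1000))  # 5.0 volts: equal resistors halve the voltage
print(divider_output(10.0, 9000, 1000))  # 1.0 volt: R2 is 1/10 of the total resistance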


Chapter 5 Microphones


Types of Microphones
Condenser Microphones:
Most popular type of mic for the studio. Good for picking up an entire ensemble
or individual parts. Needs phantom power. Large diaphragm: better for low
frequencies. Small diaphragm: great for capturing high frequencies (cymbals,
violins, fifes).
Dynamic Microphones:
Handle a lot of sound pressure level (volume). Good for drums, amps, rock vocals.
Can bang these around, resilient.
Ribbon Microphones:
A thin ribbon of aluminum instead of the Mylar diaphragm used in dynamic mics. Popular with brass
players for the ability to get a nice warm sound at a very high sound pressure level. Good
for an old-timey sound. Fragile and expensive.
Dynamic Microphones

Not as sensitive
Hardy
Handles high sound pressure levels
No phantom power
Small Diaphragm
Except on bass drum dynamic microphones, which use a large diaphragm
Good for live sound
Use on certain instruments for studio recordings
Good for rock vocals, drums, amps

Condenser Microphones

Sensitive
Fragile
Don't put this on a bass drum or an amp - it may be ruined by the high
sound pressure levels
Needs phantom power
48 volts extra boost to work
Be careful with this
Large or Small Diaphragm
May cause feedback if used for live sound because the mics are sensitive
Good for vocals, acoustic guitars, over the drum set

Ribbon Microphones

Sensitive
Fragile
Handles loud sounds like brass very well
No phantom power.
Phantom power can cause damage.
Ribbon
Good for studio and old-timey recordings
Good for brass, old timey sound

Popular Microphone Brands:

AKG, Shure, Neumann, Sennheiser, Audio Technica, Behringer, Rode,
Samson (consumer level)

Dynamic Microphone Examples


Shure SM-58
$100

Shure SM-57
$100


AKG D112
Large diaphragm bass drum mic
$130

Sennheiser MD441-U
This is a supercardioid dynamic mic used for vocals and instruments. Works really
well on stage. Also known as the Elton John Mic. Used for recording sessions as
well, usually on an instrument.
$1500


Sennheiser MD-421-II
This is a dynamic mic used with instruments such as saxophones. Used for
recording sessions.
$479

Condenser Microphone Examples


MXL 990
$70


Shure KSM-141
$400 each, $800 total
Matched Pair - Choose between cardioid and omni settings

AKG C414
$1000
Vintage large diaphragm mic.


Ribbon Microphone Examples


AEA R84
Figure 8 pattern
Ribbon mic
$1,000

Royer Labs R121


Ribbon Mic
$1,300


Microphone Polarity Patterns


Microphone Polarity Patterns are the pickup patterns of a microphone. These
are drawings that describe the area around the microphone that the microphone
picks up. The circles are drawn in 2D, but remember that the mic picks up in 3D.
You have to understand the direction and plane or axis of the diaphragm to
understand the pickup pattern. All diagrams are assuming a diaphragm of the
microphone to be facing so that the flat part is pointing to the top of the page, and
the microphone is parallel to the page.
Omnidirectional - the mic picks up in a circle around the diaphragm

Cardioid - the microphone picks up in a heart shape around the microphone.

Bi-Directional - the microphone picks up in front of and behind the mic in a figure 8
pattern.


Supercardioid - the microphone picks up in a cardioid pattern but with a little bit
in the back of the mic as well.

Hypercardioid - reaches a little further back than supercardioid.

All images are credited to Wikimedia Commons.


Microphone Frequency Sensitivity Charts
You will often find a frequency sensitivity chart in the packaging or technical
description of any microphone. This just tells you how sensitive the mic is to
different frequencies. On the Y axis is the sensitivity in decibels, and on the X axis
is the frequency.
So, for instance, the sensitivity of an SM-58 is as follows:

Notice the line starts to drop off around 200 Hz on the left. This means that it
doesn't pick up those low frequencies very well. It's really good at picking up
frequencies in the 4k, 5k, and 6k range, though. This is the upper range of the
piano and the range of "s" sounds in speech, and it brings out the bright frequencies. It dips between
7k and 8k, then rises back up at 10k, then rolls off after that. The little bump at
10k also gives it a brighter sound.


Mic Placement
Microphone placement is a very important part of audio engineering.
There is a sweet spot for every instrument, and the type of microphone
will also determine what sounds you will get. A typical recording studio
setup will include an hour and a half of getting good tones on the
instruments.
Mic Placement for Vocals: You want to make sure that you point the
diaphragm at the vocalist. Use a pop filter to get rid of harsh "F" and "P"
sounds (plosives and sibilance).
Mic Placement for Guitars: Make sure that you place the microphone
within one inch of the guitar so that it can pick up the widest range of
frequencies. SM-58s are near-field mics, which means they will pick up
only sounds within one to three inches of the microphone. To get a
stereo sound, place one mic up on the neck for the higher frequencies,
and a second mic on the hole for the low frequencies.
Mic Placement for Amps: You will want to get down on your hands and
knees and listen closely to the guitar player to determine where the best
tone for the amp is. Then place the mic in that area. You must listen to
how it sounds before you place it.

Image source: https://en.wikipedia.org/wiki/Pop_filter


Mic Placement Drum Sets


Depends on the number of mics you have, as well as genre considerations.
Prioritize by:
Overheads (2, one left, one right)
The overhead left mic will pick up mostly snare, high hat, and a crash cymbal if
there is one.
The overhead right mic will pick up mostly ride cymbal and some of the toms.
Snare
Use an SM 57 on the top snare, make bottom of snare a low priority (only if you
have plenty of mics)
Make sure it's out of the way of the sticks
Bass Drum
Mic this if you're doing more rock or metal types of music. For jazz it's not as important.
Toms
Mic each tom if you have enough mics, putting one on each tom. In order of
importance, you could put one right in between the mounted toms, and one on the
floor tom, or just use the overhead mic over the floor tom if you don't have
enough.


Drum Microphone Kits


Shure PGDMK6
Two overheads, 3 regular mics for snare and toms, and one bass drum mic. Three
clips to hold the mics to the actual drums.

Audix
Included in the DP7 drum microphone package is the D6, Audix's flagship kick
drum mic, two D2s for rack toms, one D4 for floor tom, the i5 for snare, and a
pair of ADX51s for overhead micing. Also included are four D-Vice rim mount
clips for the snare and tom mics, and three heavy duty tension fit mic clips for the
other three mics. Everything is conveniently packaged in a foam-lined aluminum
road case for safe keeping when the mics are not in use.


Drum Mic Placement Setup


http://en.wikiaudio.org/Recording_techniques:Drum_kit


Stereo Mic Techniques


XY Setup
Take two cardioid mics and place them on top of each other at a 90 degree angle.

AB Stereo Mic Setup


The AB Stereo Mic setup imitates human hearing by placing two mics
apart, facing the same direction, spaced about the width of the human head.


Mid Side Mic Stereo Setup (MS)
The Mid-Side mic setup (or MS setup) is done with two microphones: a figure 8
and an omni. Place the Omni above the figure 8 mic and it will pick up in a nice
stereo image all around.

MS Mic Setup
A great tutorial with pictures: http://www.uaudio.com/blog/mid-side-micrecording/
All pictures credit to https://en.wikipedia.org/wiki/Stereophonic_sound .


CHAPTER 6 CABLES


Introduction to Cables - Analog


MIDI adapter - use to connect the keyboard to the computer for MIDI.

RCA to RCA - use to connect a device to a sound system, such as a record player
to speakers, or DVD player to TV. Also known as a phono plug.

RCA to Mini - use to connect a device to a sound system, such as an iPod to


Auditorium sound system.

Stereo mini - connect iPod or phone to computer or sound system.

XLR - used for microphones. The XLR cable has 3 pins called positive, negative, and
ground. The XLR is a balanced cable.


TRS is used for headphones and long speaker wires. TRS stands for Tip Ring
Sleeve. It contains three conductors: positive, negative, and ground. It is also called a
stereo cable.

TS (1/4") is used for instruments and speakers. TS stands for Tip Sleeve and
contains two conductors: a positive and a negative. The TS cable is also known as a
phone cable.

Other Cables:
Banana Clip Cable

The banana clip is a type of port on the back of old amps. You may need a banana
clip to TS cable to hook the amp to the Speaker (amp = banana, speaker = TS).
Speakon Cable
This cable has a blue end that snaps into place. You have to twist the silver part to
pull it in/out. Used for speakers in live sound.


Digital Cables
Digital Cables transmit data using 1s and 0s (binary code)
Analog Cables use changes in voltage to transmit a signal that is shaped similar to
the source.
The first generation of video and audio cables were designed with analog signals in
mind. An analog signal represents the information by presenting a continuous
waveform similar to the information itself. For example, for a 1000 Hertz sine
wave, the analog signal is a voltage varying from positive to negative and back again
1000 times per second. When that signal is hooked up to a speaker, it drives the
speaker cone to physically move 1000 times a second and we hear the 1000 Hz sine
wave tone as a result.
A digital signal, unlike an analog signal, bears no resemblance to the information it
seeks to convey. Instead, it converts the 1000 Hz signal to a series of "1" and "0"
bits which is then transmitted through the cable and then gets decoded on the
other end.
Optical Cable

ADAT Machines used them (Alesis Digital Audio Tape). An optical cable is most
often used with audio interfaces, sound cards, and home consumer sound systems.


S/PDIF Cable

The S/PDIF cable looks like an RCA cable. It stands for Sony/Philips Digital Interface Format.
Used with ProTools. The back of a sound card may have a S/PDIF cable.

AES/EBU
This is on its way out. It stands for Audio Engineering Society/European Broadcasting Union. The
cable looks like an XLR.


AV Cables
VGA Cable

This is the old fashioned analog cable that connects a computer to a monitor, or a
laptop to a projector. Most PC laptops have this. Macs don't have this. This
transmits video only.
HDMI Cable

This cable transmits HD video and audio. It comes as HDMI, HDMI skinny, and
HDMI mini. The skinny version works with new Mac laptops. The MINI version
works with tablets. You need an adapter to convert from MINI or skinny to regular
HDMI.
Mini Display Port or Thunderbolt

This cable is for Mac laptops only. The port is the same, but the insides changed so
that the thunderbolt is faster. The Mini Display port is for Mac laptops prior to
2013 or so. You have to get a Thunderbolt or MiniDisplay Port to VGA adapter to
put a Mac through the projector. This cable transmits video only, not audio.


Parts of a Cable
XLR = Mic Cable
The XLR cable has three pins connected to three wires inside - one positive, one negative, and one
ground.

TRS = Tip Ring Sleeve


The TRS cable is used for stereo signals or as a balanced speaker cable.
The TRS cable has three wires inside. Tip goes to positive, Ring goes to negative, and Sleeve goes to
ground.

TS = Tip Sleeve
The TS cable is used for instruments such as guitar or piano.
The TS cable has two wires inside. Tip goes to positive, and sleeve goes to negative. There is no
shield or ground.

RCA
The RCA cable may or may not be shielded. The tip is positive, the rim is negative, and there may
or may not be another wire connected to the rim that will act as a shield. This depends on if you buy
cheap or nice RCA cables.


Activity: take apart cables to see the multiple wires and shielding inside the rubber casing.

All images credit to Wikipedia. https://en.wikipedia.org/wiki/Phone_connector_(audio)


Balanced/Unbalanced Cables
Balanced Cables: Cables that deflect noise by flipping the signal 180 degrees.
Balanced cables have three wires inside: one wire that is normal, one wire that has
the electrical signal flipped 180 degrees, and one wire that goes to the shield which
adds additional insulation.
Unbalanced Cables: cables that do not deflect noise by flipping the signal 180
degrees. Unbalanced cables may or may not have a shield, depending on how many
wires are inside the cable.
Regarding long distances:

RCA = Unbalanced = not good for long distances


XLR = Balanced = good for long distances
TRS = *if* Balanced = good for long distances
TS = Unbalanced = not good for long distances
Balanced Cable Picture

Image source: https://commons.wikimedia.org/wiki/File:KabelSymetrisch.png


For balanced cables, there are three wires - positive, negative, and a shield. One
wire carries the signal flipped 180 degrees in polarity, so it is opposite from the
normal signal when it enters the wire. It is flipped back when it comes into the
sound board. As a result, any electromagnetic noise that enters the cable is
cancelled out and the signal is clean. The third wire wraps around as a shield.
Also, I have personally noticed that phantom power won't travel on a cheap non-balanced
XLR cable because that third conductor is not present. The default for XLR
cables is to be balanced.
From Wikipedia:


A typical balanced cable contains two identical wires, which are twisted together
and then wrapped with a third conductor (foil or braid) that acts as a shield. The
two wires form a circuit carrying the audio signal; one wire is in phase with respect
to the source signal, the other wire is 180° out of phase. The in-phase wire is called
non-inverting, positive or "hot" while the out-of-phase wire is called inverting,
phase-inverted, anti-phase, negative or "cold". The hot and cold connections are
often shown as In+ and In ("in plus" and "in minus") on circuit diagrams.[1]
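Here is a minimal Python sketch (my own illustration, not from the book; it assumes NumPy) of why this works: the same noise lands on both wires, so flipping the "cold" wire back and summing removes the noise and doubles the signal.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(44100) / 44100.0
signal = np.sin(2 * np.pi * 440 * t)        # the audio we want to send
noise = 0.2 * rng.standard_normal(t.size)   # hum/interference hitting the cable

hot = signal + noise                        # normal copy plus noise
cold = -signal + noise                      # flipped copy picks up the same noise

received = hot - cold                       # the mixer flips "cold" back and sums
print(np.allclose(received, 2 * signal))    # True: noise gone, signal doubled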


Microphone Stands

Set a stand up from the bottom to the top


Drop bottom first
Then adjust the height
Then adjust the angle
Then adjust the length of the arm

Types of Mic Stands


Boom Stand

Gooseneck Mic Stand

Mic Clip

Don't forget the mic clip, which connects the microphone to the stand!

Wireless mics use a large sized mic clip


Chapter 7 Sound Boards


Sound Boards
Sound boards come in all shapes and sizes. They look complex, but they are really a
pattern divided into two sections:
1. Tracks
2. Outputs
Tracks are lined up vertically and usually the outputs are in the center.
Tracks have the same knobs going across which usually include:
Gain (trim), EQ, Aux, Pan, and Effects.
Sound boards can be analog or digital, depending on how the insides work.
Depending on the board, you can have several monitor mixes going through
multiple aux outputs.


Signal Flow
Its important to understand signal flow before digging into the use of a sound
system.
Live Sound Signal Flow
Mics go to the Sound Board which then goes to the Amps which then go to
Passive Speakers.

2. Main Speakers
There is signal going from the overhead mics (6) on stage to the sound board, and
then that signal gets mixed in with all the other tracks to go to the amps which are
back stage, and then go to the main speakers.
3. Monitors:
Monitors are speakers placed on stage that face the performers. Performers need
this to hear themselves or a backing track so that they can sing in tune and know
where they are in the song.
The signal goes from the sound board to the monitors from a separate mix called
the auxiliaries. You have up to 6 possible mixes you can send out with six auxiliary
outputs. Right now the board is set up to have the monitor mix go through aux 1.
The monitors are set up in a daisy chain fashion. That means that the mix from
one monitor goes to another. We could set it up so that the left monitor gets the
Aux 1 mix and the right monitor gets the Aux 2 mix. Why don't we? Mostly because we
lack the adapter necessary to convert the output in the back of the board
to the XLR plug needed for the snake.


Home Studio Signal Flow


Mic => Audio Interface => A/D Converter => Computer =>
D/A Converter => Audio Interface => Headphones

Audio Interface = a device that is used to hook a microphone up to a computer


A/D Converter = Analog to Digital converter, converts analog audio signal into
binary digits
D/A Converter = Digital to Analog converter, converts binary digits into analog
audio signal
The A/D Converter or D/A Converter is usually part of the audio interface.


Mixing Console

Mackie CR1604-VLZ mixing console


Image source:
https://en.wikipedia.org/wiki/Mixing_console#/media/File:MackieMixer
.jpg


Name_____________________ Date ___ Period _____


Mixing Board Vocabulary:

Outputs

Mains

Groups

Auxiliaries

Send

Return

Inputs

Line Level/Mic Level

Microphone jack

TS jack

Other considerations

Pad = __ dB decrease in sound, makes it line level


Inputs and Outputs on a Mixer

Digidesign Venue Profile Mixer (Digital)


https://commons.wikimedia.org/wiki/File:Com_DigidesignProfile.jpg
Each channel stands for an input. Each channel strip has:

Gain

EQ

Aux

Pan

Solo/Mute

Faders/group buttons

Outputs include knobs for:

Aux, including FX

Groups

Mono

Master Stereo Output



Inputs and Outputs on a Mixer


Back view


Inputs

Please note: this channel strip is missing the pad and phantom power buttons,
which are often found at the top of the channel strip. Also missing is a roll off
button. This is a Mackie CR1604-VLZ mixing console.
Image source:
https://en.wikipedia.org/wiki/Mixing_console#/media/File:MackieMixer
.jpg


Outputs
Bus
The word bus is used to describe any signal flow out of a track. The word bus is
often utilized to describe the actual cable that would be connecting the track to its
alternate output. In the old days, audio engineers would have to connect a cable
from the track output to an external device and then connect another cable back
into the mixer. Now, the entire bus concept is created using pathways within the
digital software.

Sub Mix
A sub mix is the word used to define the process of mixing several tracks down to
stems, or group buses. For instance, you could lump several drum tracks together
into one sub mix, mix that down, and then have one stereo track with just the
drums. The word Sub Mix can also be used in live sound reinforcement to
describe the process of combining certain tracks together to one sub group before
it goes to the master. You can then add effects or turn the volume up and down for
the sub mix and it will apply to all the tracks at once.


Inserts
Inserts:
Inserts are ports on the back of the sound board that allow signal to go out and
come back in. Usually, they are used to add effects such as compression or reverb
to the individual track.
In order to use an insert, you have to have a cable that has the capacity to carry two
signals. Usually, you use an unbalanced TRS cable. The info goes out of the ring
and comes back in at the tip.
A little tip: I have used the output portion of the insert jack to extend a mixer. If
you put a TS cable into the Insert port until you hear one click, it will take the signal
out of the mixer (out only).

An insert port on the back of a mixer will include a send and a return
signal.


Aux Output
An Aux Bus is an output from the board that goes through the Aux output port.
Usually, aux outputs are used for monitors.
Monitors:
Monitors are speakers facing the performers on stage.
o Each track has its own aux pot
o You can control the mix in the monitors by adjusting each track's
aux pot
o Vocalists and instrumentalists will want a certain amount of each
element in their mix. For instance, they might want to hear a lot of
the bass, piano, and vocals but no drums. You need an aux track to
do this so that it doesn't affect the main mix coming out of the
main speakers.
View from the top of the board where you control Auxiliary output volumes

View from the Back of the Board - These are the output jacks


PFL and AFL


One aspect of the auxiliary output is whether or not the level of the output is
dependent on the level of the track. For instance, you might want to be able to hear
the sound through the monitors but not through the mains. There are two options:
one where the levels are dependent on the track levels (pre) and one where the
levels are independent of the track levels (post).
Pre-Fader Level (Pre) - PFL
The pre-fader level button, when depressed, will mean that the sound will go
through the auxiliary output at whatever level you set regardless of what you do to
the fader. So if the fader is all the way down, you can still hear the sound come
through the monitors. The fader has no bearing on the levels.
Post-Fader Level (Post) - also known as AFL
The Post-fader level button, when activated, will mean that the sound will go
through the auxiliary output linked to the level of the track fader. So if the track
fader is down, the auxiliary level will be down as well. When the track fader is up,
the auxiliary level will be up as well.


Groups Output (group sub mix)


Groups are an additional output option on a sound board. They are used to group
various channels together before the signal is sent to the main speakers.

Groups allow you to group different tracks together so that one fader
controls the volume for all the tracks in the group.
o Route with the group buttons next to each track


Chapter 8 Digital Terms


Sample Rate
Sample Rate = the number of times per second that the information is sampled,
or read

Sample rate is measured in Hz (cycles per second)

CD Sample Rate is 44.1 kHz

DVD Sample Rate is 48 kHz

The sample rate is not how fast the CD spins; it is how many amplitude snapshots are stored for each second of audio

The more samples, the more accurate the digital representation of the
sound

This is a picture of an analog signal (light blue line) that represents the actual
sound. The vertical red lines (the ones with the dots at the top) represent the
individual samples taken at a fixed rate.


This is a picture of a low and then high sample rate. Notice the sound would be
more accurate with a higher sample rate.
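Here is a minimal Python sketch (my own illustration, assuming NumPy) that "samples" a 1 kHz tone at two different rates; the higher rate stores far more points per millisecond and therefore follows the analog wave more closely.

import numpy as np

def sample_tone(freq_hz, sample_rate, duration=0.002):
    n_samples = int(sample_rate * duration)   # how many snapshots we store
    times = np.arange(n_samples) / sample_rate
    return np.sin(2 * np.pi * freq_hz * times)

low_rate = sample_tone(1000, 8000)      # 8 kHz sample rate
high_rate = sample_tone(1000, 44100)    # 44.1 kHz sample rate (CD quality)

print(len(low_rate), len(high_rate))    # 16 vs. 88 samples for 2 ms of audio
print("Nyquist limit at 44.1 kHz:", 44100 / 2, "Hz")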


Bit Depth
Bit Depth = the number of 1s and 0s (bits) in the digital word used to measure the
amplitude of each sample.
Example:
A Bit Depth of 4 has 16 possibilities: 1 1 1 1, 1 0 0 0, 1 1 0 0 , 1 1 1 0, 0 0 1 1, 0 1 1
0, etc.
A Bit Depth of 7 has 128 possibilities
A Bit Depth of 16 has 65,536 possibilities!
The more possibilities of numbers the more accurate the sample can be.
On the picture below, the first top picture is two bit, and the bottom picture has
multiple bits.

On the picture below, the top picture has 8 bits and the bottom picture has 16 bits.
Both have the same sampling rate. You can see that the 16 bit version is more
accurate to the analog wave than the 8 bit version.
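A minimal Python sketch (my own illustration) of how the number of possible amplitude values grows with bit depth (2 raised to the number of bits):

for bits in (4, 7, 8, 16, 24):
    print(f"{bits}-bit audio: {2 ** bits:,} possible values")

# 4-bit: 16, 7-bit: 128, 16-bit: 65,536 -- matching the examples above.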


Buffer Size
When recording audio into your computer, your audio interface needs some time
to process the incoming information. The amount of time allotted for processing is
called the Buffer Size. Oftentimes a smaller Buffer Size is desirable, but not
one that is too small. Here's why:
If you have a very large Buffer Size, you will notice a lag between when you speak
in to the Mic, and when the sound of your voice comes out your speakers. While
this can be very annoying, a large Buffer Size also makes recording audio less
demanding on your computer.
If you have a very small Buffer Size, you will notice little to no lag at all between
speaking into the Mic and the audio coming out of the speakers. This makes
recording and hearing your own singing much easier, however this can also place
more strain on your computer as it has very little time to process the audio.
You can fix this by increasing your Buffer Size to something slightly larger. After
some experimentation, you will find the right balance.
When recording audio to a computer, increase the buffer size and monitor the
recording through the audio interface's monitor mix. That way, you can get the best
quality. If you monitor through the device rather than the software program, you will
have no delay in sound. If you monitor through the software program, you will
have delay.
When recording MIDI, lower the buffer size. The quality of the audio isn't as
important as having little to no delay.

Latency
Latency is the amount of delay in the sound. It can be the delay between the time
you press down a key to the time you hear it, or the time between when you speak
and you hear your voice. Latency is measured in milliseconds, or thousandths of a
second.
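Here is a minimal Python sketch (my own illustration) of the usual back-of-the-envelope estimate for the latency a given buffer size adds; real interfaces add some extra conversion time on top of this.

def buffer_latency_ms(buffer_samples, sample_rate):
    """Approximate delay added by the buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

print(buffer_latency_ms(64, 44100))    # about 1.5 ms  (small buffer, low latency)
print(buffer_latency_ms(1024, 44100))  # about 23.2 ms (large buffer, noticeable lag)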


Nyquist Frequency
Nyquist Frequency, named after Swedish-American engineer Harry Nyquist, is half
the sampling frequency and indicates the highest sound that can be recorded. So, if
your audio interface is sampling at 44.1 kHz, then it will be able to pick up
frequencies up to about 22 kHz (which is more than adequate for the human ear). If your
audio interface is sampling at 22 kHz, the highest frequency it will be able to record
is only 11 kHz. You can tell the difference because audio sampled at 22 kHz sounds like it's
coming from a phone!

Image source: https://en.wikipedia.org/wiki/Nyquist_frequency

Chapter 9: DAW Processing and Sound Effects

Signal Flow on a DAW


Effects:
"Effects" is the word used to describe any type of processing done to a track, such as reverb, compression, or EQ.
You can add effects to your music in one of two ways: on an individual track, or on a subgroup of tracks.
FX:
FX stands for effects (processing) applied to one particular track. Clicking on the FX button on a track in Mixcraft adds an effect to that track only.

Signal Flow for Audio Effects in Mixcraft 5:

*Depending on the send volume type, the audio from a track will be sent at one of
the starred * points in the audio signal flow.
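As a conceptual illustration of the two routing options described above (a sketch of the general idea, not Mixcraft's actual signal flow; the function names and numbers are made up), an insert effect processes one track's audio directly, while a send feeds a scaled copy of several tracks into one shared effect bus:

    def apply_insert_fx(track_samples, fx_chain):
        """Insert effects: each effect processes the track's audio in order."""
        for fx in fx_chain:
            track_samples = [fx(s) for s in track_samples]
        return track_samples

    def mix_with_send(tracks, send_levels, bus_fx):
        """Send effects: a copy of each track, scaled by its send level, feeds
        one shared effect (for example a reverb bus); the processed result is
        mixed back in with the dry tracks."""
        length = len(tracks[0])
        dry = [sum(t[i] for t in tracks) for i in range(length)]
        bus_in = [sum(t[i] * lvl for t, lvl in zip(tracks, send_levels))
                  for i in range(length)]
        wet = [bus_fx(s) for s in bus_in]
        return [d + w for d, w in zip(dry, wet)]

    # Hypothetical usage: a gain "effect" on one track, plus a shared "reverb" send.
    louder = lambda s: s * 1.5
    fake_reverb = lambda s: s * 0.3
    track_a = apply_insert_fx([0.1, 0.2, 0.3], [louder])
    track_b = [0.05, 0.05, 0.05]
    print(mix_with_send([track_a, track_b], send_levels=[0.8, 0.2], bus_fx=fake_reverb))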

Introduction to Pan
Pan
Indicates whether you want the sound to come out of the right or left speaker. Pan is adjusted in Mixcraft for each individual track using the butterfly-shaped control above the Mute button.
Applying Pan to Drums
You will need to decide whether to apply pan from the point of view of the drummer or the point of view of the audience. Either way, make sure that the panning of the drum set is consistent with the physical location of the drums. (See above)
Below is an example of panning from the drummer's perspective. You can do it either way; just make sure you are consistent!
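For illustration (a minimal sketch; the book does not specify how Mixcraft computes pan internally), an equal-power pan control can be modeled like this: the pan position decides how much of a mono signal goes to the left and right speakers.

    import math

    def pan(sample, position):
        """Split a mono sample between left and right (equal-power pan law).

        position: -1.0 = hard left, 0.0 = center, +1.0 = hard right (assumed convention)
        """
        angle = (position + 1) * math.pi / 4   # map -1..1 onto 0..90 degrees
        left = sample * math.cos(angle)
        right = sample * math.sin(angle)
        return left, right

    print(pan(1.0, -1.0))  # all signal in the left speaker
    print(pan(1.0, 0.0))   # equal level in both speakers (about 0.707 each)
    print(pan(1.0, 1.0))   # all signal in the right speaker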

Intro to Reverb (Reverberation)


Reverberation, or reverb for short, refers to the way sound waves reflect off various surfaces before reaching the listener's ear.
Although the sound is projected most strongly toward the listener, sound waves also project in other directions and bounce off the walls before reaching the listener.
Sound waves can bounce backwards and forwards.
When sound waves reflect off walls, two things happen:
1. They take longer to reach the listener.
2. They lose energy (get quieter) with every bounce.
The reflections are essentially a series of very fast echoes, although to be accurate, the term "echo" usually means a distinct and separate delayed sound. The echoes in reverberation are so fast that they merge together to sound like one single effect.
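The idea of many fast, decaying echoes can be sketched in a few lines of Python (a conceptual illustration only, not how any real reverb plugin is implemented; the delay, decay, and repeat values are made up): each repeat of the signal arrives a little later and a little quieter.

    def simple_reverb(dry, delay_samples=200, decay=0.5, repeats=5):
        """Very rough reverb model: add progressively quieter, delayed copies
        of the signal back onto itself."""
        out = list(dry) + [0.0] * (delay_samples * repeats)
        for r in range(1, repeats + 1):
            gain = decay ** r              # each bounce loses energy
            offset = delay_samples * r     # each bounce arrives later
            for i, s in enumerate(dry):
                out[i + offset] += s * gain
        return out

    impulse = [1.0] + [0.0] * 9                  # a single clap-like sample
    tail = simple_reverb(impulse)
    print([round(x, 3) for x in tail if x > 0])  # 1.0, then 0.5, 0.25, 0.125, ...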
Reflections
High frequencies reflect easily; therefore you hear more of them in a large hall.
Low frequencies do not reflect as easily; they are more likely to go through a surface rather than bounce back. Therefore, you don't hear as much low-frequency energy in a large hall. In fact, there is a low-frequency cutoff control on many reverb units that allows you to cut out those frequencies, because they might make the music sound muddy or unclear.
Reverb began in the cathedrals in Europe.

Click here to read about the history of reverb:


http://www.uaudio.com/webzine/2004/may/text/content4.html .
Assignment: Add reverb to a song.

Dynamics Processing
Dynamics: loudness, measured in dB (decibels). Remember that decibels indicate perceived loudness and, based on the Fletcher-Munson curves, may differ from absolute loudness. Because of the shape of the pinna and inner ear, humans hear certain frequencies more easily than others.
Noise Floor: the level of background noise below which no useful signal can be heard. (For reference, 0 dB SPL is defined as the threshold of hearing, the softest sound humans can perceive.)
Distortion: the point at which a sound becomes so loud that it changes the timbre. Distortion adds a certain amount of buzz to the sound. The buzz comes from the upper harmonics that appear when the sound becomes very loud.
Drive: Drive is basically like a volume knob, but it's designed to add volume to a level that adds distortion.
Can you have soft distortion?
Yes, by overloading the pre-amps. Remember that there are many different stages of gain (gain staging) and there is potential for distortion at each stage. So you could overload the mic preamp, but you may not hear it very loudly in your headphones because the master fader is down.
Activities:
Add distortion to a track by using the Amp Simulator in Mixcraft.
Check out the Boost 11 plugin, a free mastering plugin that will boost your song's loudness. Watch out, though: it's designed for rap/hip hop, so it will also boost the bass frequencies. This plugin was designed to create radio mixes (i.e., songs that would be heard on the radio).

Dynamics Processors
Expanders/Gates

Limiters/Compressors

Dynamics Processing Parameters


Threshold: the level (in decibels) at which the compression kicks in. Simply put, when a signal's level exceeds the set threshold, the compressor activates and begins lowering the volume. If compression is like a librarian telling everyone to be quiet, then the threshold is the level of loudness at which the librarian starts to say "shh!"
Threshold settings may seem confusing because the numbers are negative: they are measured in decibels below full scale, and on some units the values move in fixed steps. Basically, you set the threshold just below the loudest point. Watch the level meter to see where the signal starts to distort, then set your threshold right below that amount so the track can be as loud as possible without distortion.
Ratio: how much the volume is lowered. For instance, if a compressor's ratio is set to 6:1, then for every six decibels the signal rises above the threshold, only one decibel comes through. Extreme settings, like 10:1, allow only one decibel through for every ten. Note that any ratio of about 10:1 or higher is considered a limiter, which is essentially the same as a compressor, but more extreme.
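To make threshold and ratio concrete, here is a minimal sketch with hypothetical numbers (not tied to any particular plugin) that computes the output level of a simple compressor for a signal above the threshold:

    def compress_db(input_db, threshold_db=-16.0, ratio=6.0):
        """Return the output level in dB for a simple hard-knee compressor.

        Levels at or below the threshold pass unchanged; levels above it are
        reduced so that 'ratio' dB of input above the threshold becomes 1 dB
        of output above the threshold.
        """
        if input_db <= threshold_db:
            return input_db
        return threshold_db + (input_db - threshold_db) / ratio

    print(compress_db(-20))           # -20.0: below the threshold, untouched
    print(compress_db(-10))           # -15.0: 6 dB over the threshold becomes 1 dB over
    print(compress_db(-4, ratio=10))  # -14.8: a 10:1 ratio squashes peaks much harder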

Dynamics Processing Continued


Attack: this is the one parameter that everybody ignores, but it is extremely important to your mix. Basically, the attack is how long it takes for the compressor to kick in once the sound reaches the threshold. Notice that as the number (in milliseconds) gets bigger, the attack is slower (it takes longer for the compressor to kick in). For instance, you need at least 60 milliseconds or so to hear the sound of the "s" and "t" in vocals. Personally, I have found that 73 milliseconds works really well for an acoustic upright bass.
Decay (often called Release): how slowly the compressor stops lowering the volume once the signal falls back below the threshold.

Expander/Gate
Noise Gate: this plugin creates silence when the main instrument cuts out and all you can hear is noise.
For instance, when you record an electric guitar, a certain amount of noise from the amp will be present. You don't want that noise to be part of the mix, so you can add a noise gate, which creates silence when the instrument is not playing. You set the gate's threshold just above the level of the noise, and it only lets sound through when the signal rises above that threshold.
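A minimal sketch of the gate idea (the threshold and sample values are hypothetical): samples louder than the threshold pass through, and anything quieter is muted.

    def noise_gate(samples, threshold=0.05):
        """Mute any sample whose absolute level is below the threshold."""
        return [s if abs(s) >= threshold else 0.0 for s in samples]

    # Quiet amp hiss (0.01-0.02) disappears; the louder notes (0.4, 0.6) pass through.
    print(noise_gate([0.01, 0.4, 0.02, 0.6, 0.015]))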

Compression
Compression: an audio effect that turns down the loudest parts of a signal, making the overall volume more even.
Have you ever been in the library when there was too much noise and the librarian shushed everyone? Well, that is like compression. Basically, when the music reaches a certain loudness (the threshold), the compressor kicks in and makes everything above that level softer at a certain ratio.
What does compression sound like?
Search "Katy Perry Firework chorus isolated vocals" on YouTube to hear this in action.
Anything by Adam Lambert:
http://www.youtube.com/watch?v=X1Fqn9du7xo
Knee: the word "knee," when applied to a compressor, describes how the compression curve looks at the threshold. If it bends gradually, it's a soft knee; if it's a very sharp angle, it's called a hard knee. With a hard knee, the compressor applies its full ratio the instant the signal crosses the threshold: if the threshold is set at -16 dB, then the moment the source gets louder than -16 dB, the full amount of compression is applied. With a soft knee, the compression kicks in gradually as the source sound becomes greater than -16 dB.

Appendix A
Skill Based Tutorials

Skills Tutorial #1: Editing (demonstration)
Editing: The process of moving clips around in a track. May include splitting clips,
cutting out certain sections, and/or meshing together different takes of the same
song.
Review of commands for Mixcraft 5:
Split clips: right-click and choose Split, or press Ctrl-T.

Moving clips around by the handle: grab the top part of the clip (green).

Mass moving clips: you can select multiple clips and then move them all at the same time by one clip's handle.

Mass splitting clips:

1. Click and drag over the clips you want to split.
2. Right-click where you want to split and select Split.

Zoom: the process of viewing the song closer up and farther away. This is very important when editing!
The buttons for horizontal zoom are located at the top.

Hold your cursor under a track until you see the two lines, then click and drag to zoom up and down.
Playhead vs. the two notches in a track: adjust the playhead by clicking in the dark area at the top. Notice that the two notches don't follow; adjust the two notches by clicking inside a track.

Skills Tutorial #2: Combining Clips and Bouncing (activity)
Bounce: when you take multiple things and combine them into one thing. For instance, if you take multiple clips and combine them together, you bounce them to create one long clip with all the parts. In Mixcraft, you can do this with multiple clips by selecting Merge to New Clip under the Edit menu. While Mixcraft stays away from this term, most other audio editing software uses "bounce" to describe this type of merge.
Add a song to Mixcraft: go to Mix > Add Sound File. Notice that you don't go to Open or Load New Project; those look only for Mixcraft project files. In other software programs, you usually go to File > Import Audio, or you can click and drag the file into the timeline.
Objective: Take the two silent sections out of the given song. The song is on a flash drive coming to a computer near you.
1. Add the sound file to an audio track in Mixcraft:
Go to Mix > Add Sound File (Ctrl-H)

2. Turn the snap off.

3. Click in the track where you want to zoom so you have the two notches at
the start of the first silence.

4. Click on the zoom plus button until you zoom all the way in.

5. Right-click and split at the point where there is no waveform. Try to be as exact as possible.
Notice that once you put your cursor in the clip, the white volume line appears and obscures the view of the waveform. Just be as precise as you can.
6. Repeat steps 3-5 for the end of the silence.

7. Delete the silence.

8. Move the second clip so it's almost touching the first clip.

9. Click at the end of the first clip to put the two notches at the end of the
first section so it will be ready to zoom into that area.

10. To get this as exact as possible, zoom almost all the way in.

11. Move the two clips so they are right up next to each other. Listen to see if
there is any tempo change or static. Adjust as necessary.

12. Zoom out and repeat steps 3-11 for the second set of silence.

13. Select all three clips.

14. Bounce to a new clip by going to:


Edit > Merge to New Clip (Ctrl-W)

Skills Tutorial #3: The Art of the Crossfade (activity)
Crossfade: Crossfade is a term that describes the process of turning down one
thing while turning up another.

Objective: student can crossfade two parts of a song without losing or gaining
time while also keeping the proper chord progression.
To Do this:
1. The file should be located on your desktop.
2. Add the sound file by going to Mix> Add Sound File. Navigate to the
desktop and find it.
3. Delete most of the silence. (select what you want to delete and hit delete)

Hit Delete

4. Make sure Snap is set to off


5. Select Time for the timeline.
6. Make sure the first clip is flush with the start of the timeline.
7. Move the clips so they are close together but not touching yet.

8. At 42:244, put a marker by right clicking in the timeline and selecting Add
Marker.

You can put your marker in the general area, then zoom in and look at the time at
the bottom to know where you are.

9. Title the marker "A" and press OK. This is the point where the chord changes.
10. Now you're going to have to use your musical ears to finish. The assignment is to move the second clip so the trumpet sixteenth notes lead into the chord change. The clips will overlap a bit.
Here's how I do this:
The high note on the trumpet needs to go where the marker is.
Listen and figure out where that is.
Then grab the clip at that point and put it where the marker is.

Finished product:

Notice that the crossfade actually happens a little before the marker, allowing one to hear the trumpet sixteenth notes going into the chord change. Also notice that the crossfade extends a little past the marker; this is the first clip getting softer while the second clip is almost at full volume.

Skills Tutorial #4: Stereo and Mono Tracks
Stereo: the song has been mixed so that there are two separate signals sent to two speakers (or headphones).
Mono: the song has been mixed so that there is one signal; if you listen with headphones, the hardware duplicates the signal so that the same thing is heard in both ears.
In Mixcraft, how do you know?
If you look at the levels in the fader, you will see either one or two meters that go green/yellow/red.
Stereo:
Notice how the top meter is different from the bottom. This indicates that there is a lot more signal in one output than in the other.

You'll notice that many instrument sounds are already mixed in stereo. This is a lead sound in Rapture.

Mono:
This is a mono track. Notice that there is signal in both outputs, but it's the same.
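As a tiny illustration (a sketch, not Mixcraft-specific): a mono signal played on a stereo system is simply the same samples sent to both channels, while a true stereo signal can carry different content on each side.

    def mono_to_stereo(mono_samples):
        """Duplicate a mono signal so both channels carry the same audio."""
        return [(s, s) for s in mono_samples]          # (left, right) pairs

    mono = [0.1, 0.3, -0.2]
    print(mono_to_stereo(mono))                        # identical left/right: sounds centered
    stereo = [(0.1, 0.0), (0.3, 0.05), (-0.2, -0.1)]   # different left/right content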

Skills Tutorial #5: Introduction to EQ (demonstration)
EQ is short for equalization and is defined as the process of boosting or cutting certain frequencies in a recording or live sound setup. Basically, if you know what frequency an instrument is playing, you can really change the entire sound of the song by boosting or lowering that particular frequency. The ability to use EQ effectively is a year-long learning process, where you start to associate different sounds with different numbers and then adjust those numbers with an EQ plugin.
Equalization (EQ): the boosting or lowering of certain frequencies.
Frequency: cycles per second (Hz), also known as the pitch; often referred to as treble or bass.
Parametric EQ: a type of equalizer with many parameters (gives you a large amount of control).
Graphic EQ: a type of equalizer where the bandwidth is set and you use faders at preset frequencies to adjust the levels.

Bandwidth: the range of frequencies that are being adjusted.

The boosted frequency band in the picture above spans roughly 1 kHz to 5 kHz. The greatest gain is at 2.1 kHz.
Q: the sharpness of the bandwidth (whether it is a gradual or extreme change).
Filter: the shape of the bandwidth.
Examples include:
Shelf filter: a shelf filter will raise or lower all the frequencies above or below a certain point. The icon to select that type of filter usually looks like a wishbone.
A low shelf filter would have the following icon:
A high shelf filter would have the following icon:

Example 1: the picture below is a low shelf filter. All of the frequencies below 100
Hz are softer.

Example 2: the picture below is a high shelf filter. All of the frequencies above 10
kHz are louder.

Low pass filter: allows only low frequencies to be heard; low frequencies pass through.
High pass filter: allows only high frequencies to be heard; high frequencies pass through.
Notch filter: raises or lowers a narrow band around a certain frequency.
You can combine multiple frequency adjustments on one track or over an entire song. There is an art to creating good EQ for a track, mix, instrument, or song.
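To get a feel for what a low pass filter does to individual samples, here is a minimal one-pole smoothing filter (a simplified sketch with a made-up coefficient, not the algorithm any particular EQ plugin uses): it lets slow, low-frequency movement through while dulling rapid, high-frequency wiggles.

    def low_pass(samples, alpha=0.1):
        """One-pole low-pass filter: each output leans mostly on the previous
        output, so rapid sample-to-sample jumps (high frequencies) are smoothed away."""
        out = []
        prev = 0.0
        for s in samples:
            prev = prev + alpha * (s - prev)
            out.append(prev)
        return out

    # A jittery signal: a slow rise with fast alternating wiggles on top.
    noisy = [0.0, 0.4, 0.1, 0.5, 0.2, 0.6, 0.3, 0.7]
    print([round(x, 3) for x in low_pass(noisy)])  # the fast wiggles come out much smaller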

Skills Tutorial #6: How to Make Vocals Sound Like a Telephone (activity)
The following directions pertain to using the graphic EQ in Mixcraft 5.
Step 1: Record vocals onto a track, or add a vocal loop.
Step 2: After recording your vocals into the track, click on the little FX button underneath the track and select Acoustica EQ from the drop-down menu for effects. This is the graphic EQ that comes with Mixcraft 5.
First of all, use this opportunity to hear what all the bands sound like with vocals: put all the bands down all the way, and then bring one up at a time while listening to the track. Then, using the Acoustica EQ, put all of the faders down to zero, bring the 1k and 2k faders up almost all the way, and bring 4k up a little more than halfway.

Skills Tutorial #7: Hearing the Spatial Aspects of Sound (listening exercise)
Hearing the final product before you start
Acknowledgements: many of the mixing tutorials have been highly influenced by Bobby Owsinski's book, The Mixing Engineer's Handbook, 2nd edition.
Step 1: Hear the final product in your head before you start mixing.
By and large, most mixers can hear some version of the final product in their heads before they even begin to mix.
Hearing the final product before you start comes from years and years of listening to various versions of that genre. If you're not familiar with the genre you are mixing, your mix will reflect your personal tastes rather than the aesthetic tastes of the people who like that genre.
The best way to prepare for this is to listen to EVERYTHING, all the way back to 1920s Dixieland. Applications like Spotify and Pandora make this easy today. Then, listen DEEP into your preferred genre. Learn about the history of your favorite genre, the thought leaders, the producers, the split-offs and sub-genres. Music is constantly morphing, and it's important to understand how artists influence each other. This in turn influences production and your input into the process.
The production style for pop music changes pretty quickly; you need to listen to the most recent songs and keep abreast of a lot of technology to keep up with this genre.
Activity: What are the aesthetics for different genres?
Aesthetics: that which is deemed beautiful to different groups of people. Listen to
the following examples of music and describe the aesthetic of the genre. For
instance, describe:
Which frequencies are emphasized? Does it have a lot of bass? Mids? Highs? What
is the size of the room? (ambience) Is the group near or far away? Where are the
instruments in the mix? Which instruments seem close and which seem farther
away? Which are on the left and which are on the right? What is the sound of the
snare like? Is it tight and short or does it have a long, sustained sound? What
decade was this produced in?

Name _____________________________ Date _____

Period _____

Listen to the following examples of music and describe the aesthetic of the genre.
Rock - The Black Keys, "Run Right Back"

Prominence of the bass guitar -
Ambience - close up or far away -
Prominence of vocals -
Sound of the snare -
Prominence of Bass Drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -

Audience -

Metal - Parkway Drive, "Dark Days"

Prominence of the bass guitar -
Ambience - close up or far away -
Prominence of vocals -
Sound of the snare -
Prominence of Bass Drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -

Audience -

Hip Hop - Drake, "Take Care" ft. Rihanna

Prominence of the bass guitar -
Ambience - close up or far away -
Prominence of vocals -
Sound of the snare -
Prominence of Bass Drum -

Audience -

Rap - Pete Rock and C.L. Smooth, "Appreciate" (clean rap)

Prominence of the bass guitar -
Ambience - close up or far away -
Prominence of vocals -
Sound of the snare -
Prominence of Bass Drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -

Audience -

Jazz - Esperanza Spalding, "Black Gold"

Prominence of the bass guitar -
Ambience - close up or far away -
Prominence of vocals -
Sound of the snare -
Prominence of Bass Drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -

Audience -

Country - Carrie Underwood, "Before He Cheats"

Prominence of the bass guitar -
Ambience - close up or far away -
Prominence of vocals -
Sound of the snare -
Prominence of Bass Drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -

Audience -

Skills Tutorial #8: Tall, Deep, and Wide
Most great mixers think in three dimensions.
Tall = frequency range
Engineers are thinking about making sure all frequencies are represented.
Clarity is what you aim for.
This will depend on the instruments.
Make sure the instruments don't get in the way of each other, unless it is a very dense texture and that is what the genre calls for aesthetically.
Deep = ambience, room size
Do this with reverb, delays, flanging, chorusing, room mics, overhead mics, and even leakage.
Wide = pan, or the left-right dimension
Create a more interesting soundscape by adjusting the pan and giving the song a three-dimensional feel.

Skills Tutorial #9: Top Beginner Mistakes (demonstration)
Please note: the teacher should demonstrate songs that fit each description.
1. Lack of texture - the same musical instruments play for the whole song.
2. No center point in the mix - when there is a pause in lyrics, the music loses energy.
3. Noises - clicks, hums, extraneous noises, count-offs, lip-smacks, breaths.
4. Missing clarity - you can't hear each individual instrument; there is too much low end or high end.
5. Distant - overuse of reverb causes the mix to sound far away.
6. Levels are off - instrument levels vary from balanced to too soft or too loud; certain lyrics can't be distinguished.
7. Boring sounds - generic, dated, or often-heard sounds are used.

Mixing Tutorial #10: Elements of a Mix (listening and lecture)
Every piece of modern music (rock, pop, R&B, rap, country, new age, swing, drum and bass, trance, and every other genre with a strong backbeat) has six main elements to a great mix:
1. Balance - the volume level relationship between musical elements
2. Frequency range - having all frequencies properly represented
3. Panorama - placing a musical element in the sound field
4. Dimension - adding ambience to a musical element
5. Dynamics - controlling the volume envelope of a track or instrument
6. Interest - making the mix special
Steps to mixing:
1. Level the tracks within the song as a whole
2. Add EQ to each track as needed
3. Add effects to each track as needed
4. Adjust levels for the whole song as needed again
5. Add EQ and mastering effects to the Master fader
6. Level the tracks again within the song if needed
7. Export the song, repeat for all the other songs on the album, then master the entire album to have one consistent sound
Remember that EQ, compression, and reverb will adjust the loudness of the track.
http://en.wikipedia.org/wiki/2012_Grammy_Awards#Production.2C_Surround_Sound

Skills Tutorial #11: Balance (activity)


Balance: Balance is described as the loudness or volume of each different
instrument/vocal track as it relates to the other tracks.
Use volume to shape the emotion of the piece
1. Volume within the track

2. Volume of each track as it relates to the other

3. Volume of the song as a whole

Remember, due to the work of many scientists, we have learned that humans hear certain frequencies louder than others, namely the 1-3 kHz range (the same range as a baby's cry). Make sure you listen to your songs at around 83 decibels to get the most accurate sense of the frequency range; if it's too soft, you won't hear the bass. You will learn various techniques to differentiate the sound of the different instruments. You will also learn how to use the volume of the whole song to build and release tension.

Mixdown Project
Problem: Make this song sound good. It is currently distorted.
Assignment: Try to mix down this song using volume levels so that the bass is as loud as possible without distorting (which is what would be appropriate for this genre, a crossover metal/hip hop/electronic feel).
Technique:
1. Always keep the Master Fader at 100%. Do NOT try to compensate by
turning down the master volume.
Good

Bad

2. Adjust the different track volumes to achieve the desired effect. (Like #2
above)

ABOUT THE AUTHOR


Shannon Gunn is an active jazz trombonist in the DC metro area. You can find her
on Monday nights with the Bohemian Caverns Jazz Orchestra as well as playing
around town with her own all-female big band, Shannon Gunn and the Bullettes
and her organ trio, Firebird. With the Bohemian Caverns Jazz Orchestra, she's
had the privilege of playing with notable artists such as Oliver Lake, Cheryl Bailey,
Yotam Silverstein, Wycliffe Gordon, Elliott Hughes, Erika Dohi, and Miho
Hazama. Additionally, as lead trombone player at Michigan State University, she
was able to play with Billy Taylor, Rodney Whitaker, and Marian McPartland. She
earned her Master of Music in Jazz Studies from George Mason University in
Fairfax, Virginia and also attended James Madison University and Michigan State
University for her music studies. She also produces The JazzCast, a podcast
dedicated to curated listening sessions with jazz musicians. As the music
technology teacher at Woodbridge Senior High School, she teaches high school
students how to create, record, produce, and market their own music through the
Center for the Fine and Performing Arts. In addition to the ensembles listed above,
Shannon Gunn has performed with the Metropolitan Jazz Orchestra, Reunion
Music Society, American Festival Pops Orchestra, Manassas Chorale, and at various
venues such as the Kennedy Center, the Takoma Park Jazz Festival, Hylton
Performing Arts Center, Center for the Arts in Fairfax, Westminster Jazz Night,
Atlas Performing Arts Center, and the Washington Women in Jazz Festival. She
resides in Bristow, VA with her husband, Timothy, and her dog, Faith.
Her websites include:
http://jazztothebone.com
http://firebird.band
http://bullettesjazz.com
http://shannongunn.net/audio
http://mypianosmiles.com
http://shannongunn.net
