
DIGITAL AUDIO

(HOW IS SOUND REPRESENTED IN A COMPUTER?)

SOUNDS CREATED ON A COMPUTER EXIST AS DIGITAL INFORMATION ENCODED AS AUDIO FILES.
SOUND
Show understanding of how sound is represented and encoded
Use the associated terminology: sampling, sampling rate, sampling resolution
Show understanding of how file sizes depend on sampling rate and sampling resolution
Show understanding of how typical features found in sound-editing software are used in practice
DIGITAL AUDIO

Sound input through a microphone is converted to digital for storage and manipulation.
Digital sound is broken down into thousands of samples per second. Each sound
sample is stored as binary data.
DIGITAL AUDIO
QUALITY
Factors that affect the quality of digital audio include:
sample rate - the number of audio samples captured every second
bit depth - the number of bits available for each sample
bit rate - the number of bits used per second of audio
SAMPLE RATE
BIT DEPTH
Uncompressed : A file which has not had any data removed through
compression.
PCM : Pulse-code modulation - a process for digitizing analogue
audio and creating an uncompressed audio file.
WAV : Waveform Audio File Format (WAVE, or more commonly WAV, after its
filename extension; rarely, Audio for Windows) - an uncompressed audio
file format standard developed by Microsoft and IBM for storing an audio
bitstream on PCs.
AIFF : Audio interchange file format - an uncompressed audio file
format developed by Apple.
FEATURES FOUND IN SOUND-
EDITING SOFTWARE
Audacity
Recording
Audacity can record live audio through a microphone or mixer, or
digitize recordings from cassette tapes, records or minidiscs.
With some sound cards, and on any Windows Vista, Windows 7 or
Windows 8 machine, Audacity can also capture streaming audio.
FEATURES FOUND IN SOUND-
EDITING SOFTWARE
Device Toolbar manages multiple recording and playback devices.
Level meters can monitor volume levels before, during and after recording.
Clipping can be displayed in the waveform or in a label track.
Record from microphone, line input, USB/Firewire devices and others.
Record computer playback on Windows Vista and later by choosing "Windows
WASAPI" host in Device Toolbar then a "loopback" input.
Timer Record and Sound Activated Recording features.
Dub over existing tracks to create multi-track recordings.
Record at very low latencies on supported devices on Linux by using Audacity
with JACK.
Record at sample rates up to 192,000 Hz (subject to appropriate hardware and
host selection). Up to 384,000 Hz is supported for appropriate high-resolution
devices on Mac OS X and Linux.
Record at 24-bit depth on Windows (using Windows WASAPI host), Mac OS X or
Linux (using ALSA or JACK host).
Record multiple channels at once (subject to appropriate hardware).
FEATURES FOUND IN SOUND-
EDITING SOFTWARE
Import and Export
Import sound files, edit them, and combine them with other files or new
recordings. Export your recordings in many different file formats, including
multiple files at once.

Import and export WAV, AIFF, AU, FLAC and Ogg Vorbis files.
Fast "On-Demand" import of WAV or AIFF files (letting you start work with the
files almost immediately) if read directly from source.
Import and export all formats supported by libsndfile such as GSM 6.10, 32-bit
and 64-bit float WAV and U/A-Law.
Import MPEG audio (including MP2 and MP3 files) using libmad.
Import raw (headerless) audio files using the "Import Raw" command.
Create WAV or AIFF files suitable for burning to audio CD.
Export MP3 files with the optional LAME encoder library.
Import and export AC3, M4A/M4R (AAC) and WMA with the optional FFmpeg
library (this also supports import of audio from video files).
FEATURES FOUND IN SOUND-
EDITING SOFTWARE

Sound Quality
Supports 16-bit, 24-bit and 32-bit (floating point) samples (the latter
preserves samples in excess of full scale).
Sample rates and formats are converted using high-quality resampling and
dithering.
Tracks with different sample rates or formats are converted automatically in
real time.
FEATURES FOUND IN SOUND-
EDITING SOFTWARE

Editing
Easy editing with Cut, Copy, Paste and Delete.
Unlimited sequential Undo (and Redo) to go back any number of
steps.
Edit and mix large numbers of tracks.
Multiple clips are allowed per track.
Label tracks with selectable Sync-Lock Tracks feature for
keeping tracks and labels synchronized.
Draw Tool to alter individual sample points.
Envelope Tool to fade the volume up or down smoothly.
Automatic Crash Recovery in the event of abnormal program
termination.
FEATURES FOUND IN SOUND-
EDITING SOFTWARE

Accessibility
Tracks and selections can be fully manipulated using the keyboard.
Large range of keyboard shortcuts.
Excellent support for JAWS, NVDA and other screen readers on
Windows, and for VoiceOver on Mac.
FEATURES FOUND IN SOUND-
EDITING SOFTWARE

Effects
Change the pitch without altering the tempo (or vice-versa).
Remove static, hiss, hum or other constant background noises.
Alter frequencies with Equalization, Bass and Treble, High/Low Pass
and Notch Filter effects.
Adjust volume with Compressor, Amplify, Normalize, Fade In/Fade
Out and Adjustable Fade effects.
Remove Vocals from suitable stereo tracks.
Create voice-overs for podcasts or DJ sets using Auto Duck effect.
FEATURES FOUND IN SOUND-
EDITING SOFTWARE

Other built-in effects include:


Echo
Paulstretch (extreme stretch)
Phaser
Reverb
Reverse
Truncate Silence
Wahwah
Run "Chains" of effects on a project or multiple files in Batch
Processing mode
TYPICAL AUDIO EDITING
APPLICATIONS

Trim sound bites out of longer audio files
Reduce vocals from a music track
Cut together audio for radio broadcasts or podcasts
Save files for your iPod, PSP or other portable devices
Create ringtones from music files or recordings
Record voiceovers for multimedia projects
Restore audio files by removing noise, hissing or hums
Normalize the level of audio files
BIT RATE
The bit rate of a file tells us how many bits of data are processed
every second. Bit rates are usually measured in kilobits per second
(kbps).

Bitrate refers to the number of bits - or the amount of data - that are
processed over a certain amount of time. In audio, this usually
means kilobits per second. For example, the music you buy on
iTunes is 256 kilobits per second, meaning there are 256 kilobits of
data stored in every second of a song.

The higher the bitrate of a track, the more space it will take up on
your computer. Generally, an audio CD will actually take up quite a
bit of space, which is why it's become common practice to
compress those files down so you can fit more on your hard drive
(or iPod, or Dropbox, or whatever). It is here where the argument
over "lossless" and "lossy" audio comes in.
BIT RATE
Calculating bit rate

The bit rate is calculated using the formula:

Bit rate = Sample rate × bit depth × channels

A typical, uncompressed high-quality audio file has a sample rate of
44,100 samples per second, a bit depth of 16 bits per sample and 2
channels of stereo audio. The bit rate for this file would be:
44,100 samples per second × 16 bits per sample × 2 channels =
1,411,200 bits per second (or 1,411.2 kbps)
A four-minute (240 second) song at this bit rate would create a file
size of:
1,411,200 × 240 = 338,688,000 bits (or 40.37 megabytes)
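The calculation above can be checked with a short Python snippet (the variable names are my own, not from the text):

```python
# Bit rate and file size for the CD-quality example above.
SAMPLE_RATE = 44_100   # samples per second
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo
DURATION_S = 240       # a four-minute song

bit_rate = SAMPLE_RATE * BIT_DEPTH * CHANNELS    # bits per second
file_size_bits = bit_rate * DURATION_S
file_size_mb = file_size_bits / 8 / 1024 / 1024  # bits -> bytes -> KB -> MB

print(bit_rate, file_size_bits, round(file_size_mb, 2))
# 1411200 338688000 40.37
```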
COMPRESSION
Why is compression important?
• Reduce demand for storage space
• Large files take longer time to transmit over the internet
• Needs lots of RAM when played
• Needs powerful processor to play the song
COMPRESSION
How is compression done?
• Lossy
• Lossless
COMPRESSION
Lossy : Loss of data
COMPRESSION
Compression is a useful tool for reducing file sizes. When images,
sounds or videos are compressed, data is removed to reduce the file
size. This is very helpful when streaming and downloading files.
Streamed music and downloadable files, such as MP3s, are usually
between 128 kbps and 320 kbps - much lower than the 1,411 kbps of an
uncompressed file.
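To make those numbers concrete, here is a small sketch (the function name is my own) comparing a four-minute track at the uncompressed rate and at common MP3 bit rates:

```python
def song_size_mb(bit_rate_kbps, seconds):
    """Approximate size in megabytes of audio at a constant bit rate."""
    bits = bit_rate_kbps * 1000 * seconds   # kilobits -> bits
    return bits / 8 / 1024 / 1024           # bits -> bytes -> KB -> MB

# A four-minute (240 s) track: uncompressed, then 320 and 128 kbps MP3.
for kbps in (1411, 320, 128):
    print(kbps, "kbps ->", round(song_size_mb(kbps, 240), 2), "MB")
# roughly 40.37, 9.16 and 3.66 MB
```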

Videos are also compressed when they are streamed over a network.
Streaming HD video requires a high-speed internet connection. Without
it, the user would experience buffering and regular drops in quality. HD
video is usually around 3 Mbps. SD is around 1,500 kbps.

Streaming : Data that is sent in pieces. Each piece is viewed as it
arrives, eg a streaming video is watched as it downloads.
Downloading : To copy a file from the internet onto your computer or
device.
COMPRESSION
MP3 : A standard audio file format which uses lossy compression.
Compatible with most media players. Designed by the Moving
picture experts group - layer 3.
Kbps : Kilobits per second - a measurement of the speed at which data
is being transferred.
Buffer : A temporary area of computer memory used to store data
for running processes.
MBps : Megabytes per second - a measurement of data transfer
speed.
LOSSY AND LOSSLESS COMPRESSION

We use compression to reduce file sizes.
It makes it easier to store, stream and download videos, audio and other
digital assets.
There are two types of compression: lossy and lossless.
Lossy compression means removing data to reduce the file size.
Lossless compression shrinks the whole file, keeping all of the
quality.
With lossless compression, files can be restored back to their
original state.
With lossy compression, once you’ve removed the data, it’s gone
for good.
LOSSY AND LOSSLESS COMPRESSION

Compression can be lossy or lossless.

Lossless compression means that as the file size is compressed, the audio
quality remains the same - it does not get worse. Also, the file can be restored
back to its original state. FLAC and ALAC are open source lossless compression
formats. Lossless compression can reduce file sizes by up to 50% without losing
quality.
Lossy compression permanently removes data. For example, a WAV file
compressed to an MP3 would be lossy compression. The bit rate could be set at
64 kbps, which would reduce the size and quality of the file. However, it would not
be possible to recreate a 1,411 kbps quality file from a 64 kbps MP3.
With lossy compression, the original bit depth is reduced to remove data and
reduce the file size. The bit depth becomes variable.
MP3 and AAC are lossy compressed audio file formats widely supported on
different platforms. MP3 and AAC are both patented codecs. Ogg Vorbis is an
open source alternative for lossy compression.
Not all audio file formats will work on all media players.
DIFFERENCE BETWEEN AUDIO FORMAT

The Lossless Formats: WAV, AIFF, FLAC, Apple Lossless

WAV and AIFF: Both WAV and AIFF are uncompressed formats,
which means they are exact copies of the original source audio.
The two formats are essentially the same quality; they just store
the data a bit differently.
AIFF is made by Apple, so you may see it a bit more often in
Apple products, but WAV is pretty much universal.
However, since they're uncompressed, they take up a lot of
unnecessary space.
Unless you're editing the audio, you don't need to store the
audio in these formats.
THE LOSSLESS FORMATS

FLAC: The Free Lossless Audio Codec (FLAC) is the most
popular lossless format, making it a good choice if you want to
store your music in lossless. Unlike WAV and AIFF, it's been
compressed, so it takes up a lot less space. However, it's still a
lossless format, which means the audio quality is still the same
as the original source, so it's much better for listening than WAV
and AIFF. It's also free and open source, which is handy if you're
into that sort of thing.

Apple Lossless: Also known as ALAC, Apple Lossless is similar
to FLAC. It's a compressed lossless file, although it's made by
Apple. Its compression isn't quite as efficient as FLAC, so your
files may be a bit bigger, but it's fully supported by iTunes and
iOS (while FLAC is not). Thus, you'd want to use this if you use
iTunes and iOS as your primary music listening software.
THE LOSSY FORMATS: MP3, AAC, OGG

The Lossy Formats: MP3, AAC, OGG

MP3: MPEG Audio Layer III, or MP3 for short, is the most common lossy format around. So
much so that it's become synonymous with downloaded music. MP3 isn't the most efficient
format of them all, but it's definitely the most well-supported, making it our #1 choice for lossy
audio. You really can't go wrong with MP3.

AAC: Advanced Audio Coding, also known as AAC, is similar to MP3, although it's a bit more
efficient. That means that you can have files that take up less space, but with the same sound
quality as MP3. And, with Apple's iTunes making AAC so popular, it's almost as widely
compatible as MP3. I've only ever had one device that couldn't play AACs properly, and that
was a few years ago, so it's pretty hard to go wrong with AAC either.

Ogg Vorbis: The Vorbis format, often known as Ogg Vorbis due to its use of the Ogg container,
is a free and open source alternative to MP3 and AAC. Its main draw is that it isn't restricted
by patents, but that doesn't affect you as a user—in fact, despite its open nature and similar
quality, it's much less popular than MP3 and AAC, meaning fewer players are going to support
it. As such, we don't really recommend it unless you feel very strongly about open source.
THE LOSSY FORMATS: MP3, AAC, OGG

WMA: Windows Media Audio is Microsoft's own proprietary format,
similar to MP3 or AAC. It doesn't really offer any advantages over
the other formats, and it's also not as well supported. There's very
little reason to rip your CDs into this format.
LOSSY AND LOSSLESS COMPRESSION

Lossy
• Lossy compression permanently removes data.
• Once you've removed the data, it's gone for good.
• Lossy compression reduces the size and quality of the file.
• The original bit depth is reduced to remove data and reduce the file size.
• Retains apparent original quality by removing sounds beyond human hearing.
• Lossy formats: MP3, AAC, OGG

Lossless
• Files can be restored back to their original state.
• Shrinks the whole file, keeping all of the quality.
• Lossless compression can reduce file sizes by up to 50% without losing quality.
• Lossless formats: FLAC (Free Lossless Audio Codec), ALAC (Apple Lossless)
COMPRESSION
Byte conversion table

1024 bytes = 1 KB(Kilobyte)

1024 KB = 1 MB(Megabyte)

1024 MB = 1 GB(Gigabyte)

1024 GB = 1 TB(Terabyte)

1024 TB = 1 PB(Petabyte)
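The table above maps directly onto a small helper function (the name `human_size` is my own):

```python
def human_size(num_bytes):
    """Express a byte count in the largest suitable 1024-based unit."""
    for unit in ("bytes", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024:
            return f"{num_bytes:.2f} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:.2f} PB"

print(human_size(480_000))      # the 30-second voice message example
print(human_size(42_336_000))   # the four-minute CD-quality song
```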
FUNDAMENTALS OF DATA
REPRESENTATION: SAMPLED
SOUND

Sound is an oscillation of pressure transmitted through a solid, liquid, or gas

(there is no sound in outer space as space is a vacuum and there is no solid, liquid
or gas to transmit sound through!).
FUNDAMENTALS OF DATA
REPRESENTATION: SAMPLED
SOUND
A speaker works by moving its centre cone in and out; this causes the air
particles to bunch together, forming waves.

These waves spread out from the speaker travelling at 340 m / s.

If your ear is in the way, then the waves of sound particles will collide with your
ear drum, vibrating it and sending a message to your brain.

This is how you hear:


FUNDAMENTALS OF DATA
REPRESENTATION: SAMPLED
SOUND

When you hear different volumes and pitches of sound, all that is happening is
that each sound wave varies in energy, which sets the volume (the larger the
energy of the waves, the louder the sound), or in the distance between sound
waves, which sets the pitch (smaller distances between waves lead to a higher
pitched sound).
FUNDAMENTALS OF DATA
REPRESENTATION: SAMPLED
SOUND

Sound is often recorded for two channels, stereo, feeding a left and right speaker
whose outputs may differ massively.

Where one channel is used, this is called mono. 5.1 surround sound, used in
cinemas and home media set-ups, uses 6 channels.

A computer representation of a stereo song: if you look carefully you'll see the
volume of the song varying as you go through it.
FUNDAMENTALS OF DATA
REPRESENTATION: SAMPLED
SOUND
FUNDAMENTALS OF DATA
REPRESENTATION: SAMPLED
SOUND

Weaknesses in representing sound on computers

Sound waves in nature are continuous; this means they have an almost
infinite amount of detail that you could store for even the shortest sound.

This makes them very difficult to record perfectly, as computers can only
store discrete data - data that has a limited number of data points.
FUNDAMENTALS OF DATA
REPRESENTATION: SAMPLED
SOUND

Sampled sound

So we should know by now that sound waves are continuous
and computers can only store discrete data.
How exactly does an Analogue to Digital Converter convert a
continuous sound wave into discrete digital data?

To do this we need to look at how computers sample sound.


SAMPLE
A Pulse Code Modulation (PCM) signal is a sequence of digital audio samples containing the data
providing the necessary information to reconstruct the original analogue signal.
Each sample represents the amplitude of the signal at a specific point in time, and the samples are
uniformly spaced in time.
The amplitude is the only information explicitly stored in the sample, and it is typically stored as either
an integer or a floating point number, encoded as a binary number with a fixed number of digits: the
sample's bit depth.
The resolution of binary integers increases exponentially as the word length increases.
Adding one bit doubles the resolution, adding two quadruples it, and so on.
The number of possible values that can be represented by an integer bit depth can be calculated
using 2^n, where n is the bit depth.
Thus, a 16-bit system has a resolution of 65,536 (2^16) possible values.
PCM audio data is typically stored as signed numbers in two's complement format.
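The 2^n rule and the signed two's complement range can be checked with a short sketch (the function name is my own):

```python
def pcm_levels(bit_depth):
    """Number of representable values and the signed two's complement
    range for a given bit depth."""
    levels = 2 ** bit_depth
    return levels, -(levels // 2), levels // 2 - 1

print(pcm_levels(16))  # (65536, -32768, 32767)
print(pcm_levels(24))  # (16777216, -8388608, 8388607)
```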
SAMPLING RATE
Sampling Rate - The number of samples taken per second

Hertz (Hz) - the SI unit of frequency, defined as the number of cycles
per second of a periodic phenomenon.

To create digital music that sounds close to the real thing you need to look at the analogue sound
waves and try to represent them digitally.
This requires you to try to replicate the analogue (and continuous) waves as discrete values.
The first step in doing this is deciding how often you should sample the sound wave. Sample
too rarely and the version stored on the computer will sound very distant from the one being recorded.
Sample too often and the stored sound will closely resemble that being recorded, but having to store
each of the samples means you'll get very large file sizes.
How often you sample the analogue signal is called the sampling rate.
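The trade-off can be seen by sampling the same continuous wave at two rates; one second of a 440 Hz tone (an illustrative choice, not from the text) needs over five times as many stored values at CD quality as at telephone quality:

```python
import math

def sample_sine(freq_hz, duration_s, sample_rate_hz):
    """Take uniformly spaced samples of a continuous sine wave."""
    n = int(duration_s * sample_rate_hz)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n)]

low = sample_sine(440, 1, 8_000)     # telephone-quality rate
high = sample_sine(440, 1, 44_100)   # CD-quality rate
print(len(low), len(high))           # 8000 44100
```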
SAMPLE RATE
The sample rate is how many samples, or measurements, of the sound are
taken each second. The more samples that are taken, the more detail about
where the waves rise and fall is recorded and the higher the quality of the
audio. Also, the shape of the sound wave is captured more accurately.
Each sample represents the amplitude of the digital signal at a specific point
in time. The amplitude is stored as either an integer or a floating point
number and encoded as a binary number.

amplitude
The maximum displacement of a wave from a crest or trough to the middle.
integer
A whole number - in computing, a data type which represents signed
(positive or negative) or unsigned (non-negative) whole numbers.
floating point
A data value in computer programming used to denote decimal numbers.
A common audio sample rate for music is 44,100 samples per second.
The unit for the sample rate is hertz (Hz).
44,100 samples per second is 44,100 hertz or 44.1 kilohertz (kHz).
Telephone networks and VOIP services can use a sample rate as low as 8 kHz. This
uses less data to represent the audio. At 8 kHz, the human voice can still be heard
clearly - but music at this sample rate would sound low quality.
SAMPLING RATE
Take a look at the following example:
SAMPLING RATE
SAMPLING RATE
To create digital sound as close to the real thing as possible you
need to take as many samples per second as you can.

When recording MP3s you'll normally use a sampling rate of 32,000,
44,100 or 48,000 Hz (samples per second).

That means that for a sampling rate of 44,100, sound waves will
have been sampled 44,100 times per second!

Recording the human voice requires a lower sampling rate, around
8,000 Hz.

If you speak to someone on the phone it may sound perfectly
acceptable, but try playing music down a telephone wire and see
how bad it sounds.
SAMPLING RATE

Comparison of the same sound sample recorded at 8kHz, 22kHz and 44kHz sample
rate. Note the spacing of the data points for each sample. The higher the sample rate
the more data points we'll need to store
SAMPLING RESOLUTION

Sampling resolution - the number of bits assigned to each sample

• As you saw earlier, different sounds can have different volumes.


• The sampling resolution allows you to set the range of volumes
storable for each sample.
• If you have a low sampling resolution then the range of volumes
will be very limited; if you have a high sampling resolution then
the file size may become unfeasible.
• The sampling resolution for a CD is 16 bits used per sample.
BIT DEPTH ( SAMPLING RESOLUTION )

Bit depth is the number of bits available for each sample. The higher the bit depth, the
higher the quality of the audio. Bit depth is usually 16 bits on a CD and 24 bits on a DVD.
A bit depth of 16 has a resolution of 65,536 possible values, but a bit depth of 24 has
over 16 million possible values.
16-bit resolution means each sample can be any binary value between 0000 0000 0000
0000 and 1111 1111 1111 1111.
32,768 + 16,384 + 8,192 + 4,096 + 2,048 + 1,024 + 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 +
1 = 65,535, the largest value; counting zero, that gives 65,536 possible values.
24-bit means the maximum binary number is 1111 1111 1111 1111 1111 1111, which is
16,777,215; counting zero, that gives 16,777,216 possible values.
When an audio file is created it has to be encoded as a particular file type.
Uncompressed audio files are made when high-quality recordings are created.
High-quality audio will be created as a PCM and stored in a file format such as WAV or AIFF.
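That pipeline - PCM samples at a fixed bit depth written into a WAV container - can be sketched with Python's standard-library wave module. The one-second 440 Hz tone and the filename tone.wav are illustrative choices, not from the text:

```python
import math
import struct
import wave

SAMPLE_RATE = 44_100
BIT_DEPTH = 16
AMPLITUDE = 2 ** (BIT_DEPTH - 1) - 1   # 32767, the largest signed 16-bit value

# Build one second of a 440 Hz sine tone as signed 16-bit PCM samples.
frames = bytearray()
for i in range(SAMPLE_RATE):
    sample = int(AMPLITUDE * math.sin(2 * math.pi * 440 * i / SAMPLE_RATE))
    frames += struct.pack("<h", sample)   # little-endian signed 16-bit

# Write the raw PCM data into an uncompressed WAV container.
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)                 # mono
    wav.setsampwidth(BIT_DEPTH // 8)    # bytes per sample
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```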
FILE SIZES
Bit rate - the number of bits required to store 1 second of sound

To work out the size of a sound sample requires the following equation:

File Size = Sample Rate * Sample Resolution * Length of sound
(multiplied by the number of channels, for stereo or surround sound)

This is the same as saying:

File Size = Bit Rate * Length of sound

FILE SIZES
Let's look at an example:

Example: Sound File Sizes

If you wanted to record a 30 second voice message on your mobile phone


you would use the following:

Sample Rate = 8,000Hz


Sample Resolution = 16 bit
Length of Sound = 30 seconds

Therefore the total file size would be:


8,000 * 16 * 30 = 3 840 000 Bits = 480 000 Bytes
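The same calculation as a reusable sketch (the function name is my own; the example above is mono, so channels defaults to 1):

```python
def pcm_file_size_bits(sample_rate_hz, bit_depth, seconds, channels=1):
    """Uncompressed PCM size: rate x resolution x length (x channels)."""
    return sample_rate_hz * bit_depth * seconds * channels

bits = pcm_file_size_bits(8_000, 16, 30)
print(bits, "bits =", bits // 8, "bytes")   # 3840000 bits = 480000 bytes
```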
MONOPHONIC SOUND VS STEREOPHONIC SOUND
WHAT IS THE DIFFERENCE BETWEEN MONO AND STEREO?

The difference is in the number of channels (signals) used. Mono uses one,
stereo uses more than one.

• In monaural sound one single channel is used. It can be reproduced through
several speakers, but all speakers are still reproducing the same copy of the
signal.

• In stereophonic sound more channels are used (typically two). You can use two
different channels and make one feed one speaker and the second channel
feed a second speaker (which is the most common stereo setup). This is used
to create directionality, perspective, space.

Mono Stereo
WHAT IS THE DIFFERENCE BETWEEN MONO AND STEREO?

In a common stereo setup of two channels: left and right, one channel is sent
to the left speaker and the other channel is sent to the right speaker.

Now, by controlling to which channel you send the signal you can control the
position of the sound.

You'll hear sounds coming from different directions depending on which


speaker you send the signal to, or in which proportion (you can send just a
little more to the right speaker, so the sound is positioned just a little bit to the
right).

Sounds with equal proportions on both speakers will appear to come from the
center.
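The "proportion" idea above can be sketched as a simple linear pan (the function name is my own, and real audio software usually uses an equal-power pan law rather than this linear one):

```python
def pan(mono_samples, position):
    """Split a mono signal into (left, right) channels.
    position: -1.0 = hard left, 0.0 = centre, 1.0 = hard right.
    A linear-pan sketch; equal proportions (position 0) sound centred."""
    left_gain = (1 - position) / 2
    right_gain = (1 + position) / 2
    left = [s * left_gain for s in mono_samples]
    right = [s * right_gain for s in mono_samples]
    return left, right

# Positioned a little to the right: the right channel gets more signal.
left, right = pan([1.0, 0.5, -0.5], 0.5)
print(left)   # [0.25, 0.125, -0.125]
print(right)  # [0.75, 0.375, -0.375]
```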
Mono versus Stereo

Introduction
  Mono: Monaural or monophonic sound reproduction is intended to be heard as if
  it were a single channel of sound perceived as coming from one position.
  Stereo: Stereophonic sound or, more commonly, stereo, is a method of sound
  reproduction that creates an illusion of multi-directional audible perspective.

Cost
  Mono: Less expensive for recording and reproduction.
  Stereo: More expensive for recording and reproduction.

Recording
  Mono: Easy to record; requires only basic equipment.
  Stereo: Requires technical knowledge and skill to record, apart from
  equipment. It's important to know the relative positions of the objects
  and events.

Key feature
  Mono: Audio signals are routed through a single channel.
  Stereo: Audio signals are routed through 2 or more channels to simulate
  depth/direction perception, like in the real world.

Stands for
  Mono: Monaural or monophonic sound.
  Stereo: Stereophonic sound.

Usage
  Mono: Public address systems, radio talk shows, hearing aids, telephone and
  mobile communication, some AM radio stations.
  Stereo: Movies, television, music players, FM radio stations.

Channels
  Mono: 1.
  Stereo: 2.
A 5.1 surround sound system uses 6 channels (feeding into 6 speakers) to
create surround sound.

7.1 surround sound systems use 8 channels.

The two extra channels of sound (and two extra speakers) provide slightly better
audio quality.

5.1 Surround Sound versus 7.1 Surround Sound

Channels
  5.1: 6 (5 standard + 1 subwoofer).
  7.1: 8 (7 standard + 1 subwoofer).

Sound quality
  5.1: Standard surround sound.
  7.1: Greater depth and precision.

Suitable for
  5.1: Small to medium rooms.
  7.1: Large rooms.

Cost
  5.1: Varies, but cheaper.
  7.1: Varies, but more expensive.

Formats
  5.1: Dolby Digital, DTS.
  7.1: Dolby TrueHD, DTS-HD Master Audio.

Supported by
  5.1: All DVDs, video games, etc. - an industry standard.
  7.1: PS3, PS4, Xbox One and most Blu-ray players, although only around 150
  Blu-ray movies feature 7.1 sound.

History
  5.1: Invented in 1976 by Dolby Labs; first used in theaters for Batman
  Returns in 1992.
  7.1: First theatrical 7.1 release was Toy Story 3 in 2010. Disney will use
  it for all future releases.
7.1 Surround Sound
SOUND EDITING
Extension: Sound Editing

If you are interested in sound editing you can start editing your own music using a
program called Audacity.
Using Audacity you can create your own sound samples with different sample
rates and sample resolutions, listening to the difference between them and noting
the different file sizes.
EXERCISE: SAMPLED
SOUND
1. Why might a digital representation of a sound struggle to
be a perfect representation?
2. Why might you choose to have a lower sampling rate
than a higher one for storing a song on your computer?
3. What is the sampling resolution?
4. What is the equation to work out the bit rate of a song
5. For the following sound sample work out its size:
Sample Rate = 16,000Hz
Sample Resolution = 8 bit
Length of Sound = 10 seconds
EXERCISE: SAMPLED
SOUND
6. Work out the sample rate of the following sound file:
Sound File = 100,000 bits
Sample Resolution = 10 bit
Length of Sound = 5 seconds

7. Why might a song recorded with the following settings:
Sample Rate = 22,000Hz
Sample Resolution = 16 bit
Length of Sound = 10 seconds
have a file size of 7,040,000 bits?


ANSWERS
1. A sound wave is continuous data, whilst digital data is discrete
and the representation is an approximation of the original.
2. The higher the sampling rate the more data is needed to be
stored, meaning the larger the file size.
3. The number of bits assigned to each sample, affecting the range
of volumes that can be stored in a sample.
4. Sampling Rate * Sampling Resolution
5. 16,000 * 8 * 10 = 1 280 000 Bits
6. 100,000 / (10 * 5) = 2,000Hz
7. The file might be recorded in stereo, meaning twice the amount of
data would have to be stored
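Answers 5-7 can be checked directly with the file-size formula from the text:

```python
q5 = 16_000 * 8 * 10        # sample rate x resolution x length
q6 = 100_000 // (10 * 5)    # total bits / (resolution x length) = sample rate
q7 = 22_000 * 16 * 10 * 2   # the extra factor of 2 is the two stereo channels
print(q5, q6, q7)           # 1280000 2000 7040000
```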
Past Exam paper

1. Calculate the file size for a 5 minute song playing on a
stereo system (2-channel system).
2. Calculate the file size for a 5 minute song playing on a
5.1 digital surround system.
HOMEWORK
Download the application Audacity, edit a sound file and
submit the edited sound file.
AUDACITY
Features :
• Envelope tool
• Speed
• Reverse
• Crop
• Time phase pitch shift
• Amplifier
• Invert
• Bass
• Echo
• Treble
• Reverb
• Combine different pieces
• Studio fade out
• Fade-in fade-out
• Wah-wah
