
 

RADIO PRODUCTION 
UNIT 1  

Radio as a medium of Mass Communication 


● Every medium of mass communication works in its own unique way and carries its 
message far and wide. Each medium has its advantages and limitations in terms of 
operation, influence and impact. For instance, print depends on the ability to read. For 
communicating a message to a child or to someone who cannot read, television, film or radio 
would be more effective.  

● The 1920s witnessed the coming of radio broadcasting in many countries, including India. 
However, the idea had begun taking shape earlier, during World War I, when the need for 
wireless transmission to communicate with allies was first realised.  

● By 1930s, radio was considered an intimate and credible medium. The public used it as a 
news source and expected it to provide factual information. Radio was the first truly mass 
medium of communication, reaching millions of people instantly. 

● However, over a period of time the media scene has changed drastically. Television, with the 
strength of its audio-visual component, has captured the imagination of the people. Yet 
despite the presence of a plethora of media, there is room and scope for each medium. 
Experience has revealed that 'new technologies add things on but they don't replace'. In the 
changed media scenario, radio is reorienting itself with more innovative programmes and 
formats. 

● Radio is an attractive medium among the various mass communication media because of 
its special characteristics. It continues to be as relevant and potent as it was in the early 
years despite the emergence of more glamorous media.  

Kapoor, Director General of AIR, said in 1995: "Radio is a far more interactive and stimulating 
medium than TV, where the viewer is spoon-fed. Radio allows you to think, to use your 
imagination. That is why nobody ever called it the idiot box". 

Characteristics of Radio 
 

 

 

1. Radio is a cost-effective medium 

Radio sets are no longer a luxury, unlike in the early days, when they were unaffordable for 
common people. Advances in technology have made radio production and transmission less 
expensive. Unlike other media, radio's production format is sound, which can be produced at 
minimal cost. 

For example, radio caters to a large rural population which has no access to TV and where there 
is no power supply. In such places, All India Radio's programmes continue to be the only source of 
information and entertainment. Moreover, AIR broadcasts programmes in 24 languages and 140 
dialects. 

2. Radio is a public medium 

Radio can be accessed by any number of people simultaneously without much technical 
paraphernalia.  

Literacy is not a prerequisite for listening to radio. In developing and less economically developed 
countries, radio is popular for precisely this reason: a majority of the population in these countries 
is illiterate, and they show a special affinity for radio because it lets them overcome the barrier of 
illiteracy. 

3. Radio is a mobile medium 

We can listen to radio while we are on the move: while driving a car, jogging, walking 
or doing any other job. 

It has advantages over the other mass media like television and newspapers in terms of being 
handy, portable, easily accessible and cheap. It is the most portable of the broadcast media, being 
accessible at home, in the office, in the car, on the street or beach, virtually everywhere at any time. 

4. Radio needs less energy 

Radio consumes very little energy and in that sense is an environment-friendly medium. Since radio 
sets can also run on batteries, radio became popular in remote villages where electricity is 
unavailable. 

5. Radio is a speedy medium 

 

 

Radio is the fastest medium, as it requires little time for preparation and transmission. Instant live 
broadcasting is possible on radio with minimal equipment. These characteristics extend the 
scope of radio as a mass medium. 

6. Radio transmits information 

Radio is effective not only in informing people but also in creating awareness of social 
issues and the need for social reform, developing interest and initiating action. 

For example, it can create awareness of new policies, developmental projects and programmes, 
new ideas and so on, and so help to build a positive climate for growth and development. 

Limitations of Radio 
1. A one chance medium 

When you read a newspaper, you can keep it with you and read it again. You have the printed word 
there and unless the paper is destroyed it will remain with you. 

Now think of radio. Suppose you are listening to a news bulletin in English and you hear words that 
you don’t understand. Can you refer to a dictionary or ask someone else for the meaning? If you 
stop to do that, you will miss the rest of the news. You have to understand what is being said on 
radio as you listen. You have only one chance to listen. What is said on radio does not exist any 
longer unless you record it.  

2. Radio has no visual images 

Let us consider a news item on radio and the same item on television. For example, the news about 
the devastating cyclone ‘Nargis’ that hit Myanmar in May 2008. Radio news talked about the 
intensity of the cyclone, the number of deaths, details about property destroyed etc. However in the 
case of television, it showed the actual cyclone hitting the country, visuals of properties destroyed, 
rescue operations and many more details which could be seen. Now compare the two. A natural 
disaster like a cyclone when seen on television is more effective than what you hear on radio.  

3. Messages on radio are easily forgotten 

The problem of not having visuals leads to another limitation of radio. What is seen is often 
remembered and may remain with us. For example if you have seen the fine visuals of the Taj 
Mahal in Agra, it will remain in your memory. But what you hear is normally forgotten fast. 

 

 

TOPIC 2: Radio Broadcasting in India (pre- and post-independence) 

Pre Independence 
● Broadcasting in India actually began about 13 years before AIR came into existence. In June 
1923 the Radio Club of Bombay made the first ever broadcast in the country. This was 
followed by the setting up of the Calcutta Radio Club five months later. The Indian 
Broadcasting Company (IBC) came into being on July 23, 1927, under an agreement between 
the Government of India and a private company, the Indian Broadcasting Company Ltd. It was 
inaugurated by the Viceroy of India, Lord Irwin, only to face liquidation in less than three 
years. 

● Five weeks later, on 26th August 1927, the Calcutta station was inaugurated by the Governor 
of Bengal, Sir Stanley Jackson. Both the Bombay and Calcutta stations promoted music 
and drama.  

● IBC was shut down due to a financial crisis. Faced with a widespread public outcry against the 
closure of the IBC, the Government acquired its assets and, from April 1, 1930, constituted 
the Indian Broadcasting Service under the Department of Labour and Industries.  

● The Delhi station of Indian State Broadcasting Service (ISBS) went on air on 1st January 
1936. Lionel Fielden was the Controller of Broadcasting. On June 8, 1936, the Indian State 
Broadcasting Service became All India Radio. 

● The Central News Organisation (CNO) came into existence in August, 1937. In the same 
year, AIR came under the Department of Communications and four years later came under 
the Department of Information and Broadcasting. 

● On June 3rd 1947, the historic announcement of Partition was made over All India Radio. The 
AIR network by then had nine stations, of which six (Delhi, Calcutta, Madras, Bombay, Tiruchi 
and Lucknow) remained in India, while the remaining three (Lahore, Peshawar and Dacca) 
went to Pakistan. When the princely states joined India, the stations at Mysore, Trivandrum, 
Hyderabad, Aurangabad and Baroda became part of AIR by 1950.  

 

 

Post Independence 
● India's first five-year plan allocated 40 million rupees for the expansion and development of 
AIR. During this period six new stations were set up and a number of low-power transmitters 
were upgraded. On July 20, 1952, the first national programme of music went on air.  

● In 1956 the name AKASHVANI was adopted for the National Broadcaster. The Vividh Bharati 
Service was launched in 1957 with popular film music as its main component. 

● Indian radio was regarded as a vital medium of networking and communication, mainly 
because of the lack of any other media. All major national affairs and social events 
were transmitted through radio, and it played a significant role in the social integration of 
the entire nation. All India Radio focused on developing a national consciousness and on 
overall national integration; programming was organised and created with the single purpose 
of national political integration in mind. This helped the country overcome the pressing 
crisis of political instability that followed Independence. Thus planned broadcasts aided 
political consolidation and progressive nation-building efforts.  

● All India Radio also helped improve the economic condition of the country. Indian radio was 
deliberately designed and programmed to support the process of social improvement, a vital 
prerequisite of economic growth. Later, as the country modernised, television was 
introduced and broadcasting achieved new status; but by then, radio had become a veteran 
medium in India. 

● FM radio was first introduced by All India Radio in 1972 at Madras and later, in 1992, at 
Jalandhar. In 1993, the government sold airtime blocks on its FM channels in Madras, 
Mumbai, Delhi, Kolkata and Goa to private operators, who developed their own programme 
content.  

● Currently, AIR's home service comprises 420 stations located across the country, reaching 
nearly 92% of the country's area and 99.19% of the total population. AIR originates 
programming in 23 languages and 179 dialects. The functioning of All India Radio is 
unparalleled in the sense that it is perhaps the only news organisation which remains active 
round the clock and never sleeps. 


UNIT 3 

TOPIC 3: Equipment used in Radio Production: Types of Microphones, Headphones and Talk Backs, Audio Mixers and Transmitters 
 

TYPES OF MICROPHONES 

Microphones 
A microphone is a transducer, a device which converts energy from one form to another. 
Microphones convert acoustical energy (sound waves) into electrical energy (the audio 
signal). Different types of microphones convert energy in different ways, but they all 
share one component: the diaphragm. This is a thin piece of material (such as paper, 
plastic or aluminium) which vibrates when it is struck by sound waves. The diaphragm is located in 
the head of the microphone. 

When the diaphragm vibrates, it causes other components in the microphone to vibrate. These 
vibrations are converted into an electrical current which becomes the audio signal.  

Types of Microphones  
There are various types of microphones in common use. The differences can be divided into three 
areas:  

1) The type of conversion technology they use:  


This refers to the technical method the mic uses to convert sound into electricity. The most 
common technologies are dynamic, condenser, ribbon and crystal. 

1.a. Dynamic Microphones  

Dynamic microphones are versatile and ideal for general-purpose use. They are relatively 
sturdy and resilient to rough handling. They are also better suited to handling high volume 
levels, such as from certain musical instruments or amplifiers.   

 

 

They have no internal amplifier and do not require batteries or external power. The diaphragm is 
attached to a coil of wire. When the diaphragm vibrates in response to incoming sound waves, the 
coil moves backwards and forwards past a magnet. This creates a current in the coil, which is 
channelled from the microphone along wires.  

1.b. Condenser Microphones  

This type of microphone uses a capacitor to convert acoustical energy into electrical 
energy. It requires power from a battery or an external source. The resulting audio signal is 
stronger than that from a dynamic microphone. A capacitor has two plates with a voltage between 
them. In the condenser mic, one of these plates is made of very light material and acts as the 
diaphragm. The diaphragm vibrates when struck by sound waves, changing the distance between 
the two plates and therefore changing the capacitance. Specifically, when the plates are closer 
together, capacitance increases and a charge current occurs. When the plates are further 
apart, capacitance decreases and a discharge current occurs.  
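The plate-spacing relationship above can be sketched with the parallel-plate capacitor formula C = εA/d. The diaphragm area and gap used here are illustrative values only, not the specifications of any real microphone:

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, farads per metre

def capacitance(area_m2, gap_m):
    """Parallel-plate capacitance: C = epsilon_0 * A / d."""
    return EPSILON_0 * area_m2 / gap_m

# Hypothetical figures: a 1 cm^2 diaphragm resting 20 microns from the back plate.
area = 1e-4        # m^2
rest_gap = 20e-6   # m

c_rest = capacitance(area, rest_gap)
c_near = capacitance(area, rest_gap * 0.9)  # diaphragm pushed closer by a sound wave
c_far = capacitance(area, rest_gap * 1.1)   # diaphragm pulled away

# Closer plates -> larger capacitance (charge current flows);
# farther plates -> smaller capacitance (discharge current flows).
assert c_near > c_rest > c_far
```

The audio signal is the tiny charge and discharge current produced as the capacitance swings around its resting value.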

2) The Directional Properties  


Every microphone has a property known as directionality. This describes the microphone's 
sensitivity to sound from various directions. Some microphones pick up sound equally from all 
directions, while others pick up sound only from one direction or a particular combination of 
directions. The types of directionality are divided into three main categories:  

2.a. Omnidirectional  

It picks up sound evenly from all directions. The disadvantage is that it cannot discriminate 
between the sound you want to hear and unwanted sounds such as reflections from walls, noises 
from nearby people or equipment, ventilation noise, footsteps, and so on. Omni sound is very 
general and unfocused - if you are trying to capture sound from a particular subject or area it is 
likely to be overwhelmed by other noise.  

2.b. Bidirectional  

A bidirectional mic uses a figure-of-eight pattern and picks up sound equally from two opposite 
directions. Uses: As you can imagine, there aren't many situations which require this polar 
pattern. One possibility would be an interview with two people facing each other, with the mic 
between them.  

2.c. Unidirectional  

 

 

Picks up sound predominantly from one direction. This includes cardioid and hypercardioid 
microphones.  

2.c.a. Cardioid 

Cardioid means "heart-shaped", which is the type of pick-up pattern these mics use. Sound is 
picked up mostly from the front, but to a lesser extent from the sides as well. Uses: The 
cardioid is a very versatile microphone, ideal for general use. Handheld mics are usually cardioid.  

2.c.b. Hypercardioid 

This is an exaggerated version of the cardioid pattern. It is very directional and eliminates most 
sound from the sides and rear. Due to their long, thin design, hypercardioids are often referred to 
as shotgun microphones. Uses: Isolating the sound of a subject or direction when there is a lot of 
ambient noise; picking up sound from a subject at a distance.  

2.d. Variable Directionality  

Some microphones allow you to vary the directional characteristics by selecting omni, cardioid or 
shotgun patterns. This feature is sometimes found on video camera microphones, with the idea 
that you can adjust the directionality to suit the angle of zoom, e.g. have a shotgun mic for long 
zooms. Some models can even automatically follow the lens zoom angle so the 
directionality changes from cardioid to shotgun as you zoom. 

AUDIO MIXERS 

An audio mixer takes input from multiple audio channels and lets the user determine which 
channels to use in the output, and at what levels. A mixer or console is essential for any station 
that will broadcast using multiple audio sources. A good mixer should have enough channels to 
accommodate all audio sources, and easily visible level meters with sliding controls.  

Every studio includes some kind of audio mixer, whether analogue, digital or fully computerised. This is 
essentially a device for mixing together the various programme sources, controlling their level or 
volume, and sending the combined output to the required destination, generally either the 
transmitter or a recorder. Traditionally, it contains three types of circuit function: 

Programme circuits:  

A series of differently sourced audio channels, with their individual volume levels controlled by 
separate slider faders. In addition to the main output, a second or auxiliary output – generally 
controlled by a small rotary fader on each channel – can provide a different mix of programme 
material typically used for public address, echo, foldback into the studio for contributors to hear, a 
clean feed or separate audio mix sent to a distant contributor, etc.  

 
 

Monitoring circuits: 

A visual indication (either by a programme meter or a vertical column of lights) and an aural 
indication (loudspeaker or headphones) to enable the operator to hear and measure the individual 
sources as well as the final mixed output. 

Control circuits:  

The means of communicating with other studios or outside broadcasts by means of 'talkback' or 
telephone. In learning to operate a mixer, there is little substitute for first-hand practice. 
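The core job of the programme circuits, combining several faded sources into one output, can be sketched in a few lines of Python. The channel data and fader settings below are invented for illustration:

```python
def mix(channels, faders):
    """Sum the channels sample by sample, each scaled by its fader
    (0.0 = fully closed, 1.0 = fully open), clipping the bus to [-1.0, 1.0]."""
    out = []
    for samples in zip(*channels):
        s = sum(sample * gain for sample, gain in zip(samples, faders))
        out.append(max(-1.0, min(1.0, s)))  # keep the combined output in range
    return out

voice = [0.5, 0.5, 0.5, 0.5]    # presenter's channel (made-up samples)
music = [0.4, -0.4, 0.4, -0.4]  # music bed channel
mixed = mix([voice, music], faders=[1.0, 0.25])  # music faded down under the voice
```

A real console does the same summing in analogue or DSP hardware, with a separate auxiliary bus carrying its own independently faded mix.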

TRANSMITTER 

The transmitter modulates the audio signal, turning it from a sound wave that our ears can hear 
into a radio wave that FM receivers can detect. It is a requirement that the FM transmitter for your 
LPFM station is "type certified", meaning it has been through certain tests by the manufacturer. It 
may be necessary to call and ask the transmitter maker whether their transmitters fit the bill. An 
important transmitter characteristic is output power, which determines how strong the signal is 
and therefore how far it reaches.  
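The idea of frequency modulation, the carrier's instantaneous frequency swinging above and below its centre in step with the audio, can be sketched as follows. The carrier and deviation figures here are toy values chosen to keep the arithmetic small, not real FM broadcast parameters:

```python
import math

def fm_wave(audio, carrier_hz, deviation_hz, sample_rate):
    """Frequency-modulate a carrier: the instantaneous frequency at each
    sample is carrier_hz + deviation_hz * audio[n], audio in [-1, 1]."""
    phase = 0.0
    out = []
    for sample in audio:
        inst_freq = carrier_hz + deviation_hz * sample
        phase += 2 * math.pi * inst_freq / sample_rate  # advance the carrier phase
        out.append(math.sin(phase))
    return out

# A 440 Hz tone modulating a scaled-down 10 kHz "carrier".
sr = 48_000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 10)]
signal = fm_wave(tone, carrier_hz=10_000, deviation_hz=5_000, sample_rate=sr)
```

The louder the audio sample, the further the carrier deviates from its centre frequency; a receiver recovers the audio by tracking that deviation.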

HEADPHONES 

Studio Monitor Speakers are automatically muted whenever a microphone is turned on. As a result, 
anyone in a studio needs headphones to hear what is going to air. Headphone selection is often a 
very personal decision based on your preferences in comfort and frequency response. 

Ear bud headphones 

These are probably the most common type of headphones, as they are used with all kinds 
of portable music players and mobile phones. While some low-quality ear buds fit 
loosely within the external ear, others fit into the ear canal itself. While earphones are 
good for listening to music, they are best avoided for monitoring audio while recording.   

On-ear headphones 

These are headphones that sit on the ears rather than over them. As a result, they are usually a bit 
smaller and lighter than over-the-ear models. They tend to have foam or sometimes leatherette 
pads for extra comfort, and usually have adjustable headbands for a snug fit. These headphones 
are normally good on treble but not on bass. Since they don’t cover the ears, ambient noise tends 
to enter the ears, making it difficult to monitor audio in critical conditions. They are therefore best 
used in office situations, for simple listening purposes, or for conducting voice chats over 
the Internet.   

Over-the-ear headphones 

These are traditional-looking headphones, with cushioned pads that enclose and cover the whole 
ear. This makes them more comfortable to wear over long periods, and they generally 
deliver good sound quality. Bulkier than other types of headphones, these are best suited for 
audio monitoring purposes in the studio as well as in the field. Some varieties also cancel 
out noise, making it easier for the producer/technical personnel to monitor audio. The balanced 
headphone variety under this category provides the same impression as the sound you would be 
hearing from two or more speakers.  

TALKBACK 

A talkback is a microphone-and-receiver system installed in a recording/mixing console for 
communication between people in the control room and performers in the recording studio. Most 
semi-professional and professional consoles include such a system. The typical setup includes an 
internal microphone built directly into the console, and a series of switches. The switches allow the 
recording engineer to route the microphone signal to a variety of audio paths in the studio, such as 
the performer's headphones, a set of speakers in the recording area, or directly to a tape recorder. 
Using this tool, the engineer can communicate with a performer wearing headphones while they 
are performing in the studio, without interfering with the recording. Another use is to announce the 
title or other relevant information at the beginning of a recording (called a "slate"). 

 
 

TOPIC 4: Recording, Broadcasting and Troubleshooting 

a. Indoor: Studio, Acoustics and Perspective 
b. Outdoor: Ambience and Noise 
Introduction  

Recording is the process of saving data, in this case audio, for future reference and use. 
Two parts of the recording chain are worth naming up front. Signal processors are devices and 
software which allow the manipulation of the signal in various ways; the most common processors 
are tonal adjusters such as bass and treble controls. The record and playback section consists of 
devices which convert a signal to a storage format for later reproduction; recorders are available 
in many different forms, including magnetic tape, optical CD and computer hard drive.  

Process of Recording  

The process runs in stages: 

1. The audio signal from the transducer (microphone) is passed through one or more 
processing units, which prepare it for recording (or directly for amplification). 
2. The signal is fed to a recording device for storage. 
3. The stored signal is played back and fed to more processors. 
4. The signal is amplified and fed to a loudspeaker. 

Sound recording and reproduction is the electrical or mechanical inscription and recreation of 
sound waves, such as spoken voice, singing, instrumental music or sound effects. The two main 
classes of sound recording technology are analog recording and digital recording.  
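The stages above can be sketched as a toy signal chain in Python; the sample values and gain figures are invented for illustration:

```python
def apply_gain(signal, gain):
    """Processing stage: scale the signal, clipping at full scale (+/-1.0)."""
    return [max(-1.0, min(1.0, s * gain)) for s in signal]

def record(signal, storage):
    """Storage stage: append the processed signal to a 'tape'."""
    storage.extend(signal)

mic_signal = [0.1, 0.3, -0.2, 0.05]  # hypothetical microphone samples
tape = []

processed = apply_gain(mic_signal, 2.0)  # stage 1: processing (level boost)
record(processed, tape)                  # stage 2: storage
playback = apply_gain(tape, 1.5)         # stages 3-4: playback and amplification
```

Each stage hands a complete signal to the next, which is exactly the shape of the analogue chain described above, only with lists of numbers standing in for voltages.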

A sound mixer is a device which takes two or more audio signals, mixes them together and provides 
one or more output signals. As well as combining signals, mixers allow you to adjust levels, 
enhance sound with equalisation and effects, create monitor feeds and record various mixes. 
Mixers come in a wide variety of sizes and designs, from small portable units to massive 
studio consoles. The term mixer can refer to any type of sound mixer; the terms sound desk and 
sound console refer to mixers which sit on a desk surface, as in a studio setting. 

INDOOR  

A. Studio 
However good or experienced a presenter is, he or she can always benefit from being 
produced in the studio by another person. The studio producer can be the person who wrote the 
script and put the programme together, if the presenter(s) did not write it. Or, the studio producer 
can be a colleague who has gone through the script and assisted in putting the audio and 
script together. Much of what is said here is relevant to producing or directing live 
programmes, but with live programmes there are other considerations to do with timing and 
moving stories. Seven tips for preparing to go into the studio to produce:  

1. Available studio: Book the studio, or make sure it is free when you want it.  

2. Clean studio: The sound technician should have made sure that the studio is clean and has a 
table, chairs and the right microphones in place. If this is not the case then get what you need!  

3. Rehearse before going in: Do all rehearsing and script discussions before you go into the 
studio – it’s a waste of the sound technician’s time to listen to endless discussions about the 
wording in the script.  

4. Make arrangements for the studio guest: If there is a studio guest, make sure transport has 
been arranged and directions to the studio are clear. Time the guest's arrival to coincide with the 
actual start of the recording session; guests don't want to sit around while the presenter(s) and 
studio producer discuss the script's finer points. 

5. Scripts available: Make sure that there are scripts printed out for: a. Self (studio producer) 
b. Sound technician c. Presenter(s)  

6. Water: Make sure that there is water available for the presenter(s) and guest(s) – broadcasting 
can be thirsty work!  

7. Audio ready: Make sure the playlist is ready – in other words, all audio is ready and numbered in 
order.  

B. Acoustics  
The term "acoustics" describes how sound behaves in an enclosed space. In practical terms, it 
covers the direct sound, reflections and reverberation. Sound waves travel outwards and strike a 
multitude of surfaces: the floor, ceiling, walls, chairs or pews, windows, people, and so on.  

Depending on the makeup of each surface being struck, a portion of that sound will be reflected 
back into the room, a portion will be absorbed by the material, and some of the sound may even 
travel through that material.  

A recording studio is a facility for sound recording and mixing. Ideally, the space is specially 
designed by an acoustician to achieve the desired acoustic properties (sound diffusion, a low level 
of reflections, a reverberation time appropriate to the size of the room, etc.).  

 
 

Acoustics is the interdisciplinary science that deals with the study of all mechanical waves 
in gases, liquids, and solids including vibration, sound, ultrasound and infrasound.  

The application of acoustics can be seen in almost all aspects of modern studios, with the most 
obvious being the audio and noise-control industries. There are three types of surfaces which come 
into play when talking about acoustics: 

1. Reflective  
2. Absorbing  
3. Diffusing  

Fine-tuning sound quality inside a studio requires strategic placement of sound-absorbing 
surfaces to control reverberation time, and of diffusing materials to control the "placement" of 
the sound energy. Today's state-of-the-art acoustic materials include fibre-based products (fibreglass, 
cotton/polyester), foams and a variety of alternative resin-based products. Selection of the 
proper materials depends on room size, composition, building codes and the desired finished 
appearance.  
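One common way acousticians estimate reverberation time is Sabine's formula, RT60 = 0.161 V/A, where V is the room volume in cubic metres and A the total absorption (each surface area times its absorption coefficient). The room dimensions and coefficients below are hypothetical illustrative figures:

```python
def rt60_sabine(volume_m3, surface_absorptions):
    """Sabine's reverberation-time estimate: RT60 = 0.161 * V / A,
    where A = sum of (surface area * absorption coefficient)."""
    total_absorption = sum(area * coeff for area, coeff in surface_absorptions)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 5 m x 4 m x 3 m talk studio (94 m^2 of surface, 60 m^3).
volume = 5 * 4 * 3
bare = [(94, 0.05)]                  # all surfaces hard plaster: highly reflective
treated = [(54, 0.05), (40, 0.70)]   # 40 m^2 covered with absorbent panels

live_room = rt60_sabine(volume, bare)      # roughly 2 s: a "live", echoey room
dead_room = rt60_sabine(volume, treated)   # roughly 0.3 s: a "dead" studio
```

This is why adding absorbent material is the standard cure for an over-reverberant room: the absorption term A grows, and RT60 falls in proportion.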

C. Sound Perspective: 
Sound perspective is the sense of a sound's position in space, yielded by volume, timbre, pitch 
and, in stereophonic reproduction systems, binaural information. Perspective refers to the 
apparent distance of a sound. Clues to the distance of the source include the volume of the sound, 
the balance with other sounds, the frequency range (high frequencies may be lost at a distance), 
and the amount of echo and reverberation. A closer sound perspective may sometimes be 
simulated by recording with a directional microphone which rejects sound from other directions. A 
more distant perspective may sometimes be simulated in post-production by processing the sound 
and mixing in other sounds. In recording sound for film, you usually select a sound perspective to 
match the picture with which it will be used.  

Direct sound: Direct sound issues from the source itself, such as the frequencies coming from 
an actor's mouth. When a person is close to us, we hear essentially direct sound, including 
low-frequency chest tones. As the person moves farther away, we hear more of the reflected 
sound. Close-perspective sound contains a high ratio of direct sound to reflected sound. 

Reflected sound: Reflected sound is produced by the direct sound bouncing off the walls, floor, 
etc. Reflected sound is much more complex in character than direct sound, because the surfaces 
are at different distances from the source and have widely varying reflective properties.  

 
 

Sound Balance: Balance is the relative volume of the different sound elements in a scene. Since 
background sound effects can usually be added separately in post-production, the best original 
recording of dialogue or sound effects is often the cleanest recording, with the least background 
noise and reverberation. Placing the microphone close to the sound source is the best way of 
reducing the relative amount of reverberation in an interior recording. Quilts or other absorbent 
material will also help reduce reverberation off hard surfaces.  

Interiors that contain a lot of hard surfaces (glass, stone, metal, etc.) are said to be "live" because 
of their high reflectivity. Soft or porous materials, like carpeting, draperies and upholstered 
furniture, are sound-deadening. As furniture is moved into an empty room, the acoustics become 
"dead". Distant-perspective sound contains a high ratio of reflected sound to direct sound.  

Outdoors, the relative level of wind and other background noise can also be reduced by close mic 
placement, even when a more distant sound perspective might be preferable. (Note: the mic must 
also be protected from direct wind pressure.) So the sound editor might prefer to use wild sound 
recorded in closer perspective or recorded somewhere else.  

OUTDOOR 

A. Ambience 
Ambient sound refers to the background noise present at a given scene or location. This can 
include noises such as rain, traffic, crickets, birds, et cetera. In audio recording there are 
sometimes unwanted ambient sounds in the background that you might want to remove, such 
as a hiss, tapping or some other unwanted noise. Ambience is the atmosphere associated with a 
particular environment. From music to film, ambient sound creates an atmospheric setting and 
draws the viewer or listener into the surroundings of that environment. In film, ambient sound is 
used not only to tie a particular setting to the story, but also to transition between parts of a 
setting, maintaining the flow of the film when moving from one scene 
or cut to another.  

Ambiences (atmospheres or backgrounds) provide a sense of place where, and perhaps of time 
when, events occur: background sounds which identify location, setting or historical time.  

Interiors are usually reverberant ("wet") to some degree, indicating the size of an enclosed space. 
Exteriors are usually flat, layered elements of sound in a non-reverberant ("dry") space. Even 
voice characteristics are different outside. A good unedited background can cover a choppily 
edited dialogue, making it sound real and continuous. Ambiences can be done with 
continuous tape carts, tape loops, long recordings, or other means.  

B. Noise 
Ambient noise, sometimes called "background noise", refers to all noise present in a given 
environment, excluding the primary sound that an individual is monitoring or directly producing 
as a result of his or her work. Noise is usually defined as unwanted sound, a pollutant which 
produces undesirable physiological and psychological effects in an individual by interfering with 
social activities such as work, rest, recreation and sleep. A sound 
might be unwanted because it is:  

● Loud 
● Unpleasant or annoying 
● Intrusive or distracting 

The sound of a violin is usually referred to as music, something pleasing; yet depending on other 
factors, even that sound may be perceived as noise. Noise perception is subjective. Factors such 
as the magnitude, characteristics, duration and time of occurrence may affect one's subjective 
impression of a noise. Noise is also often characterised as a mixture of many different sound 
frequencies at high decibel levels.  
 
 

UNIT 4 

TOPIC 1: Editing and Mixing 


Sound Editing  
Sound editing is a process that requires both skill and instinct. Today, most sound editing is done 
digitally using specialized software. It wasn’t always this way. Magnetic tape and tape 
recorders were first invented in the late 1940s. Using magnetic tape for recording and editing 
sound was the status quo until the mid-1990s when computers and digital software 
revolutionized the sound editing process.  

The editing process was slow, tedious and sometimes unstable. To edit sound with 
magnetic tape, the user had to find both points on the tape where the splice needed to occur, place 
the tape in an "editing block", use a razor blade to cut the tape, and then 
physically join the magnetic tape back together with specially designed editing tape.  

And if you screwed up the splice, you had to undo everything, use the editing tape to put the
magnetic tape back the way it was, and then try again. It was tedious and sometimes frustrating
work, and the user had no visual representation of the recorded sound to refer to: it was all done
by ear. Now a computer user can own software as powerful as an older magnetic-tape-based
64-track recording studio. It is an amazing amount of power, and a user can acquire many software
applications, many of them free or extremely low in cost.

Editing principles 

The purpose of editing can be summarized as:  

1. To rearrange recorded material into a more logical sequence.

2. To remove the uninteresting, repetitive or technically unacceptable.

3. To reduce the running time.

4. For creative effect: to produce new juxtapositions of speech, music, sound and silence.

Editing must not be used to alter the sense of what has been said – which would be regarded as 
unethical – or to place the material within an unintended context. There are always two 
considerations when editing, namely the editorial and the technical.   

 
 

In the editorial sense it is important to leave intact, for example, the view of an interviewee, 
and the reasons given for its support. It would be wrong to include a key statement but to 
omit an essential qualification through lack of time. On the other hand, facts can often be 
edited out and included more economically in the introductory cue material. It is often possible to 
remove some or all of the interviewer’s questions, letting the interviewee continue. If the 
interviewee has a stammer, or pauses for long periods, editing can obviously remove these gaps. 
However, it would be unwise to remove them completely, as this may alter the nature of the 
individual voice. It would be positively misleading to edit pauses out of an interview where 
they indicate thought or hesitation. The most frequent fault in editing is the removal of the tiny 
breathing pauses which occur naturally in speech. There is little point in increasing the pace while 
destroying half the meaning – silence is not necessarily a negative quantity.  

Types of sound editing

Mechanical/Linear Sound Editing  

Before computers came into wide use for sound editing in the 1990s, everything was done 
with magnetic tape. To make edits using magnetic tape, you literally had to cut the tape, 
remove the piece of audio that you didn’t want and splice the tape back together again. The 
machine of choice for mechanical audio editing was the reel-to-reel tape recorder. With this 
piece of equipment, you could record and play back audio from circular reels of magnetic
audiotape. You also needed several pieces of specialized editing equipment: a razor blade, 
an editing block and editing tape.  

Here’s the basic cut-and-splice editing process using magnetic tape:  

1. Find the initial edit point (or in point), which is the starting point on the tape for the 
section of audio you want to remove. This is done through a process called scrubbing, 
where the sound editor slowly rocks the reels back and forth to find the precise point to make the 
cut.  

2. Using a grease pencil, make a mark on the tape directly over the tape recorder's play head.

3. Play the tape until you reach the first sound you want to keep, called the out point. Also mark 
that edit point with a grease pencil.  

4. Remove the tape from the reel-to-reel and place it in an editing block. The editing block contains
special grooves at 45- and 90-degree angles.

 
 

5. Line the first edit point up with the 45-degree groove and cut the tape along the groove with a
razor blade. Do the same with the second edit point.

6. Using special editing tape, tape the two loose ends of magnetic tape back together, leaving no 
space in between.  

7. Put the tape back on the reel-to-reel and test the edit. You may need to cut more off one of the
ends, or maybe you already cut too much!

When magnetic tape was invented in the late 1940s, one of its greatest advantages was that it
could hold multiple audio channels without creating a lot of excess noise. This allowed for a
process called overdubbing, or multi-track recording.
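The cut-and-splice procedure above has a direct digital analogue: an editor removes the span of samples between an in point and an out point and joins the remaining audio. A minimal sketch in Python (the function name and toy waveform are illustrative, not from any real editing software):

```python
# Digital "cut and splice": remove the samples between an in point and
# an out point and join the ends, as a digital editor does internally.

def splice_out(samples, in_point, out_point):
    """Remove samples[in_point:out_point] and join the remaining audio.

    in_point and out_point are sample indices, analogous to the two
    grease-pencil marks made on magnetic tape.
    """
    if not 0 <= in_point <= out_point <= len(samples):
        raise ValueError("edit points out of range")
    return samples[:in_point] + samples[out_point:]

audio = [0.0, 0.1, 0.9, 0.8, 0.7, 0.1, 0.0]   # toy waveform
print(splice_out(audio, 2, 5))                 # [0.0, 0.1, 0.1, 0.0]
```

Unlike tape, the operation is non-destructive: the original samples are untouched, which is why undoing a digital edit is trivial.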

Digital/Non-linear Sound Editing

Now almost all sound editors use computerized editing systems called digital audio 
workstations (DAW). Digital audio workstations are multi-track systems that greatly simplify 
and enhance the sound editing process for all types of professional audio production (film 
audio, studio recording, DJs, et cetera).  

Digital audio workstations vary greatly in size, price and complexity. The most basic systems are 
simply software applications that can be loaded onto a standard personal computer.   

More professional systems, like Digidesign's Pro Tools, require a special sound card, are
typically used in conjunction with large digital mixing boards, and are compatible with
hundreds of effects and virtual-instrument plug-ins.

★ The advantage of all of these systems is that an editor can work with all kinds of audio 
files -- voices, Foley clips, analog and MIDI music -- from the same interface.  

★ The number of tracks is practically limitless.

★ Besides multiple dialogue tracks, an editor can add dozens of background effects and layers and 
layers of Foley and music.  

★ Multiple tracks can be cut, copied, pasted, trimmed and faded at once.  

★ And each track comes with dozens of controls for volume, stereo panning and effects, which 
greatly simplifies the mixing process.  

★ One of the big advantages of digital audio workstations is that they allow sound editors to 
work with graphical representations of sound. With magnetic tape, everything was done by 
ear. Now editors can look at the sound waves on the screen. They can see extraneous 

 
 
background noise and remove it with a click of the mouse. Some DAWs can automatically clean 
up audio, removing clicks, hisses and low-level background noise that would have ruined a take 
in the old days. With graphical interfaces, sound effects designers can study the waveform 
of a sound and easily bend and distort it to create something completely new.  
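The automatic clean-up described above can be loosely illustrated with a noise gate that silences samples below a threshold. Real DAWs use far more sophisticated spectral processing; this sketch (the threshold value is arbitrary) only conveys the idea:

```python
def noise_gate(samples, threshold=0.05):
    """Zero out samples whose magnitude falls below the threshold.

    A crude stand-in for a DAW's low-level noise removal; real tools
    analyze the spectrum rather than gating sample by sample.
    """
    return [s if abs(s) >= threshold else 0.0 for s in samples]

take = [0.01, -0.02, 0.4, 0.5, -0.03, 0.02]
print(noise_gate(take))  # [0.0, 0.0, 0.4, 0.5, 0.0, 0.0]
```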

Mixing in the multi-track  


Mixing in the multi-track is where you put the whole programme together: music, sound
effects, presenters' speech and speech from recorded interviews. It's an exciting stage: you will
now see whether the programme that you had in your head and sketched out on paper really works
as a piece of audio.

Good-quality recordings and fantastic presentation can be ruined by poor multi-track mixing. But
remember that getting good at multi-track mixing is about practice, practice and more practice.

Preparing material for mixing in multi-track 

Don’t go into the multi-track until you have done the following:  

1. Planned: ​ You have properly planned your programme on paper, first with a running order and 
then with a rough draft of a written script.  

2. Edited in single track: You have edited all audio down to size and adjusted the levels of
each piece of audio. Normally, the peak levels should sit between −9 and −6 dB in Adobe Audition.

3. Kept audio long:​ Don’t cut music and sound too short – you never know how much you might 
need until you start mixing.  

4. Numbered: Number your audio clips and label them with short descriptions, e.g. "01/SFX COWS"
or "04/INT/HEALTH WORKER YOUNG". Now you're ready to import your individual audio clips into the
multi-track window.
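The level check in step 2 can be expressed as a formula: a clip's level in decibels relative to full scale is 20·log10(peak), where full scale is 1.0. A small sketch (the clip values are hypothetical):

```python
import math

def peak_dbfs(samples):
    """Peak level of a clip in dB relative to full scale (1.0 = 0 dBFS)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

clip = [0.05, -0.2, 0.4, -0.35, 0.1]
level = peak_dbfs(clip)
print(round(level, 1))        # -8.0
print(-9 <= level <= -6)      # True: within the suggested -9 to -6 dB range
```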

Mixing tips  

1. Save it:​ Set up your session and save it with a name, for example "Programme 05 – 
health programme". If you don’t save you could lose the session in a power cut or PC failure and 
then you will have to start all over again.  

2. Keep it loose: ​ You must pace the individual clips of speech, music and sound effects 
manually. An automatic snapping device creates a very tight junction that doesn’t take mood 
and pace into account.  

 
 

3. Judging the junction between two pieces of sound: If you are using very sad music it
should be faded out slowly; if the music is fast it can be faded out more quickly. If an interview
ends on a sad note, the back announcement needs to come in slowly; if the interview was faster and
happier then the back announcement can come in more quickly.

4. Zoom in:​ Zoom in when you want to make any change on the multi-track (just like in the single 
track).  

5. Zoom out:​ Zoom out to check any change in the multi-track.  

6. Fading in and fading out: The secret to good fades is creating a smooth and intelligible
junction from one piece of audio to another. Fades can be fast (called steep) or slow
(called shallow), depending on the audio's texture and pace. Cross-fades are when you
overlap the "fade out" of one piece of audio with the "fade in" of the next. A cross-fade can be
used to take the hard edge off a piece of audio and may cover only the first word that is spoken.
Sometimes you may deliberately lose, or slowly fade out, some speech. This is rare but can be a
very graceful way of leaving a person chatting about something once the main substance of the
interview has been covered.
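A cross-fade as described in tip 6 can be sketched as overlapping a linear fade-out of one clip with a linear fade-in of the next. The clip values below are toy numbers, and real fades often use curved rather than linear shapes:

```python
def cross_fade(a, b, overlap):
    """Overlap the last `overlap` samples of clip a (fading out) with
    the first `overlap` samples of clip b (fading in), linearly."""
    assert 0 < overlap <= min(len(a), len(b))
    mixed = []
    for i in range(overlap):
        fade_in = (i + 1) / overlap       # b rises toward full level
        fade_out = 1 - fade_in            # a falls toward silence
        mixed.append(a[len(a) - overlap + i] * fade_out + b[i] * fade_in)
    return a[:-overlap] + mixed + b[overlap:]

print(cross_fade([1.0, 1.0, 1.0, 1.0], [0.5, 0.5, 0.5, 0.5], 2))
# [1.0, 1.0, 0.75, 0.5, 0.5, 0.5]
```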

 
 

 
 

 
 

TOPIC 2: Adding Sound Effects and Music 


 

Sound Effects 
As dramatic radio developed, so did a need for convincing sound effects. A sound other than
speech or music made artificially for use in a film, play, radio programme or any other broadcast
production is called a sound effect. It gives a sense of location and adds realism to the
programme. Sound effects can turn a mere script into a real-time drama as opposed to just a story
being recited.

Sound effects perform two general functions: contextual and narrative. Contextual sound
emanates from and duplicates a sound source as it is. It is also known as diegetic sound: sound
whose source is present on the scene, or implied to be present by the action, for example the
voices of characters or the sounds made by objects in the story.

Non-diegetic sound is sound whose source is neither in the story nor implied to be present in the
action, for example a narrator's commentary, sound effects added for dramatic effect, or mood
music.

Narrative sound adds more to a scene than what is apparent. Narrative sound can be
descriptive or commentative.

The two primary sources of sound effects are: 

1) Spot Effects: effects created live in the studio as the action happens.


2) Recorded Sound Effects: pre-recorded effects placed digitally.

Pre-recorded sound effects, numbering from several dozen to several thousand, are available
in libraries. The major advantage of sound-effect libraries is that, for relatively little cost, many
different, perhaps difficult-to-produce, sounds are at your fingertips. The disadvantages include
lack of control over the dynamics and timing of the effects, possible mismatches in ambience,
the possibility that the effects may sound canned, and effects that are not long enough for
your needs.

Producing live sound effects in the studio goes back to the days of radio drama. Producing SFX in 
synchronization with the picture is known as Foleying, after former film soundman Jack Foley.  

The keys to creating a sound effect are analyzing its sonic characteristics and then finding a sound 
source that contains similar qualities, whatever it may be. The capacitor microphone is most 

 
 
frequently used in recording. Tube-type capacitors help smooth transients and warm digitally 
recorded effects.  

A critical aspect of recording is making sure that the effects sound consistent and integrated and 
do not seem to be outside or on top of the soundtrack. Some directors decry the use of
studio-created effects, preferring to capture authentic sounds either as they occur on the set
during production or by recording them separately in the field, or both.

Radio sound effects artists are NOT the same as foley artists, who do film post-production sound 
work. Many people mistakenly use the term "foley" when describing radio sound effects. Radio SFX 
artists do everything film foley artists do and a whole lot more. Radio SFX-perts have a broader 
skill-set, a larger sonic range, and a talent for multi-tasking. 

Many of the dynamic sound effects were achieved with props, often built by the sound-effects 
specialists themselves. Thunder was simulated by shaking a large sheet of metal; galloping horses 
were reenacted by pounding coconut half shells in a sandbox; and the crunch of footsteps in the 
snow was created with bags full of cornstarch. Specially designed boxes were created to 
reproduce the sounds of telephones and doors. Sound engineers kept a large supply of shoes and 
various floor surfaces on hand to reproduce the sounds of footsteps. 

Sound effects can also be generated electronically with synthesizers and computers and by 
employing MIDI—an approach called electronic Foley. A synthesizer is an audio instrument that 
uses sound generators to create waveforms. Computer sound effects can be generated from 
preprogrammed software or software that allows sounds to be produced from scratch.  

Music  
Music is an art form whose medium is sound. It is the soul of radio. It is used in different ways 
on the radio. Film songs and classical music are independent programmes. It can also be used as 
signature themes or tunes in various radio programmes. 

Elements of music

Pitch: Pitch represents the frequency of a sound. Pitch allows the construction of melodies;
pitches are compared as "higher" and "lower", and are quantified as frequencies (cycles per
second, or hertz), corresponding very nearly to the repetition rate of sound waves.

 
 

Rhythm: Rhythm is "movement marked by the regulated succession of strong and weak elements,
or of opposite or different conditions", together with its associated concepts of tempo, meter and
articulation. While rhythm most commonly applies to sound, such as music and spoken language, it
may also refer to visual presentation, as "timed movement through space."

Dynamics: ​In music, dynamics normally refers to the volume of a sound or note, but can 
also refer to every aspect of the execution of a given piece, either stylistic (staccato, legato 
etc.) or functional (velocity). The term is also applied to the written or printed musical 
notation used to indicate dynamics. Dynamics do not indicate specific volume levels, but are 
meant to be played with reference to the ensemble as a whole. Dynamic indications are derived
from Italian words.

Timbre​: In music, timbre is the quality of a musical note or sound or tone that distinguishes 
different types of sound production, such as voices or musical instruments. The physical 
characteristics of sound that mediate the perception of timbre include spectrum and 
envelope. 

Texture: In music, texture is the way the melodic, rhythmic and harmonic materials are combined
in a composition, determining the overall quality of sound of a piece. Texture is often described
in terms of density (thickness) and range (the width between the lowest and highest pitches), in
relative terms as well as more specifically according to the number of voices, or parts, and the
relationship between these voices.
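The link between pitch and frequency described above can be made concrete: a pure tone is a sine wave at a given frequency, and doubling the frequency raises the pitch by one octave. A sketch, assuming 440 Hz (concert A) and the common 44.1 kHz sample rate:

```python
import math

def sine_tone(freq_hz, duration_s, sample_rate=44100):
    """Generate the samples of a pure tone; higher freq_hz means higher pitch."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

a4 = sine_tone(440.0, 0.01)   # concert A, 10 ms
a5 = sine_tone(880.0, 0.01)   # one octave higher: double the frequency
print(len(a4))                # 441 samples for 10 ms at 44.1 kHz
```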

Types of Music  

Music: When we say radio, the first thing that comes to mind is music, so music is the mainstay
of radio. There is no radio without music. Music is used in different ways on radio: there are
programmes of music, and music is also used within other programmes, for example as signature
tunes or as effects in radio plays and features. India has a great heritage of music, and radio in
India reflects that. Let us understand the different types of music.

Classical Music

There are 3 types of classical music in India.

They are:-   

● Hindustani classical   
● Carnatic classical   
● Western classical  

 
 

TOPIC 3: Audio Filters: Types, Need and Importance  


 

Audio Filter  

An audio filter is a circuit, working in the audio frequency range, that processes sound
signals. Many types of filters exist for applications including graphic equalizers, synthesizers,
sound effects, CD players and virtual-reality systems. In its simplest form, an audio filter is
typically designed to pass some frequency regions through unattenuated while significantly
attenuating others.

Types of filter 

High-Pass Filter, or HPF,​ is a filter that passes high frequencies well but attenuates (i.e., 
reduces the amplitude of) frequencies lower than the filter's cut-off frequency.   

Low-Pass Filter is a filter that passes low-frequency signals but attenuates (reduces the
amplitude of) signals with frequencies higher than the cut-off frequency. The actual amount
of attenuation for each frequency varies from filter to filter. A low-pass filter is the opposite of a
high-pass filter. Low-pass filters exist in many different forms, including electronic circuits
(such as a hiss filter used in audio), digital filters for smoothing sets of data, acoustic barriers,
blurring of images, and so on. Low-pass filters provide a smoother form of a signal, removing
the short-term fluctuations and leaving the longer-term trend.

Band-pass filter is a combination of a low-pass and a high-pass. It passes frequencies within a
certain range and rejects (attenuates) frequencies outside that range. An example of an analogue
electronic band-pass filter is an RLC circuit (a resistor–inductor–capacitor circuit). These
filters can also be created by combining a low-pass filter with a high-pass filter.

All-pass: An all-pass filter passes all frequencies but modifies the phase relationships
between them, so at the output of the all-pass filter, different frequency ranges are shifted in
phase relative to one another.

Linear Filter​ applies a linear operator to a time-varying input signal. Linear filters are very 
common in electronics and digital signal processing but they can also be found in mechanical 
engineering and other technologies. They are often used to eliminate unwanted frequencies from 
an input signal or to select a desired frequency among many others. Regardless of whether they 

 
 
are electronic, electrical, or mechanical, or what frequency ranges or timescales they work on, 
the mathematical theory of linear filters is universal.  

Equalization (Eq) Filter​ is a filter, usually adjustable, designed to compensate for the unequal 
frequency response of some other signal processing circuit or system. In audio engineering, 
the EQ filter is more often used creatively to alter the frequency response characteristics of a 
musical source or a sound mix. An EQ filter typically allows the user to adjust one or more 
parameters that determine the overall shape of the filter's transfer function. It is generally 
used to improve the fidelity of sound, to emphasize certain instruments, to remove undesired 
noises, or to create completely new and different timbres. Equalizers may be designed with 
peaking filters, shelving filters, band pass filters, or high-pass and low-pass filters.   
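The pass/attenuate behaviour described above can be sketched with one-pole digital filters, the simplest possible designs. These are illustrative only (real equalizers use higher-order filters); note how the band-pass is literally a high-pass followed by a low-pass, as the text describes:

```python
import math

def low_pass(samples, cutoff_hz, sample_rate=44100):
    """One-pole (RC-style) low-pass: smooths the signal, keeping the trend."""
    rc = 1 / (2 * math.pi * cutoff_hz)
    dt = 1 / sample_rate
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for s in samples:
        prev += alpha * (s - prev)
        out.append(prev)
    return out

def high_pass(samples, cutoff_hz, sample_rate=44100):
    """One-pole high-pass: keeps fast changes, rejects slow ones."""
    rc = 1 / (2 * math.pi * cutoff_hz)
    dt = 1 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

def band_pass(samples, low_cut_hz, high_cut_hz, sample_rate=44100):
    """Band-pass built by cascading a high-pass and a low-pass."""
    return low_pass(high_pass(samples, low_cut_hz, sample_rate),
                    high_cut_hz, sample_rate)

# A constant (0 Hz) signal passes the low-pass but not the high-pass:
steady = [1.0] * 1000
print(round(low_pass(steady, 1000.0)[-1], 2))   # 1.0
print(round(high_pass(steady, 100.0)[-1], 2))   # 0.0
```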

 
 

 
 

 
 

TOPIC 4: Evaluation: Process and Measurement Techniques  


 

A crucial activity for any producer is the regular evaluation of what he or she is doing. Programmes 
have to be justified. Other people may want the resources or the airtime. Station owners, 
advertisers, sponsors and accountants will want to know what programmes cost and whether they 
are worth it. Above all, conscientious producers striving for better and more effective 
communication will want to know how to improve their work and achieve greater results. 
Broadcasters talk a great deal about quality and excellence, and rightly so, but these are
more than abstract concepts; they are founded on the down-to-earth practicalities of
continuous evaluation. Programmes can be assessed from several viewpoints. We shall
concentrate on three:

● Production and quality evaluation
● Audience evaluation
● Cost evaluation

Programme Quality  

Programme evaluation carried out among professionals is the first of the evaluative methods and
should be applied automatically to all parts of the output. However, it is more than simply a
discussion of individual opinion, for a programme should always be evaluated against
previously agreed criteria. Quality will mean that at least some of the following eight components
are in evidence.

First, appropriateness. Irrespective of the size of the audience gained, did the programme
actually meet the needs of those for whom it was intended? Was it a well-crafted piece of
communication which was totally appropriate to its target listeners, having regard to their
educational, social or cultural background?

Second, ​creativity.​ Did the programme contain those sparks of newness, difference and 
originality that are genuinely creative, so that it combined the science and logic of communication 
with the art of delight and surprise? This leaves a more lasting impression, differentiating the 
memorable from the dull, bland or predictable.  

Third, ​accuracy​. Was it truthful and honest, not only in the facts it presented and in their portrayal 
and balance within the programme, but also in the sense of being fair to people with 

 
 
different views? It is in this way that programmes are seen as being authoritative and reliable – 
essential of course for news, but necessary also for documentary programmes, magazines 
and, in its own way, for drama. 

Fourth, eminence. A quality programme is likely to include first-rate performers – actors or
musicians. It will make use of the best writers and involve people eminent in their own
sphere. This, of course, extends to senior politicians, industrial leaders, scientists, sportsmen
and women – known achievers of all kinds. Their presence gives authority and stature to the
programme. It is true that the unknown can also produce marvels of performance, but quality
output cannot rely on this and will recognize established talent and professional ability.

Fifth, holistic. A programme of quality will certainly communicate intellectually in that it is
understandable to the sense of reason, but it should appeal to other senses as well – the pictorial, 
imaginative or nostalgic. It will arouse emotions at a deeper and richer level, touching us as 
human beings responsive to feelings of awe, love, compassion, sadness, excitement – or even the 
anger of injustice. A quality programme makes contact with more of the whole person – it 
will surely move me.  

Sixth, ​technical advance​. An aspect of quality lies in its technical innovation, its daring – either in 
the production methods or the way in which the audience is involved. Technically ambitious 
programmes, especially when ‘live’, still have a special impact for the audience.  

Seventh, personal enhancement. Was the overall effect of the programme to enrich the
experience of the listener, to add to it in some way rather than to leave it untouched – or worse,
to degrade or diminish it? The end result may have been to give pleasure, to increase knowledge,
to provoke or to challenge. A programme of quality should have some effect which gives, or at
least lends, a desirable quality to its recipient. It will have a 'Wow' factor.

Eighth, personal rapport. As the result of a quality experience, or perhaps during it, the listener
will feel a sense of rapport – of closeness – with the programme makers. One intuitively
appreciates a programme that is perceived as well researched, pays attention to detail, achieves 
diversity or depth, or has personal impact – in short, is distinctive. The listener identifies not only 
with the programme and its people, but also with the station. Programmes which take the trouble 
to reach out to the audience earn a reciprocal benefit of loyalty and sense of ownership. 
Combining accuracy with appropriateness, for example, means providing truthful and relevant 
news in a manner that is totally understandable to the intended audience at the desired time and 
for the right duration.  

 
 

Audience Evaluation  
Audience research is designed to tell the broadcaster specific facts about the audience size 
and reaction to a particular station or to individual programmes. The measurement of 
audiences and the discovery of who is listening at what times and to which stations is of 
great interest not only to programme makers and station managers, but also to advertisers or 
sponsors who buy time on different stations.  

Audience measurement is the most common form of audience research, largely because of the 
importance attached to obtaining this information by the commercial world. Several methods of 
measurement are used and, in each, people are selected at random from a specific category to 
represent the target population:  

1. People are interviewed face to face, generally at home.

2. Interviews are conducted by phone.

3. Respondents complete a listening diary.

4. Members of a selected sample wear a small personal meter.

The more detailed the information required, the more the research will cost. This is because the
sample will need to be larger, requiring more interviews (or more diaries), and because the
research has to be done exclusively for radio. If the information required is fairly basic – how many
people tuned to Radio Mirchi for any programme last month and their overall opinion of the 
station – the cost will be much less, since suitable short questions can be included in a 
general market survey of a range of other products and services. It should be said that 
constructing a properly representative sample of interviewees is a process requiring some 
precision and care.   

Nevertheless, a correct sample should cover all demographic groups and categories in terms 
of age, gender, social or occupational status, ethnic culture, language and lifestyle, e.g. urban 
and rural. It should reflect in the proper proportions any marked regional or other differences 
in the area surveyed.   

RAM (Radio Audience Measurement) -​ Radio broadcasting, because of its versatility, is 
considered an effective medium to provide entertainment, information and education. 

 
 
Terrestrial radio coverage in India is available in Frequency Modulation (FM) mode and 
Amplitude Modulation (AM) mode (Short Wave and Medium Wave).  

Radio broadcasting services are provided by the public broadcaster All India Radio (AIR) as well as
by private-sector radio operators. AIR transmits programmes in both AM and FM mode and has
415 radio stations (AM & FM) that cover almost 92% of the country by area and more than 99.19%
of the country's population.

Private-sector radio operators transmit programmes in FM mode only.

At present, radio audience measurement in India is conducted by AIR and TAM Media Research.
AIR carries out periodic large-scale radio audience surveys on AIR channels only. TAM
Media Research conducts radio audience measurement on private FM radio channels only,
through an independent division. It uses the paper-diary method to measure radio listenership,
with a panel of 600 individuals each in Bengaluru, Delhi, Mumbai and Kolkata. Listenership data
is provided on a weekly basis. There is no integrated listenership data available covering both
AIR and private FM radio channels, so advertisers do not have any realistic data for making
decisions about the placement of advertisements across channels.

Cost Evaluation  
What does a programme cost? Like many simple questions in broadcasting, this one has a myriad
of possible answers.

Total costing will include all the management costs and all the overheads, over which the 
individual programme maker can have little or no control. One way of looking at this is to 
take a station’s annual expenditure and divide it by the number of hours it produces, so arriving at 
a cost per hour. But since this results in the same figure for all programmes, no 
comparisons can be made.   

More helpful is to allocate to each programme all cash resource costs directly attributable to
it, and then add a share of the general overheads – including both management and transmission
costs – in order to arrive at a true, or at least a truer, cost-per-hour figure which will bear
comparison with other programmes. Does news cost more than sport? How expensive is a
well-researched documentary or a piece of drama?

 
 

Of course, it is not simply the actual cost of a programme which matters – coverage of a live 
event may result in several hours of output, which with recorded repeat capability could provide a 
low cost per hour. Furthermore, given information about the size of the audience it is 
possible, by dividing the cost per hour by the number of listeners, to arrive at a cost per listener 
hour.  

Relatively cheap programmes which attract a substantial audience may or may not be what a 
station wants to produce. It may also want to provide programmes that are more costly to make 
and designed for a minority audience – programmes for the disabled, for a particular linguistic, 
religious or cultural group, or for a specific educational purpose. These will have a higher 
cost per listener hour, but will also give a channel its public service credibility. It is important for 
each programme to be true to its purpose – to achieve results in those areas for which it is 
designed. It is also important for each programme to contribute to the station’s purpose – its 
Mission Statement.  
