INTRODUCTION TO
DOORDARSHAN
Doordarshan
Type: Broadcast television network
Country: India
Availability: National
Owner: Prasar Bharati
Key people: …………….. (CEO)
Launch date: 1959
Past names: All India Radio
Website: http://www.ddindia.gov.in/
Beginning
Doordarshan had a modest beginning with the experimental telecast starting in Delhi in
September 1959 with a small transmitter and a makeshift studio. The regular daily
INDUSTRIAL TRAINING REPORT
transmission started in 1965 as a part of All India Radio. The television service was
extended to Mumbai (then Bombay) and Amritsar in 1972. Till 1975, seven Indian cities
had television service and Doordarshan remained the only television channel in India.
Television services were separated from radio in 1976. The offices of All India Radio and
Doordarshan were placed under the management of two separate Directors General in
New Delhi. Finally, Doordarshan came into existence as a national broadcaster.
The word television is a hybrid word, created from both Greek and Latin. Tele- is Greek
for "far", while -vision is from the Latin visio, meaning "vision" or "sight". It is often
abbreviated as TV or the telly.
Electromechanical television
The origins of what would become today's television system can be traced back to the
discovery of the photoconductivity of the element selenium by Willoughby Smith in 1873,
and the invention of a scanning disk by Paul Gottlieb Nipkow in 1884.
The German student Paul Nipkow proposed and patented the first electromechanical
television system in 1884. Nipkow's spinning disk design is credited with being the first
television image rasterizer. Constantin Perskyi had coined the word television in a paper
read to the International Electricity Congress at the International World Fair in Paris on
August 25, 1900.
However, it wasn't until 1907 that developments in amplification tube technology made
the design practical. The first demonstration of the instantaneous transmission of still
images was by Georges Rignoux and A. Fournier in Paris in 1909, using a rotating mirror-
drum as the scanner, and a matrix of 64 selenium cells as the receiver.
In 1911, Boris Rosing and his student Vladimir Kosma Zworykin created a television
system that used a mechanical mirror-drum scanner to transmit, in Zworykin's words,
"very crude images" over wires to the electronic Braun tube (cathode ray tube) in the
receiver. Moving images were not possible because, in the scanner, "the sensitivity was
not enough and the selenium cell was very laggy."
In 1927 Baird transmitted a signal over 438 miles of telephone line between London and
Glasgow. In 1928 Baird's company (Baird Television Development Company / Cinema
Television) broadcast the first transatlantic television signal, between London and New
York, and the first shore-to-ship transmission. He also demonstrated an electromechanical
colour television system.
Electronic television
In 1911, engineer Alan Archibald Campbell-Swinton gave a speech in London, reported in
The Times, describing in great detail how distant electric vision could be achieved by using
cathode ray tubes at both the transmitting and receiving ends. The speech, which expanded
on a letter he wrote to the journal Nature in 1908, was the first iteration of the electronic
television method that is still used today. Others had already experimented with using a
cathode ray tube as a receiver, but the concept of using one as a transmitter was novel. By
the late 1920s, when electromechanical television was still being introduced, inventors
Philo Farnsworth and Vladimir Zworykin were already working separately on versions of
all-electronic transmitting tubes.
The decisive solution — television operating on the basis of continuous electron emission
with accumulation and storage of released secondary electrons during the entire scansion
cycle — was first described by the Hungarian inventor Kálmán Tihanyi in 1926, with
further refined versions in 1928.
On September 7, 1927, Philo Farnsworth's Image Dissector camera tube transmitted its
first image, a simple straight line, at his laboratory at 202 Green Street in San Francisco.
[2] By 1928, Farnsworth had developed the system sufficiently to hold a demonstration for
the press, televising a motion picture film. In 1929, the system was further improved by
elimination of a motor generator, so that his television system now had no mechanical
moving parts. That year, Farnsworth transmitted the first live human images by his
television system, including a three and a half-inch image of his wife Pem with her eyes
closed (possibly due to the bright lighting required).
In Britain, Isaac Shoenberg used Zworykin's idea to develop Marconi-EMI's own Emitron
tube, which formed the heart of the cameras they designed for the BBC. Using this, on
November 2, 1936 a 405-line service was started from studios at Alexandra Palace, and
transmitted from a specially-built mast atop one of the Victorian building's towers; it
alternated for a short time with Baird's mechanical system in adjoining studios, but was
more reliable and visibly superior. So began the world's first high-definition regular
service. The mast is still in use today.
Color television
Most television researchers appreciated the value of color image transmission, with an
early patent application in Russia in 1889 for a mechanically-scanned color system
showing how early the importance of color was realized. John Logie Baird demonstrated
the world's first color transmission on July 3, 1928, using scanning discs at the
transmitting and receiving ends with three spirals of apertures, each spiral with filters of a
different primary color; and three light sources at the receiving end, with a commutator to
alternate their illumination. In 1938 shadow mask technology for color television was
patented by Werner Flechsig in Germany. Color television was demonstrated at the
International radio exhibition Berlin in 1939. On August 16, 1944, Baird gave a
demonstration of a fully electronic color television display. His 600-line color system used
triple interlacing, using six scans to build each picture.
While the picture is broken into elements of the frame by means of the scanning process,
it is necessary to present the picture to the eye in such a way that an illusion of continuity
is created and any motion of the scene appears on the picture-tube screen as a smooth and
continuous change. To achieve this, advantage is taken of the 'persistence of vision'
(about 1/16 second), a storage characteristic of the human eye. Thus, if the scanning rate
per second is made greater than sixteen, i.e. more than sixteen pictures are shown per
second, the eye is able to integrate the changing levels of brightness in the scene. So when
the picture elements are scanned rapidly enough, they appear to the eye as a complete
picture unit, with none of the individual elements visible separately.
2.2 Scanning
A similar process is carried out in the television system. The scene is scanned rapidly, both
in the horizontal and vertical directions simultaneously, to provide a sufficient number of
complete pictures or frames per second to give the illusion of continuous motion. Instead
of 24, as in commercial motion-picture practice, the frame repetition rate is 25 per
second in most television systems.
From considerations of flicker, it has been found that 50 picture frames per second is the
minimum requirement in television scanning. For a 625-line system, this means that the
horizontal line-scanning frequency would be 31,250 Hz, with a line period of 32 μs. For a
desired resolution of 546/2 alternations along the horizontal line, this leads to a very high
bandwidth requirement, viz. (546/2) × 1/(26 × 10⁻⁶) ≈ 10 MHz (taking an active line
period of about 26 μs), if the line scanning is done in the simple sequential way.
The first set of 312.5 lines of the 625, the odd-numbered lines, called the first field or the
odd field, is scanned sequentially. Halfway through the 313th line, the spot is returned
to the top of the scene, and the remaining 312.5 even-numbered lines, called the second
field or the even field, are then traced interleaved between the lines of the first set, as
shown in Figure 2.0.
This is done by operating the vertical field scan at 50 Hz, so that two successive
interlaced scans, each at a 25 Hz rate, make up the complete picture frame. This keeps the
line-scanning speed down, as only 312.5 lines are scanned in 1/50 second. The 625 lines of
the full picture are scanned in 1/25 second, thus keeping down the bandwidth
requirement.
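The scanning figures in the preceding paragraphs reduce to simple arithmetic; a minimal sketch, using only the frame, field, and line counts quoted above:

```python
# Sketch of the 625-line interlaced-scanning arithmetic described above.
# All figures are the nominal 625/50 values quoted in the text.

TOTAL_LINES = 625            # lines per complete frame
FRAME_RATE = 25              # complete pictures per second
FIELD_RATE = 2 * FRAME_RATE  # two interlaced fields per frame -> 50 Hz

lines_per_field = TOTAL_LINES / 2          # 312.5 lines per field
line_frequency = TOTAL_LINES * FRAME_RATE  # lines scanned per second

print(lines_per_field)  # 312.5
print(line_frequency)   # 15625 (Hz)

# Compare with simple sequential scanning at 50 full frames per second,
# which would double the line frequency (and hence the bandwidth):
sequential_line_frequency = TOTAL_LINES * 50
print(sequential_line_frequency)  # 31250 (Hz)
```

The halving of the line frequency, from 31,250 Hz to 15,625 Hz, is exactly the bandwidth saving that interlacing buys.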
Here, though the picture is scanned 25 times per second, the area of the screen is
covered, in an interlaced fashion, at twice that rate, viz. 50 times per second. A close
examination may reveal small-area 'interlace flicker', as actually each individual line
repeats only 25 times per second, but this is tolerable and the overall effect is closer to
that of 50 Hz scanning. The flicker becomes noticeable only at high brightness. In
practice, the flyback from the bottom to the top is not instantaneous and takes a finite
time equal to several line periods. Up to 20 lines are allowed for vertical flyback after
each of the two fields that make a complete picture. This means that out of 625 lines, only
(625 − 40 =) 585 lines actually bear picture information. These are called the active lines.
In a 625-line system, there are effectively about 410 lines of vertical resolution. The
horizontal resolution should be of the same order. Because of the aspect ratio of 4:3, the
number of vertical lines for equivalent resolution will be (410 × 4/3 ≈) 546 alternate black
and white lines, which means (546 × 1/2 ≈) 273 cycles of black-and-white alternations of
elementary areas. For the 625-line system, the horizontal scan or line frequency fH is
given by:
fH = 625 × 25 = 15,625 Hz
as each picture line is scanned 25 times in one second. The total line period is thus
1/15,625 s = 64 μs. The highest video frequency required is
fmax = (active lines × Kell factor × aspect ratio) / (2 × line forward-scan period)
fmax ≈ 5 MHz
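The fmax formula can be checked numerically. This is a rough sketch assuming the conventional 625-line values of about 0.7 for the Kell factor and 52 μs for the line forward-scan period; neither figure is stated explicitly in the text:

```python
# Numerical check of the highest-video-frequency formula given above.
# The Kell factor (~0.7) and the 52 us forward-scan period are assumed
# conventional 625-line values; the text gives only the formula itself.

active_lines = 625 - 40      # 585 lines carry picture information
kell_factor = 0.7            # resolution loss due to the discrete line structure
aspect_ratio = 4 / 3
forward_scan_period = 52e-6  # visible part of each 64 us line, in seconds

f_max = (active_lines * kell_factor * aspect_ratio) / (2 * forward_scan_period)
print(round(f_max / 1e6, 2), "MHz")  # roughly 5 MHz
```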
This is a spectrum of energy that starts with low-frequency radio waves and moves through
VHF TV, FM radio, and UHF TV (which now includes the new digital TV band of
frequencies), all the way through to X-rays.
Additive Colour
When colored lights are mixed (added) together, the result is additive rather than
subtractive. Thus, when the additive primaries (red, green and blue light) are mixed
together, the result is white.
When all three primary colors overlap (are added together) on a white screen, the result is
white light. Note in this illustration that the overlap of two primary colors (for example,
red and green) creates a secondary color (in this case, yellow).
tan θ = (R − Y)/(B − Y)    …(5)
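Equation (5) gives the phase angle (the hue) of the chrominance vector in terms of the two colour-difference signals. A small sketch, where the function name is illustrative and atan2 is used so the correct quadrant is obtained even when (B − Y) is zero or negative:

```python
import math

def hue_angle(r_y, b_y):
    """Phase angle of the chrominance vector, per tan(theta) = (R-Y)/(B-Y).

    math.atan2 handles the quadrant correctly even when (B-Y) is
    zero or negative, where a naive tan inversion would fail."""
    return math.degrees(math.atan2(r_y, b_y))

# Example: equal positive colour-difference components lie at 45 degrees
# on the (B-Y)/(R-Y) plane.
print(hue_angle(0.5, 0.5))  # 45.0
```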
The colour subcarrier is amplitude modulated (NTSC and PAL) or frequency modulated
(SECAM). In the PAL system, the color subcarrier is 4.43 MHz above the video carrier,
while in the NTSC system it is 3.58 MHz above the video carrier. SECAM uses two
different frequencies, 4.250 MHz and 4.40625 MHz above the video carrier.
Table 1: Relative Values of Luminance and Chrominance Signals for 100% Saturated Colours
Chrominance is represented by the U-V color plane in PAL and SECAM video signals,
and by the I-Q color plane in NTSC.
The composite video signal is formed by the electrical signal corresponding to the picture
information in the lines scanned in the TV camera pick-up tube and the synchronizing
signals introduced in it. It is important to preserve its waveform as any distortion of the
video signal will affect the reproduced picture, while a distortion in the sync pulses will
affect synchronization resulting in an unstable picture. The signal is, therefore, monitored
with the help of an oscilloscope, at various stages in the transmission path to conform with
the standards. In receivers, observation of the video signal waveform can provide valuable
clues to circuit faults and malfunctions.
Composite video is the format of an analog television (picture only) signal before it is
combined with a sound signal and modulated onto an RF carrier.
The luminance signal Y is chosen such that by itself it can be displayed as a monochrome
picture. U and V between them carry the colour information. They are first mixed with
two orthogonal phases of a colour-carrier signal to form a signal called the chrominance.
Y and UV are then added together. Since Y is a baseband signal and UV has been mixed
with a carrier, this addition is equivalent to frequency-division multiplexing.
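The quadrature mixing described above can be sketched per sample. This is an illustrative model, not a broadcast encoder; the PAL subcarrier frequency is used only as an example value:

```python
import math

F_SC = 4.43e6  # PAL colour-subcarrier frequency, used here as an example

def composite_sample(y, u, v, t):
    """One sample of the composite signal: baseband Y plus U and V
    modulated onto two orthogonal phases of the colour subcarrier
    (quadrature modulation), as described in the text."""
    phase = 2 * math.pi * F_SC * t
    chroma = u * math.sin(phase) + v * math.cos(phase)
    return y + chroma

# At t = 0 the sine phase is zero, so only Y and the V (cosine) term remain:
print(composite_sample(0.5, 0.25, 0.25, 0.0))  # 0.75
```

A quarter of a subcarrier cycle later the roles swap, which is what lets the receiver separate U and V again by synchronous detection.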
3.2 Colorburst
The channel allocations in Band I and Band III are given in the table. There are four
channels in Band I, of which channel (6 MHz) is no longer used for TV broadcasting,
being assigned to other services.
3.4 Broadcasting of TV Programs
The public television service is operated by broadcasting picture and sound from picture
transmitters and associated sound transmitters in three main frequency ranges in the VHF
and UHF bands. By international ruling of the ITU, these ranges are exclusively allocated
to television broadcasting. Subdivision into operating channels and their assignment by
location are also ruled by international regional agreement. The continental standards are
valid as per the CCIR 1961 Stockholm plan. The details of the various system parameters
are as follows.
The saving of frequency band is about 40%; the polarity is negative because of the
susceptibility to interference of the synchronizing circuits of early TV receivers
(exception: positive modulation); residual carrier with negative modulation 10%
(exception 20%).
Sound: F3E; FM for better separation from the vision signal in the receiver (exception: AM).
Sound carrier above vision carrier within RF channel, inversion at IF; (exception:
standards A, E and, in part, L).
3.6 Vestigial Sideband Transmission
In the video signal very low frequency modulating components exist along with the rest of
the signal. These components give rise to sidebands very close to the carrier frequency
which are difficult to remove by physically realizable filters. Thus it is not possible to go
to the extreme and fully suppress one complete sideband in the case of television signals.
The low video frequencies contain the most important information of the picture and any
effort to completely suppress the lower sideband would result in objectionable phase
distortion at these frequencies. This distortion will be seen by the eye as 'smear' in the
reproduced picture. Therefore, as a compromise, only a part of the lower sideband is
suppressed, and the radiated signal then consists of a full upper sideband together with the
carrier, and the vestige (remaining part) of the partially suppressed lower sideband. This
pattern of transmission of the modulated signal is known as vestigial sideband or A5C
transmission. In the 625-line system, frequencies up to 0.75 MHz in the lower sideband are
fully radiated. The net result is a normal double-sideband transmission for the lower video
frequencies corresponding to the main body of picture information.
As stated earlier, because of filter-design difficulties it is not possible to terminate the
bandwidth of a signal abruptly at the edges of the sidebands. Therefore, an attenuation
slope covering approximately 0.5 MHz is allowed at either end. Any distortion at the
higher-frequency end, if attenuation slopes were not allowed, would mean a serious loss in
horizontal detail, since the high-frequency components of the video modulation determine
the amount of horizontal detail in the picture. The figure illustrates the saving of band
space which results from vestigial sideband transmission. The picture signal is seen to
occupy a bandwidth of 6.75 MHz instead of 11 MHz.
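The bandwidth saving works out as follows; the component widths (5 MHz of video, a 0.75 MHz vestige of the lower sideband, and a 0.5 MHz attenuation slope at each end) are the ones given in the text:

```python
# How the 6.75 MHz figure quoted above arises for 625-line vestigial
# sideband (VSB) transmission, compared with full double-sideband (DSB).

VIDEO = 5.0     # MHz, highest video frequency
VESTIGE = 0.75  # MHz, retained part of the lower sideband
SLOPE = 0.5     # MHz, filter attenuation slope, one at each end

double_sideband = 2 * VIDEO + 2 * SLOPE   # both sidebands plus both slopes
vestigial = VESTIGE + VIDEO + 2 * SLOPE   # vestige, upper sideband, slopes

print(double_sideband)  # 11.0 (MHz)
print(vestigial)        # 6.75 (MHz)
```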
3.9 Transmission
This high channel capacity can only be achieved with internal studio links via coaxial
cables or fiber optics. In the public communications networks of present-day technology,
the limits per channel lie at the hierarchical step of 34 Mbit/s for microwave links, and
later 140 Mbit/s. Therefore, great attempts are being made at reducing the bit rate with
the aim of achieving satisfactory picture quality with 34 Mbit/s per channel.
Terrestrial TV transmitters and coaxial copper-cable networks are unsuitable for digital
transmissions. Satellites with carrier frequencies of about 20 GHz and above may be
used.
The digital coding of sound signals for satellite sound broadcasting and for the digital
sound studio is more elaborate with respect to quantizing than for video signals.
A quantization q of 16 bits per amplitude value is required to obtain a quantizing signal-to-
noise ratio S/Nq of 98 dB
(S/Nq = (6 × 16 + 2) dB = 98 dB).
The sampling frequency must follow the sampling theorem f_sample ≥ 2 × f_max, where
f_max is the maximum frequency of the baseband.
Direct satellite sound broadcasting with 16 stereo channels: 32 kHz sampling, 16 bits per
sample, 512 kbit/s per channel.
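The quantizing and sampling figures above reduce to simple arithmetic; a brief sketch using the 6q + 2 dB approximation and the 32 kHz / 16-bit table entry:

```python
# Sketch of the digital-audio figures above: the quantizing
# signal-to-noise ratio (about 6 dB per bit plus 2 dB) and the
# resulting bit rate for one 32 kHz / 16-bit sound channel.

def quantizing_snr_db(bits):
    # Approximation used in the text: S/Nq = 6*q + 2 dB
    return 6 * bits + 2

def bit_rate(sample_rate_hz, bits):
    # Bits per second for a single mono channel
    return sample_rate_hz * bits

print(quantizing_snr_db(16))  # 98 (dB)
print(bit_rate(32_000, 16))   # 512000 (bit/s, i.e. 512 kbit/s)

# The sampling theorem requires f_sample >= 2 * f_max, so a 32 kHz
# rate accommodates baseband audio up to 16 kHz.
assert 32_000 >= 2 * 16_000
```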
In older video cameras, before the 1990s, a video camera tube or pickup tube was used
instead of a charge-coupled device (CCD). Several types were in use from the 1930s to the
1980s. These tubes are a type of cathode ray tube.
• Image dissector
• Iconoscope
• Image orthicon
• Vidicon
• Plumbicon
• Saticon
• Pasecon
• Newvicon
• Trinicon
4.1.1 Vidicon
A vidicon tube (sometimes called a hivicon tube) is a video camera tube in which the
target material is made of antimony trisulfide (Sb2S3).
The terms vidicon tube and vidicon camera are often used indiscriminately to refer to
video cameras of any type. The principle of operation of the vidicon camera is typical of
other types of video camera tubes.
Pyroelectric photocathodes can be used to produce a vidicon sensitive over a broad portion
of the infrared spectrum.
Vidicon tubes are notable for a particular type of interference they suffered from, known
as vidicon microphony. Since the sensing surface is quite thin, it is possible to bend it
with loud noises. The artifact is characterized by a series of many horizontal bars evident
in footage (mostly pre-1990) shot in an environment where loud noise was present at the
time of recording or broadcast. A studio where a loud rock band was performing, or even
gunshots or explosions, would create this artifact.
4.1.2 Plumbicon
Compared to Saticons, Plumbicons had much higher resistance to burn in, and coma and
trailing artifacts from bright lights in the shot. Saticons though, usually had slightly higher
resolution. After 1980, and the introduction of the diode gun plumbicon tube, the
resolution of both types was so high, compared to the maximum limits of the broadcasting
standard, that the Saticon's resolution advantage became moot.
• Studio floor
• Production control room
• Master control room
The studio floor is the actual stage on which the actions that will be recorded take place. A
studio floor has the following characteristics and installations:
While a production is in progress, the following people work on the studio floor.
• The on-screen "talent" themselves, and any guests - the subjects of the show.
• A floor director, who has overall charge of the studio area, and who relays timing
and other information from the director.
• One or more camera operators who operate the television cameras.
The production control room (also known as the 'gallery') is the place in a television studio
in which the composition of the outgoing program takes place. Facilities in a PCR include:
• a video monitor wall, with monitors for program, preview, videotape machines,
cameras, graphics and other video sources
• a switcher, a device with which all video sources are controlled and taken to air; also
known as a special effects generator
• audio mixing console and other audio equipment such as effects devices
• a character generator, which creates the majority of the names and full-screen graphics
that are inserted into the program
• digital video effects and/or still frame devices (if not integrated in the vision mixer)
• technical director's station, with waveform monitors, vectorscopes and the camera
control units or remote control panels for the camera control units (CCUs)
• VTRs may also be located in the PCR, but are also often found in the central
machine room
The master control room houses equipment that is too noisy or runs too hot for the
production control room. It also makes sure that wire lengths and installation requirements
keep within manageable lengths, since most high-quality wiring runs only between devices
in this room. This can include:
• The actual circuitry and connection boxes of the vision mixer, DVE and character
generator devices
• camera control units
• VTRs
• patch panels for reconfiguration of the wiring between the various pieces of
equipment.
A television studio usually has other rooms with no technical requirements beyond
program and audio monitors. Among them are:
A vision mixer (also called video switcher, video mixer or production switcher) is a
device used to select between several different video sources and in some cases composite
(mix) video sources together and add special effects. This is similar to what a mixing
console does for audio.
Explanation
Vision mixer and video mixer are almost exclusively European terms to describe both the
equipment and its operators. Software vision mixers are also available.
Besides hard cuts (switching directly between two input signals), mixers can also generate
a variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision
mixers can perform keying operations and generate color signals (called mattes in this
context). Most vision mixers are targeted at the professional market, with newer analog
models having component video connections and digital ones using SDI. They are used in
live and video taped television productions and for linear video editing, even though the
use of vision mixers in video editing has been largely supplanted by computer based non-
linear editing.
A character generator (CG for short) is a device or software that produces static or
animated text (such as crawls and rolls) for keying into a video stream. Modern character
generators are actually computer-based, and can generate graphics as well as text.
Character generators are primarily used in the broadcast areas of live sports or news
presentations, given that the modern character generator can rapidly (i.e., "on the fly")
generate high-resolution, animated graphics for use when an unforeseen situation in the
game or newscast dictates an opportunity for broadcast coverage. For example, when, in
a football game, a previously unknown player begins to have what looks to become an
outstanding day, the character generator operator can rapidly, using the "shell" of a
similarly-designed graphic composed for another player, build a new graphic for the
previously unanticipated performance of the lesser known player. The character generator,
then, is but one of many technologies used in the remarkably diverse and challenging work
of live television, where events on the field or in the newsroom dictate the direction of the
coverage. In such an environment, the quality of the broadcast is only as good as its
weakest link, both in terms of personnel and technology. Hence, character generator
development never ends, and the distinction between hardware and software CGs begins
to blur as new platforms and operating systems evolve to meet the demands of live
television.
Hardware CGs
Hardware CGs are used in television studios and video editing suites. A DTP-like interface
can be used to generate static and moving text or graphics, which the device then encodes
into some high-quality video signal, like digital SDI or analog component video, high
definition or even RGB video. In addition, they also provide a key signal, which the
compositing vision mixer can use as an alpha channel to determine which areas of the CG
video are translucent.
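How a mixer applies the key signal can be sketched as a per-pixel linear mix. This is a simplified model with illustrative normalised values; a real mixer keys each of several video components:

```python
# Sketch of how a compositing vision mixer uses the key signal a CG
# provides: per pixel, the key value acts as an alpha that mixes the CG
# fill over the background video. Values are normalised to 0.0-1.0.

def key_over(background, fill, key):
    """Linear key: key = 1 shows the CG fill, key = 0 the background,
    and intermediate values give translucency (e.g. soft edges)."""
    return fill * key + background * (1.0 - key)

print(key_over(0.2, 0.8, 1.0))  # 0.8  (fully opaque graphic)
print(key_over(0.2, 0.8, 0.0))  # 0.2  (graphic invisible)
print(key_over(0.2, 0.8, 0.5))  # 0.5  (50% translucent)
```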
Software CGs
Software CGs run on standard off-the-shelf hardware and are often integrated into video
editing software such as nonlinear video editing applications. Some stand-alone products
are available, however, for applications that do not even attempt to offer text generation on
their own, as high-end video editing software often does, or whose internal CG effects are
not flexible and powerful enough. Some software CGs can be used in live production with
special software and computer video interface cards. In that case, they are equivalent to
hardware CGs.
The camera control unit (CCU) is installed in the production control room (PCR), and
allows various aspects of the video camera on the studio floor to be controlled remotely.
The most commonly made adjustments are for white balance and aperture, although
almost all technical adjustments are made from controls on the CCU rather than on the
camera. This frees the camera operator to concentrate on composition and focus, and also
allows the technical director of the studio to ensure uniformity between all the cameras.
As well as acting as a remote control, the CCU usually provides the external interfaces for
the camera to other studio equipment, such as the vision mixer and intercom system, and
contains the camera's power supply.
A video tape recorder (VTR), is a tape recorder that can record video material. The video
cassette recorder (VCR), where the videotape is enclosed in a user-friendly videocassette
shell, is the most familiar type of VTR known to consumers. Professionals may use other
types of video tapes and recorders.
• U-matic (3/4")
• Betacam (Sony)
• M-II (Panasonic)
• Betacam SP (Sony)
The videocassette recorder (or VCR, more commonly known in the British Isles as the
video recorder), is a type of video tape recorder that uses removable videotape cassettes
containing magnetic tape to record audio and video from a television broadcast so it can be
played back later. Many VCRs have their own tuner (for direct TV reception) and a
programmable timer (for unattended recording of a certain channel at a particular time).
A patch panel or patch bay is a panel, typically rackmounted, that houses cable
connections. One typically shorter patch cable will plug into the front side, while the back
will hold the connection of a much longer and more permanent cable. The assembly of
hardware is arranged so that a number of circuits, usually of the same or similar type,
appear on jacks for monitoring, interconnecting, and testing circuits in a convenient,
flexible manner.
Patch panels offer the convenience of allowing technicians to quickly change the path of
select signals, without the expense of dedicated switching equipment.
A video monitor is a device similar to a television, used to monitor the output of a video
generating device, such as a video camera, VCR, or DVD player. It may or may not have
audio monitoring capability.
Unlike a television, a video monitor has no tuner and, as such, is unable to independently
tune into an over-the-air broadcast.
One common use of video monitors is in television stations and outside broadcast
vehicles, where broadcast engineers use them for confidence checking of signals
throughout the system.
Video monitors are also used extensively in the security industry with Closed-circuit
television cameras and recording devices.
In professional audio, a mixing console, digital mixing console, mixing desk (Brit.), or
audio mixer, also called a sound board or soundboard, is an electronic device for
combining (also called "mixing"), routing, and changing the level, tone, and/or dynamics
of audio signals. A mixer can mix analog or digital signals, depending on the type of
mixer. The modified signals (voltages or digital samples) are summed to produce the
combined output signals.
Mixing consoles are used in many applications, including recording studios, public address
systems, sound reinforcement systems, broadcasting, television, and film post-production.
An example of a simple application would be to enable the signals that originated from
two separate microphones (each being used by vocalists singing a duet, perhaps) to be
heard through one set of speakers simultaneously. When used for live performances, the
signal produced by the mixer will usually be sent directly to an amplifier, unless that
particular mixer is “powered” or it is being connected to powered speakers.
Further channel controls affect the equalization of the signal by separately attenuating or
boosting a range of frequencies (e.g., bass, midrange, and treble frequencies). Most large
mixing consoles (24 channels and larger) usually have sweep equalization in one or more
bands of their parametric equalizers on each channel, where the frequency and affected
bandwidth of equalization can be selected. Smaller mixing consoles have few or no
equalization controls. Some mixers have a general equalization control (either graphic or
parametric).
Each channel on a mixer has an audio taper pot, or potentiometer, controlled by a sliding
volume control (fader), that allows adjustment of the level, or amplitude, of that channel in
the final mix. A typical mixing console has many rows of these sliding volume controls.
Each control adjusts only its respective channel (or one half of a stereo channel); therefore,
it only affects the level of the signal from one microphone or other audio device. The
signals are summed to create the main mix, or combined on a bus as a submix, a group of
channels that are then added to get the final mix (for instance, many drum mics could be
grouped into a bus, and then the proportion of drums in the final mix can be controlled
with one bus fader).
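The drum-submix example can be sketched as follows; the fader settings, sample values, and the db_to_gain helper are all illustrative, but the dB-to-linear conversion and the bus summing reflect what the text describes:

```python
# Sketch of the drum-submix example above: channel signals are scaled by
# their fader gains (set in dB), summed onto a "drum" bus, and the whole
# group is then controlled by a single bus fader. Sample amplitudes and
# fader positions are illustrative.

def db_to_gain(db):
    # Fader positions are marked in decibels; convert to a linear gain.
    return 10 ** (db / 20)

drum_mics = [0.5, 0.3, 0.4]            # kick, snare, overhead samples
channel_faders_db = [0.0, -6.0, -6.0]  # per-channel fader settings

# Sum the channels into the drum bus, each scaled by its own fader:
drum_bus = sum(s * db_to_gain(db)
               for s, db in zip(drum_mics, channel_faders_db))

# One bus fader now sets the proportion of all drums in the final mix:
bus_fader_db = -3.0
drums_in_mix = drum_bus * db_to_gain(bus_fader_db)
print(round(drums_in_mix, 3))  # 0.602
```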
There may also be insert points for a certain bus, or even the entire mix.
On the right hand of the console, there are typically one or two master controls that enable
adjustment of the console's main mix output level.
Finally, there are usually one or more VU or peak meters to indicate the levels for each
channel, or for the master outputs, and to indicate whether the console levels are
overmodulating or clipping the signal. Most mixers have at least one additional output,
besides the main mix. These are either individual bus outputs, or auxiliary outputs, used,
for instance, to output a different mix to on-stage monitors. The operator can vary the mix
(or levels of each channel) for each output.
As audio is heard in a logarithmic fashion (both amplitude and frequency), mixing console
controls and displays are almost always in decibels, a logarithmic measurement system.
This is also why special audio taper pots or circuits are needed. Since it is a relative
measurement, and not a unit itself (like a percentage), the meters must be referenced to a
nominal level. The "professional" nominal level is considered to be +4 dBu. The
"consumer grade" level is −10 dBV.
The lighting is controlled by varying the effective current flow through the lamps by
means of silicon controlled rectifier (SCR) dimmers. These enable the angle of current
flow to be continuously varied by suitable gate-triggering signals. The lighting patch
panels and SCR dimmer controls for the lights are provided in a separate room. The
lighting is energized and controlled by switches and faders on the dimmer console in the
PCR, from the technical presentation panel. The lighting has to prevent shadows and
produce desired contrast effects. Following are some of the terms used in lighting.
High key: lighting that gives a picture with gradations that fall between gray shades and
white, confining dark gray and black to a few areas, as in news reading, panel
discussions, etc.
Low key: lighting that gives a picture with gradations falling from gray to black, with
few areas of light gray or white.
Key light: the principal source of direct illumination, often with hinged panels or
shutters to control the spread of the light beam.
Fill light: supplementary soft light that fills in detail and reduces the shadow contrast range.
Back light: illumination from behind the subject in the plane of the camera's optical
axis, providing 'accent lighting' to bring out the subject against the background or the
scene.
base may recover fairly quickly, but the field time base may take several seconds to
recover if the phase change is large, resulting in picture roll. This can be avoided if the
field sync components of the two sources are brought approximately in phase by suitable
phase shifting networks at the moment of switching.
In the genlock process, the line and field components of the local SPG are locked in
frequency and phase to the line and field components of a remote incoming video signal
without producing any visible disturbance in the monitor. The line and field sync
components of the incoming composite video signal are separated and are used to lock the
local SPG master oscillator through a timing phase comparator.
In order to bring the local line and field sync components in phase with the remote ones,
the local line or field frequency is changed for some time. Field phasing is achieved
automatically by deviating the field frequency from its normal value by altering the
number of lines per field. The field frequency divider count is changed so that the system
runs at 623 or 627 lines until field coincidence is achieved. When the field phasing is
correct, the normal number of 625 lines is restored. This method can give a fairly rapid
lock, in a matter of a few seconds, and hence is called 'quick genlock'.
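The field-phasing loop described above can be illustrated with a toy model. This is an assumption-laden sketch, not the actual SPG logic: the phase error is represented as a whole, even number of lines, and each deviated field is assumed to change the phase by two lines.

```python
def quick_genlock(offset_lines):
    """Toy model of quick genlock: the field divider runs at 623 or 627
    lines per field until the local and remote field phases coincide,
    then the normal 625-line count is restored.
    offset_lines: local field phase error in lines (assumed even here)."""
    assert offset_lines % 2 == 0
    fields = 0
    while offset_lines != 0:
        if offset_lines > 0:
            offset_lines -= 2   # run at 623 lines/field to pull the phase back
        else:
            offset_lines += 2   # run at 627 lines/field to push the phase forward
        fields += 1
    return fields  # deviated fields needed before 625 lines is restored

# A 100-line phase error is pulled in after 50 fields (about 1 s at 50 fields/s)
print(quick_genlock(100))
```

This illustrates why the method locks within seconds: the correction rate is fixed at two lines per field, so even a half-field phase error is absorbed in a few hundred fields.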
Another method of genlock is to change the line time until coincidence is obtained and
then reset it to the usual 64 μs period. This is a much slower phasing process and
can, therefore, be carried out without disturbing other sources tied to the generator
being phased in. The number of lines in each field remains constant.
In both modes, the timing phase comparator is situated at the mixing point. In the
genlock mode, the error control signal is used locally to adjust the timing of the
appropriate sync component of (the SPG to) the video signal at the mixing point. In
slavelock, the error control signal is fed back in an inverted sense to correct the timing of
the appropriate synchronising component of (the SPG of) the contribution, from the
incoming video signal.
Any system capable of slavelock operation can also be used for genlock operation. But a
genlock system cannot operate for slavelock unless the error signal is suitable for other
generators and the feedback delay is tolerated by the system for maintaining stability.
It is common for professional cameras to split the incoming light into the three primary
colors that humans are able to see, feeding each color into a separate pickup tube (in older
cameras) or charge-coupled device (CCD). Some high-end consumer cameras also do this,
producing a higher-quality image than is normally possible with just a single video pickup.
Often used in independent films, ENG video cameras are similar to consumer camcorders,
and indeed the dividing line between them is somewhat blurry, but a few differences are
generally notable:
• They are bigger, and usually have a shoulder stock for stabilizing on the
cameraman's shoulder
• They use 3 CCDs instead of one (as is common in digital still cameras and
consumer equipment), one for each primary color
Lens Turret- a judicious choice of lens can considerably improve the quality of the image,
the depth of field and the impact intended on the viewer. Accordingly, a number of
different viewing angles are provided. Their focal lengths are finely adjusted by movement
of the front element of the lens located on the lens assembly.
Zoom Lens- a zoom lens has a variable focal length, with a zoom ratio of 10:1 or more. In
this lens the viewing angle and field of view can be varied without loss of focus. A
smooth, gradual change of focal length by the camera operator while televising a scene
appears to viewers as if the camera is approaching or receding from the scene, enabling
dramatic close-up control.
Camera Mounting- a studio camera must be able to move up and down and around its
centre axis to pick up different sections of the scene.
View Finder- to permit the camera operator to frame the scene and maintain proper focus,
an electronic view finder is provided with most TV cameras. It receives video signals from
the control room stabilizing amplifier. The view finder has its own deflection circuitry, as
in any other monitor, to produce the raster. It also has a built-in DC restorer for
maintaining the average brightness of the televised scene.
The gamma-corrected RGB signals are combined in the Y-matrix to form the Y signal. The
U-V matrix combines the R, B and -Y signals to obtain R-Y and B-Y, which are weighted
to obtain the U and V signals. Weighting by the factor 0.877 for R-Y and 0.493 for B-Y
prevents overmodulation on saturated colours. This gives:
Y = 0.30R + 0.59G + 0.11B
U = 0.493(B-Y)
V = 0.877(R-Y)
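The matrixing can be sketched directly; this minimal example uses the standard PAL luminance coefficients and chrominance weighting factors (0.493 for B-Y, 0.877 for R-Y).

```python
def rgb_to_yuv(r, g, b):
    """Convert gamma-corrected r, g, b (each 0..1) to Y, U, V."""
    y = 0.30 * r + 0.59 * g + 0.11 * b   # Y-matrix (luminance)
    u = 0.493 * (b - y)                  # weighted B-Y
    v = 0.877 * (r - y)                  # weighted R-Y
    return y, u, v

# Saturated red: unweighted R-Y would be 0.70; the 0.877 factor
# keeps V within the modulation limits of the transmission system.
print(rgb_to_yuv(1.0, 0.0, 0.0))
```

For a fully saturated primary the chrominance excursion is largest, which is exactly the case the weighting factors are chosen to tame.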
• The first and largest part is the production area where the director, technical
director, assistant director, character generator operator and producers usually sit in
front of a wall of monitors. This area is very similar to a production control room.
The technical director sits in front of the video switcher. The monitors show all the
video feeds from various sources, including computer graphics, cameras, video
tapes, or slow motion replay machines. The wall of monitors also contains a
preview monitor showing what could be the next source on air (it does not have to be,
depending on how the video switcher is set up) and a program monitor that shows
the feed currently going to air or being recorded.
• The second part of a van is for the audio engineer; it has a sound mixer fed with
all the various audio feeds: reporters' commentary, on-pitch microphones, etc.
The audio engineer can control which channels are added to the output and will
follow instructions from the director. The audio engineer normally also has a dirty
feed monitor to help with the synchronization of sound and video.
• The 3rd part of the van is video tape. The tape area has a collection of video tape
machines (VTRs) and may also house additional power supplies or computer
equipment.
• The 4th part is the video control area where the cameras are controlled by 1 or 2
people to make sure that the iris is at the correct exposure and that all the cameras
look the same.
• The 5th part is transmission, where the signal is monitored and engineered for
quality-control purposes and is transmitted or sent to other trucks.
A video switcher is a multi-contact crossbar switch matrix with provision for selecting any
one or more of a large number of inputs and switching them on to outgoing circuits. The
input sources include camera, VTR and telecine machine outputs, besides test-signal and
special-effects generators.
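The selection logic of such a crossbar can be sketched as a small routing table. The input and output names here are purely illustrative, not from any particular installation.

```python
class VideoSwitcher:
    """Toy crossbar: routes any one input to one or more outgoing circuits."""

    def __init__(self, inputs, outputs):
        self.inputs = set(inputs)
        self.routes = {out: None for out in outputs}  # output -> selected source

    def switch(self, source, output):
        if source not in self.inputs:
            raise ValueError(f"unknown source: {source}")
        self.routes[output] = source

sw = VideoSwitcher(
    inputs=["camera-1", "camera-2", "vtr", "telecine", "test-signal"],
    outputs=["program", "preview"],
)
sw.switch("camera-1", "program")   # camera 1 goes to air
sw.switch("vtr", "preview")        # VTR is cued on the preview bus
print(sw.routes)
```

The same source may feed several outputs at once, which is the essential property of a crossbar as opposed to a simple selector.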
Clips are arranged on a timeline, music tracks and titles are added, effects can be created,
and the finished program is "rendered" into a finished video.
Non-linear editing for film and television postproduction is a modern editing method
which involves being able to access any frame in a video clip with the same ease as any
other. This method is similar in concept to the "cut and glue" technique used in film
editing from the beginning. However, when working with film, it is a destructive process,
as the actual film negative must be cut. Non-linear, non-destructive methods began to
appear with the introduction of digital video technology.
Video and audio data are first digitized to hard disks or other digital storage devices. The
data is either recorded directly to the storage device or is imported from another source.
Once imported they can be edited on a computer using any of a wide range of software.
With the availability of commodity video processing hardware, specialist video editing
cards, and computers designed specifically for non-linear video editing, many software
packages are now available to work with them.
Linear Editing
It is done using a VCR, with a monitor to view the output of the edit.
5.24 Graphics
The paint-box is a professional tool for graphics designers. Using an electronic cursor or
pen and an electronic tablet, any type of design can be created with the paint-box.
An artist can capture any live video frame, retouch it, and subsequently process, cut or
paste it onto another picture and prepare a stencil from the grabbed picture.
The system consists of: mainframe electronics, a graphics tablet, a keyboard, a floppy
disk drive, and a 385 MB Winchester disk drive.
ENG basically comes under outside broadcasting; it may be live or recorded.
There are two types of professional video cameras: high-end portable recording cameras
(essentially, high-end camcorders) used for ENG, and studio cameras, which lack the
recording capability of a camcorder and are often fixed on studio pedestals. Portable
professional cameras are generally much larger than consumer cameras and are designed
to be carried on the shoulder.
a desired color effect. (The desired effect may be a neutral "daylight" color, or any other
effect, e.g. a slight warm-up effect for portraits.)
Each point in the image can be described with three values. These could be chosen to be
the percentage intensity of the colors red, green, and blue, relative to their maximum
values for the particular film. This is completely analogous to the RGB color space in
computer graphics.
Three values describe the image at any given point, but only two values are required to
describe the color balance. Think of it this way: the overall intensity doesn't matter; if it is
dark blue or light blue it is still blue. If you mix 25% of each of red, green, and blue you
get a neutral gray color. If you mix 50% intensity you still get neutral gray, albeit a slightly
lighter gray.
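The claim that balance is independent of intensity can be checked numerically. This is a small sketch taking as the two balance values the blue-to-red ratio and green relative to the overall intensity; intensities are fractions of full scale.

```python
def color_balance(r, g, b):
    """Return two balance values: blue/red ratio, and green relative
    to the overall (mean) intensity."""
    intensity = (r + g + b) / 3
    return b / r, g / intensity

# 25% gray and 50% gray have identical balance; only intensity differs.
print(color_balance(0.25, 0.25, 0.25))
print(color_balance(0.50, 0.50, 0.50))
```

Both calls return the same pair of values, illustrating why two numbers suffice to describe balance even though three describe the color itself.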
In the table below cells in the same row have the same color balance, only the intensity
changes. All the colors in the first row are red and red only with no trace of blue or green.
We are of course free to choose any two (different) values to measure the color balance.
In photography it is traditional to choose as the two variables the ratio of blue to red and
the ratio of green to the overall intensity. These correspond to the traditional light-
balancing filters (80, 81, 82, and 85 series filters), and green and magenta filters (CC-G
and CC-M).
Color temperature is a term that is borrowed from physics. In physics we learn that a so
called "black body" will radiate light when it is heated. The spectrum of this light, and
therefore its color, depends on the temperature of the body. You probably know this effect
from everyday life: if you heat an iron bar, say, it will eventually start to glow dark red
("red hot"). Continue to heat it and it turns yellow (like the filament in a light-bulb) and
eventually blue-white. The color moves from red towards blue. But we say that red is a
"warmer" color than blue! So a warm body radiates a cold color and a (comparatively)
cold body radiates warm colors.
The photographic color temperature is not the same as the color temperature defined in
physics or colorimetry. As mentioned above, the photographic color temperature is
measured only on the relative intensity of blue to red. However, we borrow the basic
measurement scale from physics and we will measure the photographic color temperature
in degrees Kelvin (K).
This means that you will find photographers talking about "daylight balanced" film
(nominally 5500K) and type A and B tungsten balanced films (3400K and 3200K). This
gives the color of the light: below we will define a measure of how much a filter moves
the color temperature (the mired shift).
Light balancing filters are used to change the color temperature of light. If you place a
light balancing filter in front of your lens, the overall temperature of the scene will be
changed. These filters are sometimes called conversion filters because they may be used to
"convert" daylight-balanced film for use in tungsten light, or tungsten films for use in
daylight.
The mired shift of a filter is given by
mired shift = 10^6/T2 - 10^6/T1
where T1 is the color temperature you have and T2 is the color temperature you desire (for
example the color temperature of your film). The mired shift is sometimes called the light
balance (LB) index of the filter.
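The mired-shift relation can be evaluated directly. This sketch uses the nominal film temperatures quoted earlier (daylight 5500 K, type B tungsten 3200 K).

```python
def mired_shift(t1_k, t2_k):
    """Mired shift needed to convert light at temperature t1_k (kelvin)
    to the desired temperature t2_k (kelvin)."""
    return 1e6 / t2_k - 1e6 / t1_k

# Converting daylight (5500 K) for use with type B tungsten film (3200 K):
# a positive shift means a warming (amber) filter is required.
print(round(mired_shift(5500, 3200)))
```

A handy property of mireds is that a given filter produces the same mired shift regardless of the starting temperature, which is not true of a shift expressed in kelvins.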
Satellite Communications
Television could not exist in its contemporary form without satellites. Since 10 July 1962,
when NASA technicians in Maine transmitted fuzzy images of themselves to engineers at
a receiving station in England using the Telstar satellite, orbiting communications
satellites have been routinely used to deliver television news and programming between
companies and to broadcasters and cable operators. And since the mid-1980s they have
been increasingly used to broadcast programming directly to viewers, to distribute
advertising, and to provide live news coverage
Equating the centripetal and gravitational forces on a satellite of mass m:
mv²/r = GMm/r²
v = √(GM/r)
T = orbital period of the satellite = 2πr/v = 24 hrs = 86400 seconds
Putting M = 5.974 × 10^24 kg and G = 6.672 × 10^-11 N·m²/kg² gives the orbital radius of
a synchronous satellite as 42164 km. Deducting the radius of the earth, equal to 6378 km,
the distance from the earth's surface will be 35786 km.
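The derivation can be checked numerically. This sketch uses the sidereal day of 86164 seconds, which the round figure of 24 hours approximates.

```python
import math

G = 6.672e-11    # gravitational constant, N m^2 / kg^2
M = 5.974e24     # mass of the earth, kg
T = 86164        # sidereal day, s (approximated as 24 h in the text)
R_EARTH = 6378   # equatorial radius of the earth, km

# From v = sqrt(GM/r) and T = 2*pi*r/v:  r = (G*M*T^2 / (4*pi^2))^(1/3)
r_km = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3) / 1000

print(f"orbital radius     = {r_km:.0f} km")            # about 42164 km
print(f"height above earth = {r_km - R_EARTH:.0f} km")  # about 35786 km
```

Using the full 86400 s solar day instead shifts the radius by under 100 km, which is why the 24-hour round figure is adequate for this report's purposes.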
7.1.2 Footprints
As the satellite radio beam is aimed towards the earth, it illuminates an oval service area
on the earth, called the 'footprint'. Because of the slant illumination of the earth by an
equatorial satellite, this is actually an egg-shaped area with the sharper side pointing
towards the pole. The size of the footprint depends on how greatly the beam spreads over
the surface of the earth intercepted by it. The footprints for contours of 3 dB or half-power
beam width are usually considered. The beam-width planning depends on the angle of
incidence of the beam on the earth, or the angle of elevation of the satellite. It can be
directly controlled by the size of the on-board parabolic antenna. Present-day launchers
can carry antennas of around 3 m, giving a minimum beam width of about 0.6°. With
difficulties in accurate station-keeping, it is prudent to allow a margin of around 0.1° when
planning the footprint to cover a country. Some satellites employ additional antennas to
emit spot beams that cover regions beyond the normal oval shape. The slant range of a
satellite involves calculation of the distances from the boresight point of the beam,
covered by the semi-beam-width angle, considering the geometry of the footprint.
The free space loss depends on the path length d which is related to the angle of
elevation. The radio waves undergo attenuation loss due to scattering and absorption in
the lower layers of atmosphere and by rain, clouds etc. The atmospheric loss depends
on the length of path through atmosphere, and naturally increases with lower angles of
elevation. The loss also varies with atmospheric fluctuations over time. Hence the
maximum attenuation loss values encountered for 99% or 99.9% of the time during which
satellite broadcasts are received are considered, depending on the degree of reliability
sought. For 99% reliability, the attenuation in the 12 GHz band is found to increase, e.g.,
from 1.5 dB at a 45° angle of elevation to 6.8 dB at 5°; for 99.9% reliability it increases
to 4.8 dB and 14 dB respectively.
At the satellite transponder, a power amplifier feeds power Pt to the transmitting antenna
of maximum directive gain Gt. The maximum radiated power (EIRP) from the antenna is
EIRP = Pt · Gt (or, in decibels, EIRP = Pt + Gt)
As this power propagates towards the earth it spreads into space and encounters the so-
called free-space loss. The spreading factor is 4πd². The power flux density along the
direction of maximum radiation is
PFD = Pt · Gt / (4π d²)
When a parabolic dish receiving antenna is positioned to collect maximum power from the
radiated field, the total power intercepted and received is given by
Pr = PFD · Aeff
where Aeff = η · A is the effective dish area or aperture, the efficiency coefficient η
accounting for the dish coupling loss.
The power gain of an antenna (in the direction of maximum directivity) with respect to an
isotropic antenna is given by the basic relation:
G = 4π · Aeff / λ²
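These relations combine into a short downlink sketch. The transponder power, antenna gain, slant range and dish size used here are illustrative assumptions, not figures from the text.

```python
import math

def received_power(pt_w, gt, d_m, dish_diameter_m, efficiency=0.6):
    """Pr = PFD * Aeff, with PFD = Pt*Gt / (4*pi*d^2) and Aeff = eta*A."""
    pfd = pt_w * gt / (4 * math.pi * d_m**2)                 # W/m^2
    a_eff = efficiency * math.pi * (dish_diameter_m / 2)**2  # m^2
    return pfd * a_eff                                       # W

# Assumed example: 100 W transponder, 40 dB antenna gain (Gt = 10^4),
# 36 000 km slant range, 0.6 m receive dish with 60% efficiency.
pr = received_power(100, 1e4, 36e6, 0.6)
print(f"received power = {10 * math.log10(pr / 1e-3):.1f} dBm")
```

The tiny received power (tens of picowatts) is why low-noise front ends and large effective apertures matter so much in satellite reception.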
The radiation pattern from a parabolic dish can be calculated from the equation:
obtained for the argument from Bessel function tables or graphs. The values of the
argument for which the Bessel function J1 becomes zero will be found to be 3.83, 7.02,
10.17 and 13.32. The angle of the radiation pattern where the first null occurs is given by
sin φ0 = 1.22 (λ/D)
The main lobe of a circular dish lies within the angle between the first nulls on either
side, i.e. twice the angle φ0.
The 3 dB beamwidth of the main lobe is given by the half-lobe angle
φ3dB = 58 (λ/D) degrees
It may be observed that the antenna gain is inversely proportional to square of the beam
width. That is, a decrease of the beam width by a factor of 2 obtained by doubling the
diameter of the dish increases the antenna gain by a factor of 4 (6 dB).
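This gain/beamwidth relationship can be verified with the formulas above; the sketch assumes an ideal, uniformly illuminated circular aperture (η = 1) and an illustrative 12 GHz wavelength of 2.5 cm.

```python
import math

def dish_gain(diameter_m, wavelength_m, efficiency=1.0):
    """G = 4*pi*Aeff / lambda^2 for a circular aperture."""
    a_eff = efficiency * math.pi * (diameter_m / 2)**2
    return 4 * math.pi * a_eff / wavelength_m**2

def beamwidth_deg(diameter_m, wavelength_m):
    """3 dB beamwidth of the main lobe: 58 * (lambda / D) degrees."""
    return 58 * wavelength_m / diameter_m

lam = 0.025  # 12 GHz, metres
g1, g2 = dish_gain(1.0, lam), dish_gain(2.0, lam)
print(f"doubling D: gain x{g2 / g1:.0f} ({10 * math.log10(g2 / g1):.0f} dB), "
      f"beamwidth {beamwidth_deg(1.0, lam):.2f} -> {beamwidth_deg(2.0, lam):.2f} deg")
```

Doubling the diameter quadruples the aperture area (and hence the gain, +6 dB) while halving the beamwidth, exactly the inverse-square relation stated in the text.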
The majority of earth stations are used to communicate with communications satellites,
and are called satellite earth stations or teleports, but others are used to communicate
with space probes, and manned spacecraft. Where the communications link is used mainly
to carry telemetry or must follow a satellite not in geostationary orbit, the earth station is
often referred to as a tracking station.
Many earth station receivers use the double superhet configuration, which has
two stages of frequency conversion. The front end of the receiver is mounted behind the
antenna feed and converts the incoming RF signals to a first IF in the range 900 to 1400
MHz. This allows the receiver to accept all the signals transmitted from a satellite in a
500-MHz bandwidth at C band or Ku band, for example. The RF amplifier has a high gain
and the mixer is followed by a stage of IF amplification. This section of the receiver is
called a low noise block converter (LNB). The 900-1400 MHz signal is sent over a coaxial
cable to a set-top receiver that contains another down-converter and a tunable local
oscillator. The local oscillator is tuned to convert the incoming signal from a selected
transponder to a second IF frequency. The second IF amplifier has a bandwidth matched to
the spectrum of the transponder signal. Direct broadcast satellite TV receivers at Ku band
use this approach, with a second IF filter bandwidth of 20 MHz.
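The two-stage frequency plan can be sketched numerically. The Ku-band carrier and local-oscillator frequencies below are illustrative assumptions chosen to land inside the 900-1400 MHz first-IF range from the text.

```python
def double_superhet(rf_ghz, lnb_lo_ghz, second_lo_mhz):
    """Return the first and second IFs (MHz) of a double-superhet receiver:
    the LNB block-converts the RF band, then a tunable set-top stage
    selects one transponder."""
    first_if_mhz = (rf_ghz - lnb_lo_ghz) * 1000   # LNB down-conversion
    second_if_mhz = first_if_mhz - second_lo_mhz  # tunable second conversion
    return first_if_mhz, second_if_mhz

# Assumed example: an 11.7 GHz Ku-band carrier with a 10.6 GHz LNB LO
# lands at 1100 MHz, inside the 900-1400 MHz first IF; a 620 MHz second
# LO then converts the selected transponder to a 480 MHz second IF.
print(double_superhet(11.7, 10.6, 620))
```

Because the LNB converts the whole 500 MHz satellite band at once, only the cheap second conversion in the set-top unit needs to be tunable.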
To ensure that the uplink and downlink signals do not interfere with each other,
separate frequencies are used for uplinking and downlinking.
7.4 Transponders
The word transponder is coined from transmitter-responder and it refers to the
equipment channel through the satellite that connects the receive antenna with
the transmit antenna. The transponder itself is not a single unit of equipment,
but consists of some units that are common to all transponder channels and
others that can be identified with a particular channel.