Digital telecommunications
Modern digital telephone systems have less trouble in the voice frequency range, as only the local
line to the subscriber now remains in analog format, but DSL circuits operating in the MHz range on
those same lines do suffer from channel distortion and rely on equalization.
When there is no intersymbol interference (from a multipath channel, from imperfect pulse shaping,
or from imperfect timing), the impulse response of the system from the source to the recovered
message has a single nonzero term. The amplitude of this single spike depends on the transmission
losses, and the delay is determined by the transmission time. When there is intersymbol interference
caused by a multipath channel, this single spike is scattered, duplicated once for each path in the
channel. The number of nonzero terms in the impulse response increases. The channel can be
modeled as a finite-impulse-response, linear filter C, and the delay spread is the total time interval
during which reflections with significant energy arrive. The idea of the equalizer is to build (another)
filter in the receiver that counteracts the effect of the channel. In essence, the equalizer must
unscatter the impulse response. This can be stated as the goal of designing the equalizer E so that
the impulse response of the combined channel and equalizer CE has a single spike. This can be cast
as an optimization problem, and can be solved using techniques familiar from Chapters [link], [link],
and [link].
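The single-spike goal can be illustrated with a toy numeric example. The two-tap channel below and its truncated-series inverse are hypothetical values chosen only for illustration; convolving them shows the combined channel-equalizer response collapsing to (nearly) a single spike.

```python
# Sketch: a truncated-inverse equalizer for a toy 2-tap channel.
# The channel C and equalizer E here are hypothetical, not from the text.

def conv(a, b):
    """Full linear convolution of two tap lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

channel = [1.0, 0.5]                      # C(z) = 1 + 0.5 z^-1
# Truncated series expansion of 1 / C(z): 1 - 0.5 z^-1 + 0.25 z^-2 - ...
equalizer = [(-0.5) ** k for k in range(5)]

combined = conv(channel, equalizer)
print(combined)   # [1.0, 0.0, 0.0, 0.0, 0.0, 0.03125] -- nearly a single spike
```

The residual 0.03125 tap shrinks geometrically as more equalizer taps are kept, which is why a finite FIR equalizer can only approximate the exact delayed inverse.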
The baseband linear (digital) equalizer is intended to (automatically) cancel unwanted effects of the
channel and to cancel certain kinds of additive interferences.
The signal path of a baseband digital communication system is shown in [link], which emphasizes
the role of the equalizer in trying to counteract the effects of the multipath channel and the additive
interference. As in previous chapters, all of the inner parts of the system are assumed to operate
precisely: the up conversion and down conversion, the timing recovery, and the carrier
synchronization (all those parts of the receiver that are not shown) are assumed to be flawless and
unchanging. Modelling the channel as a time-invariant FIR filter, the next section focuses on the task
of selecting the coefficients in the block labelled linear digital equalizer, with the goal of removing
the intersymbol interference and attenuating the additive interferences. These coefficients are to be
chosen based on the sampled received signal sequence and (possibly) knowledge of a prearranged
training sequence. While the channel may actually be time varying, the variations are often much
slower than the data rate, and the channel can be viewed as (effectively) time invariant over small
time scales.
This chapter suggests several different ways that the coefficients of the equalizer can be chosen. The
first procedure, in "A Matrix Description", minimizes the square of the symbol recovery error over a
block of data, which can be done using a matrix pseudo-inversion. Minimizing the (square of the)
error between the received data values and the transmitted values can also be achieved using an
adaptive element, as detailed in "An Adaptive Approach to Trained Equalization". When there is no
training sequence, other performance functions are appropriate, and these lead to equalizers such as
the decision-directed approach in "Decision-Directed Linear Equalization" and the dispersion
minimization method in "Dispersion-Minimizing Linear Equalization". The adaptive methods
considered here are only modestly complex to implement, and they can potentially track time
variations in the channel model, assuming the changes are sufficiently slow.
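The trained adaptive approach of "An Adaptive Approach to Trained Equalization" can be sketched in a few lines. The two-tap channel, step size, equalizer length, and delay below are all assumed, illustrative values: an FIR equalizer is adapted by an LMS-style update against a known training sequence of +/-1 symbols.

```python
import random
random.seed(0)

# Hypothetical two-tap channel introducing ISI; training symbols are +/-1.
channel = [1.0, 0.6]
n_taps, mu, delta = 8, 0.01, 1     # assumed equalizer length, step size, delay

src = [random.choice([-1.0, 1.0]) for _ in range(3000)]
# Received signal: channel convolution (noise omitted for clarity)
rx = [sum(channel[j] * src[k - j] for j in range(len(channel)) if k - j >= 0)
      for k in range(len(src))]

w = [0.0] * n_taps                 # equalizer taps, adapted as data arrives
errs = []
for k in range(n_taps, len(rx)):
    x = rx[k - n_taps + 1 : k + 1][::-1]       # most recent sample first
    y = sum(wi * xi for wi, xi in zip(w, x))   # equalizer output
    e = src[k - delta] - y                     # error against training symbol
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    errs.append(abs(e))

avg_err = sum(errs[-200:]) / 200
print(round(avg_err, 3))           # small once the taps have converged
```

After a few hundred symbols the taps settle near a delayed inverse of the channel, and the residual error is dominated by the truncation of that inverse to eight taps.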
Multipath Interference
The villains of this chapter are multipath and other additive interferers. Both should be
familiar from earlier chapters.
y(t) = a_1 u(t - δ_1) + a_2 u(t - δ_2) + ... + a_N u(t - δ_N) + η(t),
where η(t) represents additive interferences. This model of the transmission channel has
the form of a finite impulse response filter, and the total length of time δ_N - δ_1 over which
the impulse response is nonzero is called the delay spread of the physical medium.
y(kTs) = a_1 u(kTs) + a_2 u((k-1)Ts) + ... + a_n u((k-n)Ts) + η(kTs).
In order for the model to closely represent the system, the total time over which the
impulse response is nonzero (the time nTs) must be at least as large as the maximum
delay δ_N. Since the delay is not a function of the symbol period Ts, a smaller Ts requires
more terms in the filter (i.e., a larger n).
For example, consider a sampling interval of Ts = 40 nanoseconds (i.e., a transmission rate
of 25 MHz). A delay spread of approximately 4 microseconds would correspond to one
hundred taps in the model. Thus, at any time instant, the received signal would be a
combination of (up to) one hundred data values. If Ts were increased to 0.4 microsecond
(i.e., 2.5 MHz), only 10 terms would be needed, and there would be interference with
only the 10 nearest data values. If Ts were larger than 4 microseconds (i.e., 0.25 MHz),
only one term would be needed in the discrete-time impulse response. In this case,
adjacent sampled symbols would not interfere. Such finite duration impulse response
models can also be used to represent the frequency-selective dynamics that occur in the
wired local end-loop in telephony, and other (approximately) linear, finite-delay-spread
channels.
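The tap-count arithmetic above can be checked directly; the numbers below are the ones from the text (a 4-microsecond delay spread at 25 MHz, 2.5 MHz, and 0.25 MHz sampling rates).

```python
# Taps needed to cover a given delay spread at a given sampling interval.
# round() is used because the quotients here are nominally exact.
def taps_needed(delay_spread_s, Ts_s):
    return round(delay_spread_s / Ts_s)

print(taps_needed(4e-6, 40e-9))    # 100 taps at 25 MHz
print(taps_needed(4e-6, 0.4e-6))   # 10 taps at 2.5 MHz
print(taps_needed(4e-6, 4e-6))     # 1 tap at 0.25 MHz
```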
The design objective of the equalizer is to undo the effects of the channel and to remove
the interference. Conceptually, the equalizer attempts to build a system that is a delayed
inverse of the channel, removing the intersymbol interference while simultaneously rejecting additive
interferers uncorrelated with the source. If the interference η(kTs) is unstructured (for
instance white noise), then there is little that a linear equalizer can do to remove it. But
structured interference, such as a narrowband signal from another transmitter, can often be
attenuated by a suitably designed linear filter.
As shown earlier, the solution for the optimal sampling times found by the clock
recovery algorithms depends on the ISI in the channel. Consequently, the digital model
formed by sampling an analog transmission path depends on when the
samples are taken within each period Ts. To see how this can happen in a simple case,
consider a two-path transmission channel
δ(t) + 0.6 δ(t - ε),
where ε is some fraction of Ts. For each transmitted symbol, the received signal will
contain two copies of the pulse shape p(t), the first undelayed and the second delayed
by ε and attenuated by a factor of 0.6. Thus, the receiver sees
c(t) = p(t) + 0.6 p(t - ε).
This is shown for ε = 0.7 Ts. The clock recovery algorithms cannot separate the
individual copies of the pulse shapes. Rather, they react to the complete received shape,
which is their sum. The power maximization will locate the sampling times at the peak of
this curve, and the lattice of sampling times will be different from what would be
expected without ISI. The effective (digital) channel model is thus a sampled version
of c(t). This is depicted by the small circles that occur at Ts-spaced intervals.
The optimum sampling times (as found by the energy maximization algorithm) differ when there is
ISI in the transmission path, and change the effective digital model of the channel.
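This shift can be reproduced numerically. The pulse below is a hypothetical Hann-shaped pulse standing in for p(t); with ε = 0.7 Ts and a 0.6 path gain as in the example, the peak of c(t) moves away from t = 0, and the Ts-spaced samples around the peak form a multi-tap effective digital channel.

```python
import math

Ts, eps = 1.0, 0.7          # Ts normalized to 1; epsilon = 0.7 Ts as in the text

def p(t):                   # hypothetical Hann-shaped pulse standing in for p(t)
    return 0.5 * (1 + math.cos(math.pi * t / Ts)) if abs(t) <= Ts else 0.0

def c(t):                   # received shape: direct path + delayed, attenuated copy
    return p(t) + 0.6 * p(t - eps)

# Power maximization locks onto the peak of c(t), not of p(t)
grid = [i / 1000 for i in range(-1000, 2000)]
t_peak = max(grid, key=c)           # shifted away from 0 by the second path

# Effective digital channel: Ts-spaced samples of c(t) around the peak
taps = [round(c(t_peak + k * Ts), 3) for k in range(-1, 3)]
print(t_peak, taps)                 # peak near 0.2 Ts; several nonzero taps
```

Without the second path the peak would sit at t = 0 and only one tap would be nonzero; with it, the sampled effective channel carries ISI that the equalizer must undo.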
LMS
Note that the receiver does not have access to the transmitted signal when it is not in
training mode. If the probability that the equalizer makes a mistake is sufficiently small, the
symbol decisions made by the equalizer may be substituted.
RLS
A well-known example is the decision feedback equalizer, a filter that uses feedback of
detected symbols in addition to conventional equalization of future symbols.[5] Some systems use
predefined training sequences to provide reference points for the adaptation process.
ORIGINAL BASEBAND MESSAGE:
Baseband refers to analog or digital data before being intermixed with other data. ... For example,
the output of an analog microphone is baseband. When an FM station's carrier frequency is stripped
away in the radio (demodulated), the original audio signal that you hear is the baseband signal.
MODULATOR:
A radio frequency modulator (or RF modulator) takes a baseband input signal and then outputs a
radio frequency modulated signal. This is often a preliminary step in signal transmission, either by
antenna or to another device such as a television.
A demodulator is a circuit that is used in amplitude modulation and frequency modulation receivers
in order to separate the information that was modulated onto the carrier from the carrier itself. A
demodulator is the counterpart of the modulator: a modulator puts the information onto a carrier
wave at the transmitter end, and a demodulator pulls it off the carrier so it can be processed and
used at the receiver end.
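The modulator/demodulator pair can be sketched in a few lines. This is a hypothetical double-sideband example with a synchronized carrier copy at the receiver and a simple moving-average low-pass filter; the frequencies and window length are illustrative choices, not values from the text.

```python
import math

fc, fs, N = 50.0, 1000.0, 1000     # assumed carrier rate, sample rate, length
t = [n / fs for n in range(N)]
msg = [math.cos(2 * math.pi * 2.0 * ti) for ti in t]     # 2 Hz baseband tone

# Modulator: put the message onto the carrier
tx = [m * math.cos(2 * math.pi * fc * ti) for m, ti in zip(msg, t)]

# Demodulator: multiply by a synchronized carrier copy, then low-pass filter
mixed = [2 * x * math.cos(2 * math.pi * fc * ti) for x, ti in zip(tx, t)]
win = 40                            # moving average removes the 2*fc component
rec = [sum(mixed[i:i + win]) / win for i in range(N - win)]

# Compare with the message, compensating the filter's group delay
err = max(abs(rec[i] - msg[i + win // 2]) for i in range(len(rec)))
print(err < 0.1)                    # True: baseband recovered closely
```

Multiplying by the carrier shifts the signal back to baseband plus a component at twice the carrier frequency, which the low-pass filter removes.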
Power supply: the energy source used to power the device and create the energy for
broadcasting.
Electronic oscillator: generates the carrier wave on which the data is imposed and
carried through the air.
Modulator: adds the actual data to the carrier wave by varying some aspect of the carrier
wave.
RADIO CHANNEL:
An assigned band of frequencies sufficient for radio communication. The bandwidth of a radio
channel depends upon the type of transmission and the frequency tolerance. A channel is
usually assigned for a specified radio service to be provided by a specified transmitter.
Megahertz means "millions of cycles per second," so "91.5 megahertz" means that the transmitter at
the radio station is oscillating at a frequency of 91,500,000 cycles per second. ... All FM radio
stations transmit in a band of frequencies between 88 megahertz and 108 megahertz.
RF RECEIVER FRONT END:
In a radio receiver circuit, the RF front end is a generic term for all the circuitry between
the antenna up to and including the mixer stage.[1] It consists of all the components in the receiver
that process the signal at the original incoming radio frequency (RF), before it is converted to a
lower intermediate frequency (IF). In microwave and satellite receivers it is often called the low-
noise block (LNB) or low-noise downconverter (LND) and is often located at the antenna, so that the
signal from the antenna can be transferred to the rest of the receiver at the more easily handled
intermediate frequency.
For most superheterodyne architectures, the RF front end consists of:[2]
A band-pass filter (BPF) to reduce image response. This removes any signals at the image
frequency, which would otherwise interfere with the desired signal. It also prevents strong out-
of-band signals from saturating the input stages.
An RF amplifier, often called the low-noise amplifier (LNA). Its primary responsibility is to
increase the sensitivity of the receiver by amplifying weak signals without contaminating them
with noise, so that they can stay above the noise level in succeeding stages. It must have a very
low noise figure (NF). The RF amplifier may not be needed and is often omitted (or switched off)
for frequencies below 30 MHz, where the signal-to-noise ratio is defined by atmospheric and
man-made noise.
A local oscillator (LO) which generates a radio frequency signal at an offset from the incoming
signal, which is mixed with the incoming signal.
The mixer, which mixes the incoming signal with the signal from the local oscillator to convert
the signal to the intermediate frequency (IF).
In digital receivers, particularly those in wireless devices such as cell phones and Wi-Fi receivers, the
intermediate frequency is digitized (sampled and converted to a binary digital form), and the rest of the
processing (IF filtering and demodulation) is done by digital filters (digital signal processing, DSP),
as these are smaller, use less power, and can have more selectivity.[3] In this type of receiver the RF
front end is defined as everything from the antenna to the analog to digital converter (ADC) which
digitizes the signal.[3] The general trend is to do as much of the signal processing in digital form as
possible, and some receivers digitize the RF signal directly, without down-conversion to an IF, so
here the front end is merely an RF filter.
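The mixing step can be illustrated with scaled-down, assumed frequencies: multiplying the incoming RF signal by the local oscillator produces components at the difference frequency (the IF) and at the sum frequency; the IF filter would keep only the former. The single-bin correlation below is just a way to measure power at one frequency.

```python
import math

fs = 10000.0                          # assumed sample rate (Hz)
f_rf, f_lo = 2000.0, 1800.0           # assumed RF and LO frequencies (Hz)

N = 1000
rf = [math.cos(2 * math.pi * f_rf * n / fs) for n in range(N)]
lo = [math.cos(2 * math.pi * f_lo * n / fs) for n in range(N)]
mixed = [a * b for a, b in zip(rf, lo)]   # products at f_rf-f_lo and f_rf+f_lo

def power_at(x, f):
    """Correlate x against a complex tone at f (a single DFT bin)."""
    re = sum(xi * math.cos(2 * math.pi * f * n / fs) for n, xi in enumerate(x))
    im = sum(xi * math.sin(2 * math.pi * f * n / fs) for n, xi in enumerate(x))
    return (re * re + im * im) / len(x) ** 2

# Strong terms at the IF (200 Hz) and at the sum (3800 Hz); nothing at f_rf
print(power_at(mixed, f_rf - f_lo), power_at(mixed, f_rf + f_lo),
      power_at(mixed, f_rf))
```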
Time diversity: Multiple versions of the same signal are transmitted at different time instants.
Alternatively, a redundant forward error correction code is added and the message is spread in
time by means of bit-interleaving before it is transmitted. Thus, error bursts are avoided, which
simplifies the error correction.
Frequency diversity: The signal is transmitted using several frequency channels or spread over
a wide spectrum that is affected by frequency-selective fading. Mid-to-late 20th-
century microwave radio relay links often used several regular wideband radio channels, and one
protection channel for automatic use by any faded channel. Later examples include:
OFDM modulation in combination with subcarrier interleaving and forward error correction
Spread spectrum, for example frequency hopping or DS-CDMA.
Space diversity: The signal is transmitted over several different propagation paths. In the case of
wired transmission, this can be achieved by transmitting via multiple wires. In the case of
wireless transmission, it can be achieved by antenna diversity using multiple transmitter antennas
(transmit diversity) and/or multiple receiving antennas (reception diversity). In the latter case,
a diversity combining technique is applied before further signal processing takes place. If the
antennas are far apart, for example at different cellular base station sites or WLAN access points,
this is called macro diversity or site diversity. If the antennas are separated by on the order of
one wavelength, this is called micro diversity. A special case is phased antenna arrays, which
can also be used for beamforming, MIMO channels, and space-time coding (STC).
Polarization diversity: Multiple versions of a signal are transmitted and received via antennas
with different polarization. A diversity combining technique is applied on the receiver side.
Multiuser diversity: Multiuser diversity is obtained by opportunistic user scheduling at either
the transmitter or the receiver. Opportunistic user scheduling is as follows: at any given time, the
transmitter selects the best user among candidate receivers according to the qualities of each
channel between the transmitter and each receiver. A receiver must feed back the channel quality
information to the transmitter using limited levels of resolution, in order for the transmitter to
implement multiuser diversity.
Cooperative diversity: Achieves antenna diversity gain by using the cooperation of distributed
antennas belonging to each node.
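The bit-interleaving mentioned under time diversity can be sketched with a simple block interleaver: bits are written into a matrix row-wise and read out column-wise, so a burst of channel errors is spread across the block. The 4x6 block size and 4-bit burst below are illustrative assumptions.

```python
# Block interleaver: write bits row-wise into a matrix, read column-wise.
def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

bits = list(range(24))               # stand-in payload, one 4x6 block
tx = interleave(bits, 4, 6)
# Burst of 4 consecutive channel errors (positions 8..11 of the sent stream)
rx = [(-1 if 8 <= i < 12 else b) for i, b in enumerate(tx)]
out = deinterleave(rx, 4, 6)
err_pos = [i for i, b in enumerate(out) if b == -1]
print(err_pos)                       # [2, 8, 14, 20]: errors spread apart
```

After deinterleaving, the burst has become isolated single errors spaced one row apart, which a forward error correction code can fix far more easily.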
Rake receiver
If, in a mobile radio channel, reflected waves arrive with small relative time delays, self-interference
occurs. Direct Sequence (DS) Spread Spectrum is often claimed to have particular properties that
make it less vulnerable to multipath reception. In particular, the rake receiver architecture allows an
optimal combining of energy received over paths with different delays. It avoids wave cancellation
(fades) if delayed paths arrive with phase differences, and appropriately weights signals coming in
with different signal-to-noise ratios.
The rake receiver consists of multiple correlators, in which the receive signal is multiplied by time-
shifted versions of a locally generated code sequence. The intention is to separate signals such that
each finger only sees signals coming in over a single (resolvable) path. The spreading code is chosen
to have a very small autocorrelation value for any nonzero time offset. This avoids crosstalk between
fingers. In practice, the situation is less ideal. It is not the full periodic autocorrelation that
determines the crosstalk between signals in different fingers, but rather two partial correlations, with
contributions from two consecutive bits or symbols. It has been attempted to find sequences that
have satisfactory partial correlation values, but the crosstalk due to partial (non-periodic) correlations
remains substantially more difficult to reduce than the effects of periodic correlations.
The rake receiver is designed to optimally detect a DS-CDMA signal transmitted over
a dispersive multipath channel. It is an extension of the concept of the matched filter.
In the matched filter receiver, the signal is correlated with a locally generated copy of the signal
waveform. If, however, the signal is distorted by the channel, the receiver should correlate the
incoming signal with a copy of the expected received signal, rather than with a copy of the
transmitted signal.
In a multipath channel, delayed reflections interfere with the direct signal. However, a DS-CDMA
signal suffering from multipath dispersion can be detected by a rake receiver. This receiver optimally
combines signals received over multiple paths.
Like a garden rake, the rake receiver gathers the energy received over the various delayed
propagation paths. According to the maximum ratio combining principle, the SNR at the output is the
sum of the SNRs in the individual branches, provided that the branch signals are weighted in
proportion to their signal-to-noise ratios. Signals arriving with the same excess propagation delay as
the time offset in a receiver finger are retrieved accurately, because the spreading code has a very
small autocorrelation for any nonzero offset.
Rake Performance
A spread spectrum receiver with rake outperforms a simple receiver with a single correlator.
Mathematical definition
A rake receiver utilizes multiple correlators to separately detect the M strongest multipath
components. Each correlator may be quantized using 1, 2, 3 or 4 bits.
The outputs of each correlator are weighted to provide a better estimate of the transmitted signal than
is provided by a single component. Demodulation and bit decisions are then based on the weighted
outputs of the M correlators.
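A chip-rate sketch of the idea: one symbol is spread by a short code, received over two paths, and recovered by one correlator ("finger") per path, with the finger outputs weighted by their path gains before the bit decision. The spreading code, path delays, gains, and noise level below are all assumed for illustration.

```python
import random
random.seed(1)

code = [1, -1, 1, 1, -1, 1, -1, -1]          # assumed 8-chip spreading code
symbol = -1
chips = [symbol * c for c in code]

paths = [(0, 1.0), (2, 0.5)]                  # (delay in chips, gain), assumed
rx = [0.0] * (len(chips) + 4)
for d, g in paths:                            # multipath channel: sum of copies
    for i, ch in enumerate(chips):
        rx[i + d] += g * ch
rx = [r + random.gauss(0, 0.1) for r in rx]   # additive noise

# One rake finger per path: correlate with the time-shifted code
def finger(rx, delay):
    return sum(rx[i + delay] * c for i, c in enumerate(code)) / len(code)

z = [finger(rx, d) for d, g in paths]
# Maximum-ratio-style combining: weight each finger by its path gain
decision = sum(zi * g for zi, (d, g) in zip(z, paths))
print(1 if decision > 0 else -1)              # recovered symbol: -1
```

Each finger sees mostly the energy of its own path because this code happens to have zero correlation at a two-chip offset; with real codes the partial correlations discussed above leave some crosstalk between fingers.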
Use
Rake receivers are common in a wide variety of CDMA and W-CDMA radio devices such as mobile
phones and wireless LAN equipment.
Rake receivers are also used in radio astronomy. The CSIRO Parkes radio telescope and Jodrell Bank
telescope have 1-bit filter bank recording formats that can be processed in real time or retrospectively
by software-based rake receivers.
QUANTISATION TECHNIQUES
Quantization, in mathematics and digital signal processing, is the process of mapping input values
from a large set (often a continuous set) to output values in a (countable) smaller
set. Rounding and truncation are typical examples of quantization processes. Quantization is
involved to some degree in nearly all digital signal processing, as the process of representing a signal
in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy
compression algorithms.
The difference between an input value and its quantized value (such as round-off error) is referred to
as quantization error. A device or algorithmic function that performs quantization is called
a quantizer. An analog-to-digital converter is an example of a quantizer.
Analog-to-digital converter
An analog-to-digital converter (ADC) can be modeled as two processes: sampling and quantization.
Sampling converts a time-varying voltage signal into a discrete-time signal, a sequence of real
numbers. Quantization replaces each real number with an approximation from a finite set of discrete
values. Most commonly, these discrete values are represented as fixed-point words. Though any
number of quantization levels is possible, common word-lengths are 8-bit (256 levels), 16-
bit (65,536 levels) and 24-bit (16.8 million levels). Quantizing a sequence of numbers produces a
sequence of quantization errors which is sometimes modeled as an additive random signal
called quantization noise because of its stochastic behavior. The more levels a quantizer uses, the
lower is its quantization noise power.
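A minimal uniform quantizer makes the levels-versus-noise trade-off concrete. The word lengths and input range below are illustrative; the measured error power tracks the familiar step^2/12 rule, dropping by a factor of about 4 (6 dB) per extra bit.

```python
import random
random.seed(0)

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform mid-rise quantizer with 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = min(levels - 1, max(0, int((x - lo) / step)))
    return lo + (idx + 0.5) * step      # reconstruct at the bin center

samples = [random.uniform(-1, 1) for _ in range(10000)]
mses = {}
for bits in (4, 8, 12):
    mses[bits] = sum((x - quantize(x, bits)) ** 2
                     for x in samples) / len(samples)
    print(bits, mses[bits])             # error power close to step**2 / 12
```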
Rate-distortion optimization
VOCODERS
A vocoder is an audio processor that captures the characteristic elements of an audio signal and
then uses this characteristic signal to affect other audio signals. The technology behind the vocoder
effect was initially used in attempts to synthesize speech. The effect called vocoding can be
recognized on records as a "talking synthesizer", made popular by artists such as Stevie Wonder. The
basic component extracted during the vocoder analysis is called the formant. The formant describes
the fundamental frequency of a sound and its associated noise components.
The vocoder works like this: The input signal (your voice saying "Hello, my name is Fred") is fed
into the vocoder's input. This audio signal is sent through a series of parallel signal filters that create
a signature of the input signal, based on the frequency content and level of the frequency
components. The signal to be processed (a synthesized string sound, for example) is fed into another
input on the vocoder. The filter signature created above during the analysis of your voice is used to
filter the synthesized sound. The audio output of the vocoder contains the synthesized sound
modulated by the filter created by your voice. You hear a synthesized sound that pulses to the tempo
of your voice input with the tonal characteristics of your voice added to it.
A vocoder (/ˈvoʊkoʊdər/, a portmanteau of voice encoder) is a category of voice codec that analyzes
and synthesizes the human voice signal for audio data compression, multiplexing, voice encryption,
voice transformation, etc.
The earliest type of vocoder, the channel vocoder, was originally developed as a speech
coder for telecommunications applications in the 1930s, the idea being to code speech in order to
reduce the bandwidth required to transmit it.
Applications
Linear predictive coding (LPC) is a tool used mostly in audio signal processing and speech
processing for representing the spectral envelope of a digital signal of speech in compressed form,
using the information of a linear predictive model.[1] It is one of the most powerful speech analysis
techniques, and one of the most useful methods for encoding good quality speech at a low bit rate
and provides extremely accurate estimates of speech parameters.
Envelope Calculation
The LPC method is quite close to the FFT. The envelope is calculated from a number of formants or
poles specified by the user.
The LPC performance is limited by the method itself, and the local characteristics of the signal.
The harmonic spectrum sub-samples the spectral envelope, which produces a spectral
aliasing. These problems are especially manifested in voiced and high-pitched signals,
affecting the first harmonics of the signal, which refer to the perceived speech quality and
formant dynamics.
A correct all-pole model for the signal spectrum can hardly be obtained.
The desired spectral information, the spectral envelope, is not represented: the fit gets too close to
the original spectrum. The LPC follows the curve of the spectrum down to the residual noise
level in the gap between two harmonics, or partials spaced too far apart. It does not represent
the desired spectral information to be modeled, since we are interested in fitting the spectral
envelope as closely as possible, not the original spectrum. The spectral envelope should be
a smooth function passing through the prominent peaks of the spectrum, yielding a flat
residual, not the "valleys" formed between the harmonic peaks.
Sample Quality
LPC usually requires a very good speech sample to work with, which is not always the case with
recordings made with omnidirectional microphones.
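A sketch of the autocorrelation method behind LPC: the Levinson-Durbin recursion solves the Toeplitz normal equations for the predictor coefficients. The synthetic AR(2) signal and its coefficients below are assumed for illustration; on real speech the model order would be much higher (e.g., 10-16).

```python
import random
random.seed(0)

a_true = [1.3, -0.4]                       # assumed AR(2) model, stable poles
x = [0.0, 0.0]
for _ in range(20000):
    x.append(a_true[0] * x[-1] + a_true[1] * x[-2] + random.gauss(0, 1))
x = x[2:]

def autocorr(x, lag):
    return sum(x[n] * x[n - lag] for n in range(lag, len(x))) / len(x)

def levinson(r, order):
    """Levinson-Durbin: solve the Toeplitz normal equations for LPC coeffs."""
    a, err = [], r[0]
    for i in range(1, order + 1):
        k = (r[i] - sum(a[j] * r[i - 1 - j] for j in range(len(a)))) / err
        a = [aj - k * a[i - 2 - j] for j, aj in enumerate(a)] + [k]
        err *= (1 - k * k)                 # prediction error shrinks each order
    return a

r = [autocorr(x, lag) for lag in range(3)]
lpc = levinson(r, 2)
print([round(c, 2) for c in lpc])          # close to the true AR coefficients
```

Because the test signal really is all-pole, the estimated coefficients come out near [1.3, -0.4]; the envelope-fitting difficulties described above arise precisely when the signal is not well modeled as all-pole.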
A cellular system divides any given area into cells where a mobile unit in each cell communicates
with a base station. The main aim in the cellular system design is to be able to increase the
capacity of the channel, i.e., to handle as many calls as possible in a given bandwidth with a
sufficient level of quality of service.
There are several different ways to allow access to the channel. These include mainly the following:
Wideband Systems
In wideband systems, the transmission bandwidth of a single channel is much larger than the
coherence bandwidth of the channel. Thus, multipath fading doesn't greatly affect the received
signal within a wideband channel, and frequency selective fades occur only in a small fraction of the
signal bandwidth.
FDMA allots a different sub-band of frequency to each different user to access the network.
If a channel is not in use, it sits idle instead of being allotted to other users.
FDMA is implemented in Narrowband systems and it is less complex than TDMA.
Tight filtering is done here to reduce adjacent channel interference.
The base station (BS) and mobile station (MS) transmit and receive simultaneously and
continuously in FDMA.
TDMA shares a single carrier frequency with several users, where each user makes use of
non-overlapping time slots.
Data transmission in TDMA is not continuous, but occurs in bursts. Hence the handoff process
is simpler.
TDMA uses different time slots for transmission and reception; thus duplexers are not
required.
TDMA has the advantage that it is possible to allocate different numbers of time slots per frame
to different users.
Bandwidth can be supplied on demand to different users by concatenating or reassigning time
slots based on priority.
In CDMA, every user uses the full available spectrum instead of being allotted a separate
frequency.
CDMA is much recommended for voice and data communications.
While multiple codes occupy the same channel in CDMA, only the users sharing the same code can
communicate with each other.
CDMA offers more air-space capacity than TDMA.
The hands-off between base stations is very well handled by CDMA.
All users can communicate at the same time using the same channel.
SDMA is completely free from interference.
A single satellite can communicate with more satellite receivers of the same frequency.
The directional spot-beam antennas are used and hence the base station in SDMA, can track a
moving user.
Controls the radiated energy for each user in space.
There are two main types of spread spectrum multiple access techniques:
frequency hopped multiple access (FHMA) and direct sequence multiple access (CDMA).
The combinational sequences, called hybrid, are also used as another type of spread
spectrum. Time hopping is another type which is rarely mentioned.
Since many users can share the same spread spectrum bandwidth without interfering with one
another, spread spectrum systems become bandwidth efficient in a multiple user environment.
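A toy illustration of that sharing: two users transmit simultaneously in the same band using orthogonal Walsh codes (rows of a Hadamard matrix), and each receiver recovers its own bits by correlating the summed signal against its code. The codes and bit patterns below are assumed for illustration.

```python
# Two users share the same band via orthogonal (Walsh) spreading codes.
codes = {"A": [1, 1, 1, 1, -1, -1, -1, -1],
         "B": [1, 1, -1, -1, 1, 1, -1, -1]}   # rows of a Hadamard matrix

bits = {"A": [1, -1, 1], "B": [-1, -1, 1]}    # each user's data (+/-1)

# Channel: chip-synchronous sum of both users' spread signals
n_chips = len(codes["A"])
tx = []
for i in range(len(bits["A"])):
    for c in range(n_chips):
        tx.append(sum(bits[u][i] * codes[u][c] for u in codes))

def despread(rx, code):
    """Correlate each symbol period against one user's code."""
    out = []
    for i in range(0, len(rx), len(code)):
        corr = sum(r * c for r, c in zip(rx[i:i + len(code)], code))
        out.append(1 if corr > 0 else -1)
    return out

print(despread(tx, codes["A"]), despread(tx, codes["B"]))
```

Because the two codes are orthogonal (their inner product is zero), each correlator cancels the other user's contribution exactly; real systems with imperfect synchronization rely instead on codes with merely small cross-correlation.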