Explanation
In physics, the signal is usually a wave, such as an electromagnetic wave, random vibration, or an acoustic wave. The spectral density of the wave, when multiplied by an
appropriate factor, will give the power carried by the wave, per unit frequency, known as the power spectral density (PSD) of the signal. Power spectral density is commonly
expressed in watts per hertz (W/Hz).[1]
For voltage signals, it is customary to use units of V² Hz⁻¹ for PSD and V² s Hz⁻¹ for ESD.[2]
For random vibration analysis, units of g² Hz⁻¹ are sometimes used for acceleration spectral density.[3]
Although it is not necessary to assign physical dimensions to the signal or its argument, in the following discussion the terms used will assume that the signal varies in time.
Definition
The energy spectral density requires that the signal be described by a square-integrable function x(t). In this case, the energy spectral density of the signal is the square of the magnitude of the continuous Fourier transform of the signal:

$$E(\omega) = \left| \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} x(t)\, e^{-i\omega t}\, dt \right|^2 = \frac{X(\omega)\, X^{*}(\omega)}{2\pi}$$

where ω is the angular frequency (2π times the ordinary frequency), X(ω) is the continuous Fourier transform of x(t), and X*(ω) is its complex conjugate. As is always the case, the multiplicative factor of 1/2π is not absolute, but rather depends on the particular normalizing constants used in the definition of the various Fourier transforms.
As an example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then the units of measure for spectral density E(ω) would appear as V² s², which is per se not yet dimensionally correct for a spectral energy density in the sense of the physical sciences. However, after dividing by the characteristic impedance Z (in ohms) of the transmission line, the dimensions of E(ω)/Z become V² s² per ohm, which is equivalent to joules per hertz, the SI unit for spectral energy density as defined in the physical sciences.
This definition generalizes in a straightforward manner to a discrete signal with an infinite number of values x_n, such as a signal sampled at discrete times t_n = n·Δt:

$$E(\omega) = \frac{(\Delta t)^2}{2\pi} \left| \sum_{n=-\infty}^{\infty} x_n\, e^{-i\omega n \Delta t} \right|^2$$

where the sum is (up to normalization) the discrete-time Fourier transform of x_n. In the mathematical sciences, the sampling interval Δt is often set to one. It is needed, however, to keep the correct physical units and to ensure that we recover the continuous case in the limit Δt → 0.
Power spectral density
The above definition of energy spectral density is most suitable for pulse-like signals, for which the Fourier transforms of the signals exist. For continuous signals that describe, for example, stationary physical processes, it makes more sense to define a power spectral density (PSD), which describes how the power of a signal or time series is distributed with frequency. Here, power can be the actual physical power or, more often and for convenience with abstract signals, can be defined as the squared value of the signal. The instantaneous power of a signal x(t) is then given by p(t) = x(t)². The mean (or expected value) of p(t) is the total power, which is the integral of the power spectral density over all frequencies.
We can use a normalized Fourier transform:

$$\hat{x}_T(\omega) = \frac{1}{\sqrt{T}} \int_0^T x(t)\, e^{-i\omega t}\, dt$$

and define the power spectral density as:[4][5]

$$S_{xx}(\omega) = \lim_{T \to \infty} \mathbf{E}\!\left[\, \left| \hat{x}_T(\omega) \right|^2 \right]$$

For stochastic signals, the squared magnitude of the Fourier transform typically does not approach a limit, but its expectation does; see periodogram.
Remark: Many signals of interest are not integrable, and the non-normalized (ordinary) Fourier transform of the signal does not exist. Some authors (e.g. Risken[6]) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density

$$\langle\, \hat{x}(\omega)\, \hat{x}^{*}(\omega') \,\rangle = 2\pi\, S_{xx}(\omega)\, \delta(\omega - \omega'),$$

where δ(ω − ω′) is the Dirac delta function. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.
Using such formal reasoning, one may already guess that for a stationary random process, the power spectral density S_xx(ω) and the autocorrelation function of the signal, R_xx(τ) = ⟨x(t) x(t+τ)⟩, should be a Fourier pair. This is indeed true, and it is a deep theorem that was worked out by Norbert Wiener and Aleksandr Khinchin (the Wiener–Khinchin theorem):

$$S_{xx}(\omega) = \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-i\omega\tau}\, d\tau$$

Many authors use this equality to actually define the power spectral density.[7] There are strong mathematical reasons for doing so: it avoids the "mathematical handwaving" that is found in many textbooks.
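In the discrete, finite-length setting the same relationship can be checked numerically: the DFT of the circular autocorrelation of a sequence equals its periodogram. The sketch below (a minimal illustration added here, not part of the original article) verifies this identity with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)

# Periodogram: squared magnitude of the DFT, normalized by N.
X = np.fft.fft(x)
periodogram = np.abs(X) ** 2 / N

# Circular autocorrelation R[k] = (1/N) * sum_n x[n] * x[(n+k) mod N],
# computed efficiently via the FFT (correlation theorem).
R = np.fft.ifft(np.abs(X) ** 2).real / N

# Discrete Wiener-Khinchin: the DFT of R equals the periodogram.
S = np.fft.fft(R).real
assert np.allclose(S, periodogram)
```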
The power of the signal in a given frequency band [ω₁, ω₂] can be calculated by integrating over positive and negative frequencies:

$$P = \frac{1}{2\pi} \int_{\omega_1}^{\omega_2} S_{xx}(\omega)\, d\omega \;+\; \frac{1}{2\pi} \int_{-\omega_2}^{-\omega_1} S_{xx}(\omega)\, d\omega$$
The power spectral density of a signal exists if the signal is a wide-sense stationary process. If the signal is not wide-sense stationary, then the autocorrelation function must be a function of two variables, and no PSD exists in the above sense. In some cases, such as wide-sense cyclostationary processes, a PSD may still exist.[8] More generally, similar techniques may be used to estimate a time-varying spectral density.
The definition of the power spectral density generalizes in a straightforward manner to a finite time series x_n with 1 ≤ n ≤ N, such as a signal sampled at discrete times x_n = x(n·Δt) for a total measurement period T = N·Δt:

$$S_{xx}(\omega) = \frac{(\Delta t)^2}{T} \left| \sum_{n=1}^{N} x_n\, e^{-i\omega n \Delta t} \right|^2$$
In a real-world application, one would typically average this single-measurement PSD over several repetitions of the measurement to obtain a more accurate estimate of the PSD underlying the observed physical process. This computed PSD is sometimes called a periodogram. One can prove that the periodogram converges to the true PSD as the averaging time interval T goes to infinity (Brown & Hwang[9]).
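As a minimal numerical sketch of this averaging (an illustration with values chosen here): the periodogram of a single white-noise record fluctuates wildly around the flat true PSD, while the average over many records settles toward it.

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 256, 200
sigma2 = 1.0  # variance of the white noise; its true PSD is flat at sigma2

# Compute the periodogram |FFT|^2 / N for many independent records.
psds = np.empty((reps, N))
for i in range(reps):
    x = rng.normal(scale=np.sqrt(sigma2), size=N)
    psds[i] = np.abs(np.fft.fft(x)) ** 2 / N

single = psds[0]              # one periodogram: very noisy
averaged = psds.mean(axis=0)  # averaged periodogram: close to flat

# The averaged estimate fluctuates far less and sits near the true level.
assert np.std(averaged) < np.std(single)
assert abs(averaged.mean() - sigma2) < 0.05
```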
If two signals both possess power spectra (the correct terminology), then a cross-power spectrum can be calculated by using their cross-correlation function.
Properties of the power spectral density
Some properties of the PSD include:[10]
The spectrum of a real-valued process is symmetric, S_xx(−f) = S_xx(f); in other words, it is an even function.
It is continuous and differentiable on [−1/2, +1/2].
Its derivative is zero at f = 0 (this is required by the fact that the power spectrum is an even function); otherwise the derivative might not exist at f = 0.
The autocovariance function can be reconstructed by using the inverse Fourier transform.
It describes the distribution of the variance across time scales. In particular, the variance is the integral of the spectrum over all frequencies:

$$\operatorname{Var}(x_n) = \gamma(0) = \int_{-1/2}^{1/2} S_{xx}(f)\, df$$

It is a linear function of the autocovariance function γ(τ): if γ is decomposed as γ(τ) = α₁γ₁(τ) + α₂γ₂(τ), then S_xx = α₁S₁ + α₂S₂, where S_i is the spectrum corresponding to γ_i.
Cross-spectral density
"Just as the power spectral density (PSD) is the Fourier transform of the auto-covariance function, we may define the cross-spectral density (CSD) as the Fourier transform of the cross-covariance function."[12]
The PSD is a special case of the cross-spectral density (CPSD) function, defined between two signals x_n and y_n as the Fourier transform of their cross-covariance R_xy:

$$S_{xy}(f) = \sum_{\tau=-\infty}^{\infty} R_{xy}(\tau)\, e^{-i 2\pi f \tau}$$
Estimation
The spectral density of a signal and the autocorrelation of that signal form a Fourier transform pair (for PSD versus ESD, different definitions of the autocorrelation function are used).
One of the results of Fourier analysis is Parseval's theorem, which states that the area under the energy spectral density curve is equal to the area under the square of the magnitude of the signal, i.e. the total energy:

$$\int_{-\infty}^{\infty} \left| x(t) \right|^2 dt = \int_{-\infty}^{\infty} E(f)\, df$$

The above theorem holds true in the discrete case as well. A similar result holds for power: the total power in a power spectral density equals the corresponding mean total signal power, which is the autocorrelation function at zero lag.
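The discrete form of Parseval's theorem is easy to verify directly (a quick sketch; the normalization matches NumPy's unnormalized forward FFT):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)

# Time-domain energy of the sequence.
energy_time = np.sum(np.abs(x) ** 2)

# Frequency-domain energy: with NumPy's FFT convention, the discrete
# Parseval relation reads  sum |x[n]|^2 = (1/N) * sum |X[k]|^2.
X = np.fft.fft(x)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)

assert np.allclose(energy_time, energy_freq)
```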
Related concepts
Most "frequency" graphs really display only the spectral density. Sometimes the complete frequency spectrum is graphed in two parts: "amplitude" versus frequency (which is the spectral density) and "phase" versus frequency (which contains the rest of the information from the frequency spectrum). The signal cannot be recovered from the spectral density part alone; the "temporal information" is lost.
The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.
The spectral edge frequency of a signal is an extension of the previous concept to any proportion instead of two equal parts.
Spectral density is a function of frequency, not a function of time. However, the spectral density of small "windows" of a longer signal may be calculated, and plotted versus time
associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier
transform and wavelets.
In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power carried by each frequency or
"color" in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it
not strictly a spectral density. Some spectrophotometers can measure increments as fine as one to two nanometers. Values are used to calculate other specifications and then
plotted to demonstrate the spectral attributes of the source. This can be a helpful tool in analyzing the color characteristics of a particular source.
Applications
Electrical engineering
The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Much effort has been expended, and millions of dollars spent, on developing and producing electronic instruments called "spectrum analyzers" to aid electrical engineers and technicians in observing and measuring the power spectra of signals. The cost of a spectrum analyzer varies depending on its frequency range, its bandwidth, and its accuracy. The higher the frequency range (S-band, C-band, X-band, Ku-band, K-band, Ka-band, etc.), the more difficult the components are to make, assemble, and test, and the more expensive the spectrum analyzer is. Also, the wider the bandwidth that a spectrum analyzer possesses, the more costly it is, and the capability for more accurate measurements increases cost as well.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density. These devices work at low frequencies and with small bandwidths.
Coherence
See Coherence (signal processing) for use of the cross-spectral density.
In communications, the noise spectral density N0 is the noise power per unit of bandwidth; that is, it is the power spectral density of the noise. It has dimensions of power/frequency (see dimensional analysis), whose SI coherent unit is the watt per hertz, equivalent to the watt-second or joule. If the noise is white noise, i.e. constant with frequency, then the total noise power N in a bandwidth B is N = B·N0. This is utilized in signal-to-noise ratio calculations.
The thermal noise density is given by N0 = kT, where k is Boltzmann's constant in joules per kelvin, and T is the receiver system noise temperature in kelvins.
N0 is commonly used in link budgets as the denominator of the important figure-of-merit ratios Eb/N0 and Es/N0.
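A quick numerical sketch of the thermal noise density, assuming the standard reference temperature T = 290 K commonly used in receiver calculations:

```python
import math

k = 1.380649e-23   # Boltzmann's constant, J/K (exact in SI since 2019)
T = 290.0          # standard reference noise temperature, K

N0 = k * T                                   # noise density in W/Hz
N0_dBm_per_Hz = 10 * math.log10(N0 / 1e-3)   # referenced to 1 mW

# The familiar "-174 dBm/Hz" thermal noise floor at room temperature.
assert abs(N0_dBm_per_Hz - (-174)) < 0.1

# Total white-noise power in a 1 MHz bandwidth: N = B * N0.
B = 1e6
N_dBm = 10 * math.log10(B * N0 / 1e-3)
assert abs(N_dBm - (-114)) < 0.1
```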
In statistical signal processing, the goal of spectral density estimation is to estimate the spectral density (also known as the power spectrum) of a random signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
Spectral density estimation (SDE) should be distinguished from the field of frequency estimation, which assumes a limited (usually small) number of generating frequencies plus noise and seeks to find their frequencies. SDE makes no assumption on the number of components and seeks to estimate the whole generating spectrum.
Techniques
Techniques for spectrum estimation can generally be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary
stochastic process has a certain structure which can be described using a small number of parameters (for example, using an auto-regressive or moving average model). In
these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the
covariance or the spectrum of the process without assuming that the process has any particular structure.
A partial list of spectral density estimation techniques includes the periodogram, Bartlett's method, Welch's method, the multitaper method, and autoregressive (parametric) spectral estimation.
Spectral efficiency, spectrum efficiency or bandwidth efficiency refers to the information rate that can be transmitted over a given bandwidth in a specific communication
system. It is a measure of how efficiently a limited frequency spectrum is utilized by the physical layer protocol, and sometimes by the media access control (the channel
access protocol).
o Example 1: A transmission technique using one kilohertz of bandwidth to transmit 1,000 bits per second has a modulation efficiency of 1 (bit/s)/Hz.
o Example 2: A V.92 modem for the telephone network can transfer 56,000 bit/s downstream and 48,000 bit/s upstream over an analog telephone network. Due to filtering in the telephone exchange, the frequency range is limited to between 300 hertz and 3,400 hertz, corresponding to a bandwidth of 3,400 − 300 = 3,100 hertz. The spectral efficiency or modulation efficiency is 56,000/3,100 = 18.1 (bit/s)/Hz downstream, and 48,000/3,100 = 15.5 (bit/s)/Hz upstream.
o An upper bound for the attainable modulation efficiency is given by the Nyquist rate or Hartley's law as follows: for a signaling alphabet with M alternative symbols, each symbol represents N = log2 M bits. N is the modulation efficiency measured in bit/symbol or bpcu. In the case of baseband transmission (line coding or pulse-amplitude modulation) with a baseband bandwidth (or upper cut-off frequency) B, the symbol rate cannot exceed 2B symbols/s, in order to avoid intersymbol interference. Thus, the spectral efficiency cannot exceed 2N (bit/s)/Hz in the baseband transmission case. In the passband transmission case, a signal with passband bandwidth W can be converted to an equivalent baseband signal (using undersampling or a superheterodyne receiver) with upper cut-off frequency W/2. If double-sideband modulation schemes such as QAM, ASK, PSK or OFDM are used, this results in a maximum symbol rate of W symbols/s, so the modulation efficiency cannot exceed N (bit/s)/Hz. If digital single-sideband modulation is used, the passband signal with bandwidth W corresponds to a baseband message signal with baseband bandwidth W, resulting in a maximum symbol rate of 2W and an attainable modulation efficiency of 2N (bit/s)/Hz.
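The bounds above reduce to a small amount of arithmetic. A sketch (the function and its names are chosen here for illustration):

```python
import math

def max_spectral_efficiency(M, scheme):
    """Upper bound on modulation efficiency in (bit/s)/Hz for an
    alphabet of M symbols, per the Nyquist/Hartley argument above."""
    N = math.log2(M)  # bits per symbol (bpcu)
    if scheme in ("baseband", "ssb"):   # symbol rate up to 2B (or 2W)
        return 2 * N
    if scheme == "dsb":                 # QAM/ASK/PSK/OFDM: up to W symbols/s
        return N
    raise ValueError(scheme)

# 16QAM (double-sideband): N = 4 bit/symbol, at most 4 (bit/s)/Hz.
assert max_spectral_efficiency(16, "dsb") == 4.0
# 128-level PAM at baseband (V.92 downlink): at most 14 (bit/s)/Hz.
assert max_spectral_efficiency(128, "baseband") == 14.0
```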
o Example 3: A 16QAM modem has an alphabet size of M = 16 alternative symbols, with N = 4 bit/symbol or bpcu. Since QAM is a form of double-sideband passband transmission, the spectral efficiency cannot exceed N = 4 (bit/s)/Hz.
o Example 4: The 8VSB (8-level vestigial sideband) modulation scheme used in the ATSC digital television standard gives N = 3 bit/symbol or bpcu. Since it can be described as nearly single-sideband, the modulation efficiency is close to 2N = 6 (bit/s)/Hz. In practice, ATSC transfers a gross bit rate of 32 Mbit/s over a 6 MHz wide channel, a modulation efficiency of 32/6 = 5.3 (bit/s)/Hz.
o Example 5: The downlink of a V.92 modem uses a pulse-amplitude modulation with 128 signal levels, resulting in N = 7 bit/symbol. Since the transmitted signal
before passband filtering can be considered as baseband transmission, the spectral efficiency cannot exceed 2N = 14 (bit/s)/Hz over the full baseband channel (0
to 4 kHz). As seen above, a higher spectral efficiency is achieved if we consider the smaller passband bandwidth.
If a forward error correction code is used, the spectral efficiency is reduced from the uncoded modulation efficiency figure.
o Example 6: If a forward error correction (FEC) code with code rate 1/2 is added, meaning that the encoder input bit rate is one half the encoder output rate, the spectral efficiency is 50% of the modulation efficiency. In exchange for this reduction in spectral efficiency, FEC usually reduces the bit-error rate and typically enables operation at a lower signal-to-noise ratio.
o Example 7: If the SNR is 1 when expressed as a ratio (corresponding to 0 decibels), the link spectral efficiency cannot exceed 1 (bit/s)/Hz for error-free detection (assuming an ideal error-correcting code), according to the Shannon–Hartley theorem, regardless of the modulation and coding.
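The Shannon–Hartley bound on link spectral efficiency, C/B = log2(1 + SNR), is easy to evaluate (a minimal sketch):

```python
import math

def shannon_limit(snr_ratio):
    """Maximum error-free spectral efficiency in (bit/s)/Hz for a
    given linear SNR, per the Shannon-Hartley theorem."""
    return math.log2(1 + snr_ratio)

# An SNR of 1 (0 dB) caps the spectral efficiency at 1 (bit/s)/Hz.
assert shannon_limit(1) == 1.0

# An SNR of 15 (about 11.8 dB) allows at most 4 (bit/s)/Hz.
assert shannon_limit(15) == 4.0
```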
Note that the goodput (the amount of application-layer useful information) is normally lower than the maximum throughput used in the above calculations, because of packet retransmissions, higher protocol-layer overhead, flow control, congestion avoidance, etc. On the other hand, a data compression scheme, such as the V.44 or V.42bis compression used in telephone modems, may give higher goodput if the transferred data is not already efficiently compressed.
The link spectral efficiency of a wireless telephony link may also be expressed as the maximum number of simultaneous calls over 1 MHz of frequency spectrum, in erlangs per megahertz (E/MHz). This measure is also affected by the source coding (data compression) scheme. It may be applied to analog as well as digital transmission.
In wireless networks, the link spectral efficiency can be somewhat misleading, as larger values are not necessarily more efficient in their overall use of radio spectrum. In a wireless network, high link spectral efficiency may result in high sensitivity to co-channel interference (crosstalk), which affects capacity. For example, in a cellular telephone network with frequency reuse, spectrum spreading and forward error correction reduce the spectral efficiency in (bit/s)/Hz but substantially lower the required signal-to-noise ratio in comparison with non-spread-spectrum techniques. This can allow for much denser geographical frequency reuse that compensates for the lower link spectral efficiency, resulting in approximately the same capacity (the same number of simultaneous phone calls) over the same bandwidth, using the same number of base station transmitters. As discussed below, a more relevant measure for wireless networks would be system spectral efficiency in (bit/s)/Hz per unit area. However, in closed communication links such as telephone lines and cable TV networks, and in noise-limited wireless communication systems where co-channel interference is not a factor, the largest link spectral efficiency that can be supported by the available SNR is generally used.
System spectral efficiency or area spectral efficiency
In digital wireless networks, the system spectral efficiency or area spectral efficiency is typically measured in (bit/s)/Hz per unit area, (bit/s)/Hz per cell, or (bit/s)/Hz per site. It is a measure of the quantity of users or services that can be simultaneously supported by a limited radio-frequency bandwidth in a defined geographic area. It may, for example, be defined as the maximum throughput or goodput, summed over all users in the system, divided by the channel bandwidth. This measure is affected not only by the single-user transmission technique, but also by the multiple-access schemes and radio resource management techniques utilized. It can be substantially improved by dynamic radio resource management. If it is defined as a measure of the maximum goodput, retransmissions due to co-channel interference and collisions are excluded. Higher-layer protocol overhead (above the media access control sublayer) is normally neglected.
o Example 8: In a cellular system based on frequency-division multiple access (FDMA) with a fixed channel allocation (FCA) cell plan using a frequency reuse factor of 4, each base station has access to 1/4 of the total available frequency spectrum. Thus, the maximum possible system spectral efficiency in (bit/s)/Hz per site is 1/4 of the link spectral efficiency. Each base station may be divided into 3 cells by means of 3 sector antennas, also known as a 4/12 reuse pattern. Then each cell has access to 1/12 of the available spectrum, and the system spectral efficiency in (bit/s)/Hz per cell or (bit/s)/Hz per sector is 1/12 of the link spectral efficiency.
The system spectral efficiency of a cellular network may also be expressed as the maximum number of simultaneous phone calls per unit area over 1 MHz of frequency spectrum, in E/MHz per cell, E/MHz per sector, E/MHz per site, or (E/MHz)/m². This measure is also affected by the source coding (data compression) scheme. It may be used in analog cellular networks as well.
Low link spectral efficiency in (bit/s)/Hz does not necessarily mean that an encoding scheme is inefficient from a system spectral efficiency point of view. As an example, consider Code Division Multiple Access (CDMA) spread spectrum, which is not a particularly spectrally efficient encoding scheme when considering a single channel or single user. However, the fact that one can "layer" multiple channels on the same frequency band means that the system spectrum utilization for a multi-channel CDMA system can be very good.
o Example 9: In the W-CDMA 3G cellular system, every phone call is compressed to a maximum of 8,500 bit/s (the useful bitrate) and spread out over a 5 MHz wide frequency channel. This corresponds to a link throughput of only 8,500/5,000,000 = 0.0017 (bit/s)/Hz. Let us assume that 100 simultaneous (non-silent) calls are possible in the same cell. Spread spectrum makes it possible to have a frequency reuse factor as low as 1, if each base station is divided into 3 cells by means of 3 directional sector antennas. This corresponds to a system spectral efficiency of 1 × 100 × 0.0017 = 0.17 (bit/s)/Hz per site, and 0.17/3 ≈ 0.06 (bit/s)/Hz per cell or sector.
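The arithmetic in Example 9 can be sketched directly (values taken from the example above):

```python
# W-CDMA example: useful bitrate per call, channel bandwidth, and load.
bitrate = 8_500          # bit/s per compressed voice call
bandwidth = 5_000_000    # Hz (5 MHz W-CDMA channel)
calls_per_cell = 100     # assumed simultaneous non-silent calls
reuse_factor = 1         # spread spectrum permits full frequency reuse
sectors_per_site = 3     # three directional sector antennas per site

link_efficiency = bitrate / bandwidth                  # (bit/s)/Hz per call
site_efficiency = reuse_factor * calls_per_cell * link_efficiency
cell_efficiency = site_efficiency / sectors_per_site

assert abs(link_efficiency - 0.0017) < 1e-9
assert abs(site_efficiency - 0.17) < 1e-9
assert abs(cell_efficiency - 0.0567) < 1e-3
```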
Spectral efficiency comparison table (excerpt):

Service      Standard   Launched year   Net bitrate R per carrier (Mbit/s)
2G cellular  GSM        1991            0.013 × 8 timeslots = 0.104
2G cellular  D-AMPS     1991            0.013 × 3 timeslots = 0.039
The Fourier transform of a function of time, s(t), is a complex-valued function of frequency, S(f), often referred to as an amplitude spectrum. Any linear time-invariant operation on
s(t) produces a new spectrum of the form H(f)S(f), which changes the relative magnitudes and/or angles (phase) of the non-zero values of S(f). Any other type of operation
creates new frequency components that may be referred to as spectral leakage in the broadest sense. Sampling, for instance, produces leakage, but we call it aliasing.
For Fourier transform purposes, sampling is modeled as a product between s(t) and a Dirac comb function. The spectrum of a product is the convolution between S(f) and another function, which inevitably creates the new frequency components. But the term 'leakage' usually refers to the effect of windowing, which is the product of s(t) with a different kind of function, the window function. Window functions happen to have finite duration, but that is not necessary to create leakage; multiplication by any time-variant function is sufficient.
Figure caption: Comparison of two window functions in terms of their effects on equal-strength sinusoids with additive noise. The sinusoid at bin −20 suffers no scalloping and the one at bin +20.5 exhibits worst-case scalloping. The rectangular window produces the most scalloping but also narrower peaks and a lower noise floor. A third sinusoid with amplitude −16 dB would be noticeable in the upper spectrum.
Discrete-time functions
When both sampling and windowing are applied to s(t), in either order, the leakage caused by windowing is a relatively localized spreading of frequency components, with often a
blurring effect, whereas the aliasing caused by sampling is a periodic repetition of the entire blurred spectrum.
Window tradeoffs
Main article: Window function
The total leakage of a window function is measured by a metric called equivalent noise bandwidth (ENBW)[1] or noise equivalent bandwidth (NEB). The best window in that regard
is the simplest, called rectangular because of its flat top and vertical sides. Its spreading effect occurs mostly a factor of 10 to 100 below the amplitude of the original component.
Unfortunately the spreading is very wide, which may mask important spectrum details at even lower levels. That prevents the rectangular window from being a popular choice.
Non-rectangular window functions actually increase the total leakage, but they can also redistribute it to places where it does the least harm, depending on the application.
Specifically, to different degrees they reduce the level of the spreading by increasing the high-level leakage in the near vicinity of the original component. In general, they control
the trade-off between resolving comparable strength signals with similar frequencies or resolving disparate strength signals with dissimilar frequencies: one speaks of "high
resolution" versus "high dynamic range" windows. And leakage near the original component is actually beneficial for a metric known as scalloping loss.
We customarily think of leakage as a spreading out of (say) a sinusoid in one "bin" of a DFT into the other bins at levels that generally decrease with distance. What that actually
means is that when the actual sinusoid frequency lies in bin "k", its presence is sensed/recorded at different levels in the other bins; i.e. the correlations they measure are non-
zero. The value measured in bin k+10 and plotted on the spectrum graph is the response of that measurement to the imperfect (i.e. windowed) sinusoid 10 bins away. And when
the input is just white noise (energy at all frequencies), the value measured in bin k is the sum of its responses to a continuum of frequencies. One could say that leakage is
actually a leaking in process, rather than leaking out. That perspective might help to interpret the different noise-floor levels between the two graphs in the figure on the right. Both
spectra were made from the same data set with the same noise power. But the bins in the bottom graph each responded more strongly than the bins in the top graph. The exact
amount of the difference is given by the ENBW difference of the two window functions.
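The ENBW metric mentioned above can be computed directly from the window samples as N·Σw²[n] / (Σw[n])², expressed in DFT bins. A minimal sketch (the periodic Hann form is used so the result is exact):

```python
import numpy as np

def enbw_bins(w):
    """Equivalent noise bandwidth of a window, in DFT bins."""
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

N = 1024
rect = np.ones(N)
# Periodic (DFT-even) Hann window: 0.5 * (1 - cos(2*pi*n/N)).
hann = 0.5 * (1 - np.cos(2 * np.pi * np.arange(N) / N))

# The rectangular window has the smallest possible ENBW of 1.0 bin;
# the Hann window trades that for lower leakage far from the peak.
assert np.isclose(enbw_bins(rect), 1.0)
assert np.isclose(enbw_bins(hann), 1.5)
```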
In signal processing, a window function (also known as an apodization function or tapering function[1]) is a mathematical function that is zero-valued outside of some
chosen interval. For instance, a function that is constant inside the interval and zero elsewhere is called a rectangular window, which describes the shape of its graphical
representation. When another function or a signal (data) is multiplied by a window function, the product is also zero-valued outside the interval: all that is left is the part where
they overlap; the "view through the window". Applications of window functions include spectral analysis, filter design, and beamforming.
A more general definition of window functions does not require them to be identically zero outside an interval, as long as the product of the window multiplied by its argument
is square integrable, that is, that the function goes sufficiently rapidly toward zero. [2]
In typical applications, the window functions used are non-negative smooth "bell-shaped" curves,[3] though rectangle, triangle, and other functions are sometimes used.
Contents
[hide]
1 Applications
1.1.1 Windowing
2 Window examples
4 Overlapping windows
5 Two-dimensional windows
6 See also
7 Notes
8 References
9 Further reading
10 External links
Applications
Applications of window functions include spectral analysis and the design of finite impulse response filters.
Spectral analysis
The Fourier transform of the function cos(ωt) is zero except at frequency ±ω. However, many other functions and data (that is, waveforms) do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period.
In either case, the Fourier transform (or something similar) can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the
waveform and a window function. Any window (including rectangular) affects the spectral estimate computed by this method.
The noise bandwidth of a Blackman–Harris window exceeds that of a Hann window by a factor of 2.01/1.5, which can be expressed in decibels as 10·log10(2.01/1.5) = 1.27 dB. Therefore, even at maximum scalloping, the net processing gain of a Hann window exceeds that of a Blackman–Harris window by 1.27 + 0.83 − 1.42 = 0.68 dB. And when we happen to incur no scalloping (due to a fortuitous signal frequency), the Hann window is 1.27 dB more sensitive than Blackman–Harris. In general (as mentioned earlier), this is a deterrent to using high-dynamic-range windows in low-dynamic-range applications.
Filter design
Main article: Filter design
Windows are sometimes used in the design of digital filters, in particular to convert an "ideal" impulse response of infinite duration, such as a sinc function, to a finite impulse
response (FIR) filter design. That is called the window method.[4][5]
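A minimal sketch of the window method (all parameter values here are illustrative): truncate the ideal sinc impulse response of a low-pass filter and taper it with a window, in this case a Hamming window, to obtain an FIR filter.

```python
import numpy as np

M = 101        # filter length (odd, so the filter is symmetric)
fc = 0.1       # cutoff as a fraction of the sampling rate

# Ideal low-pass impulse response (a shifted sinc), tapered by a
# Hamming window to suppress the truncation ripple.
n = np.arange(M)
h = 2 * fc * np.sinc(2 * fc * (n - (M - 1) / 2)) * np.hamming(M)
h /= h.sum()   # normalize for unity gain at DC

H = np.abs(np.fft.fft(h, 4096))
freqs = np.fft.fftfreq(4096)

assert np.isclose(H[0], 1.0)                 # passband gain at DC
stopband = H[(freqs > 0.2) & (freqs < 0.5)]
assert stopband.max() < 0.01                 # > 40 dB stopband attenuation
```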
Window examples
Terminology:
N represents the width, in samples, of a discrete-time, symmetrical window function w(n), 0 ≤ n ≤ N − 1. When N is an odd number, the non-flat windows have a singular maximum point. When N is even, they have a double maximum.
A common desire is for an asymmetrical window called DFT-even[6] or periodic, which has a single maximum but an even number of samples (as required by the FFT algorithm). Such a window would be generated by the Matlab function hann(512,'periodic'), for instance. Here, that window would be generated by computing the symmetrical window with N = 513 and discarding its last sample.
The rectangular window (sometimes known as the boxcar or Dirichlet window) is the simplest window, equivalent to replacing all but N values of a data sequence by zeros, making it appear as though the waveform suddenly turns on and off. Other windows are designed to moderate these sudden changes, because discontinuities have undesirable effects on the discrete-time Fourier transform (DTFT) and/or the algorithms that produce samples of the DTFT.[7][8]
Hann (Hanning) window

$$w(n) = 0.5\left(1 - \cos\!\left(\frac{2\pi n}{N-1}\right)\right)$$

The ends of the cosine just touch zero, so the side-lobes roll off at about 18 dB per octave.[9]
The Hann and Hamming windows, both of which are in the family known as "raised cosine" or "generalized Hamming" windows, are respectively named after Julius von Hann and Richard Hamming. This window is commonly called a "Hanning window".[10][11]
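The formula above can be evaluated directly; the sketch below builds the symmetric Hann window and checks it against NumPy's built-in np.hanning:

```python
import numpy as np

N = 65  # odd length gives a single maximum of exactly 1 at the center
n = np.arange(N)

# Symmetric Hann window: w(n) = 0.5 * (1 - cos(2*pi*n / (N-1))).
w = 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))

assert np.allclose(w, np.hanning(N))   # matches NumPy's definition
assert w[0] == 0 and w[-1] == 0        # the ends just touch zero
assert w[(N - 1) // 2] == 1.0          # single maximum at the center
```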
Hamming window

$$w(n) = 0.54 - 0.46\cos\!\left(\frac{2\pi n}{N-1}\right)$$
Tukey window
The Tukey window,[6][14] also known as the tapered cosine window, can be regarded as a cosine lobe of width αN/2 that is convolved with a rectangular window of width (1 − α/2)N.

Cosine window

$$w(n) = \sin\!\left(\frac{\pi n}{N-1}\right)$$

also known as the sine window.

Lanczos window

$$w(n) = \operatorname{sinc}\!\left(\frac{2n}{N-1} - 1\right)$$

used in Lanczos resampling; for the Lanczos window, sinc(x) is defined as sin(πx)/(πx). It is also known as a sinc window, because w(n) is the main lobe of a normalized sinc function.
Triangular windows

$$w(n) = 1 - \left|\frac{n - \frac{N-1}{2}}{\frac{N}{2}}\right|$$

(one common convention; variants differ in the denominator). The triangular window can be seen as the convolution of two half-sized rectangular windows (for N even), giving it a main-lobe width of twice the width of a regular rectangular window. The nearest lobe is −26 dB down from the main lobe.[15]
Bartlett window

$$w(n) = 1 - \left|\frac{n - \frac{N-1}{2}}{\frac{N-1}{2}}\right|$$

(a triangular window with zero-valued end samples).

Gaussian windows

$$w(n) = \exp\!\left(-\tfrac{1}{2}\left(\frac{n - (N-1)/2}{\sigma (N-1)/2}\right)^{2}\right), \qquad \sigma \le 0.5$$

Bartlett–Hann window

$$w(n) = a_0 - a_1\left|\frac{n}{N-1} - \frac{1}{2}\right| - a_2\cos\!\left(\frac{2\pi n}{N-1}\right)$$

with a₀ = 0.62, a₁ = 0.48, a₂ = 0.38.
Blackman windows

$$w(n) = a_0 - a_1\cos\!\left(\frac{2\pi n}{N-1}\right) + a_2\cos\!\left(\frac{4\pi n}{N-1}\right)$$

with a₀ = (1 − α)/2, a₁ = 1/2, a₂ = α/2.
By common convention, the unqualified term Blackman window refers to α = 0.16, as this most closely approximates the "exact Blackman",[20] with a₀ = 7938/18608 ≈ 0.42659, a₁ = 9240/18608 ≈ 0.49656, and a₂ = 1430/18608 ≈ 0.07685.[21] These exact values place zeros at the third and fourth sidelobes.[22]
Kaiser windows

$$w(n) = \frac{I_0\!\left(\pi\alpha\sqrt{1 - \left(\frac{2n}{N-1} - 1\right)^{2}}\right)}{I_0(\pi\alpha)}$$

where I₀ is the zeroth-order modified Bessel function of the first kind, and the parameter α controls the trade-off between main-lobe width and side-lobe level.
Low-resolution (high-dynamic-range) windows
The windows in this family are sums of shifted cosine terms,

$$w(n) = a_0 - a_1\cos\!\left(\frac{2\pi n}{N-1}\right) + a_2\cos\!\left(\frac{4\pi n}{N-1}\right) - a_3\cos\!\left(\frac{6\pi n}{N-1}\right),$$

differing only in their coefficients.

Nuttall window, continuous first derivative
a₀ = 0.355768, a₁ = 0.487396, a₂ = 0.144232, a₃ = 0.012604.

Blackman–Harris window
a₀ = 0.35875, a₁ = 0.48829, a₂ = 0.14128, a₃ = 0.01168.

Blackman–Nuttall window
a₀ = 0.3635819, a₁ = 0.4891775, a₂ = 0.1365995, a₃ = 0.0106411.

Flat top window
A flat-top window adds a fourth cosine term; one common choice is a₀ = 1, a₁ = 1.93, a₂ = 1.29, a₃ = 0.388, a₄ = 0.028 (the window is then usually rescaled so its peak value is 1).
Other windows

Bessel window

Dolph–Chebyshev window
Minimizes the Chebyshev norm of the side-lobes for a given main-lobe width.[27]
Hann–Poisson window
A Hann window multiplied by a Poisson window. It has no side-lobes, in the sense that the frequency response drops off forever away from the main lobe. It can thus be used in hill-climbing algorithms like Newton's method.[28]
Exponential or Poisson window

$$w(n) = e^{-\left|n - \frac{N-1}{2}\right| / \tau}$$

where τ is the time constant of the function. The exponential function decays by a factor of e ≈ 2.71828, or approximately 8.69 dB, per time constant.[30] This means that for a targeted decay of D dB over half of the window length, the time constant is given by

$$\tau = \frac{N}{2}\,\frac{8.69}{D}$$
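A brief sketch of choosing the time constant for a targeted edge decay (the values N and D below are illustrative):

```python
import numpy as np

N = 128
D = 60.0                      # targeted decay in dB over half the window
tau = (N / 2) * 8.69 / D      # time constant from the formula above

n = np.arange(N)
w = np.exp(-np.abs(n - (N - 1) / 2) / tau)   # Poisson (exponential) window

# The attenuation at the window edge should be close to -D dB
# (slightly less, because the edge sits at (N-1)/2 rather than N/2).
edge_dB = 20 * np.log10(w[0])
assert abs(edge_dB + D) < 1.0
assert w.max() <= 1.0
```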
Rife–Vincent window

DPSS or Slepian window
The DPSS (discrete prolate spheroidal sequence) or Slepian window is used to maximize the energy concentration in the main lobe.[31]
Comparison of windows

Overlapping windows
When the length of a data set to be transformed is larger than necessary to provide the desired frequency resolution, a common practice is to subdivide it into smaller sets and window them individually. To mitigate the "loss" at the edges of the window, the individual sets may overlap in time. See the Welch method of power spectral analysis and the modified discrete cosine transform.
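The overlap-and-average idea can be sketched in a few lines (a minimal Welch-style estimator; the segment length and overlap are illustrative choices):

```python
import numpy as np

def welch_psd(x, seg_len=256, overlap=128):
    """Average windowed periodograms of overlapping segments."""
    hann = 0.5 * (1 - np.cos(2 * np.pi * np.arange(seg_len) / seg_len))
    step = seg_len - overlap
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    # Window each segment, take its periodogram, and average.
    psd = np.mean(
        [np.abs(np.fft.rfft(s * hann)) ** 2 for s in segs], axis=0)
    return psd / np.sum(hann ** 2)   # compensate for the window's power

# A sinusoid at 50 cycles per 256 samples should peak in bin 50.
t = np.arange(4096)
x = np.sin(2 * np.pi * 50 / 256 * t)
psd = welch_psd(x)
assert np.argmax(psd) == 50
```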
Two-dimensional windows
Two-dimensional windows are utilized in, e.g., image processing. They can be constructed from one-dimensional windows in either of two forms.[32] The separable form, W(m,n) = w(m)·w(n), is trivial to compute. The radial form, W(m,n) = w(r), which involves the radius r = √((m − M/2)² + (n − N/2)²), is isotropic.