
Digital signal

From Wikipedia, the free encyclopedia

Main article: Signal (electronics)

A digital signal is a physical signal that represents a sequence of discrete values (a quantized discrete-time signal), for example an arbitrary bit stream or a digitized (sampled and analog-to-digital converted) analog signal. The term digital signal can refer to:

1. a continuous-time waveform signal used in any form of digital communication;
2. a pulse-train signal that switches between a discrete number of voltage levels or levels of light intensity, also known as a line-coded signal, for example a signal found in digital electronics or in serial communications using digital baseband transmission, or a pulse-code modulation (PCM) representation of a digitized analog signal.

A signal generated by means of a digital modulation method (digital passband transmission), as produced by a modem, is considered a digital signal in the first sense and an analog signal in the second.
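As a concrete illustration of the second sense, an arbitrary bit stream can be turned into a pulse train by holding one of two voltage levels for the duration of each bit, the simplest form of line coding (non-return-to-zero). The 0 V / 5 V levels and the helper name below are illustrative assumptions, not part of any standard:

```python
# Sketch: a simple NRZ line code mapping bits onto two discrete voltage
# levels. The 0 V / 5 V levels and samples_per_bit are illustrative.

def nrz_encode(bits, samples_per_bit=4, low=0.0, high=5.0):
    """Return the pulse-train waveform for a bit sequence."""
    waveform = []
    for b in bits:
        level = high if b else low          # each bit picks one level
        waveform.extend([level] * samples_per_bit)  # hold it for the bit period
    return waveform

print(nrz_encode([1, 0, 1], samples_per_bit=2))
# [5.0, 5.0, 0.0, 0.0, 5.0, 5.0]
```

Each bit becomes a flat pulse at one of the two discrete levels; the waveform as a whole is still an analog voltage, but it is interpreted in terms of only two levels.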

Waveforms in digital systems

A digital signal waveform: (1) low level, (2) high level, (3) rising edge, and (4) falling edge.

Main article: Digital

In computer architecture and other digital systems, a waveform that switches between two voltage levels representing the two states of a Boolean value (0 and 1) is referred to as a digital signal, even though it is an analog voltage waveform, since it is interpreted in terms of only two levels. The clock signal is a special digital signal used to synchronize digital circuits; the image shown can be considered the waveform of a clock signal. Logic changes are triggered either by the rising edge or by the falling edge. The diagram shows a practical (non-ideal) pulse, which motivates two terms:

Rising edge: the transition from a low voltage (level 1 in the diagram) to a high voltage (level 2). Falling edge: the transition from a high voltage to a low one.

Although in a highly simplified and idealised model of a digital circuit we may wish for these transitions to occur instantaneously, no real-world circuit is purely resistive, and therefore no circuit can change voltage levels instantly. This means that during a short, finite transition time the output may not properly reflect the input, and may not correspond to either a logically high or low voltage.
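These two kinds of transition are easy to pick out of a sampled waveform. The following sketch classifies threshold crossings; the 2.5 V threshold and the example clock samples are illustrative assumptions:

```python
# Illustrative sketch: classifying transitions in a sampled two-level
# waveform. The 2.5 V threshold is an assumed midpoint, not a standard.

def find_edges(waveform, threshold=2.5):
    """Return (index, kind) pairs where the signal crosses the threshold."""
    edges = []
    for i in range(1, len(waveform)):
        prev_high = waveform[i - 1] > threshold
        curr_high = waveform[i] > threshold
        if not prev_high and curr_high:
            edges.append((i, "rising"))    # low-to-high transition
        elif prev_high and not curr_high:
            edges.append((i, "falling"))   # high-to-low transition
    return edges

clock = [0, 0, 5, 5, 0, 0, 5, 5]           # an idealized clock waveform
print(find_edges(clock))  # [(2, 'rising'), (4, 'falling'), (6, 'rising')]
```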

Logic voltage levels

Hobbyist frequency counter circuit built almost entirely of TTL logic chips.

Main article: Logic level

The two states of a wire are usually represented by some measurement of an electrical property: voltage is the most common, but current is used in some logic families. A threshold is designed for each logic family; when the signal is below that threshold, the wire is "low", and when above it, "high". Digital circuits establish a "no man's area" or "exclusion zone" that is wider than the tolerances of the components, and avoid that zone in order to avoid indeterminate results. It is usual to allow some tolerance in the voltage levels used; for example, 0 to 2 volts might represent logic 0, and 3 to 5 volts logic 1. A voltage of 2 to 3 volts would be invalid and would occur only in a fault condition or during a logic-level transition. However, few logic circuits can detect such a condition, and most devices will interpret the signal simply as high or low in an undefined or device-specific manner. Some logic devices incorporate Schmitt trigger inputs, whose behaviour is much better defined in the threshold region and which have increased resilience to small variations in the input voltage.

The levels represent the binary integers or logic levels 0 and 1. In active-high logic, "low" represents binary 0 and "high" represents binary 1. Active-low logic uses the reverse representation.

Examples of binary logic levels:

Technology  L voltage           H voltage        Notes
CMOS        0 V to VCC/2        VCC/2 to VCC     VCC = supply voltage
TTL         0 V to 0.8 V        2 V to VCC       VCC is 4.75 V to 5.25 V
ECL         -1.175 V to -VEE    0.75 V to 0 V    VEE is about -5.2 V; VCC = ground
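The better-defined behaviour of a Schmitt trigger input comes from hysteresis: it uses two thresholds instead of one, so a signal wandering inside the invalid region cannot make the output chatter. A minimal simulation, with thresholds borrowed from the TTL row above purely for illustration:

```python
# Sketch of Schmitt trigger behaviour: two thresholds (hysteresis)
# instead of one. The 0.8 V / 2.0 V values echo the TTL levels but are
# used here only as illustrative numbers.

def schmitt(samples, v_low=0.8, v_high=2.0, state=False):
    """Interpret a noisy analog waveform with hysteresis."""
    out = []
    for v in samples:
        if v >= v_high:
            state = True    # switch high only above the upper threshold
        elif v <= v_low:
            state = False   # switch low only below the lower threshold
        # between the thresholds the previous state is simply held
        out.append(1 if state else 0)
    return out

# A noisy input hovering in the invalid 0.8-2.0 V band does not chatter:
noisy = [0.0, 1.5, 1.2, 2.4, 1.5, 1.9, 0.5]
print(schmitt(noisy))  # [0, 0, 0, 1, 1, 1, 0]
```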

See also


Digital signal processing
Nyquist–Shannon sampling theorem
Whittaker–Shannon interpolation formula
Intersymbol interference in digital communication

Retrieved from "http://en.wikipedia.org/w/index.php?title=Digital_signal&oldid=478566652"

This page was last modified on 24 February 2012 at 07:06.

Digital signal processing


From Wikipedia, the free encyclopedia

Digital signal processing (DSP) is concerned with the representation of discrete-time, discrete-frequency, or other discrete-domain signals by a sequence of numbers or symbols, and with the processing of these signals. Digital signal processing and analog signal processing are subfields of signal processing. DSP includes subfields such as audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, and seismic data processing.

The goal of DSP is usually to measure, filter and/or compress continuous real-world analog signals. The first step is usually to convert the signal from an analog to a digital form by sampling and then digitizing it using an analog-to-digital converter (ADC), which turns the analog signal into a stream of numbers. Often, however, the required output is another analog signal, which requires a digital-to-analog converter (DAC). Even though this process is more complex than analog processing and has a discrete value range, the application of computational power to digital signal processing allows for many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.[1]

DSP algorithms have long been run on standard computers, on specialized processors called digital signal processors, and on purpose-built hardware such as application-specific integrated circuits (ASICs). Today there are additional technologies used for digital signal processing, including more powerful general-purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors, among others.[2]

Contents

1 Signal sampling
2 DSP domains
  2.1 Time and space domains
  2.2 Frequency domain
  2.3 Z-plane analysis
  2.4 Wavelet
3 Applications
4 Implementation
5 Techniques
6 Related fields
7 References

Signal sampling


Main article: Sampling (signal processing)

With the increasing use of computers, the usage of and need for digital signal processing has increased. To use an analog signal on a computer, it must be digitized with an analog-to-digital converter. Sampling is usually carried out in two stages: discretization and quantization. In the discretization stage, the space of signals is partitioned into equivalence classes, and the signal is replaced with a representative signal of the corresponding equivalence class. In the quantization stage, the representative signal values are approximated by values from a finite set. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency of the signal, although exact reconstruction requires an infinite number of samples. In practice, the sampling frequency is often significantly more than twice that required by the signal's limited bandwidth.
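The sampling-rate requirement can be seen directly in a few lines of Python: sampling below twice the signal frequency makes two different tones produce the same samples (aliasing). The frequencies chosen here are illustrative:

```python
import math

# Sketch of the Nyquist condition. A 3 Hz sine sampled at 4 Hz (below
# its 6 Hz Nyquist rate) is indistinguishable from a 1 Hz sine: it
# "aliases". All frequencies are illustrative choices.

def sample(freq_hz, rate_hz, n):
    """Take n samples of a sine wave of the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

s3 = sample(3, 4, 8)   # 3 Hz tone, undersampled at 4 Hz
s1 = sample(1, 4, 8)   # 1 Hz tone at the same rate
# The two sample sequences coincide up to sign:
# sin(2*pi*3*k/4) == -sin(2*pi*1*k/4) for every integer k.
assert all(abs(a + b) < 1e-9 for a, b in zip(s3, s1))
```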

DSP domains


In DSP, engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal. A sequence of samples from a measuring device produces a time- or spatial-domain representation, whereas a discrete Fourier transform produces the frequency-domain information, that is, the frequency spectrum. Autocorrelation is defined as the cross-correlation of the signal with itself over varying intervals of time or space.
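The step from a sequence of samples to the frequency spectrum can be sketched with a naive discrete Fourier transform. This direct O(n^2) form is for illustration only; practical code uses the fast Fourier transform (FFT):

```python
import cmath
import math

# Minimal sketch of obtaining the frequency spectrum from samples:
# a naive discrete Fourier transform (O(n^2); real code uses the FFT).

def dft(samples):
    n = len(samples)
    return [sum(samples[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                for k in range(n))
            for f in range(n)]

# Eight samples of a cosine with 2 cycles per window: the magnitude
# spectrum peaks at bins 2 and 6, the positive- and negative-frequency
# components of the tone.
x = [math.cos(2 * math.pi * 2 * k / 8) for k in range(8)]
mags = [abs(c) for c in dft(x)]
peak_bins = [i for i, m in enumerate(mags) if m > 1.0]
print(peak_bins)  # [2, 6]
```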
Time and space domains

Main article: Time domain

The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters; for example:

A "linear" filter is a linear transformation of input samples; other filters are "non-linear". Linear filters satisfy the superposition condition: if an input is a weighted linear combination of different signals, the output is an equally weighted linear combination of the corresponding output signals.
A "causal" filter uses only previous samples of the input or output signals, while a "non-causal" filter uses future input samples. A non-causal filter can usually be changed into a causal filter by adding a delay to it.
A "time-invariant" filter has constant properties over time; other filters, such as adaptive filters, change in time.
A "stable" filter produces an output that converges to a constant value with time, or remains bounded within a finite interval. An "unstable" filter can produce an output that grows without bounds, with bounded or even zero input.
A "finite impulse response" (FIR) filter uses only the input signals, while an "infinite impulse response" (IIR) filter uses both the input signal and previous samples of the output signal. FIR filters are always stable, while IIR filters may be unstable.

Filters can be represented by block diagrams, which can then be used to derive a sample processing algorithm to implement the filter with hardware instructions. A filter may also be described as a difference equation, a collection of zeroes and poles or, if it is an FIR filter, an impulse response or step response. The output of a digital filter to any given input may be calculated by convolving the input signal with the impulse response.
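The last point, computing a filter's output by convolving the input with the impulse response, can be sketched directly. The 3-tap smoothing impulse response below is an illustrative FIR filter, not one from the text:

```python
# Sketch: a digital filter's output computed by convolving the input
# with the filter's impulse response. The 3-tap smoothing taps are an
# illustrative FIR filter.

def convolve(x, h):
    """Full linear convolution of sequences x and h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj   # each input sample scales a shifted copy of h
    return y

h = [0.25, 0.5, 0.25]    # impulse response of a simple smoothing FIR filter
x = [4.0, 4.0, 4.0]      # a constant input segment
print(convolve(x, h))    # [1.0, 3.0, 4.0, 3.0, 1.0]
```

The ramps at both ends are the filter's transient response as the input segment slides in and out of the taps; the steady-state middle sample reproduces the input level.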
Frequency domain

Main article: Frequency domain

Signals are converted from the time or space domain to the frequency domain, usually through the Fourier transform. The Fourier transform converts the signal information to a magnitude and a phase component for each frequency. Often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared.

The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. In addition to frequency information, phase information is often needed; this can be obtained from the Fourier transform. In some applications, how the phase varies with frequency can be a significant consideration.

Filtering, particularly in non-real-time work, can also be achieved by converting to the frequency domain, applying the filter, and then converting back to the time domain. This is a fast, O(n log n) operation, and can give essentially any filter shape, including excellent approximations to brick-wall filters.

There are some commonly used frequency-domain transformations. For example, the cepstrum converts a signal to the frequency domain through the Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the frequency components with smaller magnitude while retaining the order of magnitudes of the frequency components.

Frequency-domain analysis is also called spectrum analysis or spectral analysis.
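The convert-filter-convert-back approach can be sketched end to end. A naive DFT stands in for the FFT here, and the signal and the removed bins are illustrative choices:

```python
import cmath
import math

# Sketch of filtering in the frequency domain: transform, zero the
# unwanted frequency bins, transform back. A naive O(n^2) DFT stands in
# for the FFT; the signal and bin choices are illustrative.

def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * f * k / n)
               for k in range(n))
           for f in range(n)]
    return [v / n for v in out] if inverse else out

# Input: a DC level of 1.0 plus an unwanted 2-cycles-per-window tone.
x = [1.0 + math.cos(2 * math.pi * 2 * k / 8) for k in range(8)]
spectrum = dft(x)
spectrum[2] = spectrum[6] = 0   # "brick-wall" removal of the tone's bins
y = [v.real for v in dft(spectrum, inverse=True)]
# After filtering, only the DC level survives: every sample is ~1.0.
assert all(abs(v - 1.0) < 1e-9 for v in y)
```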


Z-plane analysis

Main article: Z-transform

Whereas analog filters are usually analysed in terms of transfer functions in the s plane using Laplace transforms, digital filters are analysed in the z plane in terms of Z-transforms. A digital filter may be described in the z plane by its characteristic collection of zeroes and poles. The z plane provides a means for mapping digital frequency (in samples per second) to real and imaginary z components, where z = re^(jω) for continuous periodic signals and r = 1 (ω is the digital frequency). This is useful for providing a visualization of the frequency response of a digital system or signal.
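Evaluating a transfer function on the unit circle of the z plane gives the frequency response directly. As an illustrative example (not one from the text), take the first-difference filter H(z) = 1 - z^(-1), which has a zero at z = 1:

```python
import cmath
import math

# Sketch: a digital filter's frequency response is its transfer function
# H(z) evaluated on the unit circle, z = e^(j*omega). H(z) = 1 - z^(-1)
# (the first difference) is an illustrative choice; its zero at z = 1
# means it rejects DC and passes high frequencies.

def freq_response(omega):
    z = cmath.exp(1j * omega)   # a point on the unit circle
    return 1 - 1 / z            # H(z) = 1 - z^(-1)

print(abs(freq_response(0.0)))      # 0.0 -- the zero at z = 1 kills DC
print(abs(freq_response(math.pi)))  # 2.0 -- maximal gain at the Nyquist frequency
```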
Wavelet

Main article: Discrete wavelet transform

An example of the 2D discrete wavelet transform that is used in JPEG2000. The original image is high-pass filtered, yielding the three large images, each describing local changes in brightness (details) in the original image. It is then low-pass filtered and downscaled, yielding an approximation image; this image is high-pass filtered to produce the three smaller detail images, and low-pass filtered to produce the final approximation image in the upper-left.

In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information (location in time).
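The idea of capturing both frequency and location can be sketched with the simplest wavelet, the Haar wavelet. The version below is an unnormalized averaging variant, chosen for readability over the orthonormal form:

```python
# Minimal sketch of a one-level discrete wavelet transform using the
# Haar wavelet (an unnormalized averaging variant, for readability):
# pairwise averages give the low-pass approximation, pairwise
# differences the high-pass detail. Both halves keep *where* each
# change occurs, unlike a Fourier spectrum.

def haar_dwt(x):
    """One decomposition level; len(x) must be even."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

a, d = haar_dwt([4.0, 6.0, 10.0, 12.0])
print(a)  # [5.0, 11.0]   -- the signal at half resolution
print(d)  # [-1.0, -1.0]  -- local changes, located in time
```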

Applications

The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, RADAR, SONAR, seismology and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, MP3 compression, computer graphics, image manipulation, hi-fi loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

Implementation
Depending on the requirements of the application, digital signal processing tasks can be implemented on general-purpose computers (e.g. supercomputers, mainframe computers, or personal computers) or with embedded processors that may or may not include specialized microprocessors called digital signal processors.

Often, when the processing requirement is not real-time, processing is economically done with an existing general-purpose computer, and the signal data (either input or output) exists in data files. This is essentially no different from any other data processing, except that DSP mathematical techniques (such as the FFT) are used and the sampled data is usually assumed to be uniformly sampled in time or space. An example is processing digital photographs with software such as Photoshop.

When the application requirement is real-time, however, DSP is often implemented using specialised microprocessors such as the DSP56000, the TMS320, or the SHARC. These often process data using fixed-point arithmetic, though some more powerful versions use floating-point arithmetic. For faster applications FPGAs[3] might be used. Beginning in 2007, multi-core implementations of DSPs started to emerge from companies including Freescale and Stream Processors, Inc. For faster applications with vast usage, ASICs might be designed specifically. For slow applications, a traditional slower processor such as a microcontroller may be adequate. A growing number of DSP applications are now implemented on embedded systems using powerful PCs with multi-core processors.

Techniques

Bilinear transform
Discrete Fourier transform
Discrete-time Fourier transform
Filter design
LTI system theory
Minimum phase
Transfer function
Z-transform
Goertzel algorithm
s-plane

Related fields

Analog signal processing
Automatic control
Computer engineering
Computer science
Data compression
Dataflow programming
Electrical engineering
Fourier analysis
Information theory
Machine learning
Real-time computing
Stream processing
Telecommunication
Time series
Wavelet

References

1. ^ James D. Broesch, Dag Stranneby and William Walker. Digital Signal Processing: Instant Access. Butterworth-Heinemann. p. 3.
2. ^ Dag Stranneby and William Walker (2004). Digital Signal Processing and Applications (2nd ed.). Elsevier. ISBN 0750663448. http://books.google.com/books?id=NKK1DdqcDVUC&pg=PA241
3. ^ JpFix. "FPGA-Based Image Processing Accelerator". http://www.jpfix.com/About_Us/Articles/FPGA-Based_Image_Processing_Ac/fpga-based_image_processing_ac.html. Retrieved 2008-05-10.


Retrieved from "http://en.wikipedia.org/w/index.php?title=Digital_signal_processing&oldid=481912280"

This page was last modified on 14 March 2012 at 20:46.
Digital signal processing


From Wikipedia, the free encyclopedia Jump to: navigation, search This article needs additional citations for verification. Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed.
(May 2008)

Digital signal processing (DSP) is concerned with the representation of discrete time, discrete frequency, or other discrete domain signals by a sequence of numbers or symbols and the processing of these signals. Digital signal processing and analog signal processing are subfields of signal processing. DSP includes subfields like: audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, seismic data processing, etc. The goal of DSP is usually to measure, filter and/or compress continuous real-world analog signals. The first step is usually to convert the signal from an analog to a digital form, by sampling and then digitizing it using an analog-to-digital converter (ADC), which turns the analog signal into a stream of numbers. However, often, the required output signal is another analog output signal, which requires a digital-to-analog converter (DAC). Even if this process is more complex than analog processing and has a discrete value range, the application of computational power to digital signal processing allows for many advantages over analog

processing in many applications, such as error detection and correction in transmission as well as data compression.[1] DSP algorithms have long been run on standard computers, on specialized processors called digital signal processor on purpose-built hardware such as application-specific integrated circuit (ASICs). Today there are additional technologies used for digital signal processing including more powerful general purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial apps such as motor control), and stream processors, among others.[2]

Contents
[hide]

1 Signal sampling 2 DSP domains o 2.1 Time and space domains o 2.2 Frequency domain o 2.3 Z-plane analysis o 2.4 Wavelet 3 Applications 4 Implementation 5 Techniques 6 Related fields 7 References 8 Further reading

[edit] Signal sampling


Main article: Sampling (signal processing)

With the increasing use of computers the usage of and need for digital signal processing has increased. To use an analog signal on a computer, it must be digitized with an analog-todigital converter. Sampling is usually carried out in two stages, discretization and quantization. In the discretization stage, the space of signals is partitioned into equivalence classes and quantization is carried out by replacing the signal with representative signal of the corresponding equivalence class. In the quantization stage the representative signal values are approximated by values from a finite set. The NyquistShannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency of the signal; but requires an infinite number of samples. In practice, the sampling frequency is often significantly more than twice that required by the signal's limited bandwidth.

[edit] DSP domains

In DSP, engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the domain to process a signal in by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal. A sequence of samples from a measuring device produces a time or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain information, that is the frequency spectrum. Autocorrelation is defined as the crosscorrelation of the signal with itself over varying intervals of time or space.
[edit] Time and space domains Main article: Time domain

The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters; for example:

A "linear" filter is a linear transformation of input samples; other filters are "non-linear". Linear filters satisfy the superposition condition, i.e. if an input is a weighted linear combination of different signals, the output is an equally weighted linear combination of the corresponding output signals. A "causal" filter uses only previous samples of the input or output signals; while a "noncausal" filter uses future input samples. A non-causal filter can usually be changed into a causal filter by adding a delay to it. A "time-invariant" filter has constant properties over time; other filters such as adaptive filters change in time. A "stable" filter produces an output that converges to a constant value with time, or remains bounded within a finite interval. An "unstable" filter can produce an output that grows without bounds, with bounded or even zero input. A "finite impulse response" (FIR) filter uses only the input signals, while an "infinite impulse response" filter (IIR) uses both the input signal and previous samples of the output signal. FIR filters are always stable, while IIR filters may be unstable.

Filters can be represented by block diagrams, which can then be used to derive a sample processing algorithm to implement the filter with hardware instructions. A filter may also be described as a difference equation, a collection of zeroes and poles or, if it is an FIR filter, an impulse response or step response. The output of a digital filter to any given input may be calculated by convolving the input signal with the impulse response.
[edit] Frequency domain Main article: Frequency domain

Signals are converted from time or space domain to the frequency domain usually through the Fourier transform. The Fourier transform converts the signal information to a magnitude and

phase component of each frequency. Often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared. The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. In addition to frequency information, phase information is often needed. This can be obtained from the Fourier transform. With some applications, how the phase varies with frequency can be a significant consideration. Filtering, particularly in non-realtime work can also be achieved by converting to the frequency domain, applying the filter and then converting back to the time domain. This is a fast, O(n log n) operation, and can give essentially any filter shape including excellent approximations to brickwall filters. There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to the frequency domain through Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the frequency components with smaller magnitude while retaining the order of magnitudes of frequency components. Frequency domain analysis is also called spectrum- or spectral analysis.
[edit] Z-plane analysis Main article: Z-transform

Whereas analog filters are usually analysed in terms of transfer functions in the s plane using Laplace transforms, digital filters are analysed in the z plane in terms of Z-transforms. A digital filter may be described in the z plane by its characteristic collection of zeroes and poles. The z plane provides a means for mapping digital frequency (samples/second) to real and imaginary z components, were for continuous periodic signals and ( is the digital frequency). This is useful for providing a visualization of the frequency response of a digital system or signal.
[edit] Wavelet Main article: Discrete wavelet transform

An example of the 2D discrete wavelet transform that is used in JPEG2000. The original image is high-pass filtered, yielding the three large images, each describing local changes in brightness (details) in the original image. It is then low-pass filtered and downscaled, yielding an approximation image; this image is high-pass filtered to produce the three smaller detail images, and low-pass filtered to produce the final approximation image in the upper-left.

In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information (location in time).
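As a minimal illustration of a discretely sampled wavelet transform, the sketch below implements one level of the Haar DWT, the simplest wavelet: pairwise averages form the low-pass (approximation) band and pairwise differences the high-pass (detail) band, mirroring the filter-and-downsample structure described in the JPEG 2000 caption above. The input sequence is made up for the example.

```python
import math

def haar_dwt_step(x):
    # One level of the Haar DWT: pairwise averages (approximation) and
    # pairwise differences (detail), each scaled by 1/sqrt(2) so the
    # transform is orthonormal.
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt_step(approx, detail):
    # Exact inverse of haar_dwt_step.
    s = math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt_step(x)
# Flat regions of the signal produce zero detail coefficients (the last
# pair, 5.0/5.0, gives d[3] == 0), and reconstruction is exact.
y = haar_idwt_step(a, d)
```

Repeating the step on the approximation band gives the multi-level decomposition shown in the image description.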

[edit] Applications
The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, RADAR, SONAR, seismology and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, MP3 compression, computer graphics, image manipulation, hi-fi loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

[edit] Implementation
Depending on the requirements of the application, digital signal processing tasks can be implemented on general-purpose computers (e.g. supercomputers, mainframe computers, or personal computers) or with embedded processors that may or may not include specialized microprocessors called digital signal processors.

Often when the processing requirement is not real-time, processing is economically done with an existing general-purpose computer and the signal data (either input or output) exists in data files. This is essentially no different from any other data processing, except DSP mathematical techniques (such as the FFT) are used, and the sampled data is usually assumed to be uniformly sampled in time or space. An example is processing digital photographs with software such as Photoshop. However, when the application requirement is real-time, DSP is often implemented using specialised microprocessors such as the DSP56000, the TMS320, or the SHARC. These often process data using fixed-point arithmetic, though some more powerful versions use floating-point arithmetic. For faster applications FPGAs[3] might be used. Beginning in 2007, multicore implementations of DSPs started to emerge from companies including Freescale and Stream Processors, Inc. For faster applications with vast usage, ASICs might be designed specifically. For slow applications, a traditional slower processor such as a microcontroller may be adequate. A growing number of DSP applications are also being implemented on embedded systems using powerful PCs with a multi-core processor.
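The fixed-point arithmetic mentioned above can be sketched with the common Q15 format (16-bit signed integers with 15 fractional bits) that many fixed-point DSPs use; the helper names below are illustrative, not from any particular vendor library.

```python
def float_to_q15(x):
    # Q15: 16-bit signed, 15 fractional bits; representable range [-1, 1).
    # Values outside the range saturate, as DSP hardware typically does.
    return max(-32768, min(32767, int(round(x * 32768))))

def q15_to_float(q):
    return q / 32768.0

def q15_mul(a, b):
    # Multiply two Q15 values: the raw product has 30 fractional bits and
    # is shifted back to 15, as a fixed-point DSP's MAC unit would do.
    return (a * b) >> 15

a = float_to_q15(0.5)
b = float_to_q15(0.25)
prod = q15_to_float(q15_mul(a, b))   # 0.5 * 0.25 = 0.125, exact in Q15
```

The trade-off against floating point is visible here: every multiply discards 15 low bits, so quantization noise accumulates through long filter chains.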

[edit] Techniques

Bilinear transform
Discrete Fourier transform
Discrete-time Fourier transform
Filter design
LTI system theory
Minimum phase
Transfer function
Z-transform
Goertzel algorithm
s-plane

[edit] Related fields


Analog signal processing
Automatic control
Computer engineering
Computer science
Data compression
Dataflow programming
Electrical engineering
Fourier analysis
Information theory
Machine learning
Real-time computing
Stream processing
Telecommunication
Time series
Wavelet

[edit] References

1. ^ James D. Broesch, Dag Stranneby and William Walker. Digital Signal Processing: Instant Access. Butterworth-Heinemann. p. 3.
2. ^ Dag Stranneby and William Walker (2004). Digital Signal Processing and Applications (2nd ed.). Elsevier. ISBN 0750663448. http://books.google.com/books?id=NKK1DdqcDVUC&pg=PA241
3. ^ JpFix. "FPGA-Based Image Processing Accelerator". http://www.jpfix.com/About_Us/Articles/FPGA-Based_Image_Processing_Ac/fpga-based_image_processing_ac.html. Retrieved 2008-05-10.

[edit] Further reading


Wikibooks has a book on the topic of Digital Signal Processing

Alan V. Oppenheim, Ronald W. Schafer, John R. Buck: Discrete-Time Signal Processing, Prentice Hall, ISBN 0-13-754920-2
Boaz Porat: A Course in Digital Signal Processing, Wiley, ISBN 0471149616
Richard G. Lyons: Understanding Digital Signal Processing, Prentice Hall, ISBN 0-13-108989-7
Jonathan Yaakov Stein: Digital Signal Processing, a Computer Science Perspective, Wiley, ISBN 0-471-29546-9
Sen M. Kuo, Woon-Seng Gan: Digital Signal Processors: Architectures, Implementations, and Applications, Prentice Hall, ISBN 0-13-035214-4
Bernard Mulgrew, Peter Grant, John Thompson: Digital Signal Processing - Concepts and Applications, Palgrave Macmillan, ISBN 0-333-96356-3
Steven W. Smith: Digital Signal Processing - A Practical Guide for Engineers and Scientists, Newnes, ISBN 0-7506-7444-X, ISBN 0-9660176-3-3
Paul A. Lynn, Wolfgang Fuerst: Introductory Digital Signal Processing with Computer Applications, John Wiley & Sons, ISBN 0-471-97984-8
James D. Broesch: Digital Signal Processing Demystified, Newnes, ISBN 1-878707-16-7
John G. Proakis, Dimitris Manolakis: Digital Signal Processing - Principles, Algorithms and Applications, Pearson, ISBN 0-13-394289-9
Hari Krishna Garg: Digital Signal Processing Algorithms, CRC Press, ISBN 0-8493-7178-3
P. Gaydecki: Foundations of Digital Signal Processing: Theory, Algorithms and Hardware Design, Institution of Electrical Engineers, ISBN 0-85296-431-5
Gibson, John: Spectral Delay as a Compositional Resource. eContact! 11.4, Toronto Electroacoustic Symposium 2009 (TES) / Symposium électroacoustique 2009 de Toronto (December 2009). Montréal: CEC.
Paul M. Embree, Damon Danieli: C++ Algorithms for Digital Signal Processing, Prentice Hall, ISBN 0-13-179144-3
Anthony Zaknich: Neural Networks for Intelligent Signal Processing, World Scientific Pub Co Inc, ISBN 981-238-305-0
Vijay Madisetti, Douglas B. Williams: The Digital Signal Processing Handbook, CRC Press, ISBN 0-8493-8572-5
Stergios Stergiopoulos: Advanced Signal Processing Handbook: Theory and Implementation for Radar, Sonar, and Medical Imaging Real-Time Systems, CRC Press, ISBN 0-8493-3691-0
Joyce Van De Vegte: Fundamentals of Digital Signal Processing, Prentice Hall, ISBN 0-13-016077-6
Ashfaq Khan: Digital Signal Processing Fundamentals, Charles River Media, ISBN 1-58450-281-9
Jonathan M. Blackledge, Martin Turner: Digital Signal Processing: Mathematical and Computational Methods, Software Development and Applications, Horwood Publishing, ISBN 1-898563-48-9
Bimal Krishna, K. Y. Lin, Hari C. Krishna: Computational Number Theory & Digital Signal Processing, CRC Press, ISBN 0-8493-7177-5
Doug Smith: Digital Signal Processing Technology: Essentials of the Communications Revolution, American Radio Relay League, ISBN 0-87259-819-5
Henrique S. Malvar: Signal Processing with Lapped Transforms, Artech House Publishers, ISBN 0-89006-467-9
Charles A. Schuler: Digital Signal Processing: A Hands-On Approach, McGraw-Hill, ISBN 0-07-829744-3
James H. McClellan, Ronald W. Schafer, Mark A. Yoder: Signal Processing First, Prentice Hall, ISBN 0-13-090999-8
Artur Krukowski, Izzet Kale: DSP System Design: Complexity Reduced IIR Filter Implementation for Practical Applications, Kluwer Academic Publishers, ISBN 1-4020-7558-8
Kainam Thomas Wong: Statistical Signal Processing lecture notes at the University of Waterloo, Canada.
John G. Proakis: A Self-Study Guide for Digital Signal Processing, Prentice Hall, ISBN 0-13-143239-7

Retrieved from "http://en.wikipedia.org/w/index.php?title=Digital_signal_processing&oldid=481912280"

This page was last modified on 14 March 2012 at 20:46. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply.

Whittaker–Shannon interpolation formula


From Wikipedia, the free encyclopedia

The Whittaker–Shannon interpolation formula, or sinc interpolation, is a method to reconstruct a continuous-time bandlimited signal from a set of equally spaced samples.


[edit] Definition
The interpolation formula, as it is commonly called, dates back to the work of E. Borel in 1898 and E. T. Whittaker in 1915; it was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series. The sampling theorem states that, under certain limiting conditions, a function x(t) can be recovered exactly from its samples, x[n] = x(nT), by the Whittaker–Shannon interpolation formula:

x(t) = Σ_{n=−∞}^{∞} x[n] · sinc((t − nT)/T),

where T = 1/fs is the sampling interval, fs is the sampling rate, and sinc(x) is the normalized sinc function.
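A direct transcription of the formula can be sketched as follows; the true sum runs over all integers n, so truncating to a finite set of samples, as done here, is only an approximation away from the sample instants (the sample values themselves are always reproduced exactly).

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1.
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T, t):
    # Whittaker-Shannon interpolation from a finite list x[n] = x(nT).
    # A true reconstruction sums over all n in Z; truncation makes this
    # an approximation except at the sample instants themselves.
    return sum(x_n * sinc((t - n * T) / T) for n, x_n in enumerate(samples))

T = 1.0
samples = [0.0, 1.0, 0.0, -1.0, 0.0]
# At a sample instant t = mT the formula returns x[m] exactly, since
# sinc((mT - nT)/T) is 1 for n == m and 0 for every other integer n.
value = reconstruct(samples, T, 3.0)
```

Between sample instants, every sample contributes through its sinc tail, which is why the convergence conditions discussed below matter for infinite sequences.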

[edit] Validity condition

Spectrum of a bandlimited signal as a function of frequency. The two-sided bandwidth RN = 2B is known as the Nyquist rate for the signal. If the function x(t) is bandlimited and sampled at a high enough rate, the interpolation formula is guaranteed to reconstruct it exactly. Formally, if there exists some B ≥ 0 such that 1. the function x(t) is bandlimited to bandwidth B; that is, its Fourier transform X(f) = 0 for |f| > B; and 2. the sampling rate, fs, exceeds the Nyquist rate, twice the bandwidth: fs > 2B. Equivalently: B < fs/2,

then the interpolation formula will exactly reconstruct the original x(t) from its samples. Otherwise, aliasing may occur; that is, frequencies at or above fs/2 may be erroneously reconstructed. See Aliasing for further discussion on this point.

[edit] Interpolation as convolution sum


The interpolation formula is derived in the Nyquist–Shannon sampling theorem article, which points out that it can also be expressed as the convolution of an infinite impulse train with a sinc function:

x(t) = ( Σ_{n=−∞}^{∞} x[n] · δ(t − nT) ) ∗ sinc(t/T).

This is equivalent to filtering the impulse train with an ideal (brick-wall) low-pass filter.

[edit] Convergence
The interpolation formula always converges absolutely and locally uniformly as long as

Σ_{n∈ℤ} |x[n]| / (1 + |n|) < ∞.

By the Hölder inequality this is satisfied if the sequence (x[n])_{n∈ℤ} belongs to any of the ℓ^p(ℤ) spaces with 1 ≤ p < ∞, that is,

Σ_{n∈ℤ} |x[n]|^p < ∞.

This condition is sufficient, but not necessary. For example, the sum will generally converge if the sample sequence comes from sampling almost any stationary process, in which case the sample sequence is not square summable and is not in any ℓ^p space.

[edit] Stationary random processes


If x[n] is an infinite sequence of samples of a sample function of a wide-sense stationary process, then it is not a member of any ℓ^p or L^p space, with probability 1; that is, the infinite sum of samples raised to a power p does not have a finite expected value. Nevertheless, the interpolation formula converges with probability 1. Convergence can readily be shown by computing the variances of truncated terms of the summation, and showing that the variance can be made arbitrarily small by choosing a sufficient number of terms. If the process mean is nonzero, then pairs of terms need to be considered to also show that the expected value of the truncated terms converges to zero. Since a random process does not have a Fourier transform, the condition under which the sum converges to the original function must also be different. A stationary random process does have an autocorrelation function and hence a spectral density according to the Wiener–Khinchin theorem. A suitable condition for convergence to a sample function from the process is that the spectral density of the process be zero at all frequencies equal to and above half the sample rate.

[edit] See also


Aliasing, Anti-aliasing filter, Spatial anti-aliasing
Fourier transform
Rectangular function
Sampling (signal processing)
Signal (electronics)
Sinc function, Sinc filter

Retrieved from "http://en.wikipedia.org/w/index.php?title=Whittaker%E2%80%93Shannon_interpolation_formula&oldid=484437910"


Intersymbol interference
From Wikipedia, the free encyclopedia


In telecommunication, intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon, as the previous symbols have a similar effect to noise, making the communication less reliable. ISI is usually caused by multipath propagation or the inherent non-linear frequency response of a channel, causing successive symbols to "blur" together. The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI, and thereby deliver the digital data to its destination with the smallest error rate possible. Ways to fight intersymbol interference include adaptive equalization and error correcting codes.[1]

[edit] Causes
[edit] Multipath propagation Main article: Multipath propagation

One of the causes of intersymbol interference is what is known as multipath propagation, in which a wireless signal from a transmitter reaches the receiver via many different paths. The causes of this include reflection (for instance, the signal may bounce off buildings), refraction (such as through the foliage of a tree) and atmospheric effects such as atmospheric ducting and ionospheric reflection. Since all of these paths have different lengths, and some of these effects also slow the signal down, the different versions of the signal arrive at different times. This delay means that part or all of a given symbol will be spread into the subsequent symbols, thereby interfering with the correct detection of those symbols. Additionally, the various paths often distort the amplitude and/or phase of the signal, thereby causing further interference with the received signal.
[edit] Bandlimited channels

Another cause of intersymbol interference is the transmission of a signal through a bandlimited channel, i.e., one where the frequency response is zero above a certain frequency (the cutoff frequency). Passing a signal through such a channel results in the removal of frequency components above this cutoff frequency; in addition, the amplitude of the frequency components below the cutoff frequency may also be attenuated by the channel. This filtering of the transmitted signal affects the shape of the pulse that arrives at the receiver. Filtering a rectangular pulse not only changes the shape of the pulse within the first symbol period, but also spreads it out over the subsequent symbol periods. When a message is transmitted through such a channel, the spread pulse of each individual symbol will interfere with following symbols. As opposed to multipath propagation, bandlimited channels are present in both wired and wireless communications. The limitation is often imposed by the desire to operate multiple independent signals through the same area or cable; because of this, each system is typically allocated a piece of the total bandwidth available. Wireless systems may be allocated a slice of the electromagnetic spectrum to transmit in (for example, FM radio is often broadcast in the 87.5 MHz - 108 MHz range). This allocation is usually administered by a government agency; in the case of the United States this is the Federal Communications Commission (FCC). In a wired system, such as an optical fiber cable, the allocation will be decided by the owner of the cable. The bandlimiting can also be due to the physical properties of the medium - for instance, the cable being used in a wired system may have a cutoff frequency above which practically none of the transmitted signal will propagate. Communication systems that transmit data over bandlimited channels usually implement pulse shaping to avoid interference caused by the bandwidth limitation.
If the channel frequency response is flat and the shaping filter has a finite bandwidth, it is possible to communicate with no ISI at all. Often the channel response is not known beforehand, and an adaptive equalizer is used to compensate the frequency response.
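The pulse-shaping idea can be illustrated with the raised-cosine pulse, a standard Nyquist pulse: it is exactly zero at every nonzero multiple of the symbol period T, so at the sampling instants neighbouring symbols contribute nothing. The sketch below is a plain transcription of the textbook time-domain formula; the roll-off value is an arbitrary choice for the example.

```python
import math

def raised_cosine(t, T, beta):
    # Raised-cosine pulse with symbol period T and roll-off beta in [0, 1].
    # It satisfies the Nyquist ISI criterion: h(0) = 1 and h(kT) = 0 for
    # every nonzero integer k, so symbols do not interfere at the
    # sampling instants.
    if t == 0.0:
        return 1.0
    x = t / T
    if beta > 0.0 and abs(abs(x) - 1.0 / (2.0 * beta)) < 1e-12:
        # Removable singularity of the closed form at t = +/- T/(2*beta).
        a = math.pi / (2.0 * beta)
        return (math.pi / 4.0) * math.sin(a) / a
    return (math.sin(math.pi * x) / (math.pi * x)) \
        * math.cos(math.pi * beta * x) / (1.0 - (2.0 * beta * x) ** 2)

T = 1.0
peak = raised_cosine(0.0, T, 0.35)                               # 1.0
neighbours = [raised_cosine(k * T, T, 0.35) for k in (1, 2, 3)]  # all ~0
```

Larger roll-off widens the occupied bandwidth but makes the pulse tails decay faster, reducing sensitivity to sampling-time errors.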

[edit] Effects on eye patterns


For more details on eye patterns, see eye pattern.

One way to study ISI in a PCM or data transmission system experimentally is to apply the received wave to the vertical deflection plates of an oscilloscope and to apply a sawtooth wave at the transmitted symbol rate R (R = 1/T) to the horizontal deflection plates. The resulting display is called an eye pattern because of its resemblance to the human eye for binary waves. The interior region of the eye pattern is called the eye opening. An eye pattern provides a great deal of information about the performance of the pertinent system.

1. The width of the eye opening defines the time interval over which the received wave can be sampled without error from ISI. It is apparent that the preferred time for sampling is the instant of time at which the eye is open widest. 2. The sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied. 3. The height of the eye opening, at a specified sampling time, defines the margin over noise.

An eye pattern, which overlays many samples of a signal, can give a graphical representation of the signal characteristics. The first image below is the eye pattern for a binary phase-shift keying (PSK) system in which a one is represented by an amplitude of -1 and a zero by an amplitude of +1. The current sampling time is at the center of the image and the previous and next sampling times are at the edges of the image. The various transitions from one sampling time to another (such as one-to-zero, one-to-one and so forth) can clearly be seen on the diagram. The noise margin - the amount of noise required to cause the receiver to get an error - is given by the distance between the signal and the zero-amplitude point at the sampling time; in other words, the further the signal is from zero at the sampling time, the better. For the signal to be correctly interpreted, it must be sampled somewhere between the two points where the zero-to-one and one-to-zero transitions cross. Again, the further apart these points are, the better, as this means the signal will be less sensitive to errors in the timing of the samples at the receiver. The effects of ISI are shown in the second image, which is an eye pattern of the same system when operating over a multipath channel. The effects of receiving delayed and distorted versions of the signal can be seen in the loss of definition of the signal transitions. It also reduces both the noise margin and the window in which the signal can be sampled, which shows that the performance of the system will be worse (i.e. it will have a greater bit error ratio).

The eye diagram of a binary PSK system

The eye diagram of the same system with multipath effects added

[edit] Countering ISI


There are several techniques in telecommunication and data storage that try to work around the problem of intersymbol interference.

Design systems such that the impulse response is short enough that very little energy from one symbol smears into the next symbol.

Consecutive raised-cosine impulses, demonstrating zero-ISI property


Separate symbols in time with guard periods.
Apply an equalizer at the receiver that, broadly speaking, attempts to undo the effect of the channel by applying an inverse filter.
Apply a sequence detector at the receiver that attempts to estimate the sequence of transmitted symbols using the Viterbi algorithm.
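A toy version of the equalizer idea: for a known, noiseless channel with a monic impulse response, the inverse filter can be applied recursively. This is an idealised zero-forcing sketch (a real equalizer must be adaptive and cope with noise, which zero-forcing amplifies); the channel taps and symbols below are made up for the example.

```python
def apply_channel(symbols, h):
    # Convolve the symbol sequence with the channel impulse response h;
    # taps beyond h[0] smear each symbol into its successors (ISI).
    out = [0.0] * (len(symbols) + len(h) - 1)
    for n, s in enumerate(symbols):
        for k, tap in enumerate(h):
            out[n + k] += s * tap
    return out

def zero_forcing_equalize(received, h):
    # Invert a monic channel h = [1, h1, h2, ...] by recursive cancellation:
    # since y[n] = sum_k h[k] * x[n-k], we recover
    # x[n] = y[n] - sum_{k>=1} h[k] * x[n-k].
    x = []
    for n in range(len(received) - len(h) + 1):
        est = received[n]
        for k in range(1, len(h)):
            if n - k >= 0:
                est -= h[k] * x[n - k]
        x.append(est)
    return x

symbols = [1.0, -1.0, 1.0, 1.0, -1.0]
h = [1.0, 0.6, 0.2]                 # hypothetical channel with two ISI taps
rx = apply_channel(symbols, h)      # smeared: neighbouring symbols overlap
eq = zero_forcing_equalize(rx, h)   # recovers the transmitted symbols
```

With noise present, the same recursion would also invert-filter the noise, which is why practical receivers prefer adaptive MMSE equalizers or sequence detection.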

[edit] See also

Nyquist ISI criterion

[edit] References
1. ^ Digital Communications by Simon Haykin, McMaster University

[edit] Further reading


William J. Dally and John W. Poulton (1998). Digital Systems Engineering. Cambridge University Press. pp. 280-285. ISBN 0-521-59292-5.
Hervé Benoit (2002). Digital Television. Focal Press. pp. 90-91. ISBN 0-240-51695-8.

[edit] External links


Definition of ISI from Federal Standard 1037C
Intersymbol Interference concept

Retrieved from "http://en.wikipedia.org/w/index.php?title=Intersymbol_interference&oldid=483322968"

