
Paper Review: A Mathematical Theory of Communication

Jin Woo Shin, Sang Joon Kim

Introduction

This paper opened a new area: information theory. Before this paper, most people believed that the only way to make the error probability of transmission as small as desired was to reduce the data rate (for example, by a long repetition scheme). Surprisingly, this paper revealed that reducing the data rate is not necessary to achieve arbitrarily small error. It proved that there is a positive data rate at which arbitrarily small error probability can still be achieved, and also that there is an upper bound on the data rate: above this bound, no encoding scheme can make the error probability small enough. To review this seminal paper, we first summarize it from our point of view and then discuss several topics related to its contents.

The paper was organized into five parts: PART I: Discrete Noiseless Systems, PART II: The Discrete Channel With Noise, PART III: Mathematical Preliminaries, PART IV: The Continuous Channel, and PART V: The Rate for a Continuous Source. We reconstruct this structure as four parts. We combine PART I and PART III into 2.1 Preliminary; PART I contained the important concepts used in the later parts, such as entropy and the ergodic source, while PART III extended the concept of entropy to the continuous case for the last parts of the paper. PARTs II, IV, and V dealt with the discrete source and discrete channel case, the discrete source and continuous channel case, and the continuous source and continuous channel case, respectively. We renamed each part and summarize them from this point of view.

2.1 Preliminary

There are three essential concepts used to develop the idea of the paper. One is entropy, which measures the uncertainty of the source; another is the ergodic property, which allows the AEP theorem to be applied; the last is capacity, which measures how much information the channel allows to be transferred.

2.1.1 Entropy

To measure the quantity of information of a discrete source, the paper defined entropy. To define the measure H(p_1, p_2, ..., p_n) reasonably, it should satisfy the following three requirements:

1. H should be continuous in the p_i.
2. If all p_i = 1/n, then H should be a monotonically increasing function of n.
3. If the original choice is divided into two successive choices, H should be the weighted sum of the two resulting H's.


And the paper proved that the only function that satisfies the above three properties is

    H = -K \sum_{i=1}^{n} p_i \log p_i

For a continuous source the entropy is defined analogously as

    h = -\int p(x) \log p(x) \, dx

(We use h for the entropy in the continuous case to avoid confusing it with the entropy H in the discrete case.)
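As a quick numerical illustration (our own sketch, not part of the original review), the discrete entropy with K = 1 and base-2 logarithms, so that H is measured in bits:

```python
import numpy as np

def discrete_entropy(p, base=2.0):
    """Shannon entropy H = -sum_i p_i log p_i of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # terms with p_i = 0 contribute nothing
    return float(-np.sum(p * np.log(p)) / np.log(base))

print(discrete_entropy([0.5, 0.5]))  # a fair coin carries 1.0 bit
print(discrete_entropy([0.9, 0.1]))  # a biased coin carries ~0.469 bits
```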

2.1.2 Ergodic source

In a Markov process, we call the graph ergodic if it satisfies the following two properties:

1. IRREDUCIBLE: the graph does not contain two isolated parts A and B, i.e. every node can reach every node.
2. APERIODIC: the GCD of the lengths of all cycles in the graph is 1.

If the source is ergodic, then for any initial condition the state probabilities P_j(N) approach the equilibrium probabilities as N \to \infty, for every j. The equilibrium probabilities satisfy

    P_j = \sum_i P_i \, p_i(j)
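A small numerical check of this convergence (an illustrative sketch with an arbitrary ergodic transition matrix, not an example from the paper):

```python
import numpy as np

def equilibrium(P, steps=200):
    """Iterate p <- p P from an arbitrary initial condition.
    P[i, j] is the transition probability p_i(j) from state i to state j."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(steps):
        p = p @ P
    return p

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])           # irreducible and aperiodic
p_eq = equilibrium(P)
print(p_eq)                          # ~[0.8, 0.2]
print(p_eq @ P)                      # unchanged: P_j = sum_i P_i p_i(j) holds
```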

2.1.3 Capacity

The paper defined capacity in two different ways and, in Theorem 12 on page 25, proved that the two definitions are identical. The two definitions are as follows:

1. C = \lim_{T \to \infty} \frac{\log N(T)}{T}, where N(T) is the number of possible signals and T is the time duration.
2. C = \max \big( H(x) - H(x|y) \big)
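As an illustration of the second definition (our own sketch, not an example from the paper), the capacity of a binary symmetric channel can be computed by maximizing H(x) - H(x|y), i.e. the mutual information, over input distributions; for crossover probability 0.1 the maximum is 1 - H_b(0.1) ≈ 0.531 bits per channel use:

```python
import numpy as np

def mutual_information(p_x, W):
    """I(X;Y) = H(Y) - H(Y|X), which equals H(X) - H(X|Y), for input
    distribution p_x and channel matrix W with W[x, y] = P(y|x)."""
    p_x, W = np.asarray(p_x), np.asarray(W)
    p_y = p_x @ W
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return h(p_y) - np.sum(p_x * np.array([h(row) for row in W]))

eps = 0.1
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])       # binary symmetric channel
rates = [mutual_information([p, 1 - p], W) for p in np.linspace(0.01, 0.99, 99)]
print(max(rates))                    # ~0.531, attained at the uniform input
```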

2.2 Discrete source and discrete channel

The discrete source and discrete channel case means that the source information takes discrete values and the data received through the channel are also discrete values. In this case the paper proved the famous theorem (Theorem 11 on page 22) that there is an upper bound on the rate below which error-free transmission can be achieved. The theorem is as follows:

Theorem 1. If the discrete source entropy H is less than or equal to the channel capacity C, then there exists a code that can be transmitted over the channel with an arbitrarily small amount of error. If H > C, there is no method of encoding which gives equivocation less than H - C.

To prove this theorem, the paper used the assumption that the source is ergodic. With this assumption, the AEP theorem can be applied, and the result is the theorem above. There are several examples that confirm the result of the theorem, and the last one is especially interesting for us: it is a simple encoding function whose rate is equal to the capacity of the given channel, i.e. the most efficient encoding scheme for that channel. The reason this code is interesting is that it is the [7,4,3] Hamming code, the basic linear code. A small sketch of this code is given below.
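The following is a minimal [7,4,3] Hamming encoder and single-error-correcting decoder (one standard systematic form; the paper does not fix a particular matrix, so G and H below are our own illustrative choice):

```python
import numpy as np

# One systematic choice of generator and parity-check matrices for the
# [7,4,3] Hamming code: G = [I | P], H = [P^T | I].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    """Map 4 information bits to a 7-bit codeword."""
    return (np.array(msg) @ G) % 2

def decode(word):
    """Correct any single bit error and return the 4 information bits."""
    syndrome = (H @ word) % 2
    if syndrome.any():
        # a single error at position e gives a syndrome equal to column e of H
        e = int(np.argmax((H.T == syndrome).all(axis=1)))
        word = word.copy()
        word[e] ^= 1
    return word[:4]

msg = [1, 0, 1, 1]
word = encode(msg)
word[2] ^= 1                         # the channel flips one bit
print(decode(word))                  # [1 0 1 1] -- the error is corrected
```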


2.3 Discrete source and continuous channel

2.3.1 The capacity of a continuous channel

In a continuous channel, the input or transmitted signals are continuous functions of time f(t) belonging to a certain set, and the output or received signals are perturbed versions of these. (Although the title of this section mentions a discrete source, a continuous channel assumes a continuous source in general.) The main difference from the discrete case is that the domain of the input and output of the channel becomes infinite. The capacity of a continuous channel is defined in a way analogous to that for a discrete channel, namely

    C = \max_{P(x)} \big( h(x) - h(x|y) \big) = \max_{P(x)} \iint P(x, y) \log \frac{P(x, y)}{P(x) P(y)} \, dx \, dy

Of course, the h in the above definition is a differential entropy.

2.3.2 The transmission rate of a discrete source using a continuous channel

If the logarithmic base used in computing h(x) and h(x|y) is two, then C is the maximum number of binary digits that can be sent per second over the channel with arbitrarily small error, just as in the discrete case. We can check this fact in two ways. First, it can be seen physically by dividing the space of signals into a large number of small cells, sufficiently small that the probability density P(y|x) of signal x being perturbed to point y is substantially constant over a cell. If the cells are considered as distinct points, the situation is essentially the same as a discrete channel. Secondly, on the mathematical side, it can be shown that if u is the message, x is the signal, y is the received signal, and v is the recovered message, then h(x) - h(x|y) \ge H(u) - H(u|v) regardless of what operations are performed on u to obtain x or on y to obtain v. This means that no matter how we encode the binary digits to obtain the signal, and no matter how we decode the received signal to recover the message, the transmission rate for the binary digits does not exceed the channel capacity defined above. The cell argument is illustrated numerically below.
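A rough numerical illustration of the cell argument (our own sketch, not from the paper): quantize the input and output of a Gaussian channel into finer and finer cells and estimate the resulting discrete mutual information from samples. It approaches the continuous value, which for a Gaussian input of power P over noise of power N has the well-known closed form (1/2) log2(1 + P/N):

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr = 200_000, 4.0                    # signal power / noise power
x = rng.normal(0.0, np.sqrt(snr), n)     # Gaussian input with power snr
y = x + rng.normal(0.0, 1.0, n)          # unit-power Gaussian noise

def mi_from_cells(x, y, bins):
    """Treat each small cell as a discrete symbol and compute the
    discrete mutual information of the resulting joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

for bins in (4, 16, 64):
    print(bins, "cells per axis:", round(mi_from_cells(x, y, bins), 3))
print("0.5 * log2(1 + snr) =", round(0.5 * np.log2(1 + snr), 3))   # ~1.161
```

(The histogram estimate is slightly biased upward for very fine grids with a finite number of samples, so the agreement is approximate.)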

2.4 Continuous source and continuous channel

2.4.1 Fidelity

In the case of a continuous source, an infinite number of binary digits is needed for an exact specification of the source. This means that transmitting the output of a continuous source with exact recovery at the receiving point is impossible over a channel with a certain amount of noise and a finite capacity. Therefore, practically, we are not interested in exact transmission when we have a continuous source, but only in transmission to within a certain tolerance. The question is whether we can assign a transmission rate to a continuous source when we require only a certain fidelity of recovery, measured in a suitable way. A criterion of fidelity can be represented as follows:

    v(P(x, y)) = \iint P(x, y) \, \rho(x, y) \, dx \, dy

P(x, y) is a probability distribution over the input and output signals, and \rho(x, y) can be interpreted as a distance between x and y; in other words, it measures how undesirable it is to receive y when x is transmitted. We can choose a suitable \rho(x, y), for example

    \rho(x, y) = \frac{1}{T} \int_0^T [x(t) - y(t)]^2 \, dt

where T is the duration of the messages, taken sufficiently large. In the discrete case, \rho(x, y) can be defined as the number of symbols in the sequence y differing from the corresponding symbols in x, divided by the total number of symbols in x.
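Both distortion measures are easy to state in code (a small sketch of our own, using sampled signals in place of continuous functions):

```python
import numpy as np

def mse_distortion(x_t, y_t, T):
    """rho(x, y) = (1/T) * integral_0^T [x(t) - y(t)]^2 dt, approximated
    from uniformly sampled versions of the two signals."""
    t = np.linspace(0.0, T, len(x_t))
    return np.trapz((np.asarray(x_t) - np.asarray(y_t)) ** 2, t) / T

def symbol_distortion(x, y):
    """Discrete case: fraction of symbols of y that differ from x."""
    return float(np.mean(np.asarray(x) != np.asarray(y)))

t = np.linspace(0.0, 1.0, 1000)
print(mse_distortion(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t) + 0.1, T=1.0))  # ~0.01
print(symbol_distortion([1, 0, 1, 1, 0], [1, 1, 1, 1, 0]))                        # 0.2
```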

2.4.2 The transmission rate for a continuous source relative to a fidelity evaluation

For a given continuous source (i.e. P(x) fixed) and a given fidelity valuation v_1 (i.e. \rho fixed), we define the rate of generating information as

    R = \min_{P(y|x)} \big( h(x) - h(x|y) \big) = \min_{P(y|x)} \iint P(x, y) \log \frac{P(x, y)}{P(x) P(y)} \, dx \, dy

where the minimum is taken over all possible channels P(y|x) that meet the fidelity requirement. The justification of this definition lies in the following result.

Theorem 2. If a source has a rate R_1 for a fidelity valuation v_1, it is possible to encode the output of the source and transmit it over a channel of capacity C with fidelity as near v_1 as desired, provided R_1 \le C. This is not possible if R_1 > C.

3 Discussion

3.1 Practical approach to the Shannon limit

This paper remarkably provided the upper limit on the data rate that can be achieved by any encoding scheme. However, it only proved the existence of an encoding scheme whose data rate is below the capacity. There are several encoding schemes given in the paper, but they are not practical, so finding a good practical encoding scheme is another issue. There have been many attempts to find an encoding scheme that gets close to the capacity. Currently, there are two coding schemes that are the most efficient: one is the Turbo code, and the other is the LDPC (Low Density Parity Check) code. The encoding schemes of these two codes are quite different: the Turbo code is a kind of convolutional code, while LDPC is a linear code. However, the decoding algorithms are very similar; both use iterative decoding, which reduces the decoding computational complexity. These codes have very high performance (near the Shannon limit), but the block size must be large enough to obtain it. As this paper revealed, the message size should go to infinity to obtain a rate with negligible error probability. In practice, to get P_e = 10^{-7} at low SNR (< 2.5 dB), the block size should be larger than several tens of thousands. However, such large message blocks cannot always be used because of limited encoding and decoding processing power. Therefore it is necessary to find a code that has high performance with a reasonable block size.

3.2 Non-ergodic sources

The fundamental assumption in the paper is that the source information is ergodic. With this assumption, the paper proved the AEP property and the capacity theorems. Therefore, a natural question arises: what happens if the source is not ergodic? If the source is not ergodic, it is reducible or periodic. If the AEP property still holds for such a source, Shannon's capacity theorem is also satisfied in this case, because the capacity theorem is based not on the ergodicity of the source but on the AEP property. Therefore, finding a source that is not ergodic but still satisfies the AEP property is a meaningful exercise. The following example is one such source.


Consider a two-state Markov source in which state 1 moves to state 1 or state 2 with probability 1/2 each, and state 2 is absorbing, i.e. with transition matrix

    \begin{pmatrix} 1/2 & 1/2 \\ 0 & 1 \end{pmatrix}

It is not ergodic because of reducibility. However, it has a unique stationary probability P = [0, 1], and therefore it satisfies the AEP property.

3.3 Rate distortion for a discrete source

To deal with a continuous source, Shannon introduced the fidelity concept, which is now also called rate distortion. This concept can also be applied to the case of a discrete source as follows.

Definition 3. The information rate R of a discrete source for a given distortion (fidelity) D is defined as

    R^{(I)}(D) = \min_{P(y|x) : \sum_{x,y} P(x, y) \rho(x, y) \le D} \big( H(x) - H(x|y) \big)

Consider the case D = 0. In this case, the channel between X and Y becomes a noiseless channel; therefore H(X|Y) = 0 and R = H(X). Thus, the rate of a continuous source that Shannon defined in this paper turns out to be a generalization of the entropy (the rate) of a discrete source. In other words, the entropy of a discrete source is its rate at zero distortion. Shannon returned to this topic and dealt with it exhaustively in his 1959 paper (Coding theorems for a discrete source with a fidelity criterion), which proved the first rate distortion theorem. To state it, we define some notions.

Definition 4. A (2^{nR}, n) rate distortion code consists of an encoding function f_n : X^n \to \{1, 2, \ldots, 2^{nR}\} and a decoding function g_n : \{1, 2, \ldots, 2^{nR}\} \to X^n, and the distortion associated with the code is defined as

    D(f_n, g_n) = \sum_{x^n} p(x^n) \, \rho\big(x^n, g_n(f_n(x^n))\big)

Informally, D(f_n, g_n) is the expected distance between the original messages and the recovered messages.

Definition 5. For a given D, R is said to be an achievable rate if there exists a sequence of (2^{nR}, n) rate distortion codes (f_n, g_n) with \lim_{n \to \infty} D(f_n, g_n) \le D.

Theorem 6. For an i.i.d. source X with distribution P(x) and bounded distortion function \rho(x, y), R^{(I)}(D) is the minimum achievable rate at distortion D.

This main theorem implies that we can compress a source X down to a rate of R^{(I)}(D) when a distortion of D is allowed. A small numerical illustration is given below.
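The theorem does not say how to compute R^{(I)}(D). A standard numerical method is the Blahut-Arimoto algorithm (developed after both Shannon papers, so the sketch below is our own illustration, not part of the review). It is run here for a binary source with Hamming distortion, for which the closed form R(D) = H_b(p) - H_b(D) is known:

```python
import numpy as np

def blahut_arimoto_rd(p_x, d, beta, iters=500):
    """One point of the rate-distortion curve via Blahut-Arimoto.
    p_x: source distribution, d[x, y]: distortion matrix, beta > 0: Lagrange
    multiplier trading rate against distortion. Returns (D, R in bits)."""
    q_y = np.full(d.shape[1], 1.0 / d.shape[1])      # reproduction distribution
    for _ in range(iters):
        A = q_y * np.exp(-beta * d)                  # unnormalized test channel
        p_y_x = A / A.sum(axis=1, keepdims=True)     # P(y|x)
        q_y = p_x @ p_y_x
    D = np.sum(p_x[:, None] * p_y_x * d)
    R = np.sum(p_x[:, None] * p_y_x * np.log2(p_y_x / q_y))
    return D, R

p_x = np.array([0.7, 0.3])                           # binary source, P(1) = 0.3
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])                           # Hamming distortion
hb = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p)
for beta in (1.0, 2.0, 4.0):
    D, R = blahut_arimoto_rd(p_x, d, beta)
    print(f"D = {D:.3f}   R = {R:.3f}   closed form = {hb(0.3) - hb(D):.3f}")
```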


Conclusion

The reason why we selected this paper for our term project, even though most of its contents have already been studied in class, is that we wanted to grasp Shannon's original ideas when he opened up information theory. The organization of the paper is intuitively clear, and there are a lot of useful examples that help in understanding the notions and concepts. The last part, about the continuous source, is the most difficult and confusing to understand, perhaps because it has not been dealt with in class. We consider the notion of entropy and the assumption of an ergodic process to be the keys to this paper, because all of its fruits come from this root. After this paper came out, many scientists became involved in this field, and there has been great progress since that starting point. However, even though the achievements of information theory have become plentiful, they still rest on the fundamental notions of entropy and ergodicity. It was a great opportunity to read this paper rigorously. We gained the fundamental ideas of information theory, and we hope to learn more about the topics discussed in the previous chapter.
