
Fayoum University

Faculty of Engineering
Department of Electrical Engineering
Communication ECE 407
Assignment No.1

Dr. Tamer Barakat


Eng/ Abdelrahman Gamal
By/ Muhamad Mustafa Mahmud Dakheel

For a given Binary Symmetric Channel, the input probabilities are $P(x_1) = \frac{1}{3}$ and $P(x_2) = \frac{2}{3}$, and we are given the following transition matrix,

$$P(Y|X) = \begin{bmatrix} P(y_1|x_1) & P(y_2|x_1) \\ P(y_1|x_2) & P(y_2|x_2) \end{bmatrix} = \begin{bmatrix} 2/3 & 1/3 \\ 1/10 & 9/10 \end{bmatrix}$$

Calculate:
1. The output probabilities.
2. Source entropy H(x).
3. Destination entropy H(y).
4. Conditional entropy H(x/y).
5. System entropy H(x, y).
6. Average mutual information I(x; y).
7. Gain of information of x2 due to the reception of y2.
1. The output probabilities,
$$P(y_1) = \left(\tfrac{2}{3}\right)\left(\tfrac{1}{3}\right) + \left(\tfrac{1}{10}\right)\left(\tfrac{2}{3}\right) = \frac{13}{45}$$
$$P(y_2) = \left(\tfrac{1}{3}\right)\left(\tfrac{1}{3}\right) + \left(\tfrac{9}{10}\right)\left(\tfrac{2}{3}\right) = \frac{32}{45}$$
2. Source entropy,
$$H(X) = \frac{1}{3}\log_2(3) + \frac{2}{3}\log_2\!\left(\frac{3}{2}\right) = \log_2 3 - \frac{2}{3} \approx 0.918$$

3. Destination entropy,
$$H(Y) = \frac{13}{45}\log_2\!\left(\frac{45}{13}\right) + \frac{32}{45}\log_2\!\left(\frac{45}{32}\right) \approx 0.8673$$

4. Conditional entropy,
$$P(x_1|y_1) = \frac{(\frac{2}{3})(\frac{1}{3})}{(\frac{13}{45})} = \frac{10}{13}, \qquad P(x_2|y_1) = \frac{(\frac{1}{10})(\frac{2}{3})}{(\frac{13}{45})} = \frac{3}{13}$$
$$P(x_1|y_2) = \frac{(\frac{1}{3})(\frac{1}{3})}{(\frac{32}{45})} = \frac{5}{32}, \qquad P(x_2|y_2) = \frac{(\frac{9}{10})(\frac{2}{3})}{(\frac{32}{45})} = \frac{27}{32}$$
$$H(X|Y) = \left(\frac{10}{13}\right)\left(\frac{13}{45}\right)\log_2\!\left(\frac{13}{10}\right) + \left(\frac{3}{13}\right)\left(\frac{13}{45}\right)\log_2\!\left(\frac{13}{3}\right)$$
$$\qquad\quad + \left(\frac{5}{32}\right)\left(\frac{32}{45}\right)\log_2\!\left(\frac{32}{5}\right) + \left(\frac{27}{32}\right)\left(\frac{32}{45}\right)\log_2\!\left(\frac{32}{27}\right) \approx 0.6698$$

5. System entropy,
$$H(X,Y) = H(X|Y) + H(Y) \approx 0.6698 + 0.8673 \approx 1.537$$
6. Average mutual information,
$$I(X;Y) = H(X) - H(X|Y) \approx 0.918 - 0.6698 \approx 0.2485$$
7. Gain of information of x2 due to the reception of y2.
$$I(x_2;y_2) = \log_2\!\left(\frac{P(x_2|y_2)}{P(x_2)}\right) = \log_2\!\left(\frac{27/32}{2/3}\right) = \log_2\!\left(\frac{81}{64}\right) \approx 0.33985$$
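As a quick numerical cross-check of parts 1 through 7, here is a short Python sketch (standard library only) that recomputes every quantity from the priors and the transition matrix assumed above; the variable names are ours, not part of the original assignment.

# Numerical cross-check for the binary channel of Problem 1.
from math import log2

px = [1/3, 2/3]                      # P(x1), P(x2)
pyx = [[2/3, 1/3],                   # P(y1|x1), P(y2|x1)
       [1/10, 9/10]]                 # P(y1|x2), P(y2|x2)

# 1. Output probabilities P(y_j) = sum_i P(x_i) P(y_j|x_i)
py = [sum(px[i] * pyx[i][j] for i in range(2)) for j in range(2)]

def H(dist):
    """Entropy of a discrete distribution in bits."""
    return -sum(p * log2(p) for p in dist if p > 0)

Hx, Hy = H(px), H(py)                                          # 2. and 3.
pxy = [[px[i] * pyx[i][j] for j in range(2)] for i in range(2)]
Hx_given_y = -sum(pxy[i][j] * log2(pxy[i][j] / py[j])
                  for i in range(2) for j in range(2))         # 4.
Hxy = Hx_given_y + Hy                                          # 5.
Ixy = Hx - Hx_given_y                                          # 6.
gain = log2((pxy[1][1] / py[1]) / px[1])                       # 7. I(x2; y2)

print(py)                  # [0.2889, 0.7111]  i.e. 13/45 and 32/45
print(Hx, Hy)              # 0.918, 0.8673
print(Hx_given_y, Hxy)     # 0.6698, 1.537
print(Ixy, gain)           # 0.2485, 0.33985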

A cascade of two channels is shown:

[Figure: cascade of two channels, X → 1st Channel → Y → 2nd Channel → Z]
Show that
$$H(X|Z) \ge H(X|Y)$$

By definition,
$$H(X|Y) = \sum_{x,y} P(x,y)\,\log\frac{1}{P(x|y)}, \qquad H(X|Z) = \sum_{x,z} P(x,z)\,\log\frac{1}{P(x|z)}$$

We can bring in the third variable as a joint variable and sum over it as well; nothing changes:
$$H(X|Y) = \sum_{x,y,z} P(x,y,z)\,\log\frac{1}{P(x|y)}, \qquad H(X|Z) = \sum_{x,y,z} P(x,y,z)\,\log\frac{1}{P(x|z)}$$

Looking at the difference between the two conditional entropies,
$$H(X|Z) - H(X|Y) = \sum_{x,y,z} P(x,y,z)\,\log\frac{P(x|y)}{P(x|z)}$$

We also know that $P(x,y,z) = P(x|y,z)\,P(y,z)$, and because the channels are cascaded, once the intermediate stage $Y$ is known the input depends only on $Y$, so $P(x|y,z) = P(x|y)$. Hence
$$H(X|Z) - H(X|Y) = \sum_{y,z} P(y,z)\sum_{x} P(x|y,z)\,\log\frac{P(x|y,z)}{P(x|z)}$$

Using the inequality $\log a \ge 1 - \frac{1}{a}$,
$$H(X|Z) - H(X|Y) \ge \sum_{y,z} P(y,z)\sum_{x} P(x|y,z)\left(1 - \frac{P(x|z)}{P(x|y,z)}\right)$$

$$H(X|Z) - H(X|Y) \ge \sum_{y,z} P(y,z)\sum_{x}\left[P(x|y,z) - P(x|z)\right] = 0,$$
since both conditional distributions sum to one over $x$. Therefore
$$H(X|Z) - H(X|Y) \ge 0 \quad\Longrightarrow\quad H(X|Z) \ge H(X|Y),$$
with equality happening if and only if $P(x|y) = P(x|z)$.
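A small numerical sanity check of the inequality, assuming two arbitrary example channel matrices for the stages (the values of p_yx and p_zy below are hypothetical, chosen only for illustration): the sketch builds the joint distribution of the Markov chain X → Y → Z and confirms H(X|Z) ≥ H(X|Y).

# Sanity check of H(X|Z) >= H(X|Y) for a cascade X -> Y -> Z (example numbers).
from math import log2

px = [0.3, 0.7]                            # hypothetical input distribution
p_yx = [[0.8, 0.2], [0.1, 0.9]]            # hypothetical 1st channel P(y|x)
p_zy = [[0.6, 0.4], [0.25, 0.75]]          # hypothetical 2nd channel P(z|y)

# Joint P(x, y, z) = P(x) P(y|x) P(z|y), i.e. the Markov property of the cascade.
pxyz = [[[px[x] * p_yx[x][y] * p_zy[y][z] for z in range(2)]
         for y in range(2)] for x in range(2)]

def cond_entropy(joint_xa, pa):
    """H(X|A) computed from the joint P(x, a) and the marginal P(a)."""
    return -sum(joint_xa[x][a] * log2(joint_xa[x][a] / pa[a])
                for x in range(2) for a in range(2) if joint_xa[x][a] > 0)

pxy = [[sum(pxyz[x][y][z] for z in range(2)) for y in range(2)] for x in range(2)]
pxz = [[sum(pxyz[x][y][z] for y in range(2)) for z in range(2)] for x in range(2)]
py = [sum(pxy[x][y] for x in range(2)) for y in range(2)]
pz = [sum(pxz[x][z] for x in range(2)) for z in range(2)]

print(cond_entropy(pxy, py), cond_entropy(pxz, pz))    # H(X|Y) <= H(X|Z)
print(cond_entropy(pxz, pz) >= cond_entropy(pxy, py))  # True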

A signal amplitude X is a random variable uniformly distributed over the range (-1, 1).
The output Y is also a random variable, uniformly distributed in the range (-2, 2).
Find,
A) Source entropy.
B) Destination entropy.

For the source entropy, the variable is uniformly distributed over the given range, so the normalization property of the probability density gives $f(x) = \frac{1}{2}$:
$$H(X) = \int_{-1}^{1} \frac{1}{2}\,\log_2 2 \; dx = 1 \text{ bit}$$

For the destination, the same applies with $f(y) = \frac{1}{4}$:
$$H(Y) = \int_{-2}^{2} \frac{1}{4}\,\log_2 4 \; dy = 2 \text{ bits}$$
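Both integrals are instances of the general rule $h(X) = \log_2(b-a)$ for a uniform density on $(a, b)$; the short Python sketch below, with a hypothetical helper uniform_entropy, just replays that arithmetic.

# Differential entropy of a uniform density on (a, b) is log2(b - a).
from math import log2

def uniform_entropy(a, b):
    width = b - a
    f = 1 / width                        # normalized density value
    # integral of f * log2(1/f) over (a, b) collapses to width * f * log2(width)
    return width * f * log2(width)

print(uniform_entropy(-1, 1))   # 1.0 bit  -> source X
print(uniform_entropy(-2, 2))   # 2.0 bits -> destination Y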

For a continuous random variable X constrained over the range (-M, M), Show that the
entropy is maximum if X is uniformly distributed over the range (-M, M). Then, find
the corresponding maximum entropy.

We start by evaluating the entropy,

$$H(X) = -\int_{-M}^{M} P(x)\,\log_2 P(x)\; dx$$

We now consider using the Lagrange multipliers method to maximize the entropy against the normalization constraint on the probability density,
$$\int_{-M}^{M}\left[-P(x)\,\log_2 P(x) + \lambda\,P(x)\right]dx$$
We then try to maximize the previous expression,
$$\frac{\delta}{\delta P(x)}\int_{-M}^{M}\left[-P(x)\,\log_2 P(x) + \lambda\,P(x)\right]dx = 0$$

After applying the variation, i.e. taking the functional derivative with respect to $P(x)$, and rescaling the logarithm to base $e$ (which only shifts the constant $\lambda$ by a constant value),
$$-\ln P(x) - 1 + \lambda = 0 \quad\Longrightarrow\quad P(x) = e^{\lambda - 1}$$
We still have the normalization of probability to determine $\lambda$:
$$\int_{-M}^{M} e^{\lambda - 1}\, dx = 2M\,e^{\lambda - 1} = 1 \quad\Longrightarrow\quad e^{\lambda - 1} = \frac{1}{2M} \quad\Longrightarrow\quad P(x) = \frac{1}{2M}$$
which means that the variable X is uniformly distributed over the interval (-M, M). The corresponding maximum entropy is then
$$H_{\max} = \int_{-M}^{M} \frac{1}{2M}\,\log_2(2M)\; dx = \log_2(2M)$$
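As a rough numerical illustration of this result (not part of the original solution), the sketch below compares the differential entropy of the uniform density on (-M, M) with two other arbitrarily chosen densities normalized on the same interval, and with the closed form log2(2M); the density choices and the value M = 3 are purely illustrative.

# Compare differential entropies of a few densities supported on (-M, M).
from math import log2

M, N = 3.0, 20_000                       # example M and number of grid points
dx = 2 * M / N
xs = [-M + (k + 0.5) * dx for k in range(N)]

def diff_entropy(pdf):
    # Riemann-sum approximation of -integral p(x) log2 p(x) dx
    return -sum(pdf(x) * log2(pdf(x)) * dx for x in xs if pdf(x) > 0)

def uniform(x):
    return 1 / (2 * M)

def triangle(x):                         # peaked at 0, integrates to 1 on (-M, M)
    return (M - abs(x)) / M**2

def ramp(x):                             # skewed density, integrates to 1 on (-M, M)
    return (x + M) / (2 * M**2)

print(log2(2 * M))                       # closed-form maximum log2(2M)
print(diff_entropy(uniform))             # matches log2(2M)
print(diff_entropy(triangle))            # strictly smaller
print(diff_entropy(ramp))                # strictly smaller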

Consider a discrete memoryless source with source alphabet $S = \{s_0, s_1, s_2\}$ and source statistics $\{0.7,\, 0.15,\, 0.15\}$.
1) Calculate the entropy of the source.
2) Calculate the entropy of the second-order extension of the source.

For the entropy of the source,
$$H(S) = -0.7\,\log_2(0.7) - 0.3\,\log_2(0.15) \approx 1.1813 \text{ bits}$$
For the entropy of the second-order extension, we calculate the probabilities first,
$$\{P(s_0 s_0), P(s_0 s_1), P(s_0 s_2), P(s_1 s_0), P(s_1 s_1), P(s_1 s_2), P(s_2 s_0), P(s_2 s_1), P(s_2 s_2)\}$$
$$= \{0.49,\, 0.105,\, 0.105,\, 0.105,\, 0.0225,\, 0.0225,\, 0.105,\, 0.0225,\, 0.0225\}$$
And hence,
$$H(S^2) = -0.49\,\log_2(0.49) - 0.42\,\log_2(0.105) - 0.09\,\log_2(0.0225) \approx 2.3626 \text{ bits} = 2H(S)$$
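The arithmetic can be confirmed quickly in Python; the sketch below also makes visible the memoryless-source property H(S²) = 2H(S).

# Entropy of the source and of its second-order extension.
from math import log2
from itertools import product

p = [0.7, 0.15, 0.15]                          # source statistics
H1 = -sum(q * log2(q) for q in p)              # H(S)

# Second-order extension: symbol pairs of a memoryless source are independent,
# so their probabilities are simple products.
p2 = [a * b for a, b in product(p, repeat=2)]
H2 = -sum(q * log2(q) for q in p2)             # H(S^2)

print(H1)        # about 1.1813 bits
print(H2)        # about 2.3626 bits, i.e. 2 * H1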

Consider the following binary erasure channel, with transition matrix

$$P(Y|X) = \begin{bmatrix} 1-p & p & 0 \\ 0 & p & 1-p \end{bmatrix}$$

Find the channel capacity.

We start by finding the joint probability for each input-output pair,
$$P(y_0, x_0) = P_0\,(1-p), \qquad P(y_1, x_0) = P_0\,p, \qquad P(y_2, x_0) = 0$$
$$P(y_0, x_1) = 0, \qquad P(y_1, x_1) = P_1\,p, \qquad P(y_2, x_1) = P_1\,(1-p)$$
Then we find the mutual information,
$$I(X;Y) = (1-p)\left[P_0\,\log_2\frac{1}{P_0} + P_1\,\log_2\frac{1}{P_1}\right]$$
Yet we know that $P_0 + P_1 = 1$, so
$$I(X;Y) = (1-p)\left[P_0\,\log_2\frac{1}{P_0} + (1-P_0)\,\log_2\frac{1}{1-P_0}\right]$$
The maximum capacity then occurs at the maximum of the input entropy, which is achieved at $P_0 = 0.5$:
$$C = (1-p)\left[\tfrac{1}{2} + \tfrac{1}{2}\right] = 1 - p$$
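To confirm that P0 = 0.5 maximizes the bracketed entropy, the sketch below sweeps P0 for an example erasure probability (p = 0.2 is an arbitrary choice) and compares the maximum with the formula C = 1 - p.

# Mutual information of the binary erasure channel as a function of P0.
from math import log2

p = 0.2                                   # example erasure probability (arbitrary)

def mutual_info(p0):
    p1 = 1 - p0
    return (1 - p) * (p0 * log2(1 / p0) + p1 * log2(1 / p1))

grid = [k / 1000 for k in range(1, 1000)]
p0_star = max(grid, key=mutual_info)
print(p0_star, mutual_info(p0_star))      # 0.5 and 0.8
print(1 - p)                              # the formula C = 1 - p gives the same value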

Consider the following channel, with transition matrix

$$P(Y|X) = \begin{bmatrix} 2/3 & 1/3 \\ 1/4 & 3/4 \end{bmatrix}$$

Find $P(x_0)$ that will achieve maximum capacity, and then find the capacity.

We calculate the mutual information,
$$I(X;Y) = -\frac{1}{3}P_0\,\log_2\!\left(\frac{9}{4} - \frac{5}{4}P_0\right) - \frac{2}{3}P_0\,\log_2\!\left(\frac{3}{8} + \frac{5}{8}P_0\right)$$
$$\qquad\quad - \frac{3}{4}(1-P_0)\,\log_2\!\left(1 - \frac{5}{9}P_0\right) - \frac{1}{4}(1-P_0)\,\log_2\!\left(1 + \frac{5}{3}P_0\right)$$
Maximizing the mutual information by setting the derivative with respect to $P_0$ to zero, or simply by numerical methods, we find the required probability $P_0 \approx 0.493$, almost 0.5. The maximum capacity there is about 0.1302, while $P_0 = 0.5$ yields a capacity of 0.13019, which is almost the same.
We can also use the formula $I(X;Y) = H(Y) - H(Y|X) = H(X) - H(X|Y)$; it yields the same value for the capacity.
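The numerical maximization mentioned above can be reproduced with a simple grid search over P0, using the transition matrix assumed for this problem.

# Grid search for the input probability P0 that maximizes I(X;Y).
from math import log2

pyx = [[2/3, 1/3],        # P(y0|x0), P(y1|x0)
       [1/4, 3/4]]        # P(y0|x1), P(y1|x1)

def mutual_info(p0):
    px = [p0, 1 - p0]
    py = [sum(px[i] * pyx[i][j] for i in range(2)) for j in range(2)]
    return sum(px[i] * pyx[i][j] * log2(pyx[i][j] / py[j])
               for i in range(2) for j in range(2))

grid = [k / 10000 for k in range(1, 10000)]
p0_star = max(grid, key=mutual_info)
print(p0_star, mutual_info(p0_star))   # about 0.493 and 0.1302
print(mutual_info(0.5))                # about 0.13020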

An M-ary symmetric channel is one whose transition matrix has $P(y_i|x_i) = p$ on the diagonal and $P(y_j|x_i) = \frac{1-p}{M-1}$ for every $j \ne i$.

Find the channel capacity.

We know that for a symmetric channel the capacity is just the mutual information evaluated with equiprobable inputs, so $P(x_i) = \frac{1}{M}$, and hence
$$I(X;Y)\Big|_{P(x_i)=\frac{1}{M}} = \sum_{i=0}^{M-1}\sum_{j=0}^{M-1}\frac{1}{M}\,P(y_j|x_i)\,\log_2\!\left(\frac{P(y_j|x_i)}{P(y_j)}\right)$$

We have,
$$P(y_j) = \sum_{i=0}^{M-1} P(x_i)\,P(y_j|x_i) = \frac{1}{M}\left(p + (M-1)\,\frac{1-p}{M-1}\right) = \frac{1}{M}$$

Then, calculating the capacity,


$$C = M\cdot\frac{1}{M}\,p\,\log_2(Mp) + \left(M^2 - M\right)\frac{1}{M}\cdot\frac{1-p}{M-1}\,\log_2\!\left(\frac{M(1-p)}{M-1}\right)$$
$$C = p\,\log_2(Mp) + (1-p)\,\log_2\!\left(\frac{M(1-p)}{M-1}\right)$$
Further simplification would yield,
$$C = \log_2 M + p\,\log_2 p + (1-p)\,\log_2\!\left(\frac{1-p}{M-1}\right)$$
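The closed form can be checked against a direct evaluation of the mutual information at equiprobable inputs; M = 4 and p = 0.9 below are arbitrary example values.

# M-ary symmetric channel: closed-form capacity vs. direct mutual information.
from math import log2

M, p = 4, 0.9                           # example alphabet size and correct-reception probability
err = (1 - p) / (M - 1)                 # each off-diagonal transition probability

# Closed form derived above.
C_formula = log2(M) + p * log2(p) + (1 - p) * log2((1 - p) / (M - 1))

# Direct I(X;Y) with P(x_i) = 1/M; as shown above, P(y_j) = 1/M as well.
pyx = [[p if i == j else err for j in range(M)] for i in range(M)]
I_direct = sum((1 / M) * pyx[i][j] * log2(pyx[i][j] / (1 / M))
               for i in range(M) for j in range(M))

print(C_formula, I_direct)              # both about 1.3725 bits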
