Chapter Two
1- Channel:
In telecommunications and computer networking, a communication channel,
or simply a channel, refers either to a physical transmission medium such as a wire, or to
a logical connection over a multiplexed medium such as a radio channel. A channel is
used to convey an information signal, for example a digital bit stream, from one or
several senders (or transmitters) to one or several receivers. A channel has a certain
capacity for transmitting information, often measured by its bandwidth in Hz or its data
rate in bits per second.
2- Symmetric channel:
A symmetric channel satisfies the following conditions:
a- Equal number of symbols in X and Y, i.e. P(Y∣X) is a square matrix.
b- Every row of the P(Y∣X) matrix is a permutation of the other rows.
For example, consider the conditional probability matrices of the following channel types:
a- P(Y∣X) = [0.9  0.1]
            [0.1  0.9]
is a BSC, because it is a square matrix and the 1st row is a permutation of the 2nd row.
b- P(Y∣X) = [0.9   0.05  0.05]
            [0.05  0.9   0.05]
            [0.05  0.05  0.9 ]
is a TSC, because it is a square matrix and each row is a permutation of the others.
c- P(Y∣X) = [0.8  0.1  0.1]
            [0.1  0.8  0.1]
is non-symmetric, since it is not square, although each row is a permutation of the others.
d- P(Y∣X) = [0.8  0.1  0.1]
            [0.1  0.7  0.2]
            [0.1  0.1  0.8]
is non-symmetric although it is square, since the 2nd row is not a permutation of the other rows.
2.1- Binary symmetric channel (BSC)
It is a common communications channel model used in coding theory and information
theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver
receives a bit. It is assumed that the bit is usually transmitted correctly, but that it will
be "flipped" with a small probability (the "crossover probability").
A binary symmetric channel with crossover probability p, denoted BSCp, is a
channel with binary input and binary output and probability of error p; that is, if X is the
transmitted random variable and Y the received variable, then the channel is
characterized by the conditional probabilities:
Pr(Y = 0 ∣ X = 0) = 1 - p
Pr(Y = 0 ∣ X = 1) = p
Pr(Y = 1 ∣ X = 0) = p
Pr(Y = 1 ∣ X = 1) = 1 - p
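As a quick illustration (a minimal sketch, not part of the original notes; variable names are my own), the BSC can be simulated by flipping each transmitted bit independently with probability p, so the empirical error rate estimates the crossover probability:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

p = 0.1                                # crossover probability
x = rng.integers(0, 2, size=100_000)   # transmitted bits
flips = rng.random(x.size) < p         # each bit flips independently with prob. p
y = np.where(flips, 1 - x, x)          # received bits

print(np.mean(x != y))                 # empirical error rate, close to 0.1
```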
2.2- Ternary symmetric channel (TSC):
The transitional probability of TSC is:
P(Y∣X) =
        y1      y2      y3
x1 [ 1-2Pe    Pe      Pe   ]
x2 [  Pe     1-2Pe    Pe   ]
x3 [  Pe      Pe     1-2Pe ]
The TSC is symmetric but not very practical, since in practice x1 and x3 are not affected
as much as x2. In fact the interference between x1 and x3 is much less than the
interference between x1 and x2 or between x2 and x3.
[Figure: BSC transition diagram; each input (0 or 1) is received correctly with probability 1-P and flipped with probability P.]
Hence the more practical, but nonsymmetric, channel has the transition probability:
P(Y∣X) =
        y1     y2     y3
x1 [ 1-Pe    Pe      0   ]
x2 [  Pe    1-2Pe    Pe  ]
x3 [  0      Pe     1-Pe ]
where x1 interferes with x2 exactly as x2 interferes with x3, while x1 and x3 do not
interfere.
3- Discrete Memoryless Channel:
The Discrete Memoryless Channel (DMC) has an input X and an output Y. At any given
time (t), the channel output Y= y only depends on the input X = x at that time (t) and it
does not depend on the past history of the input. DMC is represented by the conditional
probability of the output Y = y given the input X = x, or P(Y∣X).
[Figure (Section 2.2): TSC transition diagram; each xi is received correctly with probability 1-2Pe and as each of the other two symbols with probability Pe.]
[Figure (Section 2.2): transition diagram of the practical nonsymmetric channel; x1 and x3 are received correctly with probability 1-Pe, x2 with probability 1-2Pe.]
[Figure: DMC block diagram; the input X passes through the channel P(Y∣X) to produce the output Y.]
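Since a DMC is fully described by P(Y∣X), one channel use can be sketched as sampling an output symbol from the row of the transition matrix selected by the input (a hedged sketch; using the TSC matrix with Pe = 0.05 is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# P[i, j] = P(Y = j | X = i); here a TSC with Pe = 0.05.
P = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])

def dmc_output(x: int) -> int:
    """Sample one output symbol: the channel is memoryless, so each
    use depends only on the current input x, never on past inputs."""
    return rng.choice(P.shape[1], p=P[x])

print([dmc_output(0) for _ in range(10)])   # mostly 0s, occasional 1 or 2
```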
4- Binary Erasure Channel (BEC):
The Binary Erasure Channel (BEC) model is widely used to represent channels or links
that "lose" data. Prime examples of such channels are Internet links and routes. A
BEC has a binary input X and a ternary output Y.
Note that for the BEC, the probability of βbit errorβ is zero. In other words, the
following conditional probabilities hold for any BEC model:
Pr(Y = "erasure" ∣ X = 0) = p
Pr(Y = "erasure" ∣ X = 1) = p
Pr(Y = 0 ∣ X = 0) = 1 - p
Pr(Y = 1 ∣ X = 1) = 1 - p
Pr(Y = 0 ∣ X = 1) = 0
Pr(Y = 1 ∣ X = 0) = 0
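A BEC use can be sketched the same way (a minimal sketch; encoding the erasure symbol as -1 is my own choice). Note the output is never a flipped bit, only a correct bit or an erasure:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

p = 0.2                              # erasure probability
x = rng.integers(0, 2, size=12)      # transmitted bits
erased = rng.random(x.size) < p      # each bit is erased with probability p
y = np.where(erased, -1, x)          # -1 marks the "erasure" output symbol

print(x)
print(y)                             # same bits as x, except erased positions
```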
5- Special Channels:
a- Lossless channel: It has only one nonzero element in each column of the
transition matrix P(Y∣X), for example:
P(Y∣X) =
       y1    y2    y3    y4    y5
x1 [ 3/4   1/4    0     0     0 ]
x2 [  0     0    1/3   2/3    0 ]
x3 [  0     0     0     0     1 ]
This channel has H(X∣Y) = 0 and I(X, Y) = H(X), i.e. the losses (equivocation) entropy is zero.
[Figure (Section 4): BEC transition diagram; each input is received correctly with probability 1-p or as the erasure symbol with probability p.]
b- Deterministic channel: It has only one nonzero element in each row of the
transition matrix P(Y∣X), as an example:
P(Y∣X) =
       y1   y2   y3
x1 [  1    0    0 ]
x2 [  1    0    0 ]
x3 [  0    0    1 ]
x4 [  0    1    0 ]
x5 [  0    1    0 ]
This channel has H(Y∣X) = 0 and I(Y, X) = H(Y), i.e. the noise entropy is zero.
c- Ideal channel: It has only one nonzero element in each row and column of the
transition matrix P(Y∣X), i.e. it is an identity matrix, as an example:
P(Y∣X) =
       y1   y2   y3
x1 [  1    0    0 ]
x2 [  0    1    0 ]
x3 [  0    0    1 ]
This channel has H(Y∣X) = H(X∣Y) = 0 and I(Y, X) = H(Y) = H(X).
d- Noisy channel: No relation between input and output:
H(X∣Y) = H(X), H(Y∣X) = H(Y)
I(X, Y) = 0, C = 0
H(X, Y) = H(X) + H(Y)
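The entropies used for these special channels are easy to check numerically. Below is a minimal sketch (helper names are my own) that computes H(X), H(Y), H(Y∣X), H(X∣Y) and I(X, Y) from an input distribution and a transition matrix; applied to the lossless channel of (a), it confirms H(X∣Y) = 0 and I(X, Y) = H(X):

```python
import numpy as np

def entropy(p):
    """Entropy in bits, with 0*log2(0) taken as 0."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def channel_entropies(px, P):
    """Return H(X), H(Y), H(Y|X), H(X|Y), I(X,Y) for an input
    distribution px and a transition matrix P(Y|X)."""
    pxy = px[:, None] * P            # joint distribution p(x, y)
    py = pxy.sum(axis=0)             # output marginal p(y)
    HX, HY, HXY = entropy(px), entropy(py), entropy(pxy)
    return HX, HY, HXY - HX, HXY - HY, HX + HY - HXY

# Lossless channel of (a), with an arbitrary input distribution:
P = np.array([[3/4, 1/4, 0,   0,   0],
              [0,   0,   1/3, 2/3, 0],
              [0,   0,   0,   0,   1]])
px = np.array([0.5, 0.3, 0.2])

HX, HY, HYgX, HXgY, I = channel_entropies(px, P)
print(HXgY)        # ~0  (lossless: H(X|Y) = 0)
print(I - HX)      # ~0  (so I(X, Y) = H(X))
```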
6- Shannon's theorem:
a- A given communication system has a maximum rate of information C known as
the channel capacity.
b- If the information rate R is less than C, then one can approach arbitrarily small
error probabilities by using intelligent coding techniques.
c- To get lower error probabilities, the encoder has to work on longer blocks of
signal data. This entails longer delays and higher computational requirements.
Thus, if R ≤ C then transmission may be accomplished without error in the presence
of noise. The negation of this theorem is also true: if R > C, then errors cannot be
avoided regardless of the coding technique used.
Consider a bandlimited Gaussian channel operating in the presence of additive
Gaussian noise:
The Shannon-Hartley theorem states that the channel capacity is given by:
C = B log2(1 + S/N)
Where C is the capacity in bits per second, B is the bandwidth of the channel in Hertz,
and S/N is the signal-to-noise ratio.
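For instance (figures chosen only for illustration), a 3 kHz channel with a 30 dB signal-to-noise ratio (S/N = 1000) gives:

```python
import math

def shannon_capacity(B_hz: float, snr: float) -> float:
    """Shannon-Hartley capacity C = B*log2(1 + S/N), in bits/second."""
    return B_hz * math.log2(1 + snr)

print(shannon_capacity(3_000, 1_000))   # ~29,900 bits/second
```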
7- Channel Capacity (Discrete channel)
This is defined as the maximum of I(X,Y):
C = channel capacity = max[I(X, Y)]  bits/symbol
Physically it is the maximum amount of information each symbol can carry to the
receiver. Sometimes this capacity is also expressed in bits/sec if related to the rate of
producing symbols r:
R(X, Y) = r × I(X, Y)  bits/sec, or R(X, Y) = I(X, Y)/τ̄,
where τ̄ is the average time per symbol.
a- Channel capacity of Symmetric channels:
The channel capacity is defined as max[I(X, Y)]:
I(X, Y) = H(Y) - H(Y∣X)
I(X, Y) = H(Y) + Σi Σj p(xi, yj) log2 p(yj∣xi)    (i = 1…n, j = 1…m)
But we have
PROF. DR. MAHMOOD 2018-01-06 26
p(xi, yj) = p(xi) p(yj∣xi); substituting in the above equation yields:
I(X, Y) = H(Y) + Σi Σj p(xi) p(yj∣xi) log2 p(yj∣xi)
If the channel is symmetric the quantity:
Σj p(yj∣xi) log2 p(yj∣xi) = K
where K is a constant, independent of the row number i, so that the equation
becomes:
I(X, Y) = H(Y) + K Σi p(xi), with Σi p(xi) = 1.
Hence I(X, Y) = H(Y) + K for symmetric channels, and
max[I(X, Y)] = max[H(Y) + K] = max[H(Y)] + K
When Y has equiprobable symbols, max[H(Y)] = log2 m.
Then
I(X, Y) = log2 m + K
or
C = log2 m + K
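A short sketch of this result (the function name is my own): compute K from any row of P(Y∣X) and add log2 m. Applied to the TSC matrix of example (b) in Section 2 (Pe = 0.05), it gives about 1.016 bits/symbol:

```python
import numpy as np

def symmetric_capacity(P):
    """C = log2(m) + K for a symmetric channel; K is the sum of
    p*log2(p) over any row of P(Y|X) (row-independent by symmetry)."""
    row = np.asarray(P, dtype=float)[0]
    row = row[row > 0]
    K = np.sum(row * np.log2(row))
    return np.log2(P.shape[1]) + K

P = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
print(symmetric_capacity(P))     # ~1.016 bits/symbol
```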
Example 9:
For the BSC shown:
Find the channel capacity and efficiency if I(x1) = 2 bits.
Solution:
P(Y∣X) = [0.7  0.3]
         [0.3  0.7]
[Figure: BSC diagram; X1→Y1 and X2→Y2 with probability 0.7, cross transitions with probability 0.3.]
Since the channel is symmetric then C = log2 m + K with n = m,
where n and m are the numbers of rows and columns respectively.
K = 0.7 log2 0.7 + 0.3 log2 0.3 = -0.88129
C = 1 - 0.88129 = 0.1187 bits/symbol
The channel efficiency η = I(X, Y)/C
I(x1) = -log2 p(x1) = 2
p(x1) = 2^-2 = 0.25, then P(X) = [0.25  0.75]^T
And we have p(xi, yj) = p(xi) p(yj∣xi) so that
P(X, Y) = [0.7 × 0.25   0.3 × 0.25] = [0.175  0.075]
          [0.3 × 0.75   0.7 × 0.75]   [0.225  0.525]
P(Y) = [0.4  0.6] ⟹ H(Y) = 0.97095 bits/symbol
I(X, Y) = H(Y) + K = 0.97095 - 0.88129 = 0.08966 bits/symbol
Then η = 0.08966/0.11871 = 75.5%
To find the channel redundancy:
R = 1 - η = 1 - 0.755 = 0.245 or 24.5%
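The numbers of Example 9 can be reproduced in a few lines (a verification sketch, not part of the original solution):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.3, 0.7]])               # BSC of Example 9
K = np.sum(P[0] * np.log2(P[0]))         # -0.88129
C = np.log2(2) + K                       # 0.11871 bits/symbol

px = np.array([0.25, 0.75])              # from I(x1) = 2 bits
py = px @ P                              # [0.4, 0.6]
HY = -np.sum(py * np.log2(py))           # 0.97095 bits/symbol
I = HY + K                               # 0.08966 bits/symbol

print(C, I, I / C)                       # efficiency ~0.755
```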
b- Channel capacity of nonsymmetric channels:
We can find the channel capacity of a nonsymmetric channel by the following
steps (a numerical sketch of the procedure follows the list):
a- Find I(X, Y) as a function of the input probabilities:
I(X, Y) = f(p(x1), p(x2), ……… , p(xn))
and use the constraint Σi p(xi) = 1 to reduce the number of variables by 1.
b- Partially differentiate I(X, Y) with respect to the (n-1) remaining input
probabilities, then equate these partial derivatives to zero.
c- Solve the (n-1) equations simultaneously, then find
p(x1), p(x2), …… , p(xn) that give maximum I(X, Y).
d- Put the resulting values of the input probabilities in the function given in step
(a) to find C = max[I(X, Y)].
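For a two-input channel, steps (a)-(d) reduce to a one-dimensional maximization over p = p(x1), which can also be done numerically. Here is a sketch using a simple grid search (the function name is my own); for the channel of Example 10 below it peaks near p(x1) ≈ 0.472 with C ≈ 0.297 bits/symbol:

```python
import numpy as np

def binary_input_capacity(P, steps=9999):
    """Grid-search max of I(X,Y) = H(Y) - H(Y|X) over p = p(x1)
    for a channel P(Y|X) with two inputs."""
    best_p, best_I = 0.0, 0.0
    for p in np.linspace(1e-6, 1 - 1e-6, steps):
        px = np.array([p, 1 - p])
        pxy = px[:, None] * P            # joint p(x, y)
        py = pxy.sum(axis=0)             # output marginal
        HY = -np.sum(py * np.log2(py))
        HYgX = -np.sum(pxy * np.log2(P)) # conditional entropy H(Y|X)
        if HY - HYgX > best_I:
            best_p, best_I = p, HY - HYgX
    return best_p, best_I

P = np.array([[0.7, 0.3],
              [0.1, 0.9]])               # channel of Example 10
print(binary_input_capacity(P))          # (~0.472, ~0.297)
```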
Example 10:
Find the channel capacity for the channel having the following transition matrix:
P(Y∣X) = [0.7  0.3]
         [0.1  0.9]
Following the steps above, setting dI(X, Y)/dp(x1) = 0 gives p(x1) = p ≈ 0.472.
Solution: First note that the channel is not symmetric, since the 1st row is not a
permutation of the 2nd row.
a- Let p(x1) = 0.472, then p(x2) = 1 - p = 0.528.
P(X, Y) = P(X) × P(Y∣X)
∴ P(X, Y) = [0.7 × 0.472   0.3 × 0.472] = [0.3304  0.1416]
            [0.1 × 0.528   0.9 × 0.528]   [0.0528  0.4752]
From the above results P(Y) = [0.3832  0.6168]
H(Y) = -Σj p(yj) log2 p(yj)
∴ H(Y) = 0.9603 bits/symbol
We have
H(Y∣X) = -Σi Σj p(xi, yj) log2 p(yj∣xi)
H(Y∣X) = -[0.3304 ln(0.7) + 0.1416 ln(0.3) + 0.0528 ln(0.1) + 0.4752 ln(0.9)]/ln(2)
H(Y∣X) = 0.6636 bits/symbol
C = max[I(X, Y)] = H(Y) - H(Y∣X)
= 0.9603 - 0.6636 = 0.2967 bits/symbol.
Review questions:
A binary source sends x1 with a probability of 0.4 and x2 with a probability of 0.6
through a channel with error probabilities of 0.1 for x1 and 0.2 for x2. Determine:
1- Source entropy.
2- Marginal entropy.
3- Joint entropy.
4- Conditional entropy H(Y∣X).
5- Losses entropy H(X∣Y).
6- Transinformation.
Solution:
1- The channel diagram:
[Figure: transition diagram; p(x1) = 0.4, p(x2) = 0.6; x1→y1 with 0.9, x1→y2 with 0.1, x2→y2 with 0.8, x2→y1 with 0.2.]
Or P(Y∣X) = [0.9  0.1]
            [0.2  0.8]
H(X) = -Σi p(xi) log2 p(xi)
H(X) = -[0.4 ln(0.4) + 0.6 ln(0.6)]/ln(2) = 0.971 bits/symbol
2- P(X, Y): p(xi, yj) = p(xi) p(yj∣xi)
∴ P(X, Y) = [0.9 × 0.4   0.1 × 0.4] = [0.36  0.04]
            [0.2 × 0.6   0.8 × 0.6]   [0.12  0.48]
∴ P(Y) = [0.48  0.52]
H(Y) = -Σj p(yj) log2 p(yj)
H(Y) = -[0.48 ln(0.48) + 0.52 ln(0.52)]/ln(2) = 0.999 bits/symbol
3- H(X, Y) = -Σi Σj p(xi, yj) log2 p(xi, yj)
H(X, Y) = -[0.36 ln(0.36) + 0.04 ln(0.04) + 0.12 ln(0.12) + 0.48 ln(0.48)]/ln(2)
= 1.592 bits/symbol
4- H(Y∣X) = -Σi Σj p(xi, yj) log2 p(yj∣xi)
H(Y∣X) = -[0.36 ln(0.9) + 0.12 ln(0.2) + 0.04 ln(0.1) + 0.48 ln(0.8)]/ln(2)
= 0.621 bits/symbol
Or H(Y∣X) = H(X, Y) - H(X) = 1.592 - 0.971 = 0.621 bits/symbol
5- H(X∣Y) = H(X, Y) - H(Y) = 1.592 - 0.999 = 0.593 bits/symbol
6- I(X, Y) = H(X) - H(X∣Y) = 0.971 - 0.593 = 0.378 bits/symbol
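All six quantities can be checked numerically (a verification sketch; the lambda is just a compact entropy helper):

```python
import numpy as np

px = np.array([0.4, 0.6])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])              # P(Y|X) of the review question

pxy = px[:, None] * P                   # joint matrix, as in step 2
py = pxy.sum(axis=0)                    # [0.48, 0.52]

H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
HX, HY, HXY = H(px), H(py), H(pxy.ravel())

print(HX, HY, HXY)                        # 0.971, 0.999, 1.592
print(HXY - HX, HXY - HY, HX + HY - HXY)  # 0.621, 0.593, 0.378
```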
8- Cascading of Channels
If two channels are cascaded, then the overall transition matrix is the product of the two
transition matrices.
p(z∣x) = p(y∣x) · p(z∣y)
(n × k)   (n × m)   (m × k)
[Figure: cascade of Channel 1 (inputs 1…n, outputs 1…m) and Channel 2 (inputs 1…m, outputs 1…k).]
For a series (cascaded) information channel, the overall transinformation cannot
exceed that of either channel individually:
I(X, Z) ≤ I(X, Y) and I(X, Z) ≤ I(Y, Z)
Example:
Find the transition matrix p(z∣x) for the cascaded channel shown.
[Figure: transition diagrams of the two cascaded channels.]
p(y∣x) = [0.3  0   0.7],   p(z∣y) = [0.7  0.3]
         [0.8  0.2  0 ]             [ 1    0 ]
                                    [ 1    0 ]
p(z∣x) = p(y∣x) · p(z∣y) = [0.91  0.09]
                           [0.76  0.24]
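Since the overall matrix is just a matrix product, the example can be verified directly (a sketch):

```python
import numpy as np

P_yx = np.array([[0.3, 0.0, 0.7],
                 [0.8, 0.2, 0.0]])   # p(y|x), size n x m = 2 x 3
P_zy = np.array([[0.7, 0.3],
                 [1.0, 0.0],
                 [1.0, 0.0]])        # p(z|y), size m x k = 3 x 2

P_zx = P_yx @ P_zy                   # overall p(z|x), size n x k = 2 x 2
print(P_zx)                          # [[0.91 0.09]
                                     #  [0.76 0.24]]
```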