EC > GATE 2022 > Information Theory
Let H(X) denote the entropy of a discrete random variable X taking K possible distinct real values. Which of the following statements is/are necessarily true?
A
H(X) ≤ log2 K bits
B
H(X) ≤ H(2X)
C
H(X) ≤ H(X²)
D
H(X) ≤ H(2^X)

The correct answers are A, B, and D.
Option A — H(X) ≤ log2 K : TRUE
This is a fundamental property of entropy. Entropy is maximized when all K outcomes are equally likely, in which case H(X) = log2 K; for any other distribution it is strictly smaller. So the bound always holds.
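The bound is easy to check numerically. A minimal Python sketch (the helper `entropy` is ours, not part of the original solution): the uniform distribution attains log2 K exactly, and randomly drawn distributions never exceed it.

```python
import math
import random

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

K = 4

# Uniform distribution attains the maximum: H(X) = log2 K.
uniform = [1 / K] * K
assert abs(entropy(uniform) - math.log2(K)) < 1e-12

# Any other distribution over K outcomes stays at or below log2 K.
random.seed(0)
for _ in range(1000):
    raw = [random.random() for _ in range(K)]
    total = sum(raw)
    probs = [r / total for r in raw]
    assert entropy(probs) <= math.log2(K) + 1e-12
```

Running the loop for many random distributions is of course no proof, but it illustrates why equality needs the uniform case.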

Option B — H(X) ≤ H(2X) : TRUE
Multiplying X by 2 is a one-to-one mapping: every distinct value of X maps to a unique value of 2X, with no merging of outcomes. A one-to-one function leaves the probability distribution over outcomes unchanged, so H(2X) = H(X) exactly, and the inequality holds with equality.
Option D — H(X) ≤ H(2^X) : TRUE
The exponential map x ↦ 2^x is also one-to-one on the reals, so by the same argument H(2^X) = H(X), and the inequality again holds with equality.
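The invariance of entropy under one-to-one maps can be demonstrated directly. A short Python sketch (helper `entropy_of` and the example distribution are ours): entropy is computed from the distribution over distinct output values, and both 2x and 2^x keep all four values distinct.

```python
import math
from collections import Counter

def entropy_of(values, probs):
    """Entropy in bits of a random variable given (value, probability)
    pairs; probabilities of equal values are merged first."""
    merged = Counter()
    for v, p in zip(values, probs):
        merged[v] += p
    return -sum(p * math.log2(p) for p in merged.values() if p > 0)

xs = [-2.0, -1.0, 1.0, 3.0]          # K = 4 distinct real values
probs = [0.1, 0.2, 0.3, 0.4]

h_x   = entropy_of(xs, probs)
h_2x  = entropy_of([2 * x for x in xs], probs)    # option B: doubling is injective
h_exp = entropy_of([2 ** x for x in xs], probs)   # option D: 2**x is injective

assert abs(h_x - h_2x) < 1e-12
assert abs(h_x - h_exp) < 1e-12
```

The merging step in `entropy_of` is the key point: an injective map never merges anything, so the multiset of probabilities, and hence the entropy, is unchanged.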
Option C — H(X) ≤ H(X²) : NOT necessarily true
Squaring is not a one-to-one function. If X takes both -1 and +1, they map to the same value 1 under X², reducing the number of distinct outcomes and therefore the entropy. So H(X²) can be strictly less than H(X), and the inequality does not always hold.

