The implication of $H(X_1,X_2,\dots,X_n) = nH(X_1)$

Let $X_1, X_2, \dots$ be a discrete random process and define $$a_n = \frac{1}{n} H(X_1, X_2, \dots, X_n).$$ Suppose further that for all $n \in \mathbb{N}$ we have $$a_n = a_{n+1}.$$ I think this implies that $X_1, X_2, \dots$ is an IID random process, but I don't know how to prove it. Obviously, the equality condition implies that for all $n \in \mathbb{N}$ $$H(X_1, X_2, \dots, X_n) = nH(X_1).$$

Does this last equality imply that the individual entropies are equal? I.e., $$H(X_1) = H(X_2) = \dots = H(X_n)?$$ For example, for $n = 2$ we have $$H(X_1, X_2) \le H(X_1) + H(X_2) \implies 2H(X_1) \le H(X_1) + H(X_2) \implies H(X_1) \le H(X_2).$$ The pattern doesn't seem to continue for $n = 3$: $$H(X_1, X_2, X_3) \le H(X_1) + H(X_2) + H(X_3) \implies 3H(X_1) \le H(X_1) + H(X_2) + H(X_3) \implies 2H(X_1) \le H(X_2) + H(X_3).$$

Edit: The answer by Misha Lavrov shows that in general the random process isn't IID. If we add the assumption that $X_1, X_2, \dots$ is a stationary process, can we conclude that the random variables are independent, and hence IID? By stationary I mean $$\text{Pr}\{X_1 = x_1, X_2 = x_2, \dots, X_n = x_n\} = \text{Pr}\{X_{1+l} = x_1, X_{2+l} = x_2, \dots, X_{n+l} = x_n\}$$ for all integers $l$ and $n$.
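As a quick numerical sanity check of the $n = 2$ subadditivity step above, here is a small Python sketch. The joint pmf is made up purely for illustration, and `entropy_bits` is my own helper; the inequality itself is a standard fact, not specific to this question.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a pmf given as an array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]  # drop zero-probability outcomes (0 log 0 = 0)
    return -np.sum(p * np.log2(p))

# Made-up joint pmf for (X1, X2); rows index X1, columns index X2.
joint = np.array([[0.3, 0.1],
                  [0.1, 0.5]])

H12 = entropy_bits(joint)              # H(X1, X2)
H1 = entropy_bits(joint.sum(axis=1))   # H(X1), marginalizing out X2
H2 = entropy_bits(joint.sum(axis=0))   # H(X2), marginalizing out X1

# Subadditivity: H(X1, X2) <= H(X1) + H(X2), with equality iff independent.
print(H12, H1 + H2)
assert H12 <= H1 + H2 + 1e-12
```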
There is 1 answer below, by Misha Lavrov.
In general, $X_1, \dots, X_n$ don't have to be either independent or identically distributed.
First of all, even if we did have $H(X_1) = H(X_2) = \dots = H(X_n)$, that doesn't tell us anything definite about the distributions of $X_1, \dots, X_n$. We can find lots of examples of different distributions with the same entropy.
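For instance, here is a small sketch of my own (not from the answer; SciPy assumed) that numerically finds a three-outcome distribution with exactly the 1 bit of entropy of a fair coin, so equal entropy clearly does not pin down the distribution:

```python
import numpy as np
from scipy.optimize import brentq

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A fair bit has entropy 1. Find a three-outcome distribution of the
# hypothetical form (q, (1-q)/2, (1-q)/2) with the same entropy.
f = lambda q: entropy_bits([q, (1 - q) / 2, (1 - q) / 2]) - 1.0

# Entropy decreases from log2(3) ~ 1.585 (at q = 1/3) toward 0 (as q -> 1),
# so there is a sign change on [0.6, 0.999] and a root exists.
q = brentq(f, 0.6, 0.999)
print(q, entropy_bits([q, (1 - q) / 2, (1 - q) / 2]))  # ~1.0 bit
```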
Second, we do not know that the entropies of $X_1, \dots, X_n$ are equal; we only know that $H(X_1)$ is equal to $H(X_2 \mid X_1)$, which is equal to $H(X_3 \mid X_1, X_2)$, and so on; in general, $$H(X_1) = H(X_n \mid X_1, \dots, X_{n-1})$$ (by the chain rule, $H(X_n \mid X_1, \dots, X_{n-1}) = H(X_1, \dots, X_n) - H(X_1, \dots, X_{n-1}) = nH(X_1) - (n-1)H(X_1) = H(X_1)$). But the unconditional entropy of $X_n$ can be very different. For example, suppose that $Y_1, Y_2, \dots$ is a sequence of independent uniform samples from $\{0,1\}$, and for all $n$, $X_n = Y_1 + Y_2 + \dots + Y_n$. Then $$H(X_1, \dots, X_n) = H(Y_1, \dots, Y_n) = n$$ (in bits), because either of the $n$-tuples $(X_1, \dots, X_n)$ and $(Y_1, \dots, Y_n)$ determines the other. However, the unconditional entropy $H(X_n)$ keeps growing: the distribution of $X_n$ is $\text{Binomial}(n, \frac12)$, and so $H(X_n)$ grows roughly as $\frac12 \log_2 n$.
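Both claims in this example can be checked numerically. Here is a brief sketch (my own verification, assuming NumPy/SciPy): the joint entropy is computed exactly by enumerating all $2^n$ equally likely $Y$-sequences, and the marginal entropy comes straight from the binomial pmf.

```python
import numpy as np
from scipy.stats import binom
from itertools import product
from collections import Counter

def entropy_bits(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def joint_entropy_bits(n):
    """H(X_1,...,X_n) where X_k = Y_1 + ... + Y_k for fair bits Y_i,
    computed by enumerating all 2^n equally likely Y-sequences."""
    counts = Counter(tuple(np.cumsum(y)) for y in product([0, 1], repeat=n))
    probs = np.array(list(counts.values())) / 2.0 ** n
    return entropy_bits(probs)

for n in [2, 4, 8, 12]:
    # Equals n exactly: the map from Y-sequences to X-tuples is a bijection.
    print(n, joint_entropy_bits(n))

# Marginal entropy of X_n ~ Binomial(n, 1/2): grows like (1/2) log2 n,
# offset by a near-constant amount (roughly one bit).
for n in [4, 16, 64, 256]:
    H = entropy_bits(binom.pmf(np.arange(n + 1), n, 0.5))
    print(n, H, 0.5 * np.log2(n))
```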
(The entropy $H(X_n)$ also does not have to keep increasing with $n$. For example, we could follow the pattern above for $X_1, \dots, X_{99}$ but then decide that $X_{100} = Y_{100}$, for a sudden drop in entropy.)
Intuitively, the condition in the question says only that the total entropy of the system grows linearly: nothing more and nothing less. Each new variable $X_n$ must contribute the same amount of fresh randomness, but it can also depend on $X_1, \dots, X_{n-1}$, which can inflate its marginal entropy.
One thing we can say is that if $X_1, \dots, X_n$ are identically distributed, then the entropy condition tells us that they are independent.
In this case, we know $H(X_1) = H(X_n)$ for all $n$, so the earlier equation becomes $$H(X_n) = H(X_n \mid X_1, \dots, X_{n-1}).$$ In general, $H(X) = H(X \mid Y)$ holds only if $X$ and $Y$ are independent; this can be proven by looking carefully at the definitions, but intuitively it holds because $H(X) = H(X \mid Y)$ says that nothing about $X$ is learned by knowing $Y$. So in our case, $X_n$ is independent of $(X_1, \dots, X_{n-1})$ for each $n$, which means that all the random variables in the process are mutually independent.
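To spell out the two steps being used here (these are the standard identities, not part of the original answer): for the criterion, $$H(X) - H(X \mid Y) = I(X;Y) = D\big(p_{XY} \,\|\, p_X p_Y\big) \ge 0,$$ with equality if and only if $p_{XY} = p_X p_Y$, i.e. $X$ and $Y$ are independent. And for the final conclusion, combining $H(X_i \mid X_1, \dots, X_{i-1}) = H(X_1) = H(X_i)$ with the chain rule gives $$H(X_1, \dots, X_n) = \sum_{i=1}^{n} H(X_i \mid X_1, \dots, X_{i-1}) = \sum_{i=1}^{n} H(X_i),$$ and the joint entropy equals the sum of the marginal entropies exactly when the variables are mutually independent: the gap between the two sides is the sum of the nonnegative terms $I(X_i;\, X_1, \dots, X_{i-1})$, which must then all vanish.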
(In particular, a stationary process has identically distributed $X_1, X_2, \dots$, so under the stationarity assumption added in the question, the variables are IID.)