Fourier transform of random binary vector


Consider a uniformly chosen random binary vector $V$ with $n$ elements. That is, we say $V_i = 0$ with probability $1/2$ and $V_i=1$ with probability $1/2$. What is the probability distribution of the Fourier transform of $V$?

I have searched online but have not managed to find an answer.


There are 3 best solutions below

Answer 1

Well, the discrete Fourier transform is a linear transformation of a random vector. You can write $W = FV$. Because $F$ is invertible, $V=F^{-1}W$, and you get $f_W(w)=f_V(F^{-1}w)/|\det(F)|$. This assumes you know the pdf of $V$. Is this what you were looking for?
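As a quick numerical sketch of this answer's point (using NumPy, which the answer itself doesn't mention): the DFT really is just a matrix-vector product $W = FV$, and $F$ is invertible, so the original vector can be recovered.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
v = rng.integers(0, 2, size=N)           # random binary vector, P(0)=P(1)=1/2

# Build the DFT matrix F with entries omega^(nk), omega = exp(-2*pi*i/N)
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)

W = F @ v                                # the linear transform W = F v
assert np.allclose(W, np.fft.fft(v))     # agrees with the standard FFT

# F is invertible, so v is recovered from W
v_back = np.linalg.solve(F, W)
assert np.allclose(v_back, v)
```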

Answer 2

Suppose your binary vector has $N$ elements, so that $v\in\{0,1\}^{N\times 1}$, i.e. $v$ is a single column vector with $N$ rows.

To obtain the Fourier Transform vector, which we denote as $V\in \mathbb{C}^{N\times 1}$, i.e. it is a vector with complex elements, we have

$\large V[k]=\sum_{n=0}^{N-1}v[n]e^{-i\frac{2\pi nk}{N}}$ (Eq. $1$)

for $n,k \in \{0,1,\ldots,N-1\}$

We can set $\omega=e^{-i\frac{2\pi}{N}}$ (a primitive $N$th root of unity) so that (Eq. $1$) can be expressed as

$\large V[k]=\sum_{n=0}^{N-1}v[n]\omega^{nk}$

This relationship can be (quite conveniently) represented as a matrix equation

$V=Fv$

where $F$ is an $N\times N$ symmetric Vandermonde matrix of the form

$$ F=\left( \begin{array}{ccccc} \omega^{(0)(0)} & \omega^{(0)(1)} & \omega^{(0)(2)} & \cdots & \omega^{(0)(N-1)} \\ \omega^{(1)(0)} & \omega^{(1)(1)} & \omega^{(1)(2)} & \cdots & \omega^{(1)(N-1)} \\ \omega^{(2)(0)} & \omega^{(2)(1)} & \omega^{(2)(2)} & \cdots & \omega^{(2)(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \omega^{(N-1)(0)} & \omega^{(N-1)(1)} & \omega^{(N-1)(2)} & \cdots & \omega^{(N-1)(N-1)} \end{array} \right)$$

Matrix $F$ is easy to invert,

$F^{-1}=\frac{1}{N}F^*$

where $*$ denotes the Complex Conjugate, i.e.

$$F^*= \left( \begin{array}{ccccc} \omega^{-(0)(0)} & \omega^{-(0)(1)} & \omega^{-(0)(2)} & \cdots & \omega^{-(0)(N-1)} \\ \omega^{-(1)(0)} & \omega^{-(1)(1)} & \omega^{-(1)(2)} & \cdots & \omega^{-(1)(N-1)} \\ \omega^{-(2)(0)} & \omega^{-(2)(1)} & \omega^{-(2)(2)} & \cdots & \omega^{-(2)(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \omega^{-(N-1)(0)} & \omega^{-(N-1)(1)} & \omega^{-(N-1)(2)} & \cdots & \omega^{-(N-1)(N-1)} \end{array} \right)$$

so that

$v=\frac{1}{N}F^*V$
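A short numerical check of the inversion formula above (a NumPy sketch, not part of the original answer): build the Vandermonde matrix $F$ from $\omega$ and verify that $F^{-1}=\frac{1}{N}F^*$ and that $v=\frac{1}{N}F^*(Fv)$.

```python
import numpy as np

N = 6
n = np.arange(N)
omega = np.exp(-2j * np.pi / N)
F = omega ** np.outer(n, n)              # symmetric Vandermonde DFT matrix

# Claimed inverse: F^{-1} = (1/N) F*, where * is elementwise conjugation
assert np.allclose(np.linalg.inv(F), np.conj(F) / N)

# Round trip: v = (1/N) F* (F v) for an example binary vector
v = np.array([1, 0, 1, 1, 0, 0], dtype=float)
assert np.allclose(np.conj(F) @ (F @ v) / N, v)
```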

It is more convenient to normalise the Fourier transform matrix $F$ by $\frac{1}{\sqrt{N}}$, so that

$(\frac{1}{\sqrt{N}}F)^{-1}=(\frac{1}{\sqrt{N}}F^*)$

and the determinants of $\frac{1}{\sqrt{N}}F$ and $\frac{1}{\sqrt{N}}F^*$ both have modulus $1$, since both matrices are unitary.
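This unitarity is easy to confirm numerically (again a NumPy sketch added for illustration): the normalised matrix's inverse equals its elementwise conjugate (because it is symmetric), and its determinant has modulus $1$.

```python
import numpy as np

N = 5
n = np.arange(N)
U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # normalised DFT matrix

# U is unitary and symmetric, so its inverse is its elementwise conjugate
assert np.allclose(np.linalg.inv(U), np.conj(U))

# Its determinant has modulus 1 (it need not equal 1 exactly)
assert np.isclose(abs(np.linalg.det(U)), 1.0)
```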

This leads to the normalised forward transform being

$V=\frac{1}{\sqrt{N}}Fv$

and the reverse normalised transform being

$v=\frac{1}{\sqrt{N}}F^*V$

Assume that the probability density function (pdf) of $v$ is $f(v)$, and denote the pdf of the resulting Fourier transform vector $V$ as $g(V)$.

By the law of transformation of variables for probability

$$g(V)=f\left(\frac{1}{\sqrt{N}}F^*V\right)\times \frac{1}{\left|\det\left(\frac{1}{\sqrt{N}}F\right)\right|}=f\left(\frac{1}{\sqrt{N}}F^*V\right)$$

as $\left|\det\left(\frac{1}{\sqrt{N}}F\right)\right|=1$

We are given additional information on the pdf of vector $v$. Each element $v[k]$, for $k\in\{0,1,2,\ldots,N-1\}$, can be either $0$ or $1$ with probability $0.5$.

To make things easier, we additionally assume that the elements $v[k]$ are independent and identically distributed (i.i.d.), with the following probability mass function (pmf)

$$v[k]\sim 0.5\,\delta(0)+0.5\,\delta(1)$$

where $\delta(a)$ denotes a unit point mass at $a$ (a Dirac delta centred at $a$).

Now examine vector $V$, the $N$-point Fourier Transform of $v$. For $V[0]$, or bin $0$, which is the Direct Current (DC) component, we have

$$V[0]=\sum_{n=0}^{N-1}v[n]$$

If we let $N=2$, the distribution of $V[0]$ will be the convolution of the pmfs of $v[0]$ and $v[1]$, which gives $$V[0]\sim 0.5^2\,\delta(0) + 2(0.5)^2\,\delta(1)+0.5^2\,\delta(2)$$

For $N=3$, we have $$V[0]\sim 0.5^3\,\delta(0) + 3(0.5)^3\,\delta(1)+3(0.5)^3\,\delta(2)+0.5^3\,\delta(3)$$

More generally, for any $N$, $V[0]$ has a Binomial distribution with parameters $N$ and $p=0.5$, so that (for $0\leq m \leq N$)

$$P(V[0]=m)={N \choose m}p^m(1-p)^{N-m}={N \choose m}0.5^N$$
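This binomial claim can be verified exactly for small $N$ by brute force (an illustrative Python sketch, not from the original answer): enumerate all $2^N$ equally likely binary vectors, tally $V[0]=\sum_n v[n]$, and compare with the binomial pmf.

```python
import itertools
from math import comb

import numpy as np

N = 6
counts = np.zeros(N + 1)
# Enumerate all 2^N equally likely binary vectors and tally V[0] = sum of entries
for bits in itertools.product([0, 1], repeat=N):
    counts[sum(bits)] += 1

empirical = counts / 2 ** N
binomial = np.array([comb(N, m) * 0.5 ** N for m in range(N + 1)])
assert np.allclose(empirical, binomial)   # V[0] ~ Binomial(N, 1/2) exactly
```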

As $N$ tends to infinity, the distribution of $V[0]$ will tend to a Gaussian distribution with mean $Np=N/2$ and variance $Np(1-p)=N/4$.

Note that if you applied a normalising factor $1/\sqrt{N}$ to the transform, the density of $V[0]$ would be squeezed on the x-axis, with the variance reducing to $1/4$ and the mean reducing to $\sqrt{N}/2$.

So far we have considered the special case of $V[0]$, the DC component. If we consider the other (non-zero) components, we have

$$V[k]=\sum_{n=0}^{N-1}v[n]e^{-i\frac{2\pi nk}{N}}=\sum_{n=0}^{N-1}v[n](\cos(\frac{2\pi nk}{N})-i\sin(\frac{2\pi nk}{N}))$$

What follows lacks rigour, but is based on some intuition based on empirical testing.

For large $N$, the real and imaginary components are both approximately Gaussian, each with mean $0$; this arises from the convolution of the pmfs of the individual components of $v$ as they are scaled by the complex exponentials and summed together.

The zero mean arises from the symmetry of the sine and cosine functions about the x-axis, as (for frequency bin $k$) there are either $k$ (for $k\leq N/2$) or $N-k$ (for $k> N/2$) complete integer cycles of the sinusoids.

Examining each of the trigonometric functions, the root mean square of each is $1/\sqrt{2}$. In addition, for the original binary vector, $P(v[k]=0)=P(v[k]=1)=0.5$, so the variance of the unweighted sum $\sum_n v[n]$ is $Np(1-p)=N/4$. Combining these facts, the variance of both the real and the imaginary component of $V[k]$, $k\neq 0$, is $(N/4)\times(1/\sqrt{2})^2$, which is $N/8$.

It turns out that the distributions of the real and imaginary parts of the non-DC Fourier coefficients are all the same.
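The mean-$0$, variance-$N/8$ claim can be checked empirically (a Monte Carlo sketch in NumPy, added for illustration; the bin index $k=3$ is an arbitrary non-DC choice):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 64, 100_000
k = 3                                     # any non-DC bin (and not N/2)

# Compute bin k of the DFT for many random binary vectors at once
phase = np.exp(-2j * np.pi * k * np.arange(N) / N)
v = rng.integers(0, 2, size=(trials, N))  # each row a random binary vector
Vk = v @ phase

re, im = Vk.real, Vk.imag
assert abs(re.mean()) < 0.1 and abs(im.mean()) < 0.1   # mean approximately 0
assert np.isclose(re.var(), N / 8, rtol=0.05)          # variance approximately N/8
assert np.isclose(im.var(), N / 8, rtol=0.05)
```

(For even $N$ the bin $k=N/2$ is special: there the sine weights vanish, so the imaginary part is identically $0$ and the real part carries variance $N/4$.)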

Answer 3

The DC component is a one-dimensional random walk with step size 1/2 and drift 1/2, of $n$ steps, so is binomially distributed with mean $n/2$ and variance $n/4$.

The other components are two-dimensional random walks of $n$ steps. Splitting these into real and imaginary components: for a non-DC bin the complex exponentials $e^{-2\pi ink/n}$ sum to zero over a full set of steps, so both the real and the imaginary part have mean $0$. Each Bernoulli step has variance $1/4$ and the squared sinusoid weights average $1/2$, so each part has variance $n/8$. (In each quadrant, say the first, half of the step directions are left of $\pi/4$ and half are right of it. These can be paired to make approximately $\pi/4$-directed sums. There is at most one unpaired vector, leading to the $1/n$ comment below.)

There is a small correction of order $1/n$ to the mean step size for even $n$ versus odd $n$. Since interesting FFTs have large $n$s, thus large variances, this should be negligible (contrast $O(n)$ with $O(1/n)$).

Edited to correct pre-coffee-brain errors:

  • The DC component is not a random walk with step size 1.
  • "Half-way up the circle" is $\sqrt{2}/2$, not $1/2$.
  • Justifying those estimates would be good.