Partition of unity & intuition behind it.


Is there some intuition behind this identity? It looks like there should be, but I can't figure it out. $$ (1-\alpha)^{n} + \sum_{i=1}^{n}\alpha(1-\alpha)^{n-i} = 1, \qquad \alpha \in [0, 1]. $$ I would be grateful for your help!
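As a quick sanity check before looking for intuition, the identity can be verified numerically (an illustrative check; the values of $\alpha$ and $n$ below are chosen arbitrarily):

```python
# Evaluate the left-hand side of the identity directly from its definition.
def lhs(alpha, n):
    return (1 - alpha) ** n + sum(alpha * (1 - alpha) ** (n - i) for i in range(1, n + 1))

# The result should equal 1 (up to floating-point error) for any alpha in [0, 1].
for alpha in (0.0, 0.3, 0.7, 1.0):
    for n in (1, 5, 20):
        assert abs(lhs(alpha, n) - 1.0) < 1e-12
```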




If $\alpha \in (0,1)$, there is a geometric point of view: $1$ is the volume of the $n$-dimensional unit hypercube. Start with a smaller $n$-hypercube of side $1-\alpha$, which has volume $(1-\alpha)^n$. Then, for each $i$ from $1$ to $n$, add a "side slab" that stretches the box to side $1$ in the $i$-th dimension, turning it into a box of side $1$ in the first $i$ dimensions and side $1-\alpha$ in the remaining $n-i$. The slab added at step $i$ has thickness $\alpha$ and cross-section $1^{i-1}(1-\alpha)^{n-i}$, hence volume $\alpha(1-\alpha)^{n-i}$. After $n$ steps you have reached the unit hypercube.
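The step-by-step growth above is exactly a telescoping sum: since $\alpha(1-\alpha)^{n-i} = (1-\alpha)^{n-i} - (1-\alpha)^{n-i+1}$, the volumes of the successive boxes cancel in pairs,
$$ (1-\alpha)^{n} + \sum_{i=1}^{n}\alpha(1-\alpha)^{n-i} = (1-\alpha)^{n} + \sum_{i=1}^{n}\left[(1-\alpha)^{n-i} - (1-\alpha)^{n-i+1}\right] = (1-\alpha)^{n} + \left(1 - (1-\alpha)^{n}\right) = 1. $$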


Consider tossing a biased coin with probability $\alpha$ of heads and $1-\alpha$ of tails. You keep tossing the coin until you toss heads.

The probability that you toss exactly $k$ tails before your first heads is $\alpha(1-\alpha)^{k}$, and the probability that you toss at least $k$ tails before your first heads is $(1-\alpha)^k$ (the first $k$ tosses are all tails). Since you will toss heads at some point of time, $$ (1-\alpha)^{n} + \sum_{i=1}^{n}\alpha(1-\alpha)^{n-i} = 1 $$

The first term is the probability that you toss at least $n$ tails before your first heads, and the sum collects, for each $k = 0, \dots, n-1$ (substituting $k = n-i$), the probability that you toss exactly $k$ tails before your first heads.
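The decomposition can be checked exactly, without simulation, by summing the two kinds of probabilities (an illustrative check; $\alpha$ and $n$ below are chosen arbitrarily):

```python
alpha, n = 0.35, 8

# P(exactly k tails before the first heads), for k = 0, ..., n-1;
# these correspond to the terms alpha * (1-alpha)^(n-i) with k = n - i.
exactly = [alpha * (1 - alpha) ** k for k in range(n)]

# P(the first n tosses are all tails), i.e. at least n tails before the first heads.
at_least_n = (1 - alpha) ** n

total = at_least_n + sum(exactly)
assert abs(total - 1.0) < 1e-12  # the cases partition the sample space
```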

Edit: As Arthur said, the argument is easier to follow if you think of it as "giving up after the $n^{\operatorname{th}}$ tails" rather than tossing $n$ tails before the first heads. The argument remains similar for the most part.