Polya's urn model - limit distribution


Let an urn contain $w$ white and $b$ black balls. Draw a ball randomly from the urn and return it together with another ball of the same color. Let $b_n$ be the number of black balls and $w_n$ the number of white balls after the $n$-th draw-and-replacement. Let $X_n$ be the relative proportion of white balls after the $n$-th draw-and-replacement.

I start with $b=w=1$, so the total number of balls after the $n$-th draw-and-replacement is $n+2$. Now I want to find the limit distribution of $X_n$; I already showed that $X_n$ is a martingale and that it converges a.s. It is

$$X_n = \dfrac{w_n}{n+2} \quad\text{for}\quad n \in \mathbb{N}_0. $$

I've read that the limit distribution is a beta distribution, but I don't know how to get there.
I could write $w_n$ as the sum of $Y_i$ where $Y_i$ is $0$, if the $i$-th ball is black and $1$, if the $i$-th ball is black. Then I'd have

$$ w_n = 1+\sum_{i=1}^{n} Y_i. $$

Does this help? How can I proceed?
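For what it's worth, a quick Monte Carlo simulation (a sketch; the helper name `polya_urn` is mine) suggests the limit for $w = b = 1$ is indeed flat:

```python
import random

def polya_urn(n_draws, w=1, b=1, rng=random):
    """Simulate one run of Polya's urn; return the white fraction X_n."""
    for _ in range(n_draws):
        # Draw a ball uniformly at random; return it plus one of the same color.
        if rng.random() < w / (w + b):
            w += 1
        else:
            b += 1
    return w / (w + b)

rng = random.Random(0)
samples = [polya_urn(200, rng=rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
print(f"empirical mean of X_200: {mean:.3f}")  # should be near 1/2
```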

Thanks! :)


There are 2 answers below.

Answer 1



Write $\Theta := \lim_n X_n$ for the a.s. limit, and let $B_n := w_n - 1$ be the number of white balls added after $n$ draws, so that $X_n = \frac{B_n+1}{n+2}$. Assuming $B_n$ is uniform on $\{0,1,\dots,n\}$ (provable by induction):
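The uniformity claim can be checked exactly for small $n$ by propagating the distribution step by step (a sketch using exact rational arithmetic; the helper name `white_distribution` is mine):

```python
from fractions import Fraction

def white_distribution(n):
    """Exact distribution of w_n (white balls after n draws), starting w = b = 1.
    Returns a dict mapping w -> P(w_n = w)."""
    dist = {1: Fraction(1)}  # w_0 = 1 with probability 1
    for step in range(n):
        total = step + 2  # balls in the urn before this draw
        new = {}
        for w, p in dist.items():
            new[w + 1] = new.get(w + 1, Fraction(0)) + p * Fraction(w, total)        # drew white
            new[w] = new.get(w, Fraction(0)) + p * Fraction(total - w, total)        # drew black
        dist = new
    return dist

n = 12
dist = white_distribution(n)
# B_n = w_n - 1 should take each value in {0, ..., n} with probability 1/(n+1)
print(all(p == Fraction(1, n + 1) for p in dist.values()))
```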

$$M_{\Theta}(t) = E[\exp(t\Theta)]$$

$$= E[\exp(t\lim \frac{B_n + 1}{n+2})]$$

$$= E[\lim\exp(t \frac{B_n + 1}{n+2})]$$

$$= \lim E[\exp(t \tfrac{B_n + 1}{n+2})] \qquad \text{(bounded convergence, since } 0 \le \exp(t\tfrac{B_n+1}{n+2}) \le e^{|t|}\text{)}$$

$$= \lim \frac{1}{n+1}[\exp(t \frac{1}{n+2}) + \exp(t \frac{2}{n+2}) + ... + \exp(t \frac{n+1}{n+2})]$$

Case 1: $t \ne 0$

$$= \lim \frac{a(n)}{(n+1)(1-a(n))} (1-a(n)^{n+1}), \ \text{where} \ a(n) := e^{\frac{t}{n+2}}$$

$$= \lim \frac{a(n)}{(n+1)(1-a(n))} \lim (1-a(n)^{n+1})$$

$$= \lim \frac{a(n)}{(n+1)(1-a(n))} (1-e^t)$$

$$= \frac{1-e^t}{-t}, \quad \text{since } a(n) \to 1 \text{ and } (n+1)\bigl(1-a(n)\bigr) = (n+1)\bigl(1-e^{t/(n+2)}\bigr) \to -t$$

$$= \frac{e^t-1}{t}$$
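As a numerical sanity check of this limit (a sketch; the helper name `mgf_finite` is mine and just evaluates the closed-form geometric sum above):

```python
import math

def mgf_finite(n, t):
    """E[exp(t * (B_n+1)/(n+2))] with B_n uniform on {0,...,n},
    via the closed-form geometric sum a(1 - a^(n+1)) / ((n+1)(1 - a))."""
    a = math.exp(t / (n + 2))
    return a * (1 - a ** (n + 1)) / ((n + 1) * (1 - a))

t = 1.7
limit = (math.exp(t) - 1) / t   # claimed limit: the mgf of Unif(0,1)
approx = mgf_finite(10**6, t)
print(approx, limit)
```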

Case 2: $t = 0$

$$= \lim \frac{1}{n+1}[\exp((0) \frac{1}{n+2}) + \exp((0) \frac{2}{n+2}) + ... + \exp((0) \frac{n+1}{n+2})]$$

$$= \lim \frac{1}{n+1} (1)(n+1) = 1$$

This is the m.g.f. of $\operatorname{Unif}(0,1) = \operatorname{Beta}(1,1)$, so for $w = b = 1$ the limit distribution of $X_n$ is uniform on $(0,1)$.

Answer 2

Suppose that we have started with $w$ white balls and $b$ black balls. Then

\begin{align*} \mathbb{P}(w_n = w+k) &= \binom{n}{k} \frac{w(w+1)\dots(w+k-1)b(b+1)\dots(b+n-k-1)}{(w+b)(w+b+1)\dots(w+b+n-1)} \\ &= \frac{1}{B(w, b)} \binom{n}{k} \frac{\Gamma(w+k)\Gamma(b+n-k)}{\Gamma(w+b+n)} \\ &= \frac{1}{B(w, b)} \frac{k^{w-1} (n-k)^{b-1}}{n^{w+b-1}} \frac{E_k(w)E_{n-k}(b)}{E_n(b+w)} , \end{align*}

where $\Gamma(\cdot)$ is the gamma function, $B(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$ is the beta function, and

$$ E_n(z) := \frac{\Gamma(n+z)}{n!n^{z-1}}. $$
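As a quick numerical check that $E_n(z) \to 1$ (a sketch; log-gamma is used to avoid overflow for large $n$):

```python
import math

def E(n, z):
    """E_n(z) = Gamma(n + z) / (n! * n**(z - 1)), computed via log-gamma for stability."""
    return math.exp(math.lgamma(n + z) - math.lgamma(n + 1) - (z - 1) * math.log(n))

for n in (10, 1000, 100000):
    print(n, E(n, 2.5))  # approaches 1 as n grows
```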

Note that $E_n(z) \to 1$ as $n\to\infty$. So, if we write $p_k = k/n$, then the m.g.f. of $X_n$ is explicitly given by

\begin{align*} \mathbb{E}[e^{\lambda X_n}] = \frac{1}{B(w, b)} \sum_{k=0}^{n} \exp\biggl( \lambda \frac{p_k + w/n}{1 + (w+b)/n} \biggr) p_k^{w-1}(1 - p_k)^{b-1} \frac{1}{n} \cdot \frac{E_k(w)E_{n-k}(b)}{E_n(b+w)}. \end{align*}

Letting $n \to \infty$, the correction factors $E_k(w)E_{n-k}(b)/E_n(b+w)$ tend to $1$, and the sum is a Riemann sum for the integral below, so it converges to

\begin{align*} \mathbb{E}[e^{\lambda X_{\infty}}] = \frac{1}{B(w, b)} \int_{0}^{1} e^{\lambda p} p^{w-1}(1 - p)^{b-1} \, \mathrm{d}p. \end{align*}

From this, we read off that the distribution of $X_{\infty}$ has the density

$$ f(p) = \frac{1}{B(w, b)} p^{w-1}(1 - p)^{b-1} \mathbf{1}_{(0,1)}(p), $$

proving that the limit distribution is $\operatorname{Beta}(w, b)$.
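As a numerical cross-check of this convergence (a sketch; the helper names are mine, and log-gamma keeps the exact probability formula stable), one can compare $\mathbb{E}[e^{\lambda X_n}]$ for moderate $n$ with the limiting Beta integral:

```python
import math

def log_beta(a, b):
    """log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a+b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def p_wn(n, k, w, b):
    """P(w_n = w + k) = C(n,k) Gamma(w+k) Gamma(b+n-k) / (B(w,b) Gamma(w+b+n))."""
    log_p = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             + math.lgamma(w + k) + math.lgamma(b + n - k) - math.lgamma(w + b + n)
             - log_beta(w, b))
    return math.exp(log_p)

def mgf_n(n, lam, w, b):
    """E[exp(lam * X_n)] with X_n = w_n / (n + w + b), summed exactly over k."""
    return sum(p_wn(n, k, w, b) * math.exp(lam * (w + k) / (n + w + b))
               for k in range(n + 1))

def mgf_beta(lam, w, b, m=20000):
    """Midpoint-rule value of (1/B(w,b)) * int_0^1 e^{lam p} p^{w-1} (1-p)^{b-1} dp."""
    h = 1.0 / m
    s = sum(math.exp(lam * p) * p ** (w - 1) * (1 - p) ** (b - 1)
            for p in ((j + 0.5) * h for j in range(m)))
    return s * h / math.exp(log_beta(w, b))

w, b, lam = 2, 3, 1.3
print(mgf_n(400, lam, w, b), mgf_beta(lam, w, b))  # should nearly agree
```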