I have been taking a Coursera course on stochastic processes. The first process introduced was the renewal process, defined as follows:
$$S_n = S_{n-1} + \xi_n$$
where $\xi_n$ are IID.
We define $F^{n*}$ as the C.D.F. of the sum of $n$ IID variables, i.e. the $n$-fold convolution of $F$, where $F(x) = P\{\xi \le x\}$ and $F(0) = 0$. The claim is that, for every fixed $t$,
$$\sum_{n=1}^\infty F^{n*}(t) < \infty.$$
I would have thought that this has something to do with the CLT, but I cannot seem to take the next step, nor can I find any intuitive reason why this should be so. The fact that these are positive random variables does not seem to be much of a clue.
Any thoughts, clues, or ideas would be of great help. I much appreciate it, and thank you folks in advance.
The proof is really not that difficult, though it is a bit more elaborate than the accepted solution. I think there are a couple of interesting things going on here, and I thought it would be useful for other people learning this.
Given $S_n = S_{n-1} + \xi_n$, where the $\xi_i$ are i.i.d. with distribution $\mathbf{F}$ and $\mathbf{F}(0) = 0$, and writing $\mathbf{N}_t = \max\{n : \mathbf{S}_n \le t\}$ for the number of renewals up to time $t$, we have (1) $\mathcal{U}(t) = \sum_1^\infty \mathbf{F}^*_n(t) < \infty$, and (2) $\mathbf{E}\,\mathbf{N}_t = \mathcal{U}(t)$, where $\mathbf{F}^*_n$ is the C.D.F. of $\mathbf{S}_n$, i.e. the $n$-fold convolution of $\mathbf{F}$.
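Before the proof, a quick numeric sanity check of (1) and (2) together. This is a minimal Monte Carlo sketch, assuming exponential inter-arrival times with rate $\lambda$ (my choice for illustration, since then $\mathbf{N}_t$ is Poisson and $\mathcal{U}(t) = \lambda t$ is known in closed form):

```python
import random

# Monte Carlo sketch: for xi ~ Exp(lam) the renewal process is a Poisson
# process, so E[N_t] should match U(t) = lam * t.
# (The exponential distribution and the values of t, lam are assumptions
#  made purely for this check; the proof below is distribution-free.)

def simulate_N(t, lam, rng):
    """One sample of N_t = max{n : S_n <= t}."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(lam)   # xi_n ~ Exp(lam), positive a.s.
        if s > t:
            return n
        n += 1

def estimate_E_N(t, lam, trials=200_000, seed=0):
    """Sample mean of N_t over many independent runs."""
    rng = random.Random(seed)
    return sum(simulate_N(t, lam, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    t, lam = 3.0, 2.0
    print(estimate_E_N(t, lam))  # should be close to U(t) = lam * t = 6.0
```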
The proof goes as follows.
To prove identity (1), first note that $\mathbf{F} : \mathbb{R^+} \rightarrow [0,1]$, that $\mathbf{F}^*_n : \mathbb{R^+} \rightarrow [0,1]$, and that $\mathbf{F}^*_n(t) \ge \mathbf{F}^*_{n+1}(t)$ for all $t$ (the $\xi_i$ are positive, so $\mathbf{S}_{n+1} \ge \mathbf{S}_n$). So the sequence of functions $\mathbf{F}^*_n$ is monotone decreasing.

Next, $\mathbf{F}^*_n(t) \le \mathbf{F}(t)^n$: since the $\xi_i$ are positive, the event $\{\mathbf{S}_n \le t\}$ is contained in $\bigcap_{i=1}^n \{\xi_i \le t\}$, and the $\xi_i$ are independent with $\mathbf{F}(\alpha) = \mathbf{P}(\xi_i \le \alpha)$. Hence $\sum_1^\infty \mathbf{F}^*_n(t) \le \sum_1^\infty \mathbf{F}(t)^n = \mathbf{F}(t) + \mathbf{F}(t)^2 + \mathbf{F}(t)^3 + \cdots$, a geometric series that converges whenever $0 \le \mathbf{F}(t) < 1$.

Consider now a point $t_0$ such that $\mathbf{F}(t_0) = 1$. The interesting aspect of the convolution is that there is still an $r$ such that $\mathbf{F}^*_r(t_0) < 1$. This follows from $\mathbf{F}(0) = 0$: the $\xi_i$ are strictly positive a.s., so there is an $\varepsilon > 0$ with $p = \mathbf{P}(\xi_i > \varepsilon) > 0$. Choose $r$ with $r\varepsilon > t_0$. If all of $\xi_1, \dots, \xi_r$ exceed $\varepsilon$, then $\mathbf{S}_r > r\varepsilon > t_0$; hence $\mathbf{F}^*_r(t_0) \le 1 - p^r < 1$.

Finally, write $\beta = \mathbf{F}^*_r(t_0) < 1$. For $k > r$, the event $\{\mathbf{S}_k \le t_0\}$ forces each of the $\lfloor k/r \rfloor$ block sums $\mathbf{S}_{jr} - \mathbf{S}_{(j-1)r}$, $j = 1, \dots, \lfloor k/r \rfloor$, to be at most $t_0$ (the increments are positive), and these blocks are independent, so $\mathbf{F}^*_k(t_0) \le \beta^{\lfloor k/r \rfloor}$. Therefore $$\mathcal{U}(t_0) = \sum_1^\infty \mathbf{F}^*_n(t_0) = \sum_1^r \mathbf{F}^*_n(t_0) + \sum_{k=r+1}^\infty \mathbf{F}^*_k(t_0) \le r + r \sum_{m=1}^\infty \beta^m,$$ which converges.
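To see this numerically, here is a sketch for the case $\xi_i \sim U(0,1)$ (an assumption chosen because $\mathbf{F}^*_n$ is then the Irwin–Hall C.D.F., available in closed form). At $t_0 = 1.5$ we have $\mathbf{F}(t_0) = 1$, yet already $\mathbf{F}^*_2(t_0) = 0.875 < 1$, and the partial sums of $\mathcal{U}(t_0)$ stabilize quickly:

```python
import math

def irwin_hall_cdf(x, n):
    """C.D.F. of the sum of n i.i.d. Uniform(0,1) variables, i.e. F^{*n}(x)."""
    if x <= 0:
        return 0.0
    if x >= n:
        return 1.0
    # Irwin-Hall formula: (1/n!) * sum_k (-1)^k C(n,k) (x-k)^n over k <= floor(x)
    s = sum((-1) ** k * math.comb(n, k) * (x - k) ** n
            for k in range(math.floor(x) + 1))
    return s / math.factorial(n)

def U_partial(t, terms):
    """Partial sum sum_{n=1}^{terms} F^{*n}(t)."""
    return sum(irwin_hall_cdf(t, n) for n in range(1, terms + 1))

if __name__ == "__main__":
    t0 = 1.5                      # F(t0) = 1 for a single U(0,1) variable
    print(irwin_hall_cdf(t0, 2))  # 0.875 < 1: the tail of U(t0) is summable
    print(U_partial(t0, 20), U_partial(t0, 40))  # partial sums already agree
```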
The identity (2) is fairly easy to prove. Note that $\{\mathbf{N}_t \ge n\} = \{\mathbf{S}_n \le t\}$, so $\{\mathbf{N}_t = n\} = \{\mathbf{N}_t \ge n\} \setminus \{\mathbf{N}_t \ge n + 1\} = \{\mathbf{S}_n \le t\} \setminus \{\mathbf{S}_{n+1} \le t\}$. Taking probabilities we get $$\mathbf{P}(\mathbf{N}_t = n) = \mathbf{P}(\{\mathbf{S}_n \le t\} \setminus \{\mathbf{S}_{n+1} \le t\}) = \mathbf{P}(\mathbf{S}_n \le t) - \mathbf{P}(\mathbf{S}_{n+1} \le t) = \mathbf{F}^*_n(t) - \mathbf{F}^*_{n+1}(t).$$
Here we use the fact that $\{\mathbf{S}_{n+1} \le t\} \subseteq \{\mathbf{S}_n \le t\}$ (the increments are positive), so the probability of the set difference is the difference of the probabilities.
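Purely as a check of this identity, here is a sketch assuming exponential $\xi_i$ (my choice, because $\mathbf{F}^*_n$ is then an Erlang C.D.F. in closed form and $\mathbf{N}_t$ is Poisson): the difference $\mathbf{F}^*_n(t) - \mathbf{F}^*_{n+1}(t)$ should match the Poisson probability $\mathbf{P}(\mathbf{N}_t = n)$.

```python
import math

def erlang_cdf(t, n, lam):
    """F^{*n}(t) for xi ~ Exp(lam): P(S_n <= t) = 1 - e^{-lam t} sum_{j<n} (lam t)^j / j!."""
    return 1.0 - math.exp(-lam * t) * sum((lam * t) ** j / math.factorial(j)
                                          for j in range(n))

def poisson_pmf(n, mu):
    """P(N_t = n) when N_t ~ Poisson(mu), mu = lam * t."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

if __name__ == "__main__":
    t, lam = 3.0, 2.0   # illustrative values
    for n in range(1, 8):
        diff = erlang_cdf(t, n, lam) - erlang_cdf(t, n + 1, lam)
        print(n, diff, poisson_pmf(n, lam * t))  # the two columns agree
```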
$$\mathbf{E}\,\mathbf{N}_t = \sum_{k=1}^\infty k\, \mathbf{P}(\mathbf{N}_t = k) = \sum_{k=1}^\infty k\, (\mathbf{F}^*_k(t) - \mathbf{F}^*_{k+1}(t)) = \mathbf{F}^*_1(t) - \mathbf{F}^*_2(t) + 2 \mathbf{F}^*_2(t) - 2 \mathbf{F}^*_3(t) + 3 \mathbf{F}^*_3(t) - 3 \mathbf{F}^*_4(t) + \cdots = \sum_{k=1}^\infty \mathbf{F}^*_k(t) = \mathcal{U}(t).$$
The telescoping is justified because the partial sum to $N$ equals $\sum_{k=1}^N \mathbf{F}^*_k(t) - N \mathbf{F}^*_{N+1}(t)$, and the boundary term vanishes: by (1) the decreasing sequence $\mathbf{F}^*_k(t)$ is summable, which forces $N \mathbf{F}^*_{N+1}(t) \rightarrow 0$.
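The telescoping sum can also be checked numerically. A sketch, again assuming exponential $\xi_i$ with rate $\lambda$ (so the Erlang C.D.F. gives $\mathbf{F}^*_k$ and the answer $\mathcal{U}(t) = \lambda t$ is known): both sides of the rearrangement should agree, and match $\lambda t$.

```python
import math

def erlang_cdf(t, n, lam):
    """F^{*n}(t) for xi ~ Exp(lam) (an illustrative choice of F)."""
    return 1.0 - math.exp(-lam * t) * sum((lam * t) ** j / math.factorial(j)
                                          for j in range(n))

if __name__ == "__main__":
    t, lam, K = 3.0, 2.0, 60   # K = 60 terms is plenty: F^{*k}(t) decays factorially
    # Left side: sum k * P(N_t = k) = sum k * (F^{*k}(t) - F^{*(k+1)}(t))
    lhs = sum(k * (erlang_cdf(t, k, lam) - erlang_cdf(t, k + 1, lam))
              for k in range(1, K + 1))
    # Right side after telescoping: sum F^{*k}(t) = U(t)
    rhs = sum(erlang_cdf(t, k, lam) for k in range(1, K + 1))
    print(lhs, rhs, lam * t)   # all three agree: E[N_t] = U(t) = lam * t
```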