Claim from an Actuarial Textbook: limits imply the existence of mean and variance


This is from Actuarial Mathematics for Life Contingent Risks, 2nd ed., by Dickson et al. Some definitions (not directly from the book):

Definitions/Notation. $T_x$ is defined to be the future lifetime of a life aged $x \geq 0$. We also define the cumulative distribution function of $T_x$, denoted either $F_{T_x}$ or $F_x$, as $$F_{T_x}(t) = F_{x}(t) = \mathbb{P}\{T_x \leq t\}\text{.}$$ The survival function of $T_x$, denoted $S_x$, is defined as $$S_{x}(t) = 1 - F_{x}(t)\text{.}$$ Note that $T_x$ takes on only nonnegative values; i.e., $T_x \geq 0$. So, of course, $$\mathbb{E}\left[T_x\right] = \int\limits_{0}^{\infty}tf_{x}(t)\text{ d}t$$ where $f_{x}$ is the probability density function of $T_x$.

Throughout this textbook, it is assumed that $S_{x}$ is differentiable for all $t > 0$. The text also makes the following assumptions:

Assumption 2.2: $\lim_{t \to \infty}tS_{x}(t) = 0$

Assumption 2.3: $\lim_{t \to \infty}t^2S_{x}(t) = 0$

"These last two assumptions ensure that the mean and variance of the distribution of $T_x$ exist."

Now here's the main question: why is this true? I can no longer find where I asked this before, but I recall that only the converse is actually true (i.e., existence of the mean and variance implies these limits, so the authors' claim as stated is false), though I was never able to find a justification.

I also know for a fact that IF $\mathbb{E}[T_x]$ exists, then $$\mathbb{E}[T_x] = \int_{0}^{\infty}S_{x}(t) \text{ d}t\text{,}$$ but this is, of course, not helpful, since it assumes that $\mathbb{E}[T_x]$ exists to begin with.

FYI: I am mentioning this in case we need tools from measure-theoretic probability to solve this question. Unfortunately, I don't know the topic very well.

There are 2 answers below.

Best Answer

The conditions given by the OP are not sufficient (as suspected by the OP in the question).

A well-known formula in probability theory states that for nonnegative random variables $Y$: $$ E[Y] = \int_0^\infty P(Y>y) dy , $$ see Integral of CDF equals expected value. This formula holds even if one of the sides above takes the value $+ \infty$.
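As a quick numerical sanity check of this formula (an illustration only; the exponential distribution with rate $0.5$ is an arbitrary choice), one can integrate the survival function and recover the known mean:

```python
import math

# Check E[Y] = integral of P(Y > y) dy for a nonnegative random variable,
# using Y ~ Exponential(rate = 0.5): survival S(y) = exp(-0.5*y), mean = 2.

def survival(y, rate=0.5):
    return math.exp(-rate * y)

def integrate(f, a, b, n=100_000):
    # Simple trapezoidal rule on [a, b].
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, n))
    return total * h

# Truncate at y = 60; the remaining tail is ~exp(-30), negligible.
tail_integral = integrate(survival, 0.0, 60.0)
print(tail_integral)  # ≈ 2.0, matching E[Y] = 1/rate
```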

Now we may use that formula to express the mean of the nonnegative random variable $T_x$: $$ E[T_x] = \int_0^\infty S_x(t)\, dt.$$ So the mean is finite iff the integral on the right-hand side is finite. Let $c = 2\log 2$. Then the example $S_x(t) = c\big((t+2) \log (t+2)\big)^{-1}$ for $t\in [0,\infty)$ (note $S_x(0) = 1$ and $S_x$ is decreasing, so it is a valid survival function) shows: \begin{align} \lim_{t\to \infty} tS_x(t) & = c\lim_{t \to \infty} \frac{t}{(t+2)\log (t+2)} = 0, \\ \int_0^\infty S_x(t)\,dt & = c\int_0^\infty \big((t+2) \log (t+2)\big)^{-1}\, dt = c\int_2^\infty (s \log s)^{-1}\, ds = c\big[\log (\log s)\big]^\infty_2 = \infty, \end{align} where we substituted $s = t+2$. So we see that Assumption 2.2 is not sufficient for the existence of the mean.
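The divergence here is of iterated-logarithm speed, which is easy to miss numerically. The following sketch (my own illustration, using the closed-form antiderivative $c\log\log(t+2)$ of $S_x$) makes both facts visible:

```python
import math

# Counterexample check: S(t) = c / ((t+2) * log(t+2)) with c = 2*log(2)
# satisfies t*S(t) -> 0, yet its integral over [0, T] is
# c*(log(log(T+2)) - log(log(2))), which grows without bound.

c = 2.0 * math.log(2.0)

def S(t):
    return c / ((t + 2.0) * math.log(t + 2.0))

# Valid survival function: S(0) = 1 and S is decreasing.
print(S(0.0))

# t * S(t) tends to 0 (roughly like c / log t, so very slowly):
for t in (1e2, 1e4, 1e6, 1e8):
    print(t, t * S(t))

# The partial integrals grow without bound, like log log T:
def integral_to(T):
    return c * (math.log(math.log(T + 2.0)) - math.log(math.log(2.0)))

print(integral_to(1e6), integral_to(1e12), integral_to(1e24))
```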

The situation is similar for the variance, with the function \begin{equation} S_x(t) = c'\big((t+2)^2 \log (t+2) \big)^{-1}, \quad t \in [0,\infty), \end{equation} where $c' = 4 \log 2$ (so that $S_x(0) = 1$). It is known that the variance exists iff the second moment exists, and for the second moment, $$ E[T_x^2] = \int_0^\infty P(T_x^2>t)\, dt = \int_0^\infty S_x(\sqrt{t})\,dt.$$ A calculation much like the one above shows that Assumption 2.3 is not sufficient for the existence of the variance: \begin{align} \lim_{t\to \infty} t^2S_x(t) & = c'\lim_{t \to \infty} \frac{t^2}{(t+2)^2\log (t+2)} = 0, \\ \int_0^\infty S_x(\sqrt{t})\,dt & = 2\int_0^\infty u\,S_x(u)\,du = 2c'\int_2^\infty \frac{s-2}{s^2 \log s}\, ds \\ & \geq c'\int_4^\infty \frac{ds}{s \log s} = c'\big[\log(\log s)\big]_4^\infty = \infty. \end{align} Here we substituted $u = \sqrt{t}$ (so $dt = 2u\,du$) and then $s = u+2$, and used the bound $(s-2)/s^2 \geq 1/(2s)$ for $s \geq 4$.
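This counterexample can also be probed numerically (my own sketch; the truncation points are arbitrary). The partial second-moment integrals keep growing even though $t^2 S_x(t) \to 0$:

```python
import math

# Second counterexample: S(t) = c' / ((t+2)^2 * log(t+2)), c' = 4*log(2).
# Then t^2 * S(t) -> 0, but E[T^2] = 2 * integral of u*S(u) du diverges.

cp = 4.0 * math.log(2.0)

def S(t):
    return cp / ((t + 2.0) ** 2 * math.log(t + 2.0))

print(S(0.0))  # valid survival function: S(0) = 1, S decreasing

# t^2 * S(t) tends to 0 (slowly, like c' / log t):
for t in (1e2, 1e4, 1e6):
    print(t, t * t * S(t))

def second_moment_partial(U, n=500_000):
    # Trapezoidal rule for 2 * integral of u * S(u) du over [0, U].
    f = lambda u: 2.0 * u * S(u)
    h = U / n
    total = 0.5 * (f(0.0) + f(U))
    total += sum(f(i * h) for i in range(1, n))
    return total * h

p3 = second_moment_partial(1e3)
p6 = second_moment_partial(1e6)
print(p3, p6)  # the partial integrals keep increasing, like log log U
```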

Second Answer

I will give a general rule relating the tail behavior to the existence of certain moments of a random variable. This rule could then be used to refine assumptions 2.2 and 2.3.

First, note that your IF claim is fine; in fact, the equality you mentioned holds even when the mean does not exist (both sides are then $+\infty$): \begin{align} \mathbb{E}\left[T_x\right] &= \int_{0}^{\infty}t\, dF_x(t)\\ &=\int_{0}^{\infty}\int_{0}^{\infty}1_{t>y}f_x(t)\,dy\,dt\\ &=\int_{0}^{\infty}\int_{0}^{\infty}1_{t>y}\,dF_x(t)\,dy\\ &=\int_{0}^{\infty}\Pr(T_x>y)\,dy\\ &=\int_{0}^{\infty}S_x(t)\,dt. \end{align} Changing the order of integration is justified by Tonelli's theorem, since the integrand is nonnegative.

In general, one can show that if $\displaystyle t^aS_x(t)\rightarrow 0$ for some $\displaystyle a>0$, then $\displaystyle E|T_x|^b<\infty$ for all $\displaystyle 0<b<a$. To see this, note that (using integration by parts) \begin{align} \int_{0}^{n}t^b\, dF_x(t)&=-n^b\Pr(T_x>n)+\int_{0}^{n}bt^{b-1}S_x(t)\,dt. \end{align} Now, since $\displaystyle t^aS_x(t)\rightarrow 0$, for any $\displaystyle \epsilon >0$ we can choose $\displaystyle N=N(\epsilon)$ such that $\displaystyle \Pr(T_x>t)<\frac{\epsilon}{t^a}$ for all $t \geq N$. In particular, $\displaystyle n^b\Pr(T_x>n)<\epsilon n^{b-a} \rightarrow 0$ because $b<a$. Thus \begin{align} \int_{0}^{\infty}t^b\, dF_x(t)&=\int_{0}^{N}bt^{b-1}S_x(t)\,dt+\int_{N}^{\infty}bt^{b-1}S_x(t)\,dt\\ &\leq \int_{0}^{N}bt^{b-1}\,dt + \int_{N}^{\infty}bt^{b-1}\frac{\epsilon}{t^a}\,dt\\ &<\infty, \end{align} where the last integral converges because $b-1-a<-1$.
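This rule can be sanity-checked numerically with a Pareto-type tail (my own illustration; the choices $a=3$, $b=2$ are arbitrary): for $S_x(t) = \min(1, t^{-a})$ we have $t^b S_x(t) \to 0$ for every $b < a$, and the moment formula $E[T_x^b] = \int_0^\infty b\,t^{b-1} S_x(t)\,dt$ reproduces the known Pareto value $a/(a-b)$.

```python
import math

# Pareto-type survival function with tail index a = 3; check the b-th
# moment formula E[T^b] = integral of b * t^(b-1) * S(t) dt for b = 2.

a, b = 3.0, 2.0

def S(t):
    return 1.0 if t <= 1.0 else t ** (-a)

# t^b * S(t) -> 0 since b < a:
print(1e4 ** b * S(1e4))  # = 1e8 * 1e-12 = 1e-4

def integrate(f, lo, hi, n):
    # Simple trapezoidal rule on [lo, hi].
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    total += sum(f(lo + i * h) for i in range(1, n))
    return total * h

f = lambda t: b * t ** (b - 1.0) * S(t)
# Split at t = 1, where S changes form; the tail beyond 1e4 is ~2e-4.
estimate = integrate(f, 0.0, 1.0, 10_000) + integrate(f, 1.0, 1e4, 200_000)
exact = a / (a - b)  # known Pareto moment: 3 / (3 - 2) = 3
print(estimate, exact)
```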