Uniqueness of solution to quantile minimization problem


I read here: http://librarum.org/book/11685/31 (p. 51, Ex. 3) that quantiles are solutions to a certain minimization problem. Here is the proof: http://www.math.ucla.edu/~tom/MathematicalStatistics/Sec18.pdf. It isn't stated explicitly that the problem can't have solutions other than those described, so I was wondering whether this is true and how to prove it. I can see it when $P(Z=b) = 0$.

Best answer:

The answer is yes: one can prove that no value $b$ other than a $p$th quantile of $\theta$ minimizes $E(L(\theta,b)).$

To summarize the problem, we define $$ L(\theta, a) = \begin{cases} k_1 |\theta - a| & \text{if } a \le \theta, \\ k_2 |\theta - a| & \text{if } a > \theta, \end{cases} $$ where $k_1 > 0$ and $k_2 > 0.$ We let $p = k_1/(k_1 + k_2)$ and let $b$ be a $p$th quantile of the random variable $\theta$; that is, $P(\theta \le b) \ge p$ and $P(\theta \ge b) \ge 1 - p.$ We want to show that for any value $a$, $$E(L(\theta,a) - L(\theta,b)) \ge 0.$$
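As a quick sanity check of this setup, here is a minimal numerical sketch (Python with NumPy; the discrete distribution for $\theta$ is made up purely for illustration). It minimizes the expected loss over a grid and confirms that the minimizer is the $p$th quantile:

```python
import numpy as np

# Hypothetical discrete distribution for theta (chosen only for illustration).
values = np.array([0.0, 1.0, 2.0, 5.0])
probs = np.array([0.2, 0.3, 0.4, 0.1])

k1, k2 = 3.0, 1.0
p = k1 / (k1 + k2)  # p = 0.75

def expected_loss(a):
    # E[L(theta, a)] with L = k1*|theta - a| if a <= theta, k2*|theta - a| if a > theta.
    loss = np.where(a <= values, k1 * (values - a), k2 * (a - values))
    return float(probs @ loss)

# b = 2.0 is a p-th quantile: P(theta <= 2) = 0.9 >= 0.75 and P(theta >= 2) = 0.5 >= 0.25.
grid = np.linspace(-1.0, 6.0, 701)
best = grid[np.argmin([expected_loss(a) for a in grid])]
print(best)  # ~ 2.0, the p-th quantile
```

Note how asymmetric costs ($k_1 = 3$, $k_2 = 1$) pull the minimizer up to the 0.75-quantile rather than the median.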

I questioned the proof in http://www.math.ucla.edu/~tom/MathematicalStatistics/Sec18.pdf until I realized that $$L(\theta,a) = k_1 |\theta - a| I(\theta \ge a) + k_2 |\theta - a| I(\theta < a)$$ is equivalent to $$L(\theta,a) = k_1 |\theta - a| I(\theta > a) + k_2 |\theta - a| I(\theta \le a),$$ (because in either case, $|\theta - a| = 0$ when $\theta = a$), and in addition, $P(\theta \le b) \ge p$ implies $P(\theta > b) \le 1 - p$ and $P(\theta \ge b) \ge 1 - p$ implies $P(\theta < b) \le p.$

Then indeed, for the case $a > b$, we can use the second formulation of $L(\theta,a)$ to conclude that $$E(L(\theta,a) - L(\theta,b)) \ge (a - b)(k_2 P(\theta \le b) - k_1 P(\theta > b)).$$ From $a - b > 0,$ $k_1 > 0,$ $k_2 > 0,$ $P(\theta \le b) \ge p,$ $P(\theta > b) \le 1 - p,$ and $k_2 p - k_1 (1 - p) = k_2 \frac{k_1}{k_1 + k_2} - k_1 \frac{k_2}{k_1 + k_2} = 0,$ it then follows that $$E(L(\theta,a) - L(\theta,b)) \ge (a - b)(k_2 p - k_1 (1 - p)) = 0.$$

For the case $b > a$, we use the first formulation of $L(\theta,a)$ to find that $$E(L(\theta,a) - L(\theta,b)) \ge (b - a)(k_1 P(\theta \ge b) - k_2 P(\theta < b)),$$ which (together with other facts already given) implies that $$E(L(\theta,a) - L(\theta,b)) \ge (b - a)(k_1 (1 - p) - k_2 p) = 0.$$

None of the steps in the proof assumes either that $P(\theta = b) = 0$ or that $P(\theta = b) > 0,$ so the proof is equally valid in either of these two cases.
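That last point, that the argument does not care whether $b$ carries an atom, can be checked numerically. In this sketch (again a made-up discrete distribution) the $p$th quantile $b$ has $P(\theta = b) > 0$, and $E(L(\theta,a) - L(\theta,b)) \ge 0$ still holds at every grid point:

```python
import numpy as np

k1, k2 = 2.0, 3.0
p = k1 / (k1 + k2)  # p = 0.4

# Hypothetical distribution with an atom at the quantile itself: P(theta = 1) = 0.4 > 0.
values = np.array([0.0, 1.0, 3.0])
probs = np.array([0.3, 0.4, 0.3])
b = 1.0  # P(theta <= 1) = 0.7 >= 0.4 and P(theta >= 1) = 0.7 >= 0.6, so b is a p-th quantile

def expected_loss(a):
    loss = np.where(a <= values, k1 * (values - a), k2 * (a - values))
    return float(probs @ loss)

# E[L(theta, a) - L(theta, b)] should be >= 0 for every a, atom or no atom.
diffs = [expected_loss(a) - expected_loss(b) for a in np.linspace(-2.0, 5.0, 500)]
print(min(diffs) >= 0.0)  # True
```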

An immediate consequence is that if $a$ is also a $p$th quantile of $\theta,$ then $$E(L(\theta,a) - L(\theta,b)) = 0.$$
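A toy example of this consequence (not from the linked posts): with $k_1 = k_2$ we get $p = 1/2$, and for $\theta$ uniform on $\{0, 2\}$ every $b \in [0, 2]$ is a median, so the expected loss is constant across that whole interval:

```python
import numpy as np

# k1 = k2 gives p = 1/2, and theta uniform on {0, 2} has every b in [0, 2] as a median.
k1 = k2 = 1.0
values = np.array([0.0, 2.0])
probs = np.array([0.5, 0.5])

def expected_loss(a):
    loss = np.where(a <= values, k1 * (values - a), k2 * (a - values))
    return float(probs @ loss)

# The expected loss E|theta - b| equals 0.5*b + 0.5*(2 - b) = 1 at every median b.
losses = [expected_loss(b) for b in np.linspace(0.0, 2.0, 201)]
print(max(losses) - min(losses))  # ~ 0 up to rounding
```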

Now suppose $a$ is greater than every $p$th quantile of $\theta$; that is, $P(\theta \ge a) < 1 - p.$ Again, let $b$ be a $p$th quantile of $\theta,$ so $a > b.$ The exact expected value of $L(\theta,a) - L(\theta,b)$ in this case is $$\begin{align} E(L(\theta,a) - L(\theta,b)) & = E(-k_1(a - b) I(\theta \ge a) \\ & \qquad\quad + ((k_1 + k_2)(a - \theta) - k_1(a - b)) I(b < \theta < a) \\ & \qquad\quad + k_2(a - b) I(\theta \le b)) \\ & = (a - b)(k_2 P(\theta \le b) - k_1 P(\theta > b)) \\ & \qquad + (k_1 + k_2) E((a - \theta) I(b < \theta < a)). \end{align}$$

Either $P(b < \theta < a) = 0$ or $P(b < \theta < a) > 0.$ If $P(b < \theta < a) = 0,$ then $$\begin{align} E(L(\theta,a) - L(\theta,b)) & = E(k_2(a - b) I(\theta \le b) - k_1(a - b) I(\theta \ge a)) \\ & = (a - b)(k_2 P(\theta \le b) - k_1 P(\theta \ge a)) \\ & > (a - b)(k_2 p - k_1 (1 - p)) = 0. \end{align}$$ But if $P(b < \theta < a) > 0,$ then $E((a - \theta) I(b < \theta < a)) > 0,$ and $$\begin{align} E(L(\theta,a) - L(\theta,b)) & = (a - b)(k_2 P(\theta \le b) - k_1 P(\theta > b)) \\ & \qquad + (k_1 + k_2) E((a - \theta) I(b < \theta < a)) \\ & > (a - b)(k_2 p - k_1 (1 - p)) = 0. \end{align}$$ So we see that no such value $a$ can minimize $E(L(\theta,a)).$

On the other hand, suppose $b > a$, where again $a$ is not a $p$th quantile of $\theta$. Then $P(\theta \le a) < p.$ The exact expected value of $L(\theta,a) - L(\theta,b)$ is $$\begin{align} E(L(\theta,a) - L(\theta,b)) & = E(k_1(b - a) I(\theta \ge b) \\ & \qquad\quad + ((k_1 + k_2)(\theta - a) - k_2(b - a)) I(a < \theta < b) \\ & \qquad\quad - k_2(b - a) I(\theta \le a)) \\ & = (b - a)(k_1 P(\theta \ge b) - k_2 P(\theta < b)) \\ & \qquad + (k_1 + k_2) E((\theta - a) I(a < \theta < b)). \end{align}$$

Again, either $P(a < \theta < b) = 0$ or $P(a < \theta < b) > 0.$ If $P(a < \theta < b) = 0,$ then $P(\theta < b) = P(\theta \le a) < p$ and $P(\theta \ge b) > 1 - p,$ so the first term is strictly positive. If $P(a < \theta < b) > 0,$ then the first term is at least $(b - a)(k_1 (1 - p) - k_2 p) = 0$ while $E((\theta - a) I(a < \theta < b)) > 0.$ In either case $$E(L(\theta,a) - L(\theta,b)) > 0.$$ So in fact no value $a$ that is not a $p$th quantile of $\theta$ can minimize $E(L(\theta,a)).$ This proof still has not made any assumption about whether $P(\theta = b)$ is positive.
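The strict inequality on both sides of the quantile can likewise be illustrated numerically. In this sketch the distribution is once more made up, with a unique $p$th quantile at $b = 1$; any $a$ strictly outside the quantile set has strictly larger expected loss:

```python
import numpy as np

k1, k2 = 1.0, 3.0
p = k1 / (k1 + k2)  # p = 0.25

# Hypothetical distribution whose unique p-th quantile is b = 1.0:
# P(theta <= 1) = 0.7 >= 0.25 and P(theta >= 1) = 0.8 >= 0.75.
values = np.array([0.0, 1.0, 2.0])
probs = np.array([0.2, 0.5, 0.3])
b = 1.0

def expected_loss(a):
    loss = np.where(a <= values, k1 * (values - a), k2 * (a - values))
    return float(probs @ loss)

# Any a strictly above or strictly below the quantile does strictly worse than b.
above = min(expected_loss(a) - expected_loss(b) for a in np.linspace(1.1, 4.0, 100))
below = min(expected_loss(a) - expected_loss(b) for a in np.linspace(-3.0, 0.9, 100))
print(above > 0 and below > 0)  # True
```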

In summary, $a$ minimizes $E(L(\theta,a))$ if and only if $a$ is a $p$th quantile of $\theta.$ I think that's what you meant by "uniqueness" of the solution.