Tail difference of quantiles of (symmetric) distribution functions


Assume, for example, that $z_\alpha = \Phi^{-1}(\alpha)$ is the $\alpha$-quantile of the standard normal distribution, $0 < \alpha < 1$.

If we are interested in the sum $$z_\alpha + z_{1 - \alpha},$$ then for the standard normal distribution it equals $0$, if I'm not mistaken, and $2\mu$ for $\mathcal{N}(\mu,1)$.

Does the same or a similar result hold for other distributions? I assume $z_\alpha + z_{1-\alpha} = 0$ should hold for every distribution symmetric about $0$, e.g., normal, Student $t$, Cauchy, etc., with appropriate parameters.

However, for other distributions, such as the exponential or chi-squared: does the sum add up to anything meaningful or known, or is there nothing interesting about it?
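For instance, a quick check in R (using qcauchy and qt; the choice of 5 degrees of freedom for $t$ is arbitrary) suggests the symmetric case does hold:

```r
# For distributions symmetric about 0, the alpha- and (1-alpha)-quantiles
# should be negatives of each other, so their sum should vanish.
al <- 0.05
sum_cauchy <- qcauchy(al) + qcauchy(1 - al)        # standard Cauchy
sum_t      <- qt(al, df = 5) + qt(1 - al, df = 5)  # Student t, 5 df
c(sum_cauchy, sum_t)                               # both essentially 0
```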

Best Answer

First, the subscript notation $z_c$ (for 'percentage points') refers to the number $z_c$ such that $P(Z > z_c) = c,$ where $Z$ is standard normal and $0 < c < 1.$ This notation has been used in some printed tables of the normal distribution, and similar subscript notation has been used for other commonly tabled distributions such as t and chi-squared.
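In R, this upper-tail convention can be evaluated directly via the lower.tail argument of qnorm (a quick sketch; the value matches the tabled $z_{.025} = 1.96$):

```r
# z_c with P(Z > z_c) = c, i.e., the upper-tail percentage point
qnorm(0.025, lower.tail = FALSE)   # 1.959964
# equivalently, the (1 - c)-quantile
qnorm(1 - 0.025)                   # 1.959964
```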

Most modern software packages implement functions for CDFs and their inverses, called 'quantile functions'. For example, if $\Phi$ is the CDF of the standard normal, then $\Phi(1.96) \approx 0.975$ and $\Phi^{-1}(.975) \approx 1.96.$

In R statistical software $\Phi$ is denoted by pnorm and $\Phi^{-1}$ by qnorm. For example, with a little more accuracy than one sees in printed tables, one has:

pnorm(1.96);  qnorm(.975)
[1] 0.9750021
[1] 1.959964

So if we let $\alpha = .05,$ then for the standard normal we have $z_{.05} = \Phi^{-1}(.95) \approx 1.645.$ In R:

 qnorm(.95)
 [1] 1.644854

For $\alpha = .01, .02, .05, .10,$ we have $z_\alpha = -z_{1-\alpha}$ and hence $z_\alpha + z_{1-\alpha} = 0,$ as you say, by symmetry. In R, as a five-place table:

 al = c(.01, .02, .05, .10)
 L = qnorm(al);  U = qnorm(1-al); S = L + U
 round(cbind(al, L, U, S), 5)

        al        L       U S
 [1,] 0.01 -2.32635 2.32635 0
 [2,] 0.02 -2.05375 2.05375 0
 [3,] 0.05 -1.64485 1.64485 0
 [4,] 0.10 -1.28155 1.28155 0

Similarly, for distribution $\mathsf{T}(\nu = 15),$ also symmetrical about $0,$ we have the five-place table below, which you can compare to row $\nu=15$ of a printed t table.

al = c(.01, .02, .05, .10)
L = qt(al, 15);  U = qt(1-al, 15); S = L + U
round(cbind(al, L, U, S), 5)

       al        L       U S
[1,] 0.01 -2.60248 2.60248 0
[2,] 0.02 -2.24854 2.24854 0
[3,] 0.05 -1.75305 1.75305 0
[4,] 0.10 -1.34061 1.34061 0

However, for an asymmetrical distribution such as $\mathsf{Chisq}(\nu = 15),$ the upper and lower cut-off points do not sum to $0,$ even if they are 'probability-symmetric', cutting the same probability from each tail of the distribution.

al = c(.01, .02, .05, .10)
L = qchisq(al, 15);  U = qchisq(1-al, 15); S = L + U
round(cbind(al, L, U, S), 5)

       al       L        U        S
[1,] 0.01 5.22935 30.57791 35.80726
[2,] 0.02 5.98492 28.25950 34.24441
[3,] 0.05 7.26094 24.99579 32.25673
[4,] 0.10 8.54676 22.30713 30.85389

Maybe at this time of day I lack some perspective or imagination, but I do not immediately see any interesting pattern in the 'sum' column. [Perhaps it is worth noting that the mean of this distribution is $\mu = 15$ and that the values in the 'sum' column are very roughly $2\mu$.]
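One modest observation along those lines (a numerical check, not a known identity): as $\alpha \to 0.5,$ both probability-symmetric cut-offs approach the median, so the sum approaches twice the median, which for $\mathsf{Chisq}(15)$ sits a bit below $2\mu = 30$:

```r
# As alpha approaches 0.5, both cut-offs L and U approach the median,
# so L + U approaches 2 * median.
qchisq(0.4, 15) + qchisq(0.6, 15)   # close to 2 * qchisq(0.5, 15)
2 * qchisq(0.5, 15)                 # about 28.7, a bit below 2 * 15
```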

Finally, consider $\mathsf{Norm}(\mu = 2, \sigma = 1),$ an instance of the $\mathcal{N}(\mu, 1)$ case you mentioned: it is symmetric about $\mu \ne 0,$ and the 'sum' column is $2\mu = 4.$

al = c(.01, .02, .05, .10)
L = qnorm(al, 2, 1);  U = qnorm(1-al, 2, 1); S = L + U
round(cbind(al, L, U, S), 5)

       al        L       U S
[1,] 0.01 -0.32635 4.32635 4
[2,] 0.02 -0.05375 4.05375 4
[3,] 0.05  0.35515 3.64485 4
[4,] 0.10  0.71845 3.28155 4