I'm reading Gelman's classic Bayesian Data Analysis, and on page 54 he states
I don't know if I understood this correctly, could someone give me a real example of a pivotal quantity and why this concept is important?
A pivotal quantity, in classical as well as Bayesian statistics, is a function of both the data $\mathbf{x}$ and the parameter $\theta$ whose distribution does not depend on the parameter.
Pivotal quantities are important because they let you construct confidence intervals, derive hypothesis tests, and so on.
Example 1 - Classical Statistics:
Suppose you have the following density
$$f_X(x)=\frac{1}{\theta}e^{-x/\theta}$$
Since this density can be written as
$$\frac{1}{\theta}\psi\left(\frac{x}{\theta}\right)$$
$X$ belongs to the scale family, and as a consequence
$$T=\frac{X}{\theta}$$
is a pivotal quantity.
You can use $T$ to construct a confidence interval: because the distribution of $T$ does not depend on the parameter, you can find constants $a<b$ such that
$$P(a<T<b)=1-\alpha$$
and inverting $a<X/\theta<b$ gives the confidence interval
$$\frac{X}{b}<\theta<\frac{X}{a}$$
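A minimal simulation sketch of this idea (the value of `theta` in each loop and the single observation `x_obs` are arbitrary choices for illustration): whatever $\theta$ is, $T=X/\theta$ behaves like an Exponential(1) variable, and inverting its quantiles yields a confidence interval for $\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)

# The pivot T = X/theta has an Exponential(1) distribution whatever theta is.
# Check empirically for two very different (arbitrary) values of theta.
for theta in (0.5, 20.0):
    x = rng.exponential(scale=theta, size=100_000)  # X ~ Exp(mean theta)
    t = x / theta                                   # the pivotal quantity
    # Exp(1) has mean 1 and variance 1, for every theta
    print(theta, round(t.mean(), 2), round(t.var(), 2))

# A 95% CI from a single observation X: pick a, b with P(a < T < b) = 0.95,
# e.g. the 2.5% and 97.5% quantiles of Exp(1), then invert a < X/theta < b.
a = -np.log(1 - 0.025)   # Exp(1) inverse CDF: q(p) = -log(1 - p)
b = -np.log(1 - 0.975)
x_obs = 3.0              # hypothetical single observation
print((x_obs / b, x_obs / a))  # the interval X/b < theta < X/a
```

With only one observation the interval is very wide, but its coverage is exact: the pivotal property is what makes $P(X/b<\theta<X/a)=0.95$ hold for every $\theta$.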
You surely know that, if $X\sim N(\mu,\sigma^2)$, the quantity
$$Z=\frac{X-\mu}{\sigma}$$
is standard Gaussian: it depends on the parameters $\mu$ and $\sigma^2$, but its distribution is the same for all $\mu,\sigma^2$, so it is a pivotal quantity. And you surely know how useful $Z$ is in many statistical calculations.
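A quick empirical check of this (the particular $(\mu,\sigma)$ pairs below are arbitrary): standardizing always produces the same $N(0,1)$ distribution, no matter which parameters generated the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Z = (X - mu)/sigma is standard normal for ANY mu, sigma: its
# distribution does not depend on the parameters, so it is a pivot.
for mu, sigma in [(0.0, 1.0), (100.0, 0.1), (-5.0, 7.0)]:
    x = rng.normal(loc=mu, scale=sigma, size=100_000)
    z = (x - mu) / sigma
    # sample mean ~ 0 and sample sd ~ 1 in every case
    print(mu, sigma, round(z.mean(), 2), round(z.std(), 2))
```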
Example 2 - Bayesian
Consider the statistical model
$$X\sim N(\theta;1)$$
In this case $X-\theta$ is a pivot (its distribution is free of $\theta$), which suggests choosing an improper prior that is uniform over the range $(-\infty,+\infty)$:
$$h(\theta)=C$$
This is not a proper density, but the posterior is! In fact, using Bayes' rule and the identity $\sum_i(x_i-\theta)^2=\sum_i(x_i-\overline{x})^2+n(\theta-\overline{x})^2$ (the first term is free of $\theta$ and can be absorbed into the proportionality constant), we get
$$h(\theta|\mathbf{x})\propto h(\theta)\exp\left\{-\frac{1}{2}\sum_i(x_i-\theta)^2 \right\}\propto \exp\left\{-\frac{n}{2}(\theta-\overline{x})^2 \right\}$$
Normalizing you get
$$h(\theta|\mathbf{x})=\sqrt{\frac{n}{2\pi}}\exp\left\{ -\frac{n}{2}(\theta-\overline{x})^2 \right\}$$
that is
$$\theta|\mathbf{x}\sim N\left(\overline{x};\frac{1}{n}\right)$$
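A short sketch of this posterior in code (the "true" $\theta=2.5$, $n=50$, and the random seed are hypothetical choices just to generate data): with the flat prior, the posterior is exactly $N(\overline{x},1/n)$, so the posterior mean and a 95% credible interval fall out in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate data from N(theta, 1) with a hypothetical true theta
theta_true = 2.5
n = 50
x = rng.normal(loc=theta_true, scale=1.0, size=n)

# Flat prior h(theta) = C  =>  posterior is N(xbar, 1/n)
post_mean = x.mean()
post_sd = 1.0 / np.sqrt(n)

# 95% posterior credible interval
lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(f"posterior: N({post_mean:.3f}, {post_sd**2:.4f})")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

Note that here the interval is a statement about $\theta$ given the data, whereas the classical interval in Example 1 is a statement about the procedure; with this flat prior the two constructions happen to coincide numerically.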