Sufficient statistic proof question


In the textbook 'Statistical Inference: An Integrated Approach' by Migon et al. there is the following theorem:

$\underline{Theorem ~2.3}$

If $\mathbf{T} = \mathbf{T}(\mathbf{X})$ is a sufficient statistic for a parameter $\theta$, then $$ p(\theta ~|~ \mathbf{X} = x) = p(\theta ~|~ \mathbf{T} = t),$$ for all priors $p(\theta)$.

The start of the proof of this theorem has me confused. It begins as follows (I will drop the bold characters and just use lower case):

$p(x ~ |~\theta) = p(x,t ~ | ~ \theta)$, if $t = \mathbf{T}(x)$ and $0$ if $t\neq \mathbf{T}(x)$. So,

$p(x ~|~ \theta ) = p(x~|~t,\theta)p(t ~ |~\theta)$

It is the logic of moving from the first line to the second that confuses me. Firstly, the first line appears to say that the value of $p(x|\theta)$ depends on whether $t = \mathbf{T}(x)$, yet $t$ does not appear in $p(x|\theta)$. Secondly, is the restriction $t = \mathbf{T}(x)$, which is needed for the first equality to hold, carried into the expression on the second line?
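A concrete numerical check of the factorization may help here. The following is a hypothetical Bernoulli example (not from the book): with three Bernoulli($\theta$) trials and $T(x) = \sum_i x_i$, we can verify $p(x|\theta) = p(x|t,\theta)\,p(t|\theta)$ when $t = T(x)$. Note that $p(x|t,\theta)$ is uniform over the $\binom{n}{t}$ sequences with sum $t$ and does not involve $\theta$ at all, which is exactly what sufficiency of $T$ means in this model.

```python
from math import comb

# Hypothetical example (not from the book): three Bernoulli(theta) trials,
# observed x = (1, 0, 1), sufficient statistic t = T(x) = sum(x) = 2.
theta = 0.3
x = (1, 0, 1)
t, n = sum(x), len(x)

# p(x | theta): product of the individual Bernoulli pmfs
p_x_given_theta = theta**t * (1 - theta)**(n - t)

# p(t | theta): Binomial(n, theta) pmf at t
p_t_given_theta = comb(n, t) * theta**t * (1 - theta)**(n - t)

# p(x | t, theta): uniform over the comb(n, t) sequences with sum t,
# and free of theta -- this is what sufficiency of T gives us here
p_x_given_t_theta = 1 / comb(n, t)

# The factorization p(x|theta) = p(x|t,theta) p(t|theta) holds for t = T(x)
assert abs(p_x_given_theta - p_x_given_t_theta * p_t_given_theta) < 1e-12
```

For $t \neq \mathbf{T}(x)$ the joint pmf $p(x, t|\theta)$ is simply $0$, since $t$ is a deterministic function of $x$.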


It seems that it is just an application of the multiplication rule, i.e., for events $A$, $B$ and $C$ you have $$ P(A \cap B|C)= P(A|B, C)P(B|C). $$ And yes, the restriction $\mathbf{T}(x) = t$ is indeed carried forward into the second equation: the $t$ you condition on there is the realization of the random variable $\mathbf{T}(X)$ determined by the observed $x$, so the conditioning event $\{\mathbf{T}(X) = t\}$ is consistent with $\{X = x\}$ precisely when $t = \mathbf{T}(x)$.
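To tie this back to Theorem 2.3 itself, here is a hypothetical numerical check (again not from the book): with a discrete prior on $\theta$ and Bernoulli data, the posterior computed from the full sample $x$ coincides with the posterior computed from $t = T(x)$ alone.

```python
from math import comb

# Hypothetical setup: discrete prior on a grid of theta values
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [0.1, 0.2, 0.4, 0.2, 0.1]

x = (1, 0, 1, 1)        # observed Bernoulli sample
t, n = sum(x), len(x)   # sufficient statistic and sample size

# Likelihoods: p(x | theta) and p(t | theta) (Binomial pmf)
lik_x = [th**t * (1 - th)**(n - t) for th in thetas]
lik_t = [comb(n, t) * th**t * (1 - th)**(n - t) for th in thetas]

# Posteriors by Bayes' theorem, normalized over the grid
unnorm_x = [p * l for p, l in zip(prior, lik_x)]
post_x = [v / sum(unnorm_x) for v in unnorm_x]
unnorm_t = [p * l for p, l in zip(prior, lik_t)]
post_t = [v / sum(unnorm_t) for v in unnorm_t]

# p(theta | x) == p(theta | t), as Theorem 2.3 asserts:
# the comb(n, t) factor is constant in theta and cancels on normalization
assert all(abs(a - b) < 1e-12 for a, b in zip(post_x, post_t))
```

The point of the check is that the two likelihoods differ only by the factor $\binom{n}{t}$, which is constant in $\theta$ and therefore cancels when the posterior is normalized.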