Bayesian Regression: Marginal Posterior Derivation


(N.B.: I am learning Bayesian statistics.)

In the case of a normal linear regression: $$y=X\theta+\epsilon$$ $$y=(y_1,\dots,y_n):\text{observed data},\quad \theta=(\theta_1,\dots,\theta_p):\text{parameters},\quad \epsilon\sim N(0,\sigma^2 I)$$ The likelihood is $$l(\theta,\sigma^2\mid y)= \frac{1}{(2\pi\sigma^2)^{n/2}}e^{-\frac{1}{2\sigma^2}(y-X\theta)^T(y-X\theta)}$$ A non-informative prior is used: $$ \pi(\theta,\sigma^2)\propto\frac{1}{\sigma^2} $$

The marginal posterior is $$\pi(\theta\mid y)\propto\int_0^\infty l(\theta,\sigma^2\mid y)\cdot\pi(\theta,\sigma^2)\,d\sigma^2$$
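Here is as far as I can get on my own. If I am not mistaken, the integrand in $\sigma^2$ is an inverse-gamma kernel, so with $S(\theta)=(y-X\theta)^T(y-X\theta)$:

$$\pi(\theta\mid y)\propto\int_0^\infty (\sigma^2)^{-\frac{n}{2}-1}\exp\!\Big(-\frac{S(\theta)}{2\sigma^2}\Big)\,d\sigma^2=\Gamma\!\Big(\frac{n}{2}\Big)\Big(\frac{2}{S(\theta)}\Big)^{n/2}\propto S(\theta)^{-n/2}$$

but I do not see how to get from $S(\theta)^{-n/2}$ to the multivariate $t$-distribution.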

Up to here everything is clear. But then I read:

the marginal posterior is a scaled and shifted multivariate $t$-distribution with $n-p$ degrees of freedom; the mean is $\hat\theta=(X^TX)^{-1}X^Ty$ and the covariance matrix is $\frac{n-p}{n-p-2}\cdot s^2\cdot(X^TX)^{-1}$, where $s^2=\frac{1}{n-p}(y-X\hat\theta)^T(y-X\hat\theta)$

without explanation.
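To convince myself that the claim is at least numerically right, I ran a quick sanity check in Python (taking $p=1$ for simplicity; `numpy` and `scipy` assumed): I integrate the unnormalized joint posterior over $\sigma^2$ numerically at two values of $\theta$, and compare the ratio of the results to the ratio of the claimed $t$-density — since both are known only up to a constant, the ratios should match.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
n, p = 20, 1
X = rng.normal(size=(n, p))
y = X @ np.array([2.0]) + rng.normal(size=n)   # toy data, true theta = 2

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # (X'X)^{-1} X'y
resid = y - X @ theta_hat
s2 = (resid @ resid) / (n - p)                 # s^2 with n-p in the denominator

def joint(theta, sig2):
    # likelihood * prior (1/sigma^2), dropping constants in theta and sigma^2
    S = np.sum((y - X @ np.array([theta])) ** 2)
    return sig2 ** (-n / 2 - 1) * np.exp(-S / (2 * sig2))

def marg(theta):
    # numerically integrate sigma^2 out of the joint posterior
    val, _ = integrate.quad(lambda s: joint(theta, s), 0.0, np.inf)
    return val

# claimed marginal: t with n-p df, location theta_hat, scale sqrt(s2/(X'X))
scale = np.sqrt(s2 / (X.T @ X)[0, 0])
t_pdf = lambda th: stats.t.pdf(th, df=n - p, loc=theta_hat[0], scale=scale)

t1, t2 = theta_hat[0] + 0.3, theta_hat[0] - 0.5
ratio_num = marg(t1) / marg(t2)
ratio_t = t_pdf(t1) / t_pdf(t2)
print(ratio_num, ratio_t)  # the two ratios should agree to numerical precision
```

The two ratios do agree for me, so the statement seems numerically correct, but I would still like to see the actual proof.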

Could someone show me the derivation, or point me to a reference where I can find it?

Thanks for any help.