Bayes Estimator with Two Parameters


In a Bayesian approach to simple linear regression, suppose the intercept $\Theta_1$ and slope $\Theta_2$ of the regression line are a priori independent with $\Theta_1\sim N(0,\tau^2_1)$ and $\Theta_2\sim N(0,\tau^2_2)$. Given $\Theta_1=\theta_1$ and $\Theta_2=\theta_2$, the data $Y_1,\dots,Y_n$ (responses in the regression model) are independent with $Y_i\sim N(\theta_1+\theta_2x_i, \sigma^2)$, where the $x_i$ are the predictors. The variance $\sigma^2$ is known, and $x_1,\dots,x_n$ are constants summing to $0$.
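To make the setup concrete, here is a small simulation sketch of this model (all numeric values are hypothetical, chosen only so that the $x_i$ sum to zero):

```python
import numpy as np

# Hypothetical values for a small simulation of the model described above.
rng = np.random.default_rng(0)
n, sigma, tau1, tau2 = 50, 1.0, 2.0, 2.0

# Predictors are fixed constants that sum to zero (symmetric grid).
x = np.linspace(-1.0, 1.0, n)

# Draw the parameters from their independent normal priors...
theta1 = rng.normal(0.0, tau1)
theta2 = rng.normal(0.0, tau2)

# ...then the responses from the conditional regression model.
y = rng.normal(theta1 + theta2 * x, sigma)
```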

Find the Bayes estimators of $\Theta_1$ and $\Theta_2$ under squared error loss.

Here's where I'm confused: the posterior PDF of each parameter appears to depend on the other unknown parameter, and hence so should its expected value (which is the Bayes estimator).

For example, holding $\theta_2$ constant, I calculate that:

$$f_{\Theta_1}(\theta_1)\propto \frac{\exp\left(-\sum_{i=1}^n(\theta_1+\theta_2x_i-y_i)^2/(2\sigma^2)\right)}{(2\pi\sigma^2)^{n/2}}\cdot\frac{\exp(-\theta_1^2/(2\tau_1^2))}{\sqrt{2\pi\tau_1^2}}$$

But this quantity apparently depends on $\theta_2$! So I must have done something wrong here, but I don't know how to do this properly. What are the Bayes estimators of $\Theta_1$ and $\Theta_2$?
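One way to probe that worry numerically: compute the conditional posterior density of $\theta_1$ on a grid for two different fixed values of $\theta_2$ and compare after normalization. With hypothetical data (the $x_i$ summing to zero, as in the problem), the two normalized densities coincide, suggesting the apparent $\theta_2$-dependence cancels:

```python
import numpy as np

# Hypothetical data consistent with the setup: the x_i sum to zero.
sigma, tau1 = 1.0, 2.0
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([0.3, 0.9, 1.2, 2.1, 2.8])

def conditional_density(t1_grid, theta2):
    """Posterior density of theta1 with theta2 held fixed, normalized on the grid."""
    log_lik = -np.sum((t1_grid[:, None] + theta2 * x - y) ** 2, axis=1) / (2 * sigma**2)
    log_prior = -t1_grid**2 / (2 * tau1**2)
    dens = np.exp(log_lik + log_prior)
    return dens / dens.sum()

grid = np.linspace(-10.0, 10.0, 2001)
d0 = conditional_density(grid, theta2=0.0)
d3 = conditional_density(grid, theta2=3.0)

# Because sum(x_i) = 0, the theta2-dependence cancels after normalization:
print(np.max(np.abs(d0 - d3)))  # essentially zero, up to floating-point error
```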


The prior distribution of the pair $(\Theta_1,\Theta_2)$ is $$ \text{constant} \cdot \frac 1 {\tau_1\tau_2} \exp\left( \frac{-1} 2 \left( \frac{\theta_1^2}{\tau_1^2} + \frac{\theta_2^2}{\tau_2^2} \right) \right) \, d\theta_1 \, d\theta_2. $$ The likelihood is $$ L(\theta_1,\theta_2) = \text{constant} \cdot \exp\left( -\frac 1 {2\sigma^2} \sum_{i=1}^n (\theta_1 + \theta_2 x_i - y_i)^2 \right). $$ Expanding the sum, with $\overline y = \frac 1 n \sum_{i=1}^n y_i$: \begin{align} (\theta_1 + \theta_2 x_i - y_i)^2 & = \theta_1^2 + \theta_2^2 x_i^2 + y_i^2 + 2\theta_1\theta_2 x_i - 2\theta_1 y_i - 2\theta_2 x_i y_i \\[10pt] \sum_{i=1}^n (\theta_1 + \theta_2 x_i - y_i)^2 & = n\theta_1^2 + \theta_2^2 \sum_{i=1}^n x_i^2 + \text{constant} + 2\theta_1\theta_2 \sum_{i=1}^n x_i - 2\theta_1 n \overline y - 2\theta_2 \sum_{i=1}^n x_i y_i. \end{align} Since $\sum_{i=1}^n x_i = 0$, the cross term $2\theta_1\theta_2\sum_i x_i$ vanishes, so the posterior splits into a factor involving only $\theta_1$ and a factor involving only $\theta_2$ — that is why the Bayes estimator of each parameter does not in fact depend on the other.

You'll have some algebra to do, but when you multiply the likelihood by the prior you should get $$ \text{constant} \cdot \exp(\text{quadratic form in $\theta_1,\theta_2$}) \, d\theta_1\,d\theta_2. $$ This is a bivariate normal distribution, and its mean and covariance can be read off from the quadratic form. The mean is a vector with two scalar components, and the covariance is a $2\times2$ positive-definite matrix containing the variances of the two scalar random variables and their covariance (which is $0$ here).
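If you carry that algebra through (completing the square in $\theta_1$ and in $\theta_2$ separately, using $\sum_i x_i = 0$), you should find posterior means $$E[\Theta_1\mid y] = \frac{n\tau_1^2\,\overline y}{n\tau_1^2+\sigma^2}, \qquad E[\Theta_2\mid y] = \frac{\tau_2^2\sum_i x_i y_i}{\tau_2^2\sum_i x_i^2+\sigma^2},$$ which are the Bayes estimators under squared error loss. A short numerical sketch (hypothetical data) checking these closed forms against a brute-force grid approximation of the posterior:

```python
import numpy as np

# Hypothetical data: the x_i sum to zero; sigma, tau1, tau2 known.
sigma, tau1, tau2 = 1.0, 2.0, 3.0
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([0.3, 0.9, 1.2, 2.1, 2.8])
n = len(x)

# Closed-form posterior means obtained by completing the square
# (the cross term 2*theta1*theta2*sum(x_i) drops out since sum(x_i) = 0).
theta1_hat = n * tau1**2 * y.mean() / (n * tau1**2 + sigma**2)
theta2_hat = tau2**2 * (x @ y) / (tau2**2 * (x @ x) + sigma**2)

# Brute-force grid approximation of the posterior means as a check.
t1 = np.linspace(-10.0, 10.0, 401)
t2 = np.linspace(-10.0, 10.0, 401)
T1, T2 = np.meshgrid(t1, t2, indexing="ij")
log_post = (
    -np.sum((T1[..., None] + T2[..., None] * x - y) ** 2, axis=-1) / (2 * sigma**2)
    - T1**2 / (2 * tau1**2)
    - T2**2 / (2 * tau2**2)
)
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(theta1_hat, (T1 * post).sum())  # the two values should agree closely
print(theta2_hat, (T2 * post).sum())
```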