A few weeks ago I was asked to solve the following homework problem. The correct solution was posted after the assignment was submitted, but I do not understand the solution and am hoping for some help. This is the problem statement:
Consider a linear model $y=X\beta + \epsilon$ with $$ X= \begin{pmatrix} 1_n & 1_n & 0_n \\ 1_m & 0_m & 1_m \end{pmatrix}, $$
$\beta=(\beta_1, \beta_2, \beta_3)$ and $E(\epsilon)=0$.
Find the restricted least squares estimate of $\beta$ when $\beta_2=\beta_3$, and also when $\beta_1=\beta_2$.
The posted solution says that, if there were no restriction on $\beta$, the objective function to minimize would be:
$$S(\beta_1, \beta_2, \beta_3) = \sum_{j=1}^n(Y_{1,j} - \beta_1 - \beta_2)^2 + \sum_{j=1}^m( Y_{2,j} - \beta_1 - \beta_3)^2 $$
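To convince myself that this expanded sum really is the same thing as the matrix form $\|y - X\beta\|^2$, I wrote a small numpy check (the values of $n$, $m$, the data, and the trial $\beta$ are all made up by me):

```python
import numpy as np

n, m = 4, 3
rng = np.random.default_rng(0)

# design matrix: first n rows are (1, 1, 0), next m rows are (1, 0, 1)
X = np.vstack([
    np.column_stack([np.ones(n), np.ones(n), np.zeros(n)]),
    np.column_stack([np.ones(m), np.zeros(m), np.ones(m)]),
])

y = rng.normal(size=n + m)          # arbitrary response values
beta = np.array([0.5, 1.0, -1.0])   # arbitrary trial coefficients

# matrix form of the residual sum of squares
S_matrix = np.sum((y - X @ beta) ** 2)

# expanded form: first n observations have mean b1 + b2, last m have b1 + b3
S_expanded = (np.sum((y[:n] - beta[0] - beta[1]) ** 2)
              + np.sum((y[n:] - beta[0] - beta[2]) ** 2))

print(np.isclose(S_matrix, S_expanded))  # True: the two forms agree
```

So at least up to this point I follow the solution.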
and that, for the restriction $\beta_2=\beta_3$, the function would become:
$$S(\beta_1, \beta_2) = \sum_{j=1}^n(Y_{1,j} - \beta_1 - \beta_2)^2 + \sum_{j=1}^m( Y_{2,j} - \beta_1 - \beta_2)^2$$ The solution states that we should then use calculus to find the values of $\beta_1$ and $\beta_2$ that minimize $S$, by setting its partial derivatives to zero.
My understanding is that the partial derivatives are: $$\frac{\partial S}{\partial\beta_1}= \frac{\partial S}{\partial\beta_2}= -2\sum_{j=1}^n(Y_{1,j} - \beta_1 - \beta_2) -2\sum_{j=1}^m( Y_{2,j} - \beta_1 - \beta_2)$$ and this is where I get hung up.
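I also tried to see what happens numerically. This is my own experiment, not part of the posted solution: under the restriction $\beta_2=\beta_3$, every observation has mean $\beta_1+\beta_2$, so the restricted design matrix has two identical all-ones columns, and `lstsq` only seems to pin down the sum $\beta_1+\beta_2$:

```python
import numpy as np

n, m = 4, 3
rng = np.random.default_rng(1)
y = rng.normal(size=n + m)  # arbitrary response values

# under beta2 = beta3, every row of the restricted design is (1, 1):
# an intercept column and the merged beta2/beta3 column, both all ones
Z = np.column_stack([np.ones(n + m), np.ones(n + m)])

print(np.linalg.matrix_rank(Z))  # 1, not 2: the columns are identical

# lstsq still returns a minimizer (the minimum-norm one),
# but only the sum b1 + b2 is determined by the data
b, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(np.isclose(b[0] + b[1], y.mean()))  # True: b1 + b2 is the grand mean
```

So numerically there seem to be infinitely many minimizers, which only makes me more confused about what the "solution" for $\beta_1$ and $\beta_2$ separately is supposed to be.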
These are my main questions:
Since the two partial derivatives are identical, does the restriction $\beta_2=\beta_3$ somehow imply $\beta_1=\beta_2$ as well?
How can we solve for both values when we only have one equation? I suspect I'm missing something elementary.