I have a hypothetical situation in which the parameter function $\lambda^T\beta=\alpha_1-\alpha_2$ for the model
$$y_{ij}=\mu+\alpha_i+\epsilon_{ij},\quad i=1,2,\quad j=1,\dots,n_i$$
is estimated from two independent samples, so $\beta^T=(\mu,\alpha_1,\alpha_2)$ and $\lambda^T=(0,1,-1)$. The first sample has $n_1=2$ and $n_2=5$; the second sample has $n_1=10$ and $n_2=20$. From the first sample we know the estimate $\lambda^T\hat{\beta}$, and from the second sample we have the mean squared error $\text{MSE}_2$. If I wanted to test the hypothesis $H_0: \lambda^T\beta = 0,$ I would use the test statistic
$$t := \frac{\lambda^T\hat{\beta}}{\sqrt{\text{MSE}_2\cdot \lambda^T(X^TX)^{-}\lambda}}.$$
To calculate $t$, am I right to use the design matrix from the first sample, the one from which we have the estimate of the parameter function? I assume this because the derivation of the statistic uses the fact that $$\frac{\lambda^T\hat{\beta}}{\sqrt{\lambda^T(X^TX)^{-}\lambda\,\sigma^2}}\sim N(0,1).$$
Am I right to assume this? A second point of interest: how would one show that, under $H_0$, the statistic $t$ indeed has a t-distribution?
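As a concrete check of the design-matrix question, here is a small Python sketch (my own illustration, not part of the question) that builds the first sample's design matrix ($n_1=2$, $n_2=5$) and evaluates $\lambda^T(X^TX)^{-}\lambda$ with a Moore–Penrose pseudoinverse. Since $\alpha_1-\alpha_2$ is estimable, the value does not depend on the choice of generalized inverse and agrees with the familiar $1/n_1+1/n_2$ for a difference of means:

```python
import numpy as np

def design_matrix(n1, n2):
    """Design matrix for y_ij = mu + alpha_i + eps_ij, columns (mu, a1, a2)."""
    X1 = np.column_stack([np.ones(n1), np.ones(n1), np.zeros(n1)])
    X2 = np.column_stack([np.ones(n2), np.zeros(n2), np.ones(n2)])
    return np.vstack([X1, X2])

lam = np.array([0.0, 1.0, -1.0])

# First sample: n1 = 2, n2 = 5
X = design_matrix(2, 5)

# X is rank-deficient (rank 2), so use a generalized inverse of X^T X
G = np.linalg.pinv(X.T @ X)
quad = lam @ G @ lam

print(quad)        # lambda^T (X^T X)^- lambda
print(1/2 + 1/5)   # 1/n1 + 1/n2; the two values coincide
```

The agreement is exactly the invariance of $\lambda^T(X^TX)^{-}\lambda$ for estimable $\lambda^T\beta$: here $(0,1,-1)$ is the difference of the two row types of $X$, hence lies in its row space.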
What is the definition of $MSE_2$ in your question?
As far as I know, the statistic that has a t-distribution in the linear model is the following (I reuse your notation): $$ t := \frac{\lambda^T \hat{\beta}}{\sqrt{\hat{\sigma}^2\,\lambda^T(X^TX)^{-1}\lambda }}, $$ with $$\hat{\sigma}^2 = \frac{\|Y - X(X^TX)^{-1}X^TY\|^2}{n-p},$$ i.e. the residual sum of squares divided by $n-p$, where $p = \operatorname{rank}(X) = 2$ in our case: although $\beta$ has three components, the design matrix of the two-group model only has rank 2. (I write $(X^TX)^{-1}$ as if $X$ had full rank; in your overparameterized model you would use a generalized inverse $(X^TX)^{-}$ instead, and since $\lambda^T\beta=\alpha_1-\alpha_2$ is estimable, the statistic does not depend on which generalized inverse is chosen.)
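To make the estimate $\hat{\sigma}^2$ concrete, here is a short Python sketch (the data values are mine, purely illustrative): it fits the two-group model by least squares and checks that the residual-based estimate with $n-\operatorname{rank}(X)=n-2$ degrees of freedom equals the pooled within-group variance.

```python
import numpy as np

# Toy data: group 1 (n1 = 2) and group 2 (n2 = 5), values chosen arbitrarily
y = np.array([1.0, 3.0, 2.0, 4.0, 6.0, 8.0, 10.0])
n1, n2 = 2, 5
n = n1 + n2

# Design matrix with columns (mu, alpha1, alpha2); rank 2, not full rank
X = np.vstack([
    np.column_stack([np.ones(n1), np.ones(n1), np.zeros(n1)]),
    np.column_stack([np.ones(n2), np.zeros(n2), np.ones(n2)]),
])

beta_hat = np.linalg.pinv(X) @ y          # one least-squares solution
rss = np.sum((y - X @ beta_hat) ** 2)     # residual sum of squares
p = np.linalg.matrix_rank(X)              # rank of X, here 2
sigma2_hat = rss / (n - p)

# Pooled within-group variance for comparison
g1, g2 = y[:n1], y[n1:]
pooled = (np.sum((g1 - g1.mean())**2) + np.sum((g2 - g2.mean())**2)) / (n - 2)
print(sigma2_hat, pooled)   # both equal 8.4 for these numbers
```

The fitted values are just the group means, which is why the residual formula and the pooled-variance formula agree.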
Then, you can show that it has a t-distribution under $H_0$ as follows:
You know that $\hat{\beta} \sim N_3(\beta, (X^TX)^{-1}\sigma^2)$, where $N_d(m, \Sigma)$ denotes the $d$-dimensional normal distribution with mean vector $m$ and covariance matrix $\Sigma$.
Then $\lambda^T\hat{\beta} \sim N(\lambda^T\beta, \sigma^2 \lambda^T(X^TX)^{-1}\lambda) = N(\lambda^T\beta, \sigma_\lambda^2)$ with $\sigma_\lambda^2 = \sigma^2 \lambda^T(X^TX)^{-1}\lambda$ to simplify the notation. And, under the null hypothesis, we have $\frac{\lambda^T\hat{\beta}}{\sigma_\lambda} \sim N(0, 1)$.
Then, we have that $\frac{\hat{\sigma}_\lambda^2}{\sigma_\lambda^2} = \frac{\hat{\sigma}^2 \lambda^T(X^TX)^{-1}\lambda}{\sigma^2 \lambda^T(X^TX)^{-1}\lambda} = \frac{\hat{\sigma}^2}{\sigma^2}$.
You can show that $\frac{(n-2)\hat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-2}$, the chi-squared distribution with $n-2$ degrees of freedom: writing $H = X(X^TX)^{-1}X^T$ for the hat matrix, $(n-2)\hat{\sigma}^2 = \|(I-H)Y\|^2 = \|(I-H)\epsilon\|^2$, and $I-H$ is an orthogonal projection of rank $n-2$. Moreover, this is independent of $\frac{\lambda^T\hat{\beta}}{\sigma_\lambda}$: $\hat{\beta}$ is a function of $HY$, the residuals are $(I-H)Y$, and $H(I-H)=0$, so the two are jointly normal and uncorrelated, hence independent.
Finally, we have $$ t := \frac{\lambda^T \hat{\beta}}{\sqrt{\hat{\sigma}^2\,\lambda^T(X^TX)^{-1}\lambda }} = \frac{\lambda^T\hat{\beta}/\sigma_\lambda}{\hat{\sigma}_\lambda/\sigma_\lambda} = \frac{\lambda^T\hat{\beta}/\sigma_\lambda}{\sqrt{\frac{(n-2)\hat{\sigma}^2/\sigma^2}{n-2}}}, $$ a standard normal divided by the square root of an independent $\chi^2_{n-2}$ variable over its degrees of freedom, which is precisely the definition of the t-distribution with $n-2$ degrees of freedom.
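As a sanity check on the whole argument, here is a quick Monte Carlo sketch (the simulation settings are mine; I use $n_1=10$, $n_2=20$ as in your second sample). Under $H_0$ the simulated statistics should behave like draws from $t_{n-2}$, whose mean is $0$ and whose variance is $\frac{n-2}{n-4}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 10, 20
n = n1 + n2
reps = 20000

t_stats = np.empty(reps)
for r in range(reps):
    # Data under H0: alpha1 = alpha2, so both groups share one mean
    y1 = rng.normal(0.0, 1.0, n1)
    y2 = rng.normal(0.0, 1.0, n2)
    # lambda^T beta_hat = ybar1 - ybar2; sigma^2 estimated by the pooled variance
    mse = (np.sum((y1 - y1.mean())**2) + np.sum((y2 - y2.mean())**2)) / (n - 2)
    t_stats[r] = (y1.mean() - y2.mean()) / np.sqrt(mse * (1/n1 + 1/n2))

df = n - 2
print(t_stats.mean())   # close to 0
print(t_stats.var())    # close to df / (df - 2) = 28/26
```

For this one-way model the pooled-variance two-sample statistic is exactly the linear-model statistic above, which is why the simulation can avoid matrix algebra.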
I have summarized here what is explained in the lecture notes
Lutz Dümbgen. Linear Models and Regression. University of Bern, June 4, 2019.
I do not think they are publicly available, but if you want I can share the document with you.