In my university statistics course I was given some problems to solve, and here is one I have been struggling with for quite a while:
Let $\hat{\beta}$ be the least squares estimator in the linear model $Y = X \beta + \epsilon$ with $p$ predictors and $n$ observations, $$\text{let }I(a) = \dfrac{(a^T\hat{\beta})^2}{s^2a^T(X^TX)^{-1}a}\text{ and }I = \sup_{a \in \mathbb{R}^p}I(a)\text{, where }s^2=SSE/(n-p).$$ Now I am supposed to interpret the value of $I(a)$ and come up with a null hypothesis $H_0$ that can be tested using $I$ as the test statistic.

To be honest, I have no idea how to do either. I tried viewing $a^T\hat{\beta}$ as the prediction of the fitted model at the point $a$, but the denominator of $I(a)$ doesn't really make sense to me under that reading. I also tried substituting $(X^TX)^{-1}X^TY$ for $\hat{\beta}$ and then replacing $Y$ with $X \beta + \epsilon$, but this didn't lead me anywhere either. I have, of course, searched for the various tests that are performed in linear models, but nothing resembling the $I$ statistic came up.
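To make the definitions concrete while experimenting, I wrote a small numeric sketch (the simulated data and the candidate direction `a_star` are my own choices, not part of the problem statement). It computes $\hat{\beta}$, $s^2$, and $I(a)$ on simulated data, and checks empirically that random directions $a$ never exceed the value attained at $a^* = (X^TX)\hat{\beta}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3

# Simulate a linear model Y = X beta + eps (beta chosen arbitrarily)
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
Y = X @ beta + rng.normal(size=n)

# Least squares estimate and s^2 = SSE / (n - p)
XtX = X.T @ X
bhat = np.linalg.solve(XtX, X.T @ Y)
s2 = np.sum((Y - X @ bhat) ** 2) / (n - p)
XtX_inv = np.linalg.inv(XtX)

def I_of(a):
    """I(a) = (a^T bhat)^2 / (s^2 * a^T (X^T X)^{-1} a)."""
    return (a @ bhat) ** 2 / (s2 * a @ XtX_inv @ a)

# My candidate maximiser: a* = (X^T X) bhat.
# Plugging it in gives I(a*) = bhat^T X^T X bhat / s^2.
a_star = XtX @ bhat
I_sup = I_of(a_star)

# Numeric evidence: 1000 random directions never beat a*
for _ in range(1000):
    a = rng.normal(size=p)
    assert I_of(a) <= I_sup + 1e-9

print(I_sup)
```

In every run I tried, no random $a$ beats $a^*$, which suggests the supremum is attained there, but I can't see what that tells me about the interpretation of $I(a)$ or the hypothesis being tested.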