Suppose I am doing OLS regression. I have a data set $\lbrace \mathbf{X}, \mathbf{y}\rbrace$.
$$\hat{\boldsymbol{\beta}} = \text{argmin}_{\boldsymbol{\beta}}\lbrace (\mathbf{y}-\mathbf{X}\boldsymbol{\beta})^T(\mathbf{y}-\mathbf{X}\boldsymbol{\beta}) \rbrace $$
I am at a minimum since my Hessian is $2\mathbf{X}^T\mathbf{X}$, which is positive definite (assuming $\mathbf{X}$ has full column rank). My minimum value is
$$(\mathbf{y}-\mathbf{X}\boldsymbol{\hat{\beta}})^T(\mathbf{y}-\mathbf{X}\boldsymbol{\hat{\beta}})$$
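For concreteness, here is a minimal numerical sketch of this setup using synthetic data (the dimensions and random data are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # synthetic design matrix (illustrative)
y = rng.normal(size=50)        # synthetic response

# OLS estimate beta_hat = argmin ||y - X beta||^2, via a stable least-squares solver
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta_hat
rss = resid @ resid            # the minimized value (y - X beta_hat)^T (y - X beta_hat)
```

At the minimum the residuals are orthogonal to the column space of $\mathbf{X}$, i.e. `X.T @ resid` is (numerically) zero.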
Then I apply a linear transformation to my data, $T_1: \mathbf{X}\rightarrow T_1(\mathbf{X})$.
Does there always exist a transformation $T_2: \boldsymbol{\beta} \rightarrow T_2(\boldsymbol{\beta})$ that acts as a compensating inverse, so that I return to the same minimum value? That is, where
$$(\mathbf{y}-\mathbf{X}\boldsymbol{\hat{\beta}})^T(\mathbf{y}-\mathbf{X}\boldsymbol{\hat{\beta}}) = (\mathbf{y}-T_1(\mathbf{X})T_2(\boldsymbol{\hat{\beta}}))^T(\mathbf{y}-T_1(\mathbf{X})T_2(\boldsymbol{\hat{\beta}}))$$
I was able to show this for a simple case, but I am wondering whether it holds in general and, if so, why.
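A quick numerical check of the invertible case, where I assume $T_1(\mathbf{X}) = \mathbf{X}A$ for an invertible matrix $A$ and take $T_2(\hat{\boldsymbol{\beta}}) = A^{-1}\hat{\boldsymbol{\beta}}$ (the choice of $A$ and the data here are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = rng.normal(size=40)
A = rng.normal(size=(3, 3))    # a random square matrix is invertible almost surely

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)

# T1(X) = X A; T2(beta) = A^{-1} beta, so T1(X) T2(beta) = X A A^{-1} beta = X beta
T1X = X @ A
T2beta = np.linalg.solve(A, beta_hat)
rss_transformed = np.sum((y - T1X @ T2beta) ** 2)
# rss and rss_transformed agree, since the fitted values are unchanged
```

This only demonstrates the invertible case: there $T_1(\mathbf{X})T_2(\hat{\boldsymbol{\beta}}) = \mathbf{X}\hat{\boldsymbol{\beta}}$ exactly, so the residual vector, and hence the minimized value, is identical.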