Short version. Suppose $R$ is a $d_1 \times d_2$ matrix with $d_1 < d_2$ and $\mathop{\mathrm{rank}} R = d_1$, $r$ is a vector of length $d_1$, $\theta$ is a vector of length $d_2$, $\theta^*$ is a vector of length $d_2 - d_1$, and $R \theta = r$. How does one find a matrix $A$ (of dimensions $d_2 \times (d_2 - d_1)$) in terms of $R$ and $r$ such that $A \theta^* = \theta$? My end goal is to find $A$ in terms of $R$ and $r$ in order to transform $\theta^*$ into a $\theta$ satisfying $R\theta = r$, but I am stuck with the equation $RA\theta^* = r$ and rank-deficient square matrices popping up in the process.
Detailed version. I am currently having a problem related to matrices and linear restrictions (with application to econometrics). Suppose there is a linear model with three equations:
$$ \begin{cases} Y_1 = \theta_{1,1} + \theta_{1,2} X_{1,2} + \theta_{1,3} X_{1,3} + U_1, \\ Y_2 = \theta_{2,1} + \theta_{2,2} X_{2,2} + \theta_{2,3} X_{2,3} + \theta_{2,4} X_{2,4} + U_2, \\ Y_3 = \theta_{3,1} + \theta_{3,2} X_{3,2} + U_3. \end{cases} $$
In other terms, this system can be rewritten as $$ \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix} = \begin{pmatrix} 1 & X_{1,2} & X_{1,3} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & X_{2,2} & X_{2,3} & X_{2,4} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & X_{3,2} \\ \end{pmatrix} \begin{pmatrix} \theta_{1,1} \\ \theta_{1,2} \\ \theta_{1,3} \\ \theta_{2,1} \\ \theta_{2,2} \\ \theta_{2,3} \\ \theta_{2,4} \\ \theta_{3,1} \\ \theta_{3,2} \end{pmatrix} + \begin{pmatrix} U_1 \\ U_2 \\ U_3 \end{pmatrix}, $$ or, in more parsimonious notation, $$ \underbrace{Y}_{3\times 1} = \underbrace{X}_{3\times9} \underbrace{\theta}_{9\times1} + \underbrace{U}_{3\times1} $$
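For concreteness, the block-diagonal stacking above can be sketched in numpy; the regressor values below are made-up placeholders, not data from the question:

```python
import numpy as np

# Hypothetical regressor values for one observation of each equation.
x1 = np.array([1.0, 0.2, 0.3])       # [1, X_{1,2}, X_{1,3}]
x2 = np.array([1.0, 0.5, 0.6, 0.7])  # [1, X_{2,2}, X_{2,3}, X_{2,4}]
x3 = np.array([1.0, 0.9])            # [1, X_{3,2}]

# Block-diagonal stacking yields the 3x9 design matrix X.
X = np.zeros((3, 9))
X[0, 0:3] = x1
X[1, 3:7] = x2
X[2, 7:9] = x3

theta = np.arange(1.0, 10.0)  # some hypothetical 9-vector of coefficients
Y = X @ theta                 # the model adds a noise term U, omitted here
```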
The point of such a rewrite is to obtain a single vector of parameters, $\theta$.
Suppose that a certain linear restriction about this parameter vector holds, $R \theta = r$. Example: $\theta_{1,2} = \theta_{2,2}$ and $\theta_{1,2} = \theta_{3,2}$. In this case, $$ R = \begin{pmatrix} 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ \end{pmatrix}, \quad r = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \Rightarrow \quad R\theta - r = \begin{pmatrix} \theta_{1,2} - \theta_{2,2} \\ \theta_{1,2} - \theta_{3,2} \end{pmatrix} $$
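A quick numerical check of this $R$ and $r$, using a hypothetical $\theta$ that satisfies the restriction (the common slope value $0.5$ is arbitrary):

```python
import numpy as np

# theta with theta_{1,2} = theta_{2,2} = theta_{3,2} = 0.5
# (zero-based positions 1, 4, 8 in the stacked 9-vector).
theta = np.array([1.0, 0.5, 2.0, 3.0, 0.5, 4.0, 5.0, 6.0, 0.5])

R = np.zeros((2, 9))
R[0, 1], R[0, 4] = 1.0, -1.0  # theta_{1,2} - theta_{2,2} = 0
R[1, 1], R[1, 8] = 1.0, -1.0  # theta_{1,2} - theta_{3,2} = 0
r = np.zeros(2)

residual = R @ theta - r  # should be the zero vector
```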
Of course, for such a restriction, the matrix $R$ need not be unique, i.e. one could have $R = \begin{pmatrix} 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 \\ \end{pmatrix}$, which would mean $\theta_{1,2} = \theta_{2,2}$ and $\theta_{2,2} = \theta_{3,2}$, which is the same thing.
Under this restriction, one could write such system as $$ \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix} = \begin{pmatrix} 1 & X_{1,2} & X_{1,3} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & X_{2,2} & X_{2,3} & X_{2,4} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & X_{3,2} \\ \end{pmatrix} \begin{pmatrix} \theta_{1,1} \\ \theta_{1,2} \\ \theta_{1,3} \\ \theta_{2,1} \\ {\color{red}\theta_{1,2}} \\ \theta_{2,3} \\ \theta_{2,4} \\ \theta_{3,1} \\ {\color{red}\theta_{1,2}} \end{pmatrix} + \begin{pmatrix} U_1 \\ U_2 \\ U_3 \end{pmatrix}, $$ or, equivalently, $$ \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix} = \begin{pmatrix} 1 & X_{1,2} & X_{1,3} & 0 & 0 & 0 & 0 \\ 0 & X_{2,2} & 0 & 1 & X_{2,3} & X_{2,4} & 0 \\ 0 & X_{3,2} & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix} \begin{pmatrix} \theta_{1,1} \\ \theta_{1,2} \\ \theta_{1,3} \\ \theta_{2,1} \\ \theta_{2,3} \\ \theta_{2,4} \\ \theta_{3,1} \end{pmatrix} + \begin{pmatrix} U_1 \\ U_2 \\ U_3 \end{pmatrix}. $$
In more parsimonious notation, it means that $$ Y = X A \theta^* + U, $$ where $A$ is a matrix such that $\theta = A\theta^*$ satisfies $R \theta = r$. I need $A$ itself since I need both $XA$ and $A\theta^*$ later.
In this case, $A$ has to be a $9\times 7$ matrix that ‘unwraps’ the shorter vector into a longer one such that the linear restriction holds (the number of such restrictions being $\dim \theta - \dim\theta^*$). It is easy to find it manually in this case: $$ A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} $$
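The hand-built $A$ can be verified numerically: since $r = 0$ here, $\theta = A\theta^*$ satisfies the restriction for every $\theta^*$ exactly when $RA = 0$:

```python
import numpy as np

# The hand-built 9x7 'unwrapping' matrix from above: entry i of theta
# is copied from entry rows[i] of theta*.
rows = [0, 1, 2, 3, 1, 4, 5, 6, 1]
A = np.zeros((9, 7))
for i, j in enumerate(rows):
    A[i, j] = 1.0

R = np.zeros((2, 9))
R[0, 1], R[0, 4] = 1.0, -1.0
R[1, 1], R[1, 8] = 1.0, -1.0

# R A = 0, so R (A theta*) = 0 = r holds for ANY theta*.
check = R @ A
```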
However, I am genuinely puzzled about finding the matrix $A$ given $R$ and $r$. It is clear that the number of rows of $A$ must equal the number of columns of $R$ (i.e. the length of $\theta$), and the number of columns of $A$ should be the number of columns of $R$ minus the number of rows of $R$.
I tried solving the following equations: $$ \begin{cases} A \theta^* = \theta \\ R \theta = r \end{cases} \Rightarrow\ R A \theta^* = r $$ However, the problem is that the matrix $R'R$ is not full-rank, so it cannot be inverted. I considered applying the Moore–Penrose inverse, but it yielded nothing: $$ A \theta^* = (R'R)^\dagger R'r $$
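The following sketch illustrates why this attempt stalls: $R'R$ is a $9\times 9$ matrix of rank $2$, and the pseudoinverse returns only the minimum-norm particular solution of $R\theta = r$ (here the zero vector, since $r = 0$), a single $9$-vector rather than a $9\times 7$ map:

```python
import numpy as np

R = np.zeros((2, 9))
R[0, 1], R[0, 4] = 1.0, -1.0
R[1, 1], R[1, 8] = 1.0, -1.0
r = np.zeros(2)

# rank(R'R) = rank(R) = 2 < 9, so R'R is singular and not invertible.
rank_RtR = np.linalg.matrix_rank(R.T @ R)

# The pseudoinverse picks out one particular solution (minimum-norm);
# with r = 0 it is simply the zero vector, and A never appears.
theta_part = np.linalg.pinv(R.T @ R) @ R.T @ r
```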
At this point, I am stuck. If there are no restrictions put on $\theta$, then $A$ is identity since $\theta^* = \theta$ and $A \theta^* = I \theta^*= \theta$. On the other hand, if one has $R$ and $r$, how does one construct $A$? I believe it should be unique since it acts upon a shorter vector and returns a larger vector for which a restriction holds.
I also tried computing the SVD of $R$ and solving $UDV'\theta = r$, so that $A \theta^* = V D^{\dagger} U' r$, but could not figure out how one would get a $9\times 7$ matrix from objects where no dimension is equal to 7.