Suppose $X_1\sim N(\mu_1, \sigma_1^2)$ and $X_2\sim N(\mu_2, \sigma_2^2)$ are dependent normal random variables, and $(X_1, X_2)$ is bivariate normal. Let $U = a_1X_1 + a_2X_2$ and $V = b_1X_1 + b_2X_2$.
I would like to find the joint density of $(U, V)$. Using the change-of-variables technique, I solved for $X_1 = \frac{b_2u-a_2v}{a_1b_2-a_2b_1}$ and $X_2 = \frac{a_1v-b_1u}{a_1b_2-a_2b_1}$, and computed $\det(J) = \frac{1}{a_1b_2-a_2b_1}$. I can plug all of this back into the bivariate normal formula, but it seems extremely cumbersome to reduce to something manageable.
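As a sanity check on the inversion and the Jacobian, here is a short SymPy sketch (variable names are my own choices) that solves the linear system symbolically and simplifies the determinant of the Jacobian of $(x_1, x_2)$ with respect to $(u, v)$:

```python
import sympy as sp

a1, a2, b1, b2, u, v = sp.symbols('a1 a2 b1 b2 u v')
x1, x2 = sp.symbols('x1 x2')

# Invert u = a1*x1 + a2*x2, v = b1*x1 + b2*x2 for (x1, x2)
sol = sp.solve([sp.Eq(u, a1*x1 + a2*x2), sp.Eq(v, b1*x1 + b2*x2)], [x1, x2])

# Jacobian matrix of (x1, x2) with respect to (u, v)
J = sp.Matrix([[sp.diff(sol[x1], u), sp.diff(sol[x1], v)],
               [sp.diff(sol[x2], u), sp.diff(sol[x2], v)]])
detJ = sp.simplify(J.det())
print(detJ)  # simplifies to 1/(a1*b2 - a2*b1)
```

This confirms that $\det(J)$ is the constant $\frac{1}{a_1b_2-a_2b_1}$, which does not depend on $u$ or $v$.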
I tried to find the answer but can't seem to find one. Is there an easier way to compute the joint density?
In general, suppose we have a vector of jointly Gaussian random variables $\mathbf{X}\sim \mathcal{N}(\boldsymbol{\mu}, \Sigma)$, where $\boldsymbol{\mu}\in\mathbb{R}^{n}$ is the mean vector and $\Sigma\in\mathbb{R}^{n\times n}$ is the covariance matrix. It is known that applying any linear transformation $A\in\mathbb{R}^{m\times n}$ to $\mathbf{X}$ yields a jointly Gaussian vector $\mathbf{Y}=A\mathbf{X}$, so the only things we need to know to determine the distribution of $\mathbf{Y}$ are its mean vector and covariance matrix: \begin{align*} \mathbb{E}[\mathbf{Y}] &= \mathbb{E}[A\mathbf{X}]\\ &= A \mathbb{E}[\mathbf{X}]\\ &= A \boldsymbol{\mu} \end{align*} and \begin{align*} \mathbb{E}[(\mathbf{Y}-\mathbb{E}[\mathbf{Y}])(\mathbf{Y}-\mathbb{E}[\mathbf{Y}])^T]&=\mathbb{E}[A (\mathbf{X}-\boldsymbol{\mu})(\mathbf{X}-\boldsymbol{\mu})^T A^T]\\ &= A \mathbb{E}[(\mathbf{X}-\boldsymbol{\mu})(\mathbf{X}-\boldsymbol{\mu})^T] A^T\\ &= A \Sigma A^T. \end{align*}
So that $\mathbf{Y}\sim\mathcal{N}(A\mathbf{\mu}, A\Sigma A^T)$.
This applies directly to your case ($n = m = 2$) with $A = \begin{pmatrix} a_1 & a_2 \\ b_1 & b_2 \end{pmatrix}$.
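A minimal numerical sketch with NumPy/SciPy (all specific numbers below are illustrative assumptions, not taken from the question): the joint density of $(U, V)$ is simply the bivariate normal density with parameters $A\boldsymbol{\mu}$ and $A\Sigma A^T$.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Assumed example parameters: means, standard deviations, correlation
mu = np.array([1.0, 2.0])
s1, s2, rho = 1.0, 1.5, 0.4
Sigma = np.array([[s1**2,      rho*s1*s2],
                  [rho*s1*s2,  s2**2]])

# Transformation (U, V) = (a1*X1 + a2*X2, b1*X1 + b2*X2)
A = np.array([[2.0,  1.0],   # a1, a2
              [1.0, -1.0]])  # b1, b2

mu_Y = A @ mu              # mean vector of (U, V)
Sigma_Y = A @ Sigma @ A.T  # covariance matrix of (U, V)

# Joint density of (U, V), valid provided A is nonsingular
density = multivariate_normal(mean=mu_Y, cov=Sigma_Y)
print(density.pdf([4.0, -1.0]))
```

No change-of-variables bookkeeping is needed; the two matrix products do all the work.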
EDIT: note that if $A$ is singular, the covariance matrix $A\Sigma A^T$ is singular, so the distribution $\mathcal{N}(A\boldsymbol{\mu}, A\Sigma A^T)$ is degenerate and has no density with respect to Lebesgue measure; special care needs to be taken in that case.