Problem summary
Given a common¹ set of shape functions $\{\psi_j(\xi,\eta)\}$ defined in master-element space, each associated with a corner $j$ of that element;
We can compute the affine transformation that maps parametric-space coordinates $(\xi,\eta)$ to real-space coordinates $(x,y,z)$ as an interpolation of the corner coordinates $\mathbf{p}_j$:
$$\mathbf{x}(\xi,\eta) = \sum_j \psi_j\,\mathbf{p}_j$$
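As a concrete instance (a standard textbook example, not something specific to my course), for the linear triangle with parametric corners $(0,0)$, $(1,0)$, $(0,1)$, the shape functions are $\psi_1 = 1-\xi-\eta$, $\psi_2 = \xi$, $\psi_3 = \eta$, so the map reads

$$\mathbf{x}(\xi,\eta) = (1-\xi-\eta)\,\mathbf{p}_1 + \xi\,\mathbf{p}_2 + \eta\,\mathbf{p}_3.$$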
In a similar fashion, we can compute the gradient² of that map as a rectangular matrix, using the partial derivatives of the shape functions:
$$\nabla_\xi \mathbf{x}= \sum_j \begin{bmatrix} \frac{\partial \psi_j}{\partial \xi}p_{jx} & \frac{\partial \psi_j}{\partial \eta}p_{jx} \\ \frac{\partial \psi_j}{\partial \xi}p_{jy} & \frac{\partial \psi_j}{\partial \eta}p_{jy} \\ \frac{\partial \psi_j}{\partial \xi}p_{jz} & \frac{\partial \psi_j}{\partial \eta}p_{jz} \end{bmatrix}$$
Prove that the Jacobian matrix of this transformation can be computed through the $QR$ decomposition of $\nabla_\xi\mathbf{x}$, where $[R]_{2\times 2}$ is the Jacobian matrix and $[Q]_{3\times 2}$ is a matrix whose columns are orthonormal vectors lying in the plane of the mapped element.
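To make the setup concrete, here is a small self-contained numerical sketch (my own illustration; the corner coordinates and the hand-rolled Gram–Schmidt routine are arbitrary choices, not from the course). It builds the $3\times 2$ gradient for a linear triangle living in $\mathbb{R}^3$ and verifies that its thin $QR$ decomposition reproduces it:

```python
import math

# Corner coordinates of a triangle living in R^3 (arbitrary example values).
p = [(0.0, 0.0, 0.0), (2.0, 0.0, 1.0), (0.0, 3.0, 1.0)]

# For the linear triangle: psi1 = 1 - xi - eta, psi2 = xi, psi3 = eta,
# so d(psi_j)/d(xi) = (-1, 1, 0) and d(psi_j)/d(eta) = (-1, 0, 1).
dpsi_dxi = (-1.0, 1.0, 0.0)
dpsi_deta = (-1.0, 0.0, 1.0)

# Build the 3x2 gradient: row i, column k is sum_j dpsi_j/dxi_k * p_j[i].
A = [[sum(d[j] * p[j][i] for j in range(3)) for d in (dpsi_dxi, dpsi_deta)]
     for i in range(3)]

def qr_3x2(A):
    """Thin QR of a 3x2 matrix via classical Gram-Schmidt."""
    a1 = [A[i][0] for i in range(3)]
    a2 = [A[i][1] for i in range(3)]
    r11 = math.sqrt(sum(x * x for x in a1))      # length of first column
    q1 = [x / r11 for x in a1]                   # first orthonormal direction
    r12 = sum(q1[i] * a2[i] for i in range(3))   # projection of a2 on q1
    v = [a2[i] - r12 * q1[i] for i in range(3)]  # remove that projection
    r22 = math.sqrt(sum(x * x for x in v))
    q2 = [x / r22 for x in v]                    # second orthonormal direction
    Q = [[q1[i], q2[i]] for i in range(3)]
    R = [[r11, r12], [0.0, r22]]
    return Q, R

Q, R = qr_3x2(A)

# Q @ R reproduces the gradient matrix.
recon = [[sum(Q[i][k] * R[k][j] for k in range(2)) for j in range(2)]
         for i in range(3)]
assert all(abs(recon[i][j] - A[i][j]) < 1e-12
           for i in range(3) for j in range(2))
```

In practice one would use a library routine such as `numpy.linalg.qr` instead; classical Gram–Schmidt is used here only to keep the example dependency-free.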
Context:
So last semester I took an introductory course on Finite Element approximations. In it, the professor instructed us to compute the Jacobian matrix of the geometric map from the parametric space (of the master element) to real space through the $QR$ decomposition of the 'gradient' of that map, in which
- $R$ is the Jacobian matrix we seek and
- $Q$ is a matrix whose columns are orthonormal vectors in the plane of the mapped element.
He justified it by explaining that, for 3D elements, $\nabla_\xi\mathbf{x}$ exactly matches the Jacobian matrix, but that this is not the case when mapping 2D master elements into 3D. For those, the linear geometric map is an affine transformation from their 2-dimensional parametric space (of a triangle or a quadrilateral) into the actual coordinates of the element, which live in $\mathbb{R}^3$.
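One piece of partial progress (my own observation, not something from the course): since the columns of $Q$ are orthonormal, $Q^{\mathsf{T}}Q = I_{2\times 2}$, which gives

$$(\nabla_\xi\mathbf{x})^{\mathsf{T}}\,\nabla_\xi\mathbf{x} = (QR)^{\mathsf{T}}(QR) = R^{\mathsf{T}}Q^{\mathsf{T}}QR = R^{\mathsf{T}}R,$$

so $R$ reproduces the same inner products (lengths and angles of the columns) as $\nabla_\xi\mathbf{x}$ itself. But I haven't been able to get from this to "$R$ is the Jacobian matrix".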
But he didn't prove that $R$ is indeed the so-called Jacobian matrix.
When I asked him for a proof, he seemed annoyed and brushed me off, which gave me the impression that this is supposed to be trivial and I was asking a dumb question. Yet none of my classmates managed to prove it either. Now that the course is over and I'm on break, I've decided to go back to the problem that has been bugging me for a few months.
Notes:
- By 'common' I mean the classic linear FEM shape functions, valued $1$ at their corresponding node and $0$ at all other nodes.
- Pardon me if the terms 'gradient' and 'Jacobian matrix' as I used them don't exactly match their usual formal definitions; I'm following the terminology used by the professor, and I've put some effort into keeping it unambiguous;
- I'm a final-year engineering student, so I'd appreciate it if you could keep your proof within the realm of elementary maths. But I'll take any help I can get;
- This is not homework, so feel free to answer a complete proof. I'm just honestly trying to learn.
- There's a question here that seems to hint in the right direction, but I haven't been able to complete the proof using it either.