In the vector space $\mathbb{R}^{3},$ given two systems of vectors
$$U= \left \{ u_{1}= \left ( 4, 2, 5 \right ), u_{2}= \left ( 2, 1, 3 \right ), u_{3}= \left ( 3, 1, 3 \right ) \right \}$$
$$V= \left \{ v_{1}= \left ( 5, 2, 1 \right ), v_{2}= \left ( 6, 2, 1 \right ), v_{3}= \left ( -1, 7, 4 \right ) \right \}$$
Prove that $U$ and $V$ are two bases of $\mathbb{R}^{3}.$ Find the transition matrices $P_{U\rightarrow V}$ and $P_{V\rightarrow U}.$
Let us check whether $U$ is a linearly independent set. Consider the linear combination
$$x_{1}u_{1}+ x_{2}u_{2}+ x_{3}u_{3}= 0$$
This is equivalent to the matrix equation
$$\begin{bmatrix} 4 & 2 & 3\\ 2 & 1 & 1\\ 5 & 3 & 3 \end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3} \end{bmatrix}= 0$$
To find the solution, consider the augmented matrix.
Applying elementary row operations, we obtain
$$\left [ \begin{array}{rrr|r} 4 & 2 & 3 & 0\\ 2 & 1 & 1 & 0\\ 5 & 3 & 3 & 0 \end{array} \right ]\xrightarrow{R_{3}\leftrightarrow R_{1}}\left [ \begin{array}{rrr|r} 5 & 3 & 3 & 0\\ 2 & 1 & 1 & 0\\ 4 & 2 & 3 & 0 \end{array} \right ]\xrightarrow[-2R_{2}+ R_{3}]{3R_{2}- R_{1}}\left [ \begin{array}{rrr|r} 1 & 0 & 0 & 0\\ 2 & 1 & 1 & 0\\ 0 & 0 & 1 & 0 \end{array} \right ]\xrightarrow{R_{2}- 2R_{1}- R_{3}}\left [ \begin{array}{rrr|r} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{array} \right ]$$
It follows that the solution is $x_{1}= x_{2}= x_{3}= 0.$
Hence $U$ is linearly independent.
As $U$ consists of three linearly independent vectors in $\mathbb{R}^{3},$ it must be a basis of $\mathbb{R}^{3}.$
The same argument shows that $V$ is also a basis of $\mathbb{R}^{3}.$
However, I have no idea how to use the $u$'s and $v$'s to find the transition matrices $P_{U\rightarrow V}$ and $P_{V\rightarrow U}.$
I need some help. Thanks a lot!
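As a quick cross-check of both independence arguments, three vectors in $\mathbb{R}^{3}$ form a basis exactly when the matrix having them as columns has nonzero determinant. A minimal sketch in Python (the helper `det3` and the matrix names are mine, not from the post):

```python
# Cross-check via cofactor expansion along the first row:
# U and V are bases iff det(M_U) != 0 and det(M_V) != 0.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Columns are u1, u2, u3 and v1, v2, v3 respectively.
M_U = [[4, 2, 3],
       [2, 1, 1],
       [5, 3, 3]]
M_V = [[5, 6, -1],
       [2, 2, 7],
       [1, 1, 4]]

print(det3(M_U))  # 1  -> nonzero, so U is a basis
print(det3(M_V))  # -1 -> nonzero, so V is a basis
```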
Edit. Given $x= \left ( 6, 2, -3 \right ),$ how do I find the coordinates of $x$ with respect to the basis $V$?
And how do I use the change-of-coordinates formula to find the coordinates of $x$ with respect to the basis $U$?
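One way to approach the Edit, sketched as code (the function and variable names are mine): $[x]_V$ is the solution $c$ of $M_V\,c = x$, where the columns of $M_V$ are $v_1, v_2, v_3$, and likewise for $U$. Exact rational arithmetic with `fractions.Fraction` avoids any rounding questions.

```python
from fractions import Fraction

def solve3(m, rhs):
    """Solve the 3x3 system m * c = rhs by Gauss-Jordan elimination
    with partial pivoting; returns the solution as Fractions."""
    a = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(m, rhs)]
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                factor = a[r][col] / a[col][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

M_V = [[5, 6, -1], [2, 2, 7], [1, 1, 4]]   # columns v1, v2, v3
M_U = [[4, 2, 3], [2, 1, 1], [5, 3, 3]]    # columns u1, u2, u3
x = [6, 2, -3]

print([int(c) for c in solve3(M_V, x)])  # [176, -147, -8] = [x]_V
print([int(c) for c in solve3(M_U, x)])  # [9, -18, 2]     = [x]_U
```

This also gives a direct consistency check for the change-of-coordinates formula: under the convention where the columns of the transition matrix are the $v_j$ written in the basis $U$, multiplying that matrix by $[x]_V$ should reproduce $[x]_U$.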
Asked by user822157 (https://math.techqa.club/user/user822157/detail)
You will need some kind of Gaussian elimination to find these matrices. There’s just no way around that.
Maybe the following way of thinking about it will help you. When you set up your matrix for Gaussian elimination, you usually want to pick one fixed basis, which I will call the representing basis and then fill each column with the coordinates of some vector in this representing basis. Usually, there is a basis that makes filling the matrix very easy. In your example, you might start with $$ \begin{bmatrix} 4 & 2 & 3 & 5 & 6 & -1 \\ 2 & 1 & 1 & 2 & 2 & 7 \\ 5 & 3 & 3 & 1 & 1 & 4 \end{bmatrix} $$ containing $u_1, u_2, u_3, v_1, v_2, v_3$ in that order. No calculation was required so far. (Which representing basis did I use?)
Now the secret is that row operations do not really change the vectors written in the columns; instead, they change the (invisible) representing basis that we used to write down the coordinates. (Of course, this causes the coordinate vectors, i.e. the columns of the matrix, to change.)
The trick is to use these operations to change the representing basis into something more useful. For example, if you can get the first column of the matrix to become $(1, 0, 0)^T$, you know that the vector in the first column has become the first element of the representing basis. But we still know what that vector is: it’s $u_1$, as it was from the start (remember: the represented vectors don’t change, only their coordinates).
If you can even get the first three columns to become the identity matrix, you know that the first three vectors are now the representing basis, i.e. the columns of your matrix are now represented in the basis $U$. As the remaining three columns contain the elements of $V$, you get the coordinates of $v_1$, $v_2$, $v_3$ in the basis $U$, which gives you $P_{U \to V}$ (I think, notation for the transition matrices is not entirely consistent among different authors).
A similar second calculation will give you $P_{V \to U}$.
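The double-block procedure described above can be sketched as code (the helper names are mine; following the answer's convention, the columns of the result are the $v_j$ written in the basis $U$):

```python
from fractions import Fraction

def reduce_to_transition(left, right):
    """Gauss-Jordan on the block matrix [left | right]; returns the
    right block once the left block has been reduced to the identity."""
    n = len(left)
    a = [[Fraction(v) for v in lr + rr] for lr, rr in zip(left, right)]
    for col in range(n):
        # Find a nonzero pivot, swap it up, normalize the pivot row.
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        a[col] = [x / a[col][col] for x in a[col]]
        # Clear the rest of the column.
        for r in range(n):
            if r != col:
                a[r] = [x - a[r][col] * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

M_U = [[4, 2, 3], [2, 1, 1], [5, 3, 3]]   # columns u1, u2, u3
M_V = [[5, 6, -1], [2, 2, 7], [1, 1, 4]]  # columns v1, v2, v3

P = reduce_to_transition(M_U, M_V)  # columns of V in the basis U
Q = reduce_to_transition(M_V, M_U)  # columns of U in the basis V
print([[int(v) for v in row] for row in P])
```

Running the routine once with each ordering of the blocks gives both transition matrices, and the two results are inverse to each other.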
Remarks.
Note that you can often do several things at once: For example, turning the first three columns into the identity matrix uses the same row operations you used to check that it’s a basis. Hence, that check is “included” for free.
If you want to think about solving systems of equations this way, you need the additional understanding that (linear) relations between the columns don’t change from row operations. For example, if the third column is the first column minus two times the second column, that will always stay true. The “classical” use of Gaussian elimination is to transform the representing matrix in such a way that the relations are easy to see. (In particular, you want to find linear combinations of all but the last column that result in the last column.)
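A tiny illustration of that last remark (the example matrix is mine): pick a matrix whose third column is the first column minus two times the second, apply a row operation, and observe that the relation between the columns survives.

```python
# In M, column 3 equals column 1 minus two times column 2.
M = [[4, 2, 0],
     [2, 1, 0],
     [5, 3, -1]]
assert all(row[2] == row[0] - 2 * row[1] for row in M)

# Apply a row operation: R2 <- R2 - 3*R1 ...
M[1] = [x - 3 * y for x, y in zip(M[1], M[0])]

# ... and the same relation between the columns still holds.
assert all(row[2] == row[0] - 2 * row[1] for row in M)
print(M)  # [[4, 2, 0], [-10, -5, 0], [5, 3, -1]]
```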