How do I find the matrix with respect to a different basis?

I tried to solve this question, but my answer is totally different. Can you explain how to solve it?

Asked 2026-04-05 by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) · 12.4k views · 1 solution below

Call $\mathcal E = \{\mathbf e_1, \mathbf e_2, \mathbf e_3\}$ the standard basis for $\Bbb R^3$. Then any vector $\mathbf v \in \Bbb R^3$ can be written as $$\mathbf v = v^1\mathbf e_1 + v^2\mathbf e_2 + v^3\mathbf e_3$$ for some unique triple of numbers $(v^1, v^2, v^3)$ (note: those superscripts are not exponents, they're just indices). The standard basis is particularly useful because it is an orthonormal basis. That means that $\mathbf e_i \cdot \mathbf e_i = 1$ for all $i$ and $\mathbf e_i \cdot \mathbf e_j = 0$ for all $i\ne j$. This property allows a lot of simplifications when you start working through problems.
The matrix representation of the vector $\mathbf v$ with respect to (wrt) $\mathcal E$ is $$[\mathbf v]_{\mathcal E} = \pmatrix{v^1 \\ v^2 \\ v^3}$$
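Because $\mathcal E$ is orthonormal, each coordinate $v^i$ can be read off with a dot product: $v^i = \mathbf v \cdot \mathbf e_i$. A quick numerical sketch (the vector's entries are made up for illustration):

```python
import numpy as np

# Standard basis of R^3: the columns of the identity matrix.
E = np.eye(3)

v = np.array([2.0, -1.0, 3.0])  # an arbitrary example vector

# Orthonormality means each coordinate v^i is just v . e_i.
coords = np.array([v @ E[:, i] for i in range(3)])
print(coords)  # [ 2. -1.  3.] -- identical to v, as expected
```

This shortcut (dotting against the basis vectors) only works for orthonormal bases; for a general basis you have to solve a linear system, as shown further below.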
$\mathcal E$ isn't the only basis for $\Bbb R^3$, however. In fact, there are infinitely many bases. Let $\mathcal B= \{\mathbf b_1, \mathbf b_2, \mathbf b_3\}$ be some arbitrary basis of $\Bbb R^3$. Then by definition we can also expand the vector $\mathbf v$ in the basis $\mathcal B$: $$\mathbf v = \nu^1\mathbf b_1 + \nu^2\mathbf b_2 + \nu^3\mathbf b_3$$ where in general $(v^1, v^2, v^3) \ne (\nu^1, \nu^2, \nu^3)$. Likewise, the matrix representation of $\mathbf v$ wrt $\mathcal B$ is $$[\mathbf v]_{\mathcal B} = \pmatrix{\nu^1 \\ \nu^2 \\ \nu^3}$$
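Concretely, the coordinates $(\nu^1, \nu^2, \nu^3)$ are found by solving the linear system whose coefficient matrix has the $\mathbf b_i$ as columns. A sketch in NumPy, reusing the basis that appears later in this answer and a made-up vector $\mathbf v$:

```python
import numpy as np

# A basis B of R^3, one basis vector per column (wrt the standard basis).
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

v = np.array([2.0, -1.0, 3.0])  # example vector in standard coordinates

# [v]_B solves B @ nu = v, i.e. v = nu^1 b_1 + nu^2 b_2 + nu^3 b_3.
nu = np.linalg.solve(B, v)
print(nu)  # [ 3.  0. -1.]

# Check: the expansion in basis B reproduces v.
print(np.allclose(B @ nu, v))  # True
```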
A linear transformation $T: V \to W$ exists independent of any bases we choose for the spaces $V$ and $W$. However, once we've chosen those bases we can express the action of $T$ on elements of $V$ in matrix form. Say we choose $\mathcal C$ as our basis for $V$ and $\mathcal D$ as our basis for $W$, then the way that $T$ transforms elements $\mathbf v \in V$ is written in matrix form like: $$[T]_{\mathcal D\leftarrow \mathcal C}[\mathbf v]_{\mathcal C} = [T(\mathbf v)]_{\mathcal D}$$
Notice that we have to specify two bases for any linear transformation $T$ before we can express the action of $T$ in matrix form. However, if $T: V \to V$, then we'd most likely choose the same basis for both the domain and codomain of $T$, since they are in fact the same space. Thus, when we have a linear transformation from a space to itself, we'll often specify only one basis for $T$ and expect the reader to understand that $[T]_{\mathcal C}$ really means $[T]_{\mathcal C \leftarrow \mathcal C}$.
An identity transformation $I: V\to V$ is one that doesn't change any vector that it acts on. I.e. $$I(\mathbf v) = \mathbf v$$ for all $\mathbf v\in V$. Most of the time when we talk about an identity matrix we'll be talking about the matrix representation of $I$ wrt the same basis for the domain and codomain, as above. It turns out that $$[I]_{\mathcal B} = \pmatrix{1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1}$$ for any basis $\mathcal B$. However, if we decide that we'd like to choose different bases for the domain and codomain of $I$, then its matrix representation could be any invertible matrix (ask yourself why it must be an invertible matrix). We call the matrix $[I]_{\mathcal C \leftarrow \mathcal B}$ the change of basis matrix from $\mathcal B$ to $\mathcal C$ because it has the property $$[I]_{\mathcal C\leftarrow\mathcal B}[\mathbf v]_{\mathcal B} = [\mathbf v]_{\mathcal C}$$ for all $\mathbf v \in V$.
For instance, using $[\mathbf v]_{\mathcal E}$ and $[\mathbf v]_{\mathcal B}$ from the first section, it is true that $$[I]_{\mathcal B \leftarrow \mathcal E}\pmatrix{v^1 \\ v^2 \\ v^3} = \pmatrix{\nu^1 \\ \nu^2 \\ \nu^3}$$
The change of basis matrix from a basis $\mathcal C = \{\mathbf c_1, \dots, \mathbf c_n\}$ to $\mathcal D = \{\mathbf d_1, \dots, \mathbf d_n\}$ turns out to be $$[I]_{\mathcal D \leftarrow \mathcal C} = \pmatrix{[\mathbf c_1]_{\mathcal D} & \cdots & [\mathbf c_n]_{\mathcal D}}$$ where this is the matrix whose $i$th column is the vector $[\mathbf c_i]_{\mathcal D}$ for all $i$. Confirm this for yourself by figuring out the action of this matrix on a convenient basis of $V$ (hint: try the basis $\mathcal C$).
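The column-by-column recipe above can be checked numerically. Each column $[\mathbf c_i]_{\mathcal D}$ solves $D\,x = \mathbf c_i$ (with the $\mathbf d_j$ as columns of $D$), so solving for all columns at once gives $D^{-1}C$. The two bases below are made up for illustration:

```python
import numpy as np

# Two bases of R^3, each stored with its vectors as columns
# (coordinates given wrt the standard basis).
C = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
D = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# i-th column of [I]_{D<-C} is [c_i]_D, i.e. the solution of D x = c_i.
# Solving for all columns simultaneously gives D^{-1} C.
I_DC = np.linalg.solve(D, C)

# Sanity check: it maps [v]_C to [v]_D for an arbitrary vector v.
v = np.array([1.0, 2.0, 3.0])
v_C = np.linalg.solve(C, v)
v_D = np.linalg.solve(D, v)
print(np.allclose(I_DC @ v_C, v_D))  # True
```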
With that change of basis idea in hand it is clear that we should be able to transform the bases associated with the domain and codomain of a linear transformation $T$ by the formula: $$[T]_{\mathcal D} = [I]_{\mathcal D\leftarrow \mathcal C}[T]_{\mathcal C}[I]_{\mathcal C\leftarrow \mathcal D}$$
Using all of that info above, let's take a look at your question. You're given $[T] = [T]_{\mathcal E}$ and you're given the basis $\mathcal B$ represented in its coordinates wrt $\mathcal E$. That is, from your question we know that $$[\mathbf v_1]_{\mathcal E} = \pmatrix{1 \\ 0 \\ 1},\quad [\mathbf v_2]_{\mathcal E} = \pmatrix{0 \\ 1 \\ 1},\quad [\mathbf v_3]_{\mathcal E} = \pmatrix{1 \\ 1 \\ 0}$$ This is all we need to find $[T]_{\mathcal B}$. First we construct the change of basis matrix $[I]_{\mathcal E\leftarrow \mathcal B}$. But notice that it's just $$[I]_{\mathcal E\leftarrow \mathcal B} = \pmatrix{[\mathbf v_1]_{\mathcal E} & [\mathbf v_2]_{\mathcal E} & [\mathbf v_3]_{\mathcal E}} = \pmatrix{1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0}$$ Then of course because $[I]_{\mathcal B\leftarrow \mathcal E}$ undoes the action of $[I]_{\mathcal E\leftarrow \mathcal B}$, we can see that $$[I]_{\mathcal B\leftarrow \mathcal E} = \pmatrix{1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0}^{-1} = \frac 12\pmatrix{1 & -1 & 1 \\ -1 & 1 & 1 \\ 1 & 1 & -1}$$
Now that you have the change of basis matrices just multiply out $[T]_{\mathcal B} = [I]_{\mathcal B\leftarrow \mathcal E}[T]_{\mathcal E}[I]_{\mathcal E\leftarrow \mathcal B}$ to get $[T]_{\mathcal B}$. It's as easy as that. ;)
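The whole procedure can be sketched numerically. Note that $[T]_{\mathcal E}$ from the original question isn't reproduced in this answer, so the matrix `T_E` below is a made-up stand-in; only `P` comes from the question:

```python
import numpy as np

# [I]_{E<-B}: the given basis vectors of B as columns.
P = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

# [T]_E is not given in this excerpt -- hypothetical entries for illustration.
T_E = np.array([[1.0, 2.0, 0.0],
                [0.0, 3.0, 1.0],
                [1.0, 0.0, 2.0]])

P_inv = np.linalg.inv(P)   # [I]_{B<-E}, equals (1/2)[[1,-1,1],[-1,1,1],[1,1,-1]]

# [T]_B = [I]_{B<-E} [T]_E [I]_{E<-B}
T_B = P_inv @ T_E @ P

# Round trip back to the standard basis recovers [T]_E.
print(np.allclose(P @ T_B @ P_inv, T_E))  # True
```

The round-trip check at the end is a useful habit: conjugating back with the inverse change of basis must return the original matrix, which catches mixed-up directions of the change of basis.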