When is the nth component of a (co)vector equal to its scalar product with the nth element of its dual basis?


My solution to a problem in a tensors book is different from the solution in the book, and I don't know why. Here's the problem:

$$\vec e_1 = (2, 1) \hspace{1em} \vec e_2 = (-1, 3) \\ \text{Find the dual basis of covectors.} $$

I decided to use the formula

$$ V^\alpha = \vec V (\tilde e^\alpha) $$

which equates the $\alpha$th component of the vector $\vec V$ to its scalar product with the $\alpha$th basis covector.

Defining

$$ \tilde e^1 = (a, b) \hspace{1em} \tilde e^2 = (c, d) $$

this yields the equations (hoping I'm using upper- and lower-indices correctly here and not confusing anyone)

$$ 2 = \vec e_1^1 = \vec e_1 \tilde e^1 = 2a + b \\ 1 = \vec e_1^2 = \vec e_1 \tilde e^2 = 2c + d \\ -1 = \vec e_2^1 = \vec e_2 \tilde e^1 = -a + 3b \\ 3 = \vec e_2^2 = \vec e_2 \tilde e^2 = -c + 3d $$

Solving these equations for $a$, $b$, $c$, and $d$, I get

$$ \tilde e^1 = (1, 0) \hspace{1em} \tilde e^2 = (0, 1) $$
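(Indeed, a quick numeric sanity check confirms that these values do satisfy my four equations:)

```python
# Candidate dual-basis components from my solution above.
a, b, c, d = 1, 0, 0, 1

# The four equations derived from V^alpha = V(e~^alpha):
assert 2*a + 1*b == 2    # e_1 paired with e~^1
assert 2*c + 1*d == 1    # e_1 paired with e~^2
assert -1*a + 3*b == -1  # e_2 paired with e~^1
assert -1*c + 3*d == 3   # e_2 paired with e~^2
print("all four equations satisfied")
```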

The solution in the book was to use the duality condition

$$ \langle \tilde e^\alpha, \vec e_\beta \rangle = \delta ^\alpha _\beta $$

from which it derived a system of equations like mine, but with $2, 1, -1, 3$ replaced by $1, 0, 0, 1$, yielding the dual basis

$$ \tilde e^1 = \left( \frac 3 7, \frac 1 7 \right) \hspace{1em} \tilde e^2 = \left( -\frac 1 7, \frac 2 7 \right) $$

for which it is not the case that $V^\alpha = \vec V (\tilde e^\alpha)$, as one can easily verify: taking $\vec V = \vec e_1$, we have $2 = V^1 \neq \vec V \tilde e^1 = 2 \cdot \frac 3 7 + 1 \cdot \frac 1 7 = 1$.
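(For reference, the book's covectors are exactly the rows of the inverse of the matrix whose columns are $\vec e_1$ and $\vec e_2$; a quick sketch with exact fractions — the helper name `pair` is my own — confirms they satisfy the duality condition:)

```python
from fractions import Fraction as F

# Basis vectors, components taken in the standard basis.
e1 = (F(2), F(1))
e2 = (F(-1), F(3))

# Dual covectors are the rows of E^{-1}, where E has e1, e2 as columns.
det = e1[0] * e2[1] - e2[0] * e1[1]      # 2*3 - (-1)*1 = 7
dual1 = ( e2[1] / det, -e2[0] / det)     # ( 3/7, 1/7), the book's e~^1
dual2 = (-e1[1] / det,  e1[0] / det)     # (-1/7, 2/7), the book's e~^2

def pair(cov, vec):
    """Apply a covector to a vector, componentwise in the standard basis."""
    return cov[0] * vec[0] + cov[1] * vec[1]

# The duality condition <e~^a, e_b> = delta^a_b holds:
print(pair(dual1, e1), pair(dual1, e2))  # 1 0
print(pair(dual2, e1), pair(dual2, e2))  # 0 1
```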

Theirs definitely seems more correct, but I'm wondering why my solution was incorrect. Does the formula I chose not apply if the basis (co)-vectors aren't orthogonal?

Best answer

The confusion here stems from treating the coefficients of the basis vectors as if they were the basis vectors themselves. In this particular exercise, if you take, for instance,

$$\vec e_1 = \begin{bmatrix} 2 & 1\end{bmatrix} ^\top $$

the natural question is: what are the $2$ and the $1$? They certainly form a vector in the sense of a list of numbers, but they are really coefficients with respect to another (tacit, unstated) basis, which we could symbolize as $\{\color{red}{\vec u_1},\color{red}{\vec u_2}\},$ so that

$$\vec e_1 = 2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}$$

and, likewise,

$$\vec e_2 = \begin{bmatrix}-1 & 3 \end{bmatrix}^\top $$

really implies,

$$\vec e_2 = -1\color{red}{\vec u_1} + 3 \color{red}{\vec u_2}$$

The system of equations you set up simply ends up recovering orthonormal covector coordinates. Matching each left-hand side of the system with the corresponding coefficient produces, now assuming an underlying covector basis $\{\color{blue}{ \tilde u^1},\color{blue}{ \tilde u^2}\}$:

$$\tilde e^1 = 1\color{blue}{\tilde u^1} + 0 \color{blue}{\tilde u^2}$$

and

$$\tilde e^2 = 0\color{blue}{\tilde u^1} + 1 \color{blue}{\tilde u^2}$$

in your proposed answer.

But by skipping the Kronecker-delta pairing between the dual-space basis and the vector-space basis, your proposed answer merely defers the question of how vector and covector bases are matched:

What would be the inner product of these basis vectors and covectors? For instance,

$$\begin{align} \langle \tilde e^1,\vec e_1\rangle &= \left(1\color{blue}{\tilde u^1} + 0 \color{blue}{\tilde u^2}\right)\left(2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}\right)\\ &= 2 \color{blue}{\tilde u^1}\color{red}{\vec u_1}+1\color{blue}{\tilde u^1}\color{red}{\vec u_2} \end{align}$$

leaves both $ \color{blue}{\tilde u^1}\color{red}{\vec u_1}$ and $\color{blue}{\tilde u^1}\color{red}{\vec u_2}$ undefined.

The way the exercise is actually solved in the book implies that the unstated underlying vector basis $\{\color{red}{\vec u_1},\color{red}{\vec u_2}\}$ is the orthonormal standard Euclidean basis, linked to the covector basis $\{\color{blue}{\tilde u^1},\color{blue}{\tilde u^2}\}$ through the Kronecker delta, so that

$$\begin{align} \langle \tilde e^1,\vec e_1\rangle &= \left(\frac 3 7\color{blue}{\tilde u^1} + \frac 1 7 \color{blue}{\tilde u^2}\right)\left(2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}\right)\\ &= \frac 6 7 \color{blue}{\tilde u^1}\color{red}{\vec u_1}+\frac 3 7 \color{blue}{\tilde u^1}\color{red}{\vec u_2}+\frac 2 7 \color{blue}{\tilde u^2}\color{red}{\vec u_1}+\frac 1 7\color{blue}{\tilde u^2}\color{red}{\vec u_2}\\ &=\frac 6 7 \cdot 1 + \frac 3 7 \cdot 0 + \frac 2 7 \cdot 0 + \frac 1 7 \cdot 1\\ &=1 \end{align}$$

works out as implicitly desired only if $\color{blue}{\tilde u^\alpha}\color{red}{\vec u_\beta}=\delta^\alpha_\beta.$
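To make this concrete, here is a small sketch (my own encoding: each (co)vector stored as its coefficients in the implicit $u$-bases) that expands $\langle \tilde e^1, \vec e_1 \rangle$ term by term using $\color{blue}{\tilde u^\alpha}\color{red}{\vec u_\beta}=\delta^\alpha_\beta$:

```python
from fractions import Fraction as F

def delta(a, b):
    """Kronecker delta: the assumed pairing <u~^a, u_b>."""
    return F(1) if a == b else F(0)

e1 = {1: F(2), 2: F(1)}          # e_1  = 2 u_1 + 1 u_2
cov1 = {1: F(3, 7), 2: F(1, 7)}  # e~^1 = 3/7 u~^1 + 1/7 u~^2

# Expand <e~^1, e_1> over all pairs of basis indices.
total = sum(cov1[a] * e1[b] * delta(a, b) for a in (1, 2) for b in (1, 2))
print(total)  # 1
```

Changing `delta` to any other pairing of the $u$-bases would change the result, which is exactly why leaving that pairing unspecified leaves the dual basis undetermined.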