I'm currently thinking about the following problem:
Problem:
Let $B = (b_1, b_2, b_3)$ be a basis of $\mathbb{R}^3$. Find the corresponding dual basis $B^* = (b_1^*, b_2^*, b_3^*)$.
$B$ is explicitly given as:
$ b_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, b_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, b_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} $.
Intuitive Solution:
The elements of $B^*$ (linear maps $\mathbb{R}^3 \rightarrow \mathbb{R}$) can be written as row vectors $(x, y, z)$, since the dual basis $B^*$ has the property:
$ b^*_i(b_j) = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases} $
We can find $x_1, y_1, z_1$ (the elements for $b^*_1$) simply by solving the system of equations:
$ 1x_1 + 0y_1 + 0z_1 = 1 \\ 1x_1 + 1y_1 + 0z_1 = 0 \\ 1x_1 + 1y_1 + 1z_1 = 0 \\ $
And do the same for $x_2, y_2, z_2$ and $x_3, y_3, z_3$ (the elements of $b^*_2$ and $b^*_3$). I find this very intuitive, but sadly I might not have the time in my exam to solve all three systems of equations with this approach.
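For concreteness, the first method can be sketched in code. This is just an illustration I put together (the helper `solve3` is my own, not from any library); it solves each of the three $3 \times 3$ systems separately by Gauss–Jordan elimination, using `fractions` for exact arithmetic:

```python
from fractions import Fraction

def solve3(A, rhs):
    """Solve the 3x3 system A x = rhs by Gauss-Jordan elimination (exact arithmetic)."""
    n = 3
    # build the augmented matrix [A | rhs] with Fractions for exact results
    M = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(A, rhs)]
    for col in range(n):
        # pivot: swap in a row with a nonzero entry in this column
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # normalise the pivot row, then eliminate the column in the other rows
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# the j-th equation is b_i^*(b_j) = delta_ij, so the rows of A are b_1, b_2, b_3
A = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
for i in range(3):
    e_i = [1 if j == i else 0 for j in range(3)]
    # the solutions here happen to be integral, so cast for readable output
    print([int(v) for v in solve3(A, e_i)])
```

Running this prints $(1, -1, 0)$, $(0, 1, -1)$ and $(0, 0, 1)$, i.e. the three dual basis vectors, one system at a time.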
Quick Solution:
Luckily I found this other way, which solves all three systems of equations at once. Just write the basis vectors as the columns of a matrix and append the identity matrix. Now use Gaussian elimination to invert the given matrix:
$ \begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & 0 & 1 & -1 & 0 \\ 0 & 1 & 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix} $
The inversion yields:
$b^*_1 = (1, -1, 0)$, $b^*_2 = (0, 1, -1)$, $b^*_3 = (0, 0, 1)$.
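The same Gauss–Jordan inversion of the augmented block $[\mathbf B \mid \mathbf I]$ can be sketched in code (again just an illustration with a hypothetical helper `invert`, using exact arithmetic via `fractions`):

```python
from fractions import Fraction

def invert(B):
    """Invert an n x n matrix by running Gauss-Jordan on the augmented block [B | I]."""
    n = len(B)
    # append the identity matrix to the right of B
    M = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(B)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    # the right block now holds B^{-1}; its rows are the dual basis vectors
    return [row[n:] for row in M]

B = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]  # columns are b_1, b_2, b_3
B_inv = invert(B)
for i, row in enumerate(B_inv, start=1):
    print(f"b*_{i} =", [int(v) for v in row])

# sanity check b*_i(b_j) = delta_ij: each row of B^{-1} against each column of B
for i in range(3):
    for j in range(3):
        assert sum(B_inv[i][k] * B[k][j] for k in range(3)) == (1 if i == j else 0)
```

The final loop verifies the defining property of the dual basis against every $b_j$ in one pass.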
My question:
I understand the second algorithm well and am capable of applying it. Unfortunately, I have no clue why it works. The first solution is intuitive, and I feel like I understand it well.
Could someone please point out how this inversion solves all three systems at once?
We can look at this in general for $\mathbb R^n$. To find each of the $b_i^*$ your first method involves solving the system of equations $$\begin{align}b_i^*b_1&=\delta_{i1}\\\vdots\\b_i^*b_n&=\delta_{in},\end{align}$$ which can be written in matrix form as $b_i^*\mathbf B=e_i^*$, where $\mathbf B$ is the matrix with the vectors of the basis $B$ as its columns and $e_i^*$ a standard basis vector of ${\mathbb R^n}^*$. Since in a matrix product $\mathbf C\mathbf D$ the $i$th row of the product is the $i$th row of $\mathbf C$ multiplied by $\mathbf D$, all of these equations can be combined into the single matrix equation $\mathbf X\mathbf B=\mathbf I$. The solution to this is obviously $\mathbf B^{-1}$ and the rows of this matrix are the solutions to the individual equations, i.e., the dual basis vectors to $B$.
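To make the equation $\mathbf X\mathbf B=\mathbf I$ concrete with the numbers from the question, here is a small check (plain Python, with a `matmul` helper written just for this example): stacking the candidate dual vectors as the rows of $\mathbf X$ and multiplying by $\mathbf B$ reproduces the identity, and reading off row $i$ of the product is exactly the $i$th individual system $b_i^*\mathbf B = e_i^*$.

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]    # columns are b_1, b_2, b_3
X = [[1, -1, 0], [0, 1, -1], [0, 0, 1]]  # rows are the candidate b*_i

# row i of X B collects b*_i(b_1), b*_i(b_2), b*_i(b_3),
# so X B = I encodes all three systems simultaneously
print(matmul(X, B))  # the 3x3 identity matrix
```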
Another, somewhat roundabout way to look at this is through the lens of change-of-basis operations.
If we represent elements of the dual spaces as row vectors, then the effect of a linear map on one of these vectors can be represented as right-multiplication of the corresponding row vector by some matrix. If $\mathbf M$ is the matrix of the linear map $L:V\to W$ relative to some choice of bases for $V$ and $W$, then the matrix of the adjoint map $L^*:W^*\to V^*$ relative to the dual bases is also $\mathbf M$.
Let $V=\mathbb R^n$, $W=\mathbb R^n$ and $L$ be the linear map whose matrix relative to the standard bases is $\mathbf B$ from above. If we instead take $B$ for the basis of $W$, the matrix of this map becomes the identity. This change of basis is effected by multiplying the output by some invertible matrix $\mathbf T$, i.e., the new matrix is $\mathbf T\mathbf B=\mathbf I$. Obviously, $\mathbf T=\mathbf{B}^{-1}$. Relative to the dual bases, the matrix of $L^*$ is also $\mathbf I=\mathbf T\mathbf B$, but this time the change-of-basis matrix $\mathbf T=\mathbf{B}^{-1}$ is on the input side. It converts from the dual basis $B^*$ to the standard basis of ${\mathbb R^n}^*$, so its rows are the elements of $B^*$ expressed relative to the standard basis, which is exactly what we’re looking for.